A clinically validated whole genome pipeline for structural variant detection and analysis
Background With the continuing decrease in the cost of whole genome sequencing (WGS), we have already passed the inflection point at which WGS testing became economically feasible, facilitating broader access to the benefits that are helping to define WGS as the new diagnostic standard. WGS provides unique opportunities for detection of structural variants; however, such analyses, despite being recognized by the research community, have not previously made their way into routine clinical practice. Results We have developed a clinically validated pipeline for highly specific and sensitive detection of structural variants based on 30X PCR-free WGS. Using a combination of breakpoint analysis of split and discordant reads, and read depth analysis, the pipeline identifies structural variants down to single base pair resolution. False positives are minimized using calculations for loss of heterozygosity and bi-modal heterozygous variant allele frequencies to enhance heterozygous deletion and duplication detection, respectively. Compound and potential compound combinations of structural variants and small sequence changes are automatically detected. To facilitate clinical interpretation, identified variants are annotated with phenotype information derived from HGMD Professional and population allele frequencies derived from public and Variantyx allele frequency databases. Single base pair resolution enables easy visual inspection of potentially causal variants using the IGV genome browser as well as easy biochemical validation via PCR. Analytical and clinical sensitivity and specificity of the pipeline have been validated using analysis of Genome in a Bottle reference genomes and known positive samples confirmed by orthogonal sequencing technologies. Conclusion Consistent read depth of PCR-free WGS enables reliable detection of structural variants of any size.
Annotation at both the gene and variant level allows clinicians to match reported patient phenotypes with detected variants and confidently report causative findings in all clinical cases used for validation. Electronic supplementary material The online version of this article (10.1186/s12864-019-5866-z) contains supplementary material, which is available to authorized users.
Background
Short read based Whole Genome Sequencing (WGS) is slowly but surely becoming an integral part of the landscape of clinical diagnostic testing for rare genetic disorders. However, in current clinical practice WGS is still mainly used as 'enhanced' Whole Exome Sequencing (WES). Indeed, due to its uniformity and lack of pull-down or amplification artifacts, WGS typically provides better coverage of coding and adjacent regulatory regions than WES. This approach, however, ignores many of the advantages of WGS, which provides unique opportunities for detection of structural variants (SVs), pathologic short tandem repeats and mitochondrial variants that otherwise require separate assays. In current medical genetics practice, disease-causing SVs are detected by karyotyping [1] and chromosomal microarrays (CMAs) [2]. However, these methods are limited in resolution and cannot identify all types of SVs.
SVs are a diverse group of variants comprising copy number variants (CNVs), namely duplications or deletions of human genomic sequences resulting in an abnormal number of alleles; insertions of foreign genetic sequences, such as transposons; and balanced translocations and inversions. A typical genome includes many thousands of such genetic aberrations [3,4], and it is challenging not only to identify them but also to determine which, if any, are causative of the patient's phenotype.
While detection of small sequence changes has become fairly standardized using "gold standard" tools such as BWA [5] and GATK [6], which are almost universally used for sequence alignment and variant calling, the situation for SV detection is quite different. There are multiple tools and pipelines designed for detection and reporting of SVs based on short read WGS data ([7][8][9][10] among others); however, there has been no coalescence around a single consensus calling pipeline, and none of them have been utilized in clinical diagnostic testing.
Here we report the structural variant component of a comprehensive WGS-based clinical test for diagnostics of rare genetic disorders caused by germline genetic variants, developed by Variantyx. The test as a whole, including the structural variant part, underwent analytical and clinical validation, College of American Pathologists accreditation and proficiency testing, and is certified by CLIA. The SV component of the Variantyx Genomic Intelligence pipeline uses a combination of breakpoint analysis (using split and discordant reads) and read depth analysis to identify structural variants, often down to single base pair resolution. False positives are minimized using ancillary calculations such as loss of heterozygosity and bi-modal heterozygous variant allele frequencies. Identified variants are annotated with phenotype information derived from HGMD Professional and population allele frequencies derived from DGV and the Variantyx PAF database, facilitating clinical interpretation. Single base pair resolution enables easy visual inspection of potentially causal variants using the IGV genome browser. U.S. board certified clinical geneticists use the online Diagnostic Console to review results, select appropriate variants and generate the clinical report.
Results and discussion
The SV component of Variantyx Genomic Intelligence Whole Genome Sequencing analysis workflow is comprised of three major parts: variant detection, annotation and filtering (Fig. 1).
Variant detection
The SV detection pipeline is generally organized as previously published [11,12]. While some of the tools are used as published, others have been significantly modified. In addition, the raw variant calls have been augmented and filtered using in-house developed annotations, techniques, and data sources.
In general, structural variants can be divided into two categories: those resulting in unbalanced changes in the number of copies of human DNA, and those resulting in balanced changes, where the total number of copies remains the same. Copy number variants (CNVs), which include deletions, duplications and unbalanced translocations of different sizes, can be detected by two approaches: depth-based analysis and break point analysis [13]. The first identifies regions in which read depth differs significantly from the typical depth of the same region in samples known not to have copy number variation there. The other examines variant edges to detect split reads (where two portions of a single read map to two distinct locations in the reference) and discordant reads (where paired reads map in positions or orientations inconsistent with what is expected based on the insert size used). Only break point analysis can be used to identify balanced SVs, including inversions and translocations, as well as insertions of foreign DNA such as transposable elements.
Both read depth and break point signals are utilized by the Variantyx Genomic Intelligence algorithms, while calls of larger break point derived variants must be confirmed by the depth signal to be considered true positives. Structural variants are called with the use of Samblaster [11] for read extraction, LUMPY [14] for read-based SV calling and SVtyper [11] for genotyping, using default parameters. These calls are then combined with the Variantyx depth caller CNVs to form a union of calls. The depth calling algorithm utilizes a proprietary model built with known true negative WGS samples sequenced and aligned under the same conditions. The rolling average read depth-based model rolls up 100 bp segments into buckets of 10,000 and 2500 bp. We found these sizes optimal: the 10,000 bp bucket allows detection of uninterrupted stretches of read depth deviation in larger CNVs, while the 2500 bp bucket allows detection of smaller CNVs and refinement of the exact positions of larger ones. Break point analysis allows detection of smaller SVs and all types of balanced variants.
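The bucketing step can be sketched as follows. This is an illustrative Python reconstruction, not the proprietary depth model: the deviation thresholds and the simple averaging are assumptions for demonstration only.

```python
def bucket_depths(segment_depths, bucket_bp, segment_bp=100):
    """Roll up fixed-size segment depths (100 bp in the paper) into larger
    buckets by averaging; the paper uses 10,000 and 2,500 bp buckets."""
    per_bucket = bucket_bp // segment_bp
    buckets = []
    for i in range(0, len(segment_depths), per_bucket):
        chunk = segment_depths[i:i + per_bucket]
        buckets.append(sum(chunk) / len(chunk))
    return buckets

def flag_cnv_buckets(depths, expected_depth, min_ratio=0.7, max_ratio=1.3):
    """Flag buckets whose depth deviates from the expected depth derived
    from known true-negative samples; ratio thresholds are illustrative."""
    flags = []
    for d in depths:
        ratio = d / expected_depth
        flags.append("DEL" if ratio < min_ratio
                     else "DUP" if ratio > max_ratio
                     else None)
    return flags
```

For example, 75 consecutive 100 bp segments rolled into 2,500 bp buckets yield three buckets, and a run at half the expected depth would be flagged as a candidate heterozygous deletion.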
The most common SVs, deletions and tandem duplications, have a single break point, which manifests in the sample reads as the unexpected juxtaposition of two noncontiguous reference coordinates marking the start and end of the structural event. These have a readily identified signature and are easy to classify. Other events, such as insertions of DNA, naturally have two break points, one at the start and one at the end of the inserted fragment. However, in such cases one of the breakpoints may not be detected, because the number of split or discordant reads supporting the second breakpoint does not reach the calling threshold, or because the second breakpoint is located in a difficult to map region containing, for example, highly repeated sequences. This is especially true if the inserted element is a transposon. Translocations of chromosome arms also have one break point but are hard to distinguish from an insertion with an undetected second break point. Thus, even an unclassified single breakpoint can indicate a potentially disruptive SV, and such variants are annotated and subsequently uploaded to the Diagnostic Console together with the pre-classified SVs.
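As a sketch of how single-junction signatures map to SV classes, the following generic illustration uses the usual short-read orientation conventions; it is not Variantyx's actual classifier, and the orientation encoding ('FR', 'RF', 'FF'/'RR') is an assumption of this sketch.

```python
def classify_junction(chrom_a, pos_a, chrom_b, pos_b, orient):
    """Classify a single breakpoint junction joining two reference
    coordinates, given the read-pair orientation:
      'FR' with a downstream partner -> deletion-like junction,
      'RF'                           -> tandem-duplication-like junction,
      'FF'/'RR'                      -> inversion-like junction.
    Inter-chromosomal junctions are translocation candidates; anything
    else stays unclassified (e.g. an insertion whose second breakpoint
    was not detected)."""
    if chrom_a != chrom_b:
        return "translocation_candidate"
    if orient == "FR" and pos_b > pos_a:
        return "deletion"
    if orient == "RF":
        return "tandem_duplication"
    if orient in ("FF", "RR"):
        return "inversion"
    return "unclassified_breakpoint"
```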
While exact quantitative benchmarking of the variant calling results of the Variantyx pipeline relative to other SV calling tools and algorithms has not been performed, some comparisons can be made. Most available tools use either read-based or depth-based calling, while our approach merges calls from both read-based and depth-based callers to increase sensitivity. For example, SOAPsv [15] and LUMPY are breakpoint detection based. We use a machine learning algorithm to detect CNVs based on a large number of human genomes sequenced under the same standard operating procedures, resulting in highly repeatable normalized read depth. This approach provides significantly better results than CNVnator or Control-FREEC [16], which run only one sample at a time and have no prior knowledge of expected coverage.
Raw output of the SV calling pipeline includes a significant number of false positives that must be removed prior to introduction to the Diagnostic Console. Many of these false positive calls can be filtered out based on a number of criteria specific to variant type. In particular, all variants called based on break point analysis must be supported by at least 20 observations (combined split and discordant reads), out of which at least 5 must be split reads. In addition, CNVs over 5000 bp long called using break point analysis must have at least 30% overlap with those called using read depth analysis.
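The support thresholds above can be expressed directly. A minimal sketch, assuming calls are represented as simple (start, end) intervals:

```python
def passes_breakpoint_support(split_reads, discordant_reads,
                              min_total=20, min_split=5):
    """Filter from the text: breakpoint calls need at least 20 supporting
    observations (split + discordant reads), of which at least 5 must be
    split reads."""
    total = split_reads + discordant_reads
    return total >= min_total and split_reads >= min_split

def passes_depth_confirmation(bp_call, depth_calls, min_overlap=0.30,
                              min_len=5000):
    """CNVs over 5,000 bp called from breakpoints must overlap a
    depth-based call by at least 30%. Calls are (start, end) tuples;
    measuring the overlap against the breakpoint call's own length is an
    assumption of this sketch."""
    start, end = bp_call
    if end - start <= min_len:
        return True  # the rule only applies to calls over 5,000 bp
    for d_start, d_end in depth_calls:
        overlap = min(end, d_end) - max(start, d_start)
        if overlap > 0 and overlap / (end - start) >= min_overlap:
            return True
    return False
```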
For further removal of false positive calls, we examine the detected Single Nucleotide Variants (SNVs) within the SV region and apply the following thresholds for three types of SVs: 1. A homozygous deletion must overlap no more than 1 SNV per 1000 bp of length, with a minimum of 5 SNVs required to apply the rule. This rule is based on the fact that in most regions of the genome (with notable exceptions such as the sex chromosomes) the frequency of SNVs is higher, so if at least one allele is present the threshold will be exceeded. 2. A heterozygous deletion must exhibit loss of heterozygosity, i.e. a low fraction of overlapping heterozygous SNVs, since only one allele remains within the deleted region. 3. A duplication must exhibit a bi-modal distribution of heterozygous variant allele frequencies (see Fig. 2). Since the natural 50% balance between alleles is shifted in the case of a duplication, this parameter represents yet another reliable threshold.
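The homozygous-deletion SNV-density rule might be implemented as follows. This is a sketch of one reading of the rule: interpreting "minimum of 5 SNVs to apply the rule" as a minimum on the threshold itself is our assumption.

```python
def homozygous_deletion_suspect(sv_start, sv_end, overlapping_snvs,
                                min_threshold=5):
    """Rule from the text: a true homozygous deletion should overlap no
    more than 1 SNV per 1,000 bp of its length. The rule is only applied
    when the computed threshold reaches `min_threshold` SNVs (our reading
    of "minimum of 5 SNVs to apply the rule"). Returns True when the call
    should be filtered out as a likely false positive."""
    length = sv_end - sv_start
    threshold = length / 1000  # allowed SNVs: 1 per 1,000 bp
    if threshold < min_threshold:
        return False  # region too small for the rule to apply
    return overlapping_snvs > threshold
```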
Typically, 6 to 8 thousand SVs are called by the Variantyx pipeline per genome, and approximately 70% of these SVs pass the default filtration settings. We have analyzed calling and filtration results using the best true SV data set available to date, based on Genome in a Bottle sample NA24385 (see Tables 1, 2 and 3). The application of filtration removes most false positive calls while leading to the loss of a relatively small number of true positive variants (38 TP and 297 FP removed in the analyzed buckets). The most significant impact is on the largest bucket, where all 84 false positives are removed and all 5 true positive SVs are kept. We have also analyzed the effect of individual filters (data not shown). The most impactful filter was the fraction of heterozygous SNVs overlapping a heterozygous deletion. Its application removed 47 false positives from the 100,000+ bucket and 40 false positives from the 10,000-100,000 bucket. This filter also removed 4 true positive variants from the 10,000-100,000 bucket. It is important to note that the truth data set has its limitations and some "true positives" and "true negatives" are not necessarily such.
Annotation
(Legend to Tables 1, 2 and 3: The analysis has been performed using the Genome in a Bottle truth dataset NA24385 (http://tinyurl.com/GIABSV06). Since the truth set was built on top of hg19, we had to lift over results from hg19 to hg38, which dropped some variants. In addition, in order not to be biased, we have only compared SVs that were liftable from hg38 back to hg19 (to make sure we do not count any false "false positives"). We also removed calls made in non-unique areas and calls < 100 bp.)
All SVs are annotated with information at the variant and gene level. Variant level annotation comprises population frequency and pathogenicity data. PAF data is derived from DGV [17] and from the Variantyx internal database. HGMD Professional [18] is used for annotation with overlapping pathogenic SVs. The HGMD Professional database includes records of over 220,000 pathogenic genetic variants collected by manual curation of peer-reviewed literature. It is very well known and represents the industry standard in clinical genetics of small sequence changes; however, despite the fact that it includes over 20,000 curated pathogenic SVs, it is currently not widely used in SV annotation. The reason for that is the lack of genomic coordinates for the SVs included in HGMD Professional, making the data not readily available for annotation. We have revisited all SV records in HGMD Professional and have supplemented them with genomic coordinates where possible, which allowed utilization of this data in annotation. In the process of annotation, SVs overlapping known pathogenic SVs by over 70% are considered to have variant level pathogenicity annotation. The same strategy is applied for pathogenic SVs reported in ClinVar [19]. It often happens that no SV similar to the one detected has been previously reported in peer-reviewed literature; however, the SV may intersect gene(s) or a region with known pathogenic small sequence changes.
Data on such changes is derived from HGMD Professional and from ClinVar and complemented with information on known pathogenic genes from OMIM [20] and Orphanet [21]. Annotation with this data is considered gene level and is noted as such. Gene level annotation plays an important role in SV pathogenicity classification, particularly in cases of unclassified SVs. An example of a pathogenic SV with variant and gene level annotation, as seen in the Variantyx Diagnostic Console, is shown in Fig. 3.
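The 70% overlap criterion used for variant level annotation can be sketched as follows. Illustrative only: measuring the overlap fraction against the known pathogenic SV's length is our assumption, and intervals are assumed to be on the same chromosome.

```python
def overlap_fraction(sv, known):
    """Fraction of the known pathogenic SV covered by the detected SV.
    Both arguments are (start, end) tuples on the same chromosome."""
    start = max(sv[0], known[0])
    end = min(sv[1], known[1])
    if end <= start:
        return 0.0
    return (end - start) / (known[1] - known[0])

def has_variant_level_annotation(sv, known_pathogenic_svs, min_frac=0.70):
    """Per the text, a detected SV overlapping a known pathogenic SV by
    over 70% receives variant-level pathogenicity annotation."""
    return any(overlap_fraction(sv, k) > min_frac
               for k in known_pathogenic_svs)
```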
Recessive structural variants can form compound heterozygotes with small sequence changes, and detection of such combinatory compound heterozygous pairs is often challenging. To facilitate this process, we have included information on existing complementary SVs in the annotation of small sequence variants. Such compound (in the case of family analysis, when paternal and maternal alleles are identifiable) and potential compound (when one of the variants is de novo or the patient is tested as a singleton) pairs are presented in a dedicated section of the Diagnostic Console, along with compound pairs of small sequence variants.
Filtration
The annotated SVs are uploaded into the Diagnostic Console, where they can be filtered by the interpreting geneticist. Many parameters are available, but the most important and frequently used for the diagnostic process are variant and gene level phenotype association, population frequency and functional location (see Fig. 4). Default parameters are below 2% population frequency (the maximum between DGV and the Variantyx internal database) and inclusion of only variants with an associated phenotype at the variant level. At the second stage, variants intersecting OMIM/Orphanet genes and overlapping HGMD/ClinVar SNVs are analyzed. No changes in technical parameters, such as the number of split reads or depth call overlap, are recommended as part of the Unity test structural variant Diagnostic Process (Additional file 1: Method S1).
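The default first-stage filter described above might look like this in code. The dictionary field names are hypothetical, invented for this sketch; they are not the Diagnostic Console's actual schema.

```python
def passes_default_filters(sv, max_paf=0.02):
    """Default first-stage filter from the text: population allele
    frequency (the maximum of the DGV and internal-database values) must
    be below 2%, and a variant-level phenotype association must be
    present. `sv` is a dict with illustrative keys."""
    paf = max(sv.get("paf_dgv", 0.0), sv.get("paf_internal", 0.0))
    return paf < max_paf and bool(sv.get("variant_level_phenotype", False))
```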
Validation
Typically, clinical genetic assay validation includes two phases: analytical and clinical validation. Analytical validation includes comparison of variants called by the assay with a known true positive set of variants to determine sensitivity, specificity and positive predictive value.
In NGS based genetic test development the industry standard is to use Genome in a Bottle samples, such as GM12878 [22,23]. Indeed, in the case of small sequence changes, available true positive variant sets can be used for accurate benchmarking. See Additional file 1: Figure S2 for analytical validation statistics of the small sequence changes component of the Variantyx Unity test.
Unfortunately, no true positive variant set of acceptable quality is available for analytical validation of SVs. Different sequencing and variant calling methods applied by different research groups produce sets of "true positive" variants which differ from each other by nearly an order of magnitude in the quantity and overlap of detected variants [23,24]. Close examination of a representative group of "true positive" SVs called by different approaches revealed a large number of false positives and false negatives, making use of this data unacceptable for analytical validation of a clinical test. Thus, we decided to directly pursue clinical validation of the Unity test. To perform clinical validation, we gathered a statistically significant number of true positive clinical samples (those having a causative pathogenic SV confirmed by orthogonal detection techniques) and true negative clinical samples (those of healthy individuals, or of affected individuals having causative genetic variants of types other than SVs). The majority of the true positive samples were obtained from public collections, while some originated from other sources [25].
A total of 60 clinical validation cases underwent the complete Unity test cycle, starting from de novo WGS sequencing all the way to clinical interpretation by board certified clinical geneticists and generation of a patient report. No identifying details were disclosed besides patients' phenotypes, and for healthy controls realistic phenotypes and anamneses were added. In addition to these 60 patient samples, a synthetic sample with a variety of hard to detect pathogenic variants (including two SVs) was analyzed. Due to the large number of detected pathogenic genetic variants it was impossible to pass the synthetic sample off as real patient data; thus, it is not included in the total statistics, although both pathogenic SVs were successfully identified and reported as such. Additionally, three trisomy samples were included. However, since ploidy analysis is performed by the Variantyx Genomic Intelligence pipeline at an early stage of data analysis, and data of patients positive for ploidy aberrations are not uploaded to the Diagnostic Console, these samples are also not included in the total, although all three were successfully identified by our system. Out of the 60 clinical validation patients, 17 were true positive for a pathogenic SV; in some cases multiple SVs were present, and in others the SV formed a compound heterozygote with recessive small sequence changes. All but one SV were successfully identified and reported as such. The one missed SV had been identified in the Diagnostic Console but was not included in the report due to the detection of two known pathogenic SNVs that could explain the patient phenotype without involvement of the SV.
In general, a significant number of the true positive samples found in public repositories belong to cases diagnosed nearly two decades ago by rather narrow, by today's standards, methods. Among all patient and synthetic samples which underwent clinical interpretation there were a total of 25 SVs, all of which were detected by the Variantyx Genomic Intelligence platform and 24 of which were clinically reported, resulting in 96% clinical sensitivity for detection of pathogenic SVs. It is important to note that true positive samples that included SVs beyond the scope of the Variantyx Unity test, such as balanced translocations, were not included in clinical validation.
Conclusions
The uniformity and consistent read depth of PCR-free WGS allows reliable detection of SVs and clinical utilization of an SV workflow as part of comprehensive WGS based genetic testing that could be used as a first line diagnostic test. The Unity test developed by Variantyx, CLIA certified and CAP accredited for High Complexity Testing, has been clinically validated to serve as a clinical use medical genetics assay. The test successfully detects SVs of multiple types, with examples of reported pathogenic variants ranging from a 45 bp deletion (see Additional file 1: Figure S1) to a complex rearrangement involving millions of base pairs on three different chromosomes [26]. While some types of SVs, such as balanced translocations occurring in non-uniquely mappable areas, still represent a challenge for a short read-based test, better resolution and the absence of variant size limitations, together with declining sequencing costs, allow a WGS based test to be a viable alternative to traditional array-based assays.
Methods
Patient blood or saliva is collected using Variantyx Unity collection kits. DNA is purified and the NGS library is prepared using the Illumina PCR-Free TruSeq Nano DNA WGS kit (550 bp insert size protocol) according to the manufacturer's instructions. Sequencing is performed using CLIA approved protocols on Illumina HiSeq X and NovaSeq sequencing machines. FASTQ files are downloaded and processed with the Variantyx Genomic Intelligence pipeline. Only tests passing quality threshold parameters for data integrity, contamination, mapping quality, etc. (see Additional file 1: Table S2 for the complete list of threshold parameters) undergo bioinformatic analysis and clinical interpretation. The interpretation is performed by US board certified clinical geneticists according to a clinical diagnostics protocol approved by CAP (see Additional file 1: Method S1 for the SV portion of the protocol). Causative variants fitting reporting criteria are included in the clinical report, which is submitted to the ordering clinician. A synthetic DNA sample for Unity test validation was purchased from SeraCare (Seraseq Inherited Cancer DNA Mix v1).
Additional files
Additional file 1: Figure S1 Causative heterozygous deletion of 45 bp detected and reported by the Variantyx Unity test. Figure S2 Analytical validation statistics of small sequence changes by the Variantyx Unity test based on a combination of 3 different Genome in a Bottle samples. Method S1 Variantyx diagnostic procedure for reporting pathogenic structural variants. (DOCX 152 kb)
The Driver Time Memory Car-Following Model Simulating in Apollo Platform with GRU and Real Road Traffic Data
Car following is the most common phenomenon in single-lane traffic. The accuracy of acceleration prediction can be effectively improved by incorporating the driver's memory in car-following behaviour. In addition, the Apollo autonomous driving platform launched by Baidu Inc. provides a fast way to test car-following models. Therefore, this paper proposes a car-following model with driver time memory (CFDT) based on real-world traffic data. The CFDT model is first constructed using an embedded gated recurrent unit (GRU) network. Secondly, the NGSIM dataset is used to obtain the tracking data of small vehicles with similar driving behaviours from common real road vehicle driving tracks, for data preprocessing according to the response time of drivers. Then, the model is calibrated to obtain the driver's driving memory and the optimal parameters and structure of the model. Finally, the Apollo simulation platform for autonomous driving is used for verification through its 3D visualization interface. Comparative experiments on vehicle tracking characteristics show that the CFDT model is effective and robust, improving the simulation accuracy. Meanwhile, the model is tested and validated using the Apollo simulation platform to ensure its accuracy and utility.
Introduction
Car-following (CF) behaviour is the most basic micro driving behaviour, referring to the interaction between two adjacent vehicles in a vehicle fleet driving on a single-lane road that does not allow passing [1]. The concept of CF originated in the early 1950s. Over the past six decades, CF models have been extensively and systematically studied, and fruitful achievements have been made [2]. Since the 1990s, research in related fields has gradually emerged in China. Researchers from various fields have attempted to interpret the observed microscopic phenomena from different perspectives [3].
There are currently many types of CF models, which can be divided into two categories based on their origins: theory-driven and data-driven CF models [4].
In the development of the theory-driven models, the stimulus-response models are the most classic CF models, of which the General Motors (GM) model [5] is the most important. The GM model has been gradually developed and used since the late 1950s; it is the basis of many of the subsequent stimulus-response models. The GM model clearly reflects the characteristics of CF behaviour; it has a simple form and a clear physical meaning based on its originality. However, this model is prone to change with changes in traffic operational conditions and hence lacks universality.
With the increasing popularity of artificial intelligence, data-driven models have gradually become a focus of CF model research. In 1988, Rumelhart proposed the backpropagation neural network (BPNN) [6], a multilayer feedforward neural network (FNN) that uses the error back-propagation algorithm to adjust weights; it is the most widely used NN model. Chen et al. proposed a deep learning method for learning potentially complex and irregular probability distributions, which can accurately estimate the values of the CDF and PDF [7].
With the wide application of NNs in the field of traffic simulation, in 1998, Kehtarnavaz [8] applied the BPNN model to CF behaviour modelling for the first time, using the speed of the following car and the distance between the two cars as the model inputs and the relative speed of the two cars as the model output, thus verifying the validity of the model. Zhang et al. [9] established a closed-loop driving following model based on a BP neural network and verified the adaptability of the model to different driving groups through experiments. Support vector regression (SVR) [10] is a machine learning algorithm that converts the original problem into a convex quadratic programming problem and solves it using the optimality theory to obtain the global optimal solution. Wei and Liu [11] proposed a vehicle following model based on support vector regression and studied the asymmetric characteristics of vehicle following behaviour and its influence on the evolution of traffic flow. Parham et al. [12] proposed car-following modelling using an efficient support vector regression method and proved that it has appropriate validity after inputting the driver's instantaneous reaction time. However, studies based on such models are in their infancy.
CF behaviour is a continuous behaviour, so a driver can make a corresponding decision based on the memory of the previous time period [13][14][15]. However, a large number of existing models do not fully consider the driver's memory effect and only consider the instantaneous interaction between the following and leading cars. To process the CF time series data to use their historical information, the model must have a memory capability; this is lacking in both the BPNN-and SVR-based models.
Recurrent neural network (RNN) [16] is a class of neural network with memory capability. Yang [17] proposed a car-following model based on recurrent neural network (RNN) to effectively describe the state changes of vehicles while driving and road traffic congestion.
When an input sequence is long, the gradient explosion and vanishing problems, also known as the long-term dependency problem, will occur. To solve this problem, various modifications have been made on RNNs; the most effective method is to introduce various gating mechanisms such as the long short-term memory (LSTM) [18,19] and the gated recurrent unit (GRU) [20] networks. Based on the previously described studies, Wang et al. [21] proposed the use of GRU to model CF behaviour and embed the driver's memory effect in the model, which used the speed of the following car, the relative speed of the two cars, and the distance between the two cars observed in the last several time intervals as inputs and the estimated speed of the following car at the next time point as the output. The test results showed that the proposed model has higher simulation accuracy than the existing CF models and provides a new concept for the study of traffic flow theory and simulation. However, the driver's decision-making and reflection time are not considered in the judgment process.
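The gating mechanism that gives GRU-based CF models their memory can be illustrated with a minimal single-step sketch. These are the generic GRU equations with random illustrative parameters, not the calibrated model from [21]; the input feature layout (follower speed, relative speed, gap) and the dimensions are assumptions.

```python
import numpy as np

def gru_step(x, h_prev, W, U, b):
    """One GRU time step. W, U, b hold the parameters of the update gate
    (z), reset gate (r) and candidate state (c). x is the input feature
    vector; h_prev is the hidden state carrying memory of past steps."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(W["z"] @ x + U["z"] @ h_prev + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h_prev + b["r"])        # reset gate
    c = np.tanh(W["c"] @ x + U["c"] @ (r * h_prev) + b["c"])  # candidate
    return (1 - z) * h_prev + z * c                           # new state

# Tiny illustrative configuration: 3 input features, 4 hidden units.
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(4, 3)) * 0.1 for k in "zrc"}
U = {k: rng.normal(size=(4, 4)) * 0.1 for k in "zrc"}
b = {k: np.zeros(4) for k in "zrc"}

# Two consecutive observations: [follower speed, relative speed, gap].
h = np.zeros(4)
for x in [np.array([12.0, -0.5, 20.0]), np.array([12.2, -0.3, 19.5])]:
    h = gru_step(x, h, W, U, b)
```

Because the new state is a gate-weighted blend of the previous state and a tanh candidate, the hidden state stays bounded while still accumulating information from the whole observed window; a readout layer would then map `h` to the predicted speed or acceleration.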
However, influenced by multiple sources of information, a driver's decision-making and judgment process exhibits a complex nonlinear modality during driving, and the driver's psychological decisions cannot be described with a simple mathematical expression. Fuzzy theories and artificial neural networks show certain operational advantages in handling complex nonlinear issues and also exhibit a good learning capacity on big data samples. Therefore, fuzzy theory and artificial neural networks are often used for simulating driving behaviours under different environments. However, the current schemes utilizing fuzzy theories and artificial neural networks only focus on the velocities and accelerations of the leading car and the following car, as well as the spacing between them, without considering driving environments [22]. In addition, how to obtain real-time traffic information (such as average speed, travel time, traffic flow, and traffic conditions) is also an important issue for unmanned driving. Many scholars have done considerable research on real-time traffic information. Chen proposed a cell probe (CP)-based method to analyse cellular network signals with an estimated accuracy of 97.63%, which is easier to obtain than traditional methods [23].
In April 2017, Baidu released its open platform Apollo for autonomous driving; after iterations of multiple versions, the platform has been enabled for localization, sensing, decision, and simulation. Apollo may help its partners in the automotive and autonomous driving industries to quickly develop a set of their own autonomous driving systems in consideration of vehicles and hardware systems. In the Apollo simulation environment, environment information including traffic signs, index lines, and the relationships with surrounding vehicles may be inputted into Dreamview via corresponding interfaces to thereby construct a driving environment. Besides, the Apollo platform is further enabled for validating the car-following model and optimizing the relevant algorithm through a 3D visual interface.
In this study, based on the previously described studies and combined with actual road conditions, we designed a CFDT model based on the data-driven approach in combination with an improved RNN. In our model, the speed of the leading vehicle in the previous time interval, the speed of the following vehicle, and the distance between the two vehicles are used as inputs to predict the acceleration of the following vehicle at the next time point. Furthermore, the established model was calibrated using the CF data to determine its optimal parameters and structure, which were then verified through simulation. Finally, the proposed model was compared with the BPNN- and SVR-based models. It was confirmed that, compared with the traditional CF models, the RNN network-based CF model has high robustness and improved simulation accuracy, providing a methodological basis for studying car-following behaviour. The remainder of the paper is organized as follows. Section 2 introduces conventional car-following models and the neural network based car-following model. Section 3 models CF behaviour mainly using the RNN network. Section 4 processes the data and briefly analyses the driver's response time. Section 5 trains the proposed model to obtain the optimal parameters, compares it with the other two existing models, and verifies that the proposed model can achieve better simulation results. Section 6 conducts an empirical study of the three types of CF models based on the data and describes the models' verification experiments in detail. Section 7 uses the Apollo simulation platform to verify the model and ensure its accuracy and practicability. Section 8 presents the summary and prospects.
Traditional Car-Following Models.
The stimulus-response framework is the most traditional modelling idea for car-following behaviour and embodies many of its essential characteristics, while the GM model is the most important stimulus-response type among the traditional car-following models. The GM model assumes that a vehicle does not show passing or lane-changing behaviour when following. Driving dynamics theory is used to derive the basic equation:

a_{n+1}(t + T) = λ · [v_{n+1}(t + T)]^m · Δv(t) / [Δx(t)]^l.  (1)

In equation (1), a_{n+1}(t + T) is the instantaneous acceleration of the following car at time (t + T); v_{n+1}(t + T) is the instantaneous speed of the following car at time (t + T); Δv(t) is the relative speed of the two cars at time t; Δx(t) is the distance between the two cars at time t; T is the response time; λ is the sensitivity parameter to be calibrated; and m and l are additional parameters to be calibrated. Numerous studies have focused on the parameter calibration and extension of the GM model. The GM model clearly reflects the characteristics of CF behaviour; it has a simple form and a clear physical meaning. However, this model is prone to change with changes in traffic operational conditions and hence lacks universality.
Basics of NN.
NN is a highly nonlinear model with a neuron as its basic unit. When a neuron receives a set of input signals, it generates an output signal. A typical structure is shown in Figure 1.
The sum of the weighted inputs is described using the net input z; then, we have equation (2):

z = Σ_i w_i x_i + b,  (2)

where w_i are the weights and b is the bias. The output can then be expressed via the activation function f(·), as in equation (3):

a = f(z).  (3)

The commonly used activation functions, which are nonlinear, are as follows:

Sigmoid function: σ(x) = 1 / (1 + e^{−x}).  (4)
Tanh function: tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x}).  (5)
ReLU function: ReLU(x) = max(0, x).  (6)
Leaky ReLU function: LeakyReLU(x) = max(0, x) + γ · min(0, x), with a small slope γ > 0.  (7)

In practical applications, the appropriate activation function can be selected according to the actual situation.
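These activation functions can be sketched directly in NumPy; this is a minimal illustration, independent of any particular deep learning framework:

```python
import numpy as np

def sigmoid(x):
    # Maps any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Maps input into (-1, 1).
    return np.tanh(x)

def relu(x):
    # Zero for negative inputs, identity for positive inputs.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small slope for negative inputs.
    return np.where(x > 0, x, alpha * x)
```
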
Historically, various NN structures have been proposed; those most commonly used include the feedforward NN, the feedback NN, and the graph network. In this study, the feedback NN was adopted. The neurons in a feedback NN can receive not only the signals of other neurons but also their own. Compared with those in a feedforward NN, the neurons in a feedback NN have a memory function and have different states at different times. The basic structure of a feedback NN is shown in Figure 2.
In a feedback NN, signals can propagate in one or both directions.
This type of network includes RNN and the Boltzmann machine.
Gated RNN.
To solve the long-term dependence problem of RNNs in the long sequence of training, a gating mechanism is introduced to selectively add new information while selectively forgetting the retained information. Such networks are collectively referred to as gated RNNs; the most popular include the LSTM and GRU networks.
LSTM
Network. An LSTM network adds a new internal state c_t and introduces three "gates": the forgetting gate (f_t), the input gate (i_t), and the output gate (o_t). The value of a "gate" lies within (0, 1); it is used to control the amount of information passed.
Specifically, the forgetting gate f_t controls the amount of information to be forgotten from the internal state of the last time point (c_{t−1}); the input gate i_t controls the amount of information to be retained from the candidate state c̃_t of the current time point; and the output gate o_t controls the amount of information output from the internal state c_t of the current time point to the external state h_t. Figure 3 shows the internal structure of the LSTM unit, where × represents the multiplication of vector elements, + represents the addition of vector elements, and σ(x) is the sigmoid activation function. Thus, the three gates f_t, i_t, and o_t can be calculated with equations (8)-(10):

f_t = σ(W_f x_t + U_f h_{t−1} + b_f),  (8)
i_t = σ(W_i x_t + U_i h_{t−1} + b_i),  (9)
o_t = σ(W_o x_t + U_o h_{t−1} + b_o).  (10)

The methods for status updating are described in equations (11)-(13):

c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c),  (11)
c_t = f_t * c_{t−1} + i_t * c̃_t,  (12)
h_t = o_t * tanh(c_t).  (13)

In equations (12) and (13), * represents the multiplication of vector elements.
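The gate and state updates above can be written as a single NumPy time step. The weight layout here (per-gate matrices W, U and biases b stored in dictionaries keyed 'f', 'i', 'o', 'c') is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step following the gate equations above.

    W, U, b are dicts keyed by 'f', 'i', 'o', 'c' holding each gate's
    input weights, recurrent weights, and biases.
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forgetting gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate state
    c_t = f_t * c_prev + i_t * c_tilde  # internal state update
    h_t = o_t * np.tanh(c_t)            # external state
    return h_t, c_t
```
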
In an LSTM network, the internal state of the unit c t can retain certain key information for a significant amount of time.
GRU Network.
The internal concept of a GRU network is similar to that of an LSTM network, and a GRU network can achieve a comparable effect. However, a GRU network has fewer parameters, lower training difficulty, and higher practicality.
Because the input and forgetting gates in the LSTM unit are complementary, in a GRU unit, they are combined into one gate, i.e., the update gate z t , while the output gate of the LSTM unit is deleted and a reset gate r t is added without introducing a new internal state. Its structure is shown in Figure 4.
The update gate z_t controls the amount of information retained by the state of the current time point (h_t) from the state of the last time point (h_{t−1}), as well as the amount of new information received from the candidate state (h̃_t); the reset gate r_t controls the amount of information retained from the state of the last time point (h_{t−1}) by the candidate state of the current time point (h̃_t). The two gates are calculated as shown in equations (14) and (15):

z_t = σ(W_z x_t + U_z h_{t−1} + b_z),  (14)
r_t = σ(W_r x_t + U_r h_{t−1} + b_r),  (15)

where W_* (* = z, r, h) and U_* (* = z, r, h) stand for the weight matrices from the cell to each gate, and b_* (* = z, r, h) denotes the bias vector of each gate.
The methods for updating the states are shown in equations (16) and (17):

h̃_t = tanh(W_h x_t + U_h (r_t * h_{t−1}) + b_h),  (16)
h_t = z_t * h_{t−1} + (1 − z_t) * h̃_t.  (17)

In the case where z_t = 0 and r_t = 1, the GRU network degenerates into a simple RNN.
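The GRU update can likewise be sketched as one NumPy time step; the dictionary-based weight layout (keys 'z', 'r', 'h') is an illustrative assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU time step following the update/reset gate equations above."""
    z_t = sigmoid(W['z'] @ x_t + U['z'] @ h_prev + b['z'])  # update gate
    r_t = sigmoid(W['r'] @ x_t + U['r'] @ h_prev + b['r'])  # reset gate
    h_tilde = np.tanh(W['h'] @ x_t + U['h'] @ (r_t * h_prev) + b['h'])
    # With z_t = 0 and r_t = 1 this reduces to a simple RNN update.
    h_t = z_t * h_prev + (1.0 - z_t) * h_tilde
    return h_t
```
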
Approach to the Proposed Model
Herein, we adopted the RNN network to model CF behaviour. A proper choice of model inputs and outputs can improve the simulation accuracy of the model. Based on the GM model, we use the speed of the leading car at time t (v_n(t)), the speed of the following car at time t (v_{n+1}(t)), and the distance between the two cars (Δx(t)) as inputs, and the acceleration of the following car at time t + T (a_{n+1}(t + T)) as the output. Then, we have equation (18):

a_{n+1}(t + T) = f(v_n(t − kT), v_{n+1}(t − kT), Δx(t − kT)), k = 0, 1, …, N − 1,  (18)

where N is the length of the "memory" time interval. Figure 5 shows the specific structure diagram of the model proposed in this paper.
To eliminate the influence of dimension on the simulation accuracy and convergence rate of the model, we normalized the car-following data (leading vehicle speed, following vehicle speed, and following vehicle acceleration). In different traffic environments, car-following behaviour is easily affected by the propagation of slight disturbances in the speed of the leading car, acceleration and deceleration habits, and the driver's cognition of the environment and of driving behaviour, so the distribution of the car-following data cannot be assumed to be close to Gaussian. Therefore, MinMaxScaler, the simplest method for eliminating the influence of dimensionality and data value range, was selected in this paper, as it preserves the relationships existing in the original data. The formula for MinMaxScaler is

x_std = (x − x_min) / (x_max − x_min),  (19)

x_scaled = x_std · (max − min) + min,  (20)

where max and min are the maximum and minimum values of the given scaling range; i.e., the original data are scaled to the range [min, max]. Specifically, in this study the original data are scaled to the range [0, 1]. The CF data are then constructed into the appropriate input and output shapes to comply with the input and output structures of the GRU model. Because the length of the "memory" interval (N) is closely related to the constructed data, it also directly affects the prediction accuracy of the model, and so it was tested in detail in this study. The dataset was then randomly divided into training and test sets. Next, ReLU was selected as the activation function of the output layer to establish the GRU model. Because the number of hidden layers and the number of neurons in each hidden layer admit numerous combinations, it is necessary to separately test models with different structures to obtain the optimal structure.
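The MinMaxScaler normalization described above amounts to the standard column-wise min-max transform; a small NumPy sketch (mirroring what scikit-learn's MinMaxScaler computes) follows:

```python
import numpy as np

def min_max_scale(x, feature_range=(0.0, 1.0)):
    """Scale data column-wise to feature_range via the min-max transform."""
    lo, hi = feature_range
    x = np.asarray(x, dtype=float)
    # x_std maps each column to [0, 1] ...
    x_std = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
    # ... and x_scaled stretches it to the requested range.
    return x_std * (hi - lo) + lo
```
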
The mean square error (MSE),

MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²,  (21)

was used to construct the loss function, and the Adam optimizer [24-26] was adopted due to its excellent performance in most cases.
In the input layer, the time step is set to N, and each step contains three input variables (v_n, v_{n+1}, Δx). Since it is necessary to predict the acceleration of the following vehicle over a continuous period of m time points, the input is constructed into a matrix X ∈ R^{m×N×3} through the normalization of formula (19). Finally, the predicted accelerations of the following car over this continuous period are constructed into the output y ∈ R^{m×1} through the GRU units of multiple hidden layers. The pseudocode for the construction of the RNN-based CF model is shown in Algorithm 1.
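The windowed input construction described above (X ∈ R^{m×N×3}, y ∈ R^{m×1}) can be sketched in NumPy; the function name and the exact alignment of the target with the window are illustrative assumptions:

```python
import numpy as np

def build_windows(v_lead, v_follow, gap, acc_follow, N):
    """Construct inputs X of shape (m, N, 3) and outputs y of shape (m, 1).

    Each sample holds the last N observations of (v_n, v_{n+1}, Δx);
    the target is the following car's acceleration at the next time point.
    """
    feats = np.stack([v_lead, v_follow, gap], axis=1)  # shape (T, 3)
    X, y = [], []
    for t in range(N, len(feats)):
        X.append(feats[t - N:t])   # N past observations
        y.append([acc_follow[t]])  # acceleration one step ahead
    return np.asarray(X), np.asarray(y)
```
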
Processing of the Following Vehicle Data.
e Next Generation Simulation (NGSIM) program [27] was initiated by the U.S. Federal Highway Administration (FHWA).
Through the established synchronous digital camera network, detailed vehicle trajectory data were acquired at a time interval of 0.1 seconds from the US-101 Freeway and the southbound direction of Lankershim Boulevard in Los Angeles, California; Interstate I-80 in Emeryville, California; and the eastbound direction of Peachtree Street in Atlanta, Georgia.
In this study, the detailed trajectory data of the eastbound vehicles on Interstate I-80 in Emeryville, California, were used. The data were acquired at 10 frames per second by seven cameras mounted on the 30-story Pacific Park Plaza Building located at Christie Avenue. The study road section is 503 m long and has six lanes; Lane 1 is a high occupancy vehicle (HOV) lane and Lane 6 is a collector-distributor lane, as shown in Figure 6. Basic information on the dataset is summarized in Table 1.
In this study, we mainly analysed the CF behaviour of cars and their microscopic characteristics. CF behaviour is closely related to road conditions. To ensure the universality of the CF behaviour, we examined the vehicle trajectory data between 4:00 pm and 4:15 pm, which contain 1,028,575 trajectory records. Each record contains 18 fields, as fully described in Table 2.
Data Preprocessing.
First, the data of following vehicles were found and filtered according to the following rules:

(1) To avoid potential differences in CF behaviour between different types of vehicles, we focused only on the CF behaviour of small cars (i.e., v_Class = 2).
(2) Cars in the HOV lane (i.e., Lane_ID = 1) and the collector-distributor lane (i.e., Lane_ID = 6) were excluded to ensure that the vehicles under study are associated with similar driving behaviour, i.e., to ensure the consistency of driving behaviour.
(3) Only single-lane data of driving vehicles were adopted, to avoid the influence of lane-changing behaviour on CF behaviour.
(4) Only the data for cars with a following time greater than 45 s (i.e., 450 records) were retained, to ensure the integrity of the CF process and an adequate number of samples for model training.

[Figure 5: Schematic structure of the GRU-based CF model.]

Algorithm 1: GRU-based car-following model.
(1) Normalize the data using MinMaxScaler.
(2) Construct the input as X ∈ R^{m×N×3} and the output as y ∈ R^{m×1}.
(3) Divide the constructed data into a training set and a test set.
(4) Build a Sequential() model; add GRU layers and a Dense layer.
(5) Compile the model with loss function = "MSE" and optimizer = "Adam".
(6) while the model has not converged do
(7)     Train the model.
(8) end while

The pseudocode for the data processing is shown in Algorithm 2. Table 3 shows part of the car-following data obtained through data preprocessing, in which the leading vehicle ID is 66 and the following vehicle ID is 74.
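A minimal Python sketch of these filtering rules follows; the field names (Vehicle_ID, v_Class, Lane_ID) follow the NGSIM schema described in Table 2, but the exact record representation here is an assumption:

```python
def filter_following_cars(records, min_records=450):
    """Apply filtering rules (1)-(4) to NGSIM-style trajectory records.

    `records` is a list of dicts with keys Vehicle_ID, v_Class, Lane_ID.
    """
    # Rules (1) and (2): small cars only, outside the HOV and C-D lanes.
    kept = [r for r in records
            if r['v_Class'] == 2 and r['Lane_ID'] not in (1, 6)]
    # Rule (3): keep only vehicles that never change lanes.
    lanes = {}
    for r in kept:
        lanes.setdefault(r['Vehicle_ID'], set()).add(r['Lane_ID'])
    kept = [r for r in kept if len(lanes[r['Vehicle_ID']]) == 1]
    # Rule (4): require at least min_records (45 s at 0.1 s per record).
    counts = {}
    for r in kept:
        counts[r['Vehicle_ID']] = counts.get(r['Vehicle_ID'], 0) + 1
    return [r for r in kept if counts[r['Vehicle_ID']] >= min_records]
```
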
Driver Reaction Time.
The driver reaction time refers to the time between the driver perceiving a change in the surrounding environment and responding [28,29]; it is also an important parameter in the CF model.
According to the restrictions of the CF, as the driving state of the leading car changes, the following car changes accordingly. However, changes in the driving states of the two are asynchronous.
This is because the driver of the following car must go through a reaction process to respond to a change by the leading vehicle. This reaction process includes four parts: perception, judgment, reaction initiation, and reaction execution; the required time is referred to as the reaction time. Assuming that the reaction time is T, when the leading car makes a change at time t, the following car can only make the corresponding change at time (t + T). The driver reaction time has been extensively investigated by many researchers. Kim et al. [30] analysed the braking reaction time of young and old drivers. Jin [31] studied the driver reaction time using least squares analysis via SPSS 13.0 and generated a reaction time distribution map (Figure 7); the calculation showed that the weighted average driver reaction time is 1.077 s.
Lu et al. [32] obtained statistical information on driver reaction time through 63 samples, as shown in Table 4.
Based on the above discussion, we set the reaction time to 1 s in this study. Because the acquisition time interval of the adjacent two records of the CF data is 0.1 s, we excerpted one record for every 1 s (i.e., 10 records); the new dataset was saved for later use.
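The 1 s resampling described above (keeping one record out of every ten 0.1 s records) is a simple stride slice; a sketch:

```python
import numpy as np

# The raw NGSIM records arrive every 0.1 s; with a reaction time of
# T = 1 s, one record out of every 10 is kept.
raw = np.arange(100)       # stand-in for 10 s of 0.1 s records
resampled = raw[::10]      # one record per second
```
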
Evaluation Index.
In this paper, MSE is used as the evaluation index of the CFDT model. MSE is a risk function related to the expected value of the squared error loss, or quadratic loss, and is arguably the most important criterion used to evaluate the performance of a predictor or an estimator. It measures how close a fitted line is to the data points: for every data point, the vertical distance from the point to the corresponding value on the fitted curve (the error) is taken and squared, and the squared errors are averaged. A lower MSE value indicates a lower error [33,34]. The specific calculation formula of MSE is given in equation (21).
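Equation (21) can be written directly in NumPy:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error: the average squared residual."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))
```
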
Training and Test
In this section, we used Keras, a Python-based deep learning library, to construct and train the CFDT model, with TensorFlow as the back-end tool. The hardware environment of our experiment was as follows: processor Intel Xeon 2.10 GHz E5-2683 v4, memory 64 GB 2400 MHz, operating system Windows Server 2012 R2 Standard, IDE PyCharm. According to this model training process, the optimal length of the "memory" time interval and the optimal structure of the model must be determined so that the model delivers its best performance. The results of previous studies have shown that the optimal length of the "memory" time interval is not related to the structure of the model, and that the NGSIM dataset does not require more than three hidden layers for good model performance [21]. On this basis, we performed the following experiment.

[Algorithm 2: Pseudocode for the CF data processing; its closing steps read: (9) end for; (10) if following time > 450 then (11) store the retained data in CSV format; (12) end if; (13) end for.]
First, we fixed the structure of the model. To reduce the training time, a simple structure with a single hidden layer and 20 neurons was adopted to separately perform the experiment with different N values. The results are shown in Figure 8, which indicates that when the length of the "memory" time interval was 10, i.e., when the driver of the following car considers only the historical information within the time period of the last 10T, the model had the best performance.
On this basis, assuming N = 10, we separately trained and tested the models with the nine structures listed in Table 5.
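The structure search over Table 5 can be sketched as a simple loop. The candidate layer sizes below are illustrative placeholders (the text names only the best structure, with 30, 10, and 10 neurons), and `evaluate` is a hypothetical stand-in for training a GRU model with N = 10 and returning its test MSE:

```python
# Each candidate structure is a tuple of hidden-layer sizes
# (one, two, or three hidden layers, as in Table 5).
structures = [
    (10,), (20,), (30,),                        # one hidden layer
    (20, 10), (30, 10), (30, 20),               # two hidden layers
    (20, 10, 10), (30, 10, 10), (30, 20, 10),   # three hidden layers
]

def evaluate(structure):
    # Placeholder objective; a real run would build, train, and score
    # the GRU model and return its test-set MSE.
    return sum(structure) % 7 / 10.0

# Pick the structure with the lowest (placeholder) MSE.
best = min(structures, key=evaluate)
```
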
The results showed that, for models with one hidden layer (Structures 1 through 3), the performance value of the model with Structure 3 is the minimum; for models with two hidden layers (Structures 4 through 6), the performance value of the model with Structure 5 is the minimum; and for models with three hidden layers (Structures 7 through 9), the performance value of the model with Structure 8 is the minimum. In terms of model performance, the performance values of the models are ranked in ascending order: the model with Structure 3 < the model with Structure 5 < the model with Structure 8.
Next, the CF data in which the vehicle ID of the leading car is 66 and that of the following car is 74 were used to train the models with Structure 3, Structure 5, and Structure 8. The simulation results are visualized in Figures 9-11.
In each of the figures, the first subplot shows the actual data, the second subplot shows the simulation data cluster belonging to 100 different training models, and the third subplot shows the simulation data cluster with error bars, i.e., mean ± std.
In summary, for the RNN-based CF model, the model with a "memory" time interval length of N � 10 and three hidden layers that contain 30, 10, and 10 neurons had the highest prediction accuracy and generated satisfactory simulation results for a road section that had continuous acceleration and deceleration behaviours.
Comparison with Other CF Models
To test the simulation accuracy of the RNN-based CF model, two other models, i.e., BPNN and SVR, were selected from among the data-driven models for a comparative experiment. To ensure the fairness of the comparison, the speed of the leading car at time t (v_n(t)), the speed of the following car at time t (v_{n+1}(t)), and the distance between the two cars at time t (Δx(t)) were again used as the inputs, and the acceleration of the following car at time t + T (a_{n+1}(t + T)) was used as the output. MSE continued to be adopted as the criterion for model evaluation.

BPNN-Based CF Model. First, a model as shown in Figure 12 was constructed based on the BPNN. Such a model can have various structures in terms of the number of hidden layers and the number of neurons in each layer. According to Kolmogorov's theorem [35], a back-propagation neural network with three layers is sufficient to complete any mapping from n dimensions to m dimensions. Therefore, we chose a BPNN with two hidden layers as the structure of the BPNN-based CF model. After multiple tests, the optimal structure, which included two hidden layers containing 20 and 10 neurons, was selected.

The Tanh function was used as the activation function of the neurons. The model was constructed and trained using Keras, and methods such as the holdout method were used to randomly create the training and test sets. The simulation results are shown in Figure 13, which indicates that, because this dataset had a significant amount of "noise" (i.e., the following car was constantly in an accelerating or decelerating state), the simulation results of the model have a low accuracy (i.e., there is a certain error when compared to the real data). Nevertheless, the results reflect the variation trend in the acceleration of the following car, and the model has a relatively simple structure and a fast convergence rate. This experiment demonstrates the validity of the BPNN-based CF model.
SVR-Based CF Model.
We constructed an SVR-based CF model, as shown in Figure 14, where the Gaussian kernel function was selected as the kernel function for the model. To simplify the experimental process, we used svm.SVR from the existing Scikit-learn framework for training and testing, with the kernel parameter of svm.SVR set to "rbf". As in the case of the BPNN-based model, the dataset was randomly split. We again randomly selected a set of CF data, associated with leading car 79 and following car 87, in which the following car exhibited frequent acceleration and deceleration behaviour. As shown in Figure 15, the simulation results indicate that the prediction error of the SVR-based model is small.
Comparison between Different CF Models.
To compare the above three CF models, we tested each of them on the same CF dataset (i.e., leading car 1503 and following car 1507). The simulation results are shown in Figure 16.
In this CF dataset, the acceleration and deceleration behaviour of the following car was infrequent, and the following car was driving at a constant speed nearly 40% of the time. As reflected in Figure 16, the polyline of the real data is relatively smooth; therefore, all three models performed well in the simulation. However, the acceleration predicted by the GRU model is closer to the real acceleration than that of the BPNN and SVR models during the significant continuous jumps in the real acceleration between 0 and 5 seconds and between 50 and 60 seconds. Figure 17 shows the MSE values of the three models; a lower MSE indicates a lower error. The MSE of the BPNN model is the highest, while that of the model proposed in this paper is the lowest, only half the MSE of the BPNN model. The comparison of the MSE values in the three simulation experiments therefore shows that the proposed model is superior to the SVR and BPNN models.
To conclude, the above three models can not only reflect the variation trend in the acceleration of the following car but also accurately predict the values. However, for vehicles driving at variable speeds over a long period of time, the proposed model, which includes a memory unit, had better simulation results but, correspondingly, a slower convergence rate than the other two models.
Test Verification on the Apollo Simulation Platform

The proposed method for predicting car-following behaviour is combined with the Apollo platform as follows. The scene information in the autonomous driving process is differentiated into static information and dynamic information, which are imported into Dreamview of the Apollo platform to construct a road scene. Specifically, the three-dimensional information of the traffic scene (the static information) and the motion information of the traffic scene (the dynamic information) are obtained, and the topological structure of the scene is preliminarily constructed, including information such as the number of surrounding vehicles, the lanes occupied by surrounding vehicles, and the distance from the road edge. This information is input into Dreamview via the corresponding interface of Apollo; paths to specific modules are configured based on the information provided by the simulation environment (Table 6), and the respective modules perform environment construction with reference to the traffic flow and simulated environment information resulting from understanding of the scene, as shown in Table 6.
During the construction process, the dynamic and static information during vehicle driving is obtained through understanding of the scene; the desired distance and reaction time are obtained by capturing the driver's behaviour features; and the car-following model is improved using the RNN, computing the safety- and comfort-based optimal solution of the following car's acceleration range. Meanwhile, the model is tested and validated using the Apollo simulation platform to ensure its accuracy and utility. The disclosed method for predicting car-following behaviour under the Apollo platform is tested as follows. After the Apollo software environment is configured, the output interface of the Apollo platform is docked with the method; after the method successfully predicts information such as the following car's acceleration, it is docked with Apollo's decision planning module, Planning; finally, the Apollo software implements testing and validation of the method. Over multiple simulation runs, the parameters are constantly adjusted and the algorithm is optimized via the Apollo visual platform. Specifically: deploy the environment (e.g., the Docker environment) and pull the Apollo container image; enter the Apollo container and compile the simulation environment (e.g., the Dreamview simulation environment); run the simulation environment after successful compilation; and test and validate the efficacy of the model using the corresponding simulation environment. The testing and validating interface is the simulation environment interface shown in Figure 18.
The traffic flow and environment information output by Apollo are docked with the input of the model; the predicted acceleration value obtained from the model is then converted into the Planning input of the simulation platform, with the specific docking path shown in Table 7 and Figure 19. Figure 20 shows part of the process diagram of the car-following behaviour visualization on the Apollo platform using the model proposed in this paper. In the figure, the cube with a green border is the leading car, with its current speed shown directly above it; the blue car is the following car, with its current speed shown at the top right of the figure. In Figure 20(a), the speed of the leading car is 5 m/s (equivalent to 18 km/h) and the speed of the following car is 20 km/h. In Figure 20(b), the leading car's speed is 5 m/s (18 km/h) and the following car's speed is 21 km/h. In Figure 20(c), the leading car's speed is 5 m/s (18 km/h) and the following car's speed is 21 km/h. In Figure 20(d), the leading car's speed is 5 m/s (18 km/h) and the following car's speed is 23 km/h. Comparing Figures 20(a)-20(d), it can be seen that the speed of the following car fluctuates around the speed of the leading car without exceeding it for long. Moreover, a relatively long distance is maintained between the following car and the leading car, which provides sufficient braking distance for the following car and ensures its driving safety.
Through the visual simulation of the proposed model on the Apollo simulation platform, it was found that no collision occurred between the following car and the leading car during a long period of following. Therefore, the visual testing of the model on the Apollo simulation platform shows that the model proposed in this paper is valid and practical.
Conclusions
In this study, we used high-precision vehicle trajectory data from Interstate I-80 in the NGSIM dataset to obtain the CF data through data preprocessing. Based on the characteristics of the CF behaviour, we verified the correctness of the data through experiments and chose a reaction time of T = 1 s to further filter the CF data. We modelled the CF behaviour based on the RNN network, using the speed of the leading car, the speed of the following car, and the distance between the two cars, all at time t, as inputs and the acceleration of the following car at time (t + T) as output. We then performed an in-depth examination of the length of the "memory" time interval and the network structure of the constructed model and obtained the parameters that enabled the model to deliver its best performance. Further, we compared the simulation results of the constructed model with those of the BPNN- and SVR-based models and demonstrated that the constructed model, with a memory unit added, had higher simulation accuracy. Both the BPNN- and SVR-based CF models were unstable; i.e., when the following vehicle exhibited frequent acceleration and deceleration behaviour, their simulation results were poor. In comparison, the RNN-based CF model was able to make more accurate predictions because it considered the relevant information of the last several time intervals. At the same time, the Apollo simulation platform was used to test and verify the model, ensuring its accuracy and practicability. The RNN-based CF model established in this study was only used to study the CF behaviour between small cars. In reality, the CF behaviours between different types of vehicles differ. In a follow-up study, we will construct models that consider various types of vehicles so that the existing model can be further improved according to actual conditions such as asymmetric driving behaviour and multiple leading vehicles.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Influence of the Periodicity of Sinusoidal Boundary Condition on the Unsteady Mixed Convection within a Square Enclosure Using an Ag–Water Nanofluid
A numerical study of the unsteady mixed convection heat transfer characteristics of an Ag–water nanofluid confined within a square lid-driven cavity has been carried out. The Galerkin weighted residual finite element method has been employed to investigate the effects of the periodicity of a sinusoidal boundary condition for a wide range of Grashof numbers (Gr) (10^5 to 10^7), with parametric variation of the sinusoidal even and odd frequency, N, from 1 to 6 at different instants (t = 0.1 and 1). It has been observed that both the Grashof number and the sinusoidal even and odd frequency have a significant influence on the streamlines and isotherms inside the cavity. The heat transfer rate from the heated surface is enhanced by 90% as the Grashof number (Gr) increases from 10^5 to 10^7 at sinusoidal frequency N = 1 and t = 1.
Introduction
Mixed convection in an enclosure has attracted significant attention from thermal researchers and scientists due to its great importance in numerous thermal engineering applications. Enclosures with simple triangular or rectangular geometries have been considered to a prodigious extent, as such boundaries are easier to model and the thermal and hydrodynamic flow patterns and rotations are less complex than in an enclosure having a complex profile, such as a curly surface. Irregular types of cavities are used in the design of many types of heat exchangers, such as the cooling systems of micro-electronic devices, solar collectors, and food dryers [1,2].
A numerical investigation of mixed convection is extremely difficult, as the thermal energy and momentum equations are coupled together owing to the buoyancy force that creates the mixed convective flow within the domain. This scenario also arises when nanofluids are used in the cavity. In a cavity, there are two regions, namely the interior region and the boundary. For analogous boundary conditions, these two regions have different flow and thermal characteristics. In the case of nanofluids, this circumstance becomes more troublesome if there are distinctive boundary conditions. Natural convection heat transfer phenomena are of prodigious importance as they have a very extensive range of applications, including in solar collectors, electronic cooling, and the desalination process [3-6].
However, the conventional fluids used for natural convection, such as oil, water, ethylene glycol, and air, have very poor thermal conductivity that cannot fulfill the demand for a high thermal conductivity of the fluid. To enhance the thermal conductivity, and consequently the heat transfer rate, an innovative procedure has been developed recently by mixing nano-sized particles, for example, carbon materials, metal, and metal oxide, into the base fluid [7-9]. Such a fluid is known as a nanofluid, which has outstanding thermal conductivity and can therefore address the difficulty of achieving higher heat transfer efficiency [10-17] in contemporary engineering applications.
Numerous numerical studies [18][19][20][21][22] have examined the effect of a nanofluid under the assumption that the nanofluid behaves as a single phase. Researchers have reported that the heat transfer rate is enhanced as the nanoparticle loading increases. Kim et al. [23] observed that the thermal conductivity and the shape factor are reduced with an increase in the density and heat capacity of the nanoparticles. In addition, an experimental study [9] confirmed that a copper-water nanofluid improves the heat transfer rate. Hwang et al. [24] showed theoretically that the ratio of the heat transfer coefficient of a nanofluid to that of the base fluid decreases as the size of the nanoparticles increases. Santra et al. [25] observed that the heat transfer rate decreased with an increase in nanoparticle loading. Ghanbarpour et al. [26] experimentally examined the performance of heat pipes using nanofluids and found that their thermal performance was enhanced compared with the base fluid.
The shape of the enclosure plays an important role in convection, and the appropriate shape depends on the application. Different types of enclosures [27][28][29][30][31][32][33][34][35][36][37][38] filled with nanofluids have been examined in earlier studies. Yu et al. [39] investigated the impact of nanofluids in a base-heated isosceles triangular enclosure under transient buoyancy-driven conditions. Laminar mixed convection in an inclined triangular cavity [40,41] using a Cu-water nanofluid has also been studied; the authors reported that the inclination angle plays an important role in the heat exchange of the nanofluid. Billah et al. [42] examined unsteady buoyancy-driven heat transfer enhancement of nanofluids in a triangular cavity. Rahman et al. [43] demonstrated the impact of a corrugated base surface of a triangular enclosure. Sheremet et al. [44] carried out a numerical study of the unsteady free convection heat transfer characteristics of a nanofluid confined in a permeable open wavy cavity; they found that the average Nusselt and Sherwood numbers diminish with an increase in the undulation number and can be enhanced by suitable tuning of the wavy-surface geometry parameters. Rahman et al. [45] conducted a computational study of carbon nanotube (CNT)-water nanofluids in an enclosure with non-isothermal heating at higher Rayleigh numbers; they demonstrated that there is an optimal value of the nanofluid volume fraction for controlling the heat exchange, temperature distribution, and flow field. Wu et al. [46] studied pulsed flow and heat transfer in a Y-type junction channel with two inlets and one outlet using a water-Al2O3 nanofluid; they demonstrated that the use of pulsed flow enhances the Nusselt number, particularly for large Reynolds numbers and high pulse frequencies. Rashidi et al. [47] developed a two-way coupling of the discrete phase model to track the discrete nature of aluminum oxide particles in an obstructed duct with two side-by-side obstacles. The influence of an induced magnetic field on the free convection of an Al2O3-water nanofluid over a permeable plate using the Koo-Kleinstreuer-Li (KKL) model was examined by Sheikholeslami et al. [48].
The above literature reveals that few investigations have been conducted for a square enclosure with a sinusoidally heated base surface. The primary motivation of the present study is therefore to examine the heat transfer characteristics inside a square enclosure filled with an Ag-water nanofluid whose base surface is heated sinusoidally.
Physical Model
We consider a two-dimensional lid-driven square enclosure of length L filled with an Ag-water nanofluid, as depicted in Figure 1. The top horizontal wall moves with a constant speed U_c. The base surface is maintained at the non-uniform temperature T = T_c + (T_h − T_c) sin(Nπx/L), while the vertical walls are insulated. Gravity acts in the negative y-direction.
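The non-uniform heating of the base surface can be sketched directly from the expression above. A minimal Python helper (the function and argument names are ours, not the paper's):

```python
import math

def bottom_wall_temperature(x, L, N, T_c, T_h):
    """Sinusoidal bottom-wall temperature T = T_c + (T_h - T_c) * sin(N*pi*x/L).

    N is the (even or odd) frequency of the sinusoidal heating; x runs from 0 to L.
    """
    return T_c + (T_h - T_c) * math.sin(N * math.pi * x / L)

# With N = 1 the wall is at T_c at both ends and at T_h at mid-span:
print(bottom_wall_temperature(0.0, 1.0, 1, 300.0, 310.0))  # 300.0
print(bottom_wall_temperature(0.5, 1.0, 1, 300.0, 310.0))  # 310.0
```

Larger N packs more hot and cold lobes along the bottom wall, which is what drives the multi-cell flow structures discussed later.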
Thermophysical Property of the Nanofluid
For this numerical investigation, Ag is taken as the nanoparticle and water as the base fluid. The nanoparticles are assumed to have a uniform shape and size. Moreover, the fluid phase and the nanoparticles are considered to be in thermal equilibrium and to flow at the same velocity. The thermophysical properties of the nanofluid are assumed constant, except for the density variation in the buoyancy force, which follows the Boussinesq approximation. The data used in the numerical investigation are taken from Ahmed et al. [49] and presented in Table 1.
Table 1. Thermophysical properties of the water and the nanoparticle.
Mathematical Modeling
The governing equations describing the system are the conservation of mass, momentum, and energy equations. The Ag-water nanofluid filling the enclosure is modeled as a Newtonian fluid. The flow is assumed to be unsteady, laminar, and incompressible. Thermal equilibrium between the base fluid and the nanoparticles is considered, and no-slip conditions between the two media are assumed. In light of these assumptions, the continuity, momentum, and energy equations [27] in their two-dimensional form can be written as Equations (1)-(4), in which the effective density ρ_nf of the nanofluid [41] is characterized in terms of the solid volume fraction δ of the nanoparticles.
In addition, the thermal diffusivity α_nf of the nanofluid [49], the heat capacitance of the nanofluid [42], and the thermal expansion coefficient (ρβ)_nf of the nanofluid are defined by the usual mixture relations; the dynamic viscosity μ_nf of the nanofluid is that introduced by Brinkman [50], and the effective thermal conductivity of the nanofluid is given in [18,51]. The appropriate initial and boundary conditions in dimensional form are specified over the entire domain. Equations (1)-(4) are non-dimensionalized using dimensionless variables; by employing Equation (13), the resulting dimensionless equations are obtained, together with the definitions of the Prandtl number, the Grashof number, and the overall Nusselt number at the heated surface of the enclosure. The fluid motion is displayed by means of the stream function ψ obtained from the velocity components U and V; the relationship between the stream function and the velocity components [52] for two-dimensional flows is the standard one.
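The mixture relations named above are standard in the single-phase nanofluid literature. The sketch below implements them in their common forms (volume-weighted density, heat capacitance and thermal expansion, Brinkman viscosity, and a Maxwell-type effective conductivity); the Ag and water property values are illustrative assumptions, not a reproduction of Table 1.

```python
def nanofluid_properties(delta, fluid, solid):
    """Effective properties of a nanofluid with solid volume fraction delta,
    under the usual single-phase mixture relations."""
    rho = (1 - delta) * fluid["rho"] + delta * solid["rho"]  # effective density
    rho_cp = ((1 - delta) * fluid["rho"] * fluid["cp"]
              + delta * solid["rho"] * solid["cp"])          # heat capacitance
    rho_beta = ((1 - delta) * fluid["rho"] * fluid["beta"]
                + delta * solid["rho"] * solid["beta"])      # thermal expansion
    mu = fluid["mu"] / (1 - delta) ** 2.5                    # Brinkman viscosity
    kf, ks = fluid["k"], solid["k"]
    # Maxwell-type effective thermal conductivity
    k = kf * (ks + 2 * kf - 2 * delta * (kf - ks)) / (ks + 2 * kf + delta * (kf - ks))
    alpha = k / rho_cp                                       # thermal diffusivity
    return {"rho": rho, "rho_cp": rho_cp, "rho_beta": rho_beta,
            "mu": mu, "k": k, "alpha": alpha}

# Illustrative property values (assumed, not Table 1):
water = {"rho": 997.1, "cp": 4179.0, "k": 0.613, "beta": 2.1e-4, "mu": 8.9e-4}
silver = {"rho": 10500.0, "cp": 235.0, "k": 429.0, "beta": 1.89e-5, "mu": 0.0}

props = nanofluid_properties(0.04, water, silver)
print(round(props["k"], 3), round(props["mu"] / water["mu"], 3))
```

At δ = 0 the relations recover the base fluid exactly, which is a quick sanity check on any implementation.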
Numerical Scheme
The initial and boundary conditions for the governing Equations (15)-(17), in the dimensionless frame, are as follows. For τ = 0, the initial condition holds over the entire domain, and for τ > 0 the boundary conditions apply. The equations are solved by the finite element method, with the Galerkin weighted residual technique used to discretize them; details of this discretization can be found in [42]. A non-uniform triangular mesh is laid over the domain, and within each element the dependent variables are approximated using interpolation functions, reducing the governing equations over the discrete triangular regions to a system of algebraic equations. This system is solved using Newton's iteration technique. The convergence criterion for successive solutions is that the relative error for each variable between consecutive iterations, |(Γ^(m+1) − Γ^m)/Γ^(m+1)|, falls below the prescribed value ε, where m is the Newton iteration index and Γ is the general dependent variable. The value of ε is set to 10^−5.
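The stopping test described above can be expressed compactly. A sketch, assuming each dependent variable Γ is stored as a flat sequence of nodal values and that the newer iterate (assumed nonzero) is used in the denominator:

```python
def converged(gamma_new, gamma_old, eps=1e-5):
    """Newton-iteration stopping test: the largest relative change
    |(G_new - G_old) / G_new| over all unknowns must fall below eps."""
    return max(abs((gn - go) / gn)
               for gn, go in zip(gamma_new, gamma_old)) <= eps

print(converged([1.0, 2.0], [1.0 + 1e-7, 2.0 - 1e-7]))  # True
print(converged([1.0, 2.0], [1.1, 2.0]))                # False
```

In practice the same check is applied to every dependent variable (velocities, temperature) and iteration continues until all of them pass.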
Grid Independency Test
A grid independence assessment was performed for this model, and the outcome is shown in Figure 2. The assessment was carried out for δ = 0.4, N = 1, and Grashof number Gr = 10^7 for the Ag-water nanofluid, using four grids with element numbers 2496, 3588, 4844, and 6186. The overall Nusselt number was found to be greatest for the grid with 2496 elements, and the investigation was therefore performed with this grid.
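A grid independence check of this kind reduces to comparing the overall Nusselt number between successive mesh refinements. A sketch with hypothetical Nu values (the paper reports only the element counts, so the numbers below are placeholders):

```python
def successive_changes(results):
    """Relative change of Nu_av between successive grids.

    results: list of (n_elements, Nu_av) pairs ordered by mesh size.
    """
    return [(n2, abs(nu2 - nu1) / abs(nu1))
            for (n1, nu1), (n2, nu2) in zip(results, results[1:])]

# Hypothetical Nusselt values for the four grids used in the paper:
grids = [(2496, 5.62), (3588, 5.60), (4844, 5.59), (6186, 5.59)]
for n, change in successive_changes(grids):
    print(n, f"{change:.4%}")
```

When the relative change between refinements drops below a chosen tolerance, the coarser of the two grids is usually considered sufficient.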
Validation of the Code and Numerical Scheme
The present numerical investigation has been validated against published work. Code validation was carried out to verify the accuracy of the numerical solution and the solution methodology. The present investigation was compared with Khanafer et al. [18] for δ = 0.04 and 10^3 ≤ Gr ≤ 10^5 on the basis of the overall Nusselt number, and the deviation is reported in Table 2. The table shows that the present code and numerical scheme are reliable, as they demonstrate good agreement with the earlier published work; the present results differ by no more than 3% from the previous ones.
Results and Discussion
In this article, a time-dependent solution of the governing differential equations is obtained using finite element analysis. The results are analyzed over the range of Grashof numbers Gr from 10^5 to 10^7, with a parametric variation of the sinusoidal even and odd frequency N from 1 to 6, at different time instants (τ = 0.1 and 1). For the other governing parameters, the Prandtl number Pr is fixed at 6.2 and the solid volume fraction δ is set to 0.04. The resulting flow and thermal structures are analyzed to provide insight into the mechanism behind the influence of the Grashof number, corrugation frequency, and time on the flow and thermal fields and on the enhancement of heat transfer. The principal non-dimensional parameters of interest in the present investigation are the local and average Nusselt numbers, which are examined to derive the main outcomes of the study.
Effect of Even and Odd Frequencies on Streamlines Varying the Dimensionless Time
In Figure 3, the effect of the odd frequencies N (= 1, 3, 5) on the streamlines is presented at Gr = 10^5 for the selected values of τ. For τ = 0.1, the effect of the odd values of N is depicted in the left column. At N = 1 and τ = 0.1, two primary counter-rotating vortices are formed near the bottom surface with a very low value of the stream function (ψ_max = 0.3 and ψ_min = −0.3), indicating poor convective heat transfer. A further pair of counter-rotating vortices with the same flow strength is created near the vertical walls. The cells are arranged symmetrically, indicating the very regular nature of the convection: the fluid takes up heat energy from the base-heated wall, becomes lighter, and drives a convective current; the lighter fluid rises, and because of the symmetric placement of the colder walls, the colder portion of fluid follows the path set by those walls, so that symmetric cells are obtained. At N = 3 and τ = 0.1, however, the primary counter-rotating cells dominate and cover the whole domain, which is why the two counter-rotating vortices near the vertical walls disappear. For N = 5, four cells are formed instead of two, with the dominating interior cells having a reverse sense of rotation, and a pair of minor vortices appears at the bottom wall. For τ = 1, a similar pattern for the odd values of N is shown in the right column.

The influence of the even frequencies N (= 2, 4, 6) on the streamlines at Gr = 10^5 for the selected values of τ is depicted in Figure 4. For τ = 0.1, the effect of the even values of N is shown in the left column. At N = 2 and τ = 0.1, a major rotating eddy is formed near the bottom surface, covering nearly the whole domain; two much smaller vortices appear at the corners of the bottom wall, and an elliptic vortex is seen near the right vertical surface. At N = 4, there are four cells with alternating senses of rotation. In the case of N = 6, however, this symmetry is entirely broken and a very strong convective current pattern is produced: five cells are formed, four of them near the sinusoidally heated bottom surface with low to moderate strength. In contrast, for τ = 1 and N = 2, there are two cells instead of four, with the dominant cell at the center of the square cavity; it has a stream function value of −0.34 and rotates clockwise.

The effects of the odd frequencies N (= 1, 3, 5) on the streamlines at Gr = 10^6 for the chosen values of τ are shown in Figure 5. For N = 1 and τ = 0.1, two main counter-rotating vortices are produced near the bottom face with a very low value of the stream function (ψ_max = 0.4 and ψ_min = −0.5), indicating meager convective heat transfer; two further counter-rotating vortices of the same flow strength form near the vertical walls. At N = 3 and τ = 0.1, the primary counter-rotating cells dominate and enclose the whole domain, so the two counter-rotating vortices near the vertical walls are displaced; in these cells, the highest value of the stream function is ψ_max = 0.11 and the lowest is ψ_min = −0.10. At N = 5, four cells form instead of two, with the dominating interior cells having a reverse sense of rotation. Since the bottom wall feeds heat energy into the fluid, the convective currents lie closer to the base-heated surface; overall, the flow at N = 5 indicates very good convective characteristics. For τ = 1, an analogous pattern for the odd values of N is shown in the right column. Compared with τ = 0.1, both counter-rotating vortices near the bottom surface are larger at τ = 1; in these cells, the maximum value of the stream function is ψ_max = 0.16 and the minimum is ψ_min = −0.16. Comparing the primary counter-rotating cells at N = 3 for τ = 1 with those at τ = 0.1, the flow strength is almost doubled, with ψ_max = 0.22 and ψ_min = −0.22.

The influence of the even frequencies N (= 2, 4, 6) on the streamlines at Gr = 10^6 for selected values of τ is displayed in Figure 6. For N = 2 and τ = 0.1, a dominant rotating vortex is created near the bottom surface, covering almost the whole domain, with a stream function value of 0.20 and a negative sense of rotation; two much smaller vortices appear at the left corner of the bottom wall and at the top of the right vertical wall. At N = 4 and τ = 0.1, there are four cells with alternating senses of rotation. In the case of N = 6, a powerful convective current pattern is produced: six vortices are created, five of them in the vicinity of the sinusoidally heated base surface with low to moderate strength. The convective currents clearly cluster near the base-heated wall for an obvious reason: heat flows from this wall into the fluid. On the whole, the flow at N = 6 and τ = 0.1 shows good convective characteristics. For τ = 1 and N = 2, however, there are two cells rather than three, with the dominant cell at the center of the square enclosure having a stream function value of −0.91 and rotating clockwise; for N = 4 and τ = 1, there are four cells with alternating senses of rotation. In the case of N = 6 and τ = 1, an extremely strong convective current pattern is generated: six cells are formed, three of them in the vicinity of the sinusoidally heated base surface with low to moderate strength.

The effect of the odd frequencies N (= 1, 3, 5) on the streamlines at Gr = 10^7 for the different values of τ is shown in Figure 7. For N = 1 and τ = 0.1, a much stronger flow pattern is obtained, with four cells formed symmetrically; in general, a decisive change in convection is seen simply by increasing N. For N = 3 and τ = 0.1, the strongest flow pattern is found, with two major rotating cells formed symmetrically, having ψ_max = 0.87 and ψ_min = −0.93. There is also an interesting progression in the streamline values at τ = 0.1 as N varies: four cells form near the base-heated surface of the cavity with very strong flow (ψ_max = 0.34 and ψ_min = −0.38). The convective currents lie closer to the base-heated surface for the clear reason that heat energy enters the fluid from this wall. For τ = 1 and N = 1, the symmetry is completely broken and a very strong convective current pattern is created: five cells are formed, four at the corners with low to moderate strength, while the dominant cell at the center has a stream function value of 7.50 and a positive sense of rotation. Overall, the convective current pattern suggests that the convective heat transfer is stronger for N = 5 at τ = 1 than for N = 1 at τ = 1.
Figure 8 gives the effect of the even frequencies N (= 2, 4, 6) on the streamlines at Gr = 10^7 for selected values of τ. For N = 2 and τ = 0.1, a dominating rotating vortex is created near the bottom surface, covering approximately the major part of the domain, with a stream function value of −1.96; three much smaller vortices appear at three corners of the cavity. For N = 4 and τ = 0.1, four cells are formed, three of them in the vicinity of the sinusoidally heated bottom surface with low to moderate strength. In the case of N = 6, a very strong convective current pattern is created: six cells are formed, five of them in the vicinity of the sinusoidally heated base surface with low to moderate strength. An interesting flow pattern can be noticed in the right column of Figure 8; the overall convective current pattern suggests that the convective heat transfer is strongest for N = 6 and τ = 1.
Effect of Even and Odd Frequencies on Isotherms Varying Dimensionless Time
In Figure 9, the consequence of varying the odd frequency N (= 1, 3, 5) on the isotherms is shown for different values of τ (= 0.1 and 1) at Gr = 10^5. For N = 1, the isotherms near the top moving horizontal wall are parallel to each other; this parallel character indicates the dominance of conductive heat transfer. Near the bottom sinusoidally heated face, however, the isotherms are semi-elliptic and densely distributed for both τ = 0.1 and 1. For N = 3, the isotherms are compactly packed along the bottom surface, and for N = 5 they are very densely packed at the sinusoidally heated bottom wall, confirming that convection is very strong in those regions. As a consequence, the temperature gradient is lower near the top wall for both τ = 0.1 and 1.

The result of varying the even frequency N (= 2, 4, 6) on the isotherms for different values of τ (= 0.1 and 1) is depicted in Figure 10 for Gr = 10^5. For N = 2, the isotherms take a coconut-tree-leaf shape, showing the presence of convective heat transfer for both values of τ (= 0.1 and 1). The isotherm pattern is clearly symmetrical about the mid-vertical plane for the lower value of τ (= 0.1), which supports the explanation given in the discussion of the streamline distributions. For N = 4, fingerprint-shaped isotherms are densely stacked along the bottom surface, and for N = 6 they are very densely stacked near the sinusoidally heated bottom wall, indicating that convection is very strong in the bottom heated region. As an outcome, the temperature gradient is lower near the middle of the top part of the enclosure for both τ = 0.1 and 1.

The outcome of varying the odd frequency N (= 1, 3, 5) on the isotherms for different values of τ (= 0.1 and 1) is shown in Figure 11 for Gr = 10^6. For N = 1, the lower-value isothermal lines appear near the top moving horizontal wall and are parallel to each other; this parallel character again indicates the dominance of conductive heat transfer. Near the bottom sinusoidally heated face, however, the isotherms are semi-elliptic and densely distributed for both τ = 0.1 and 1. For N = 3, the isotherms are tightly crowded in the vicinity of the bottom surface, and for N = 5 they are very tightly packed at the sinusoidally heated bottom wall; at the upper part of the enclosure, the isotherms are distorted. As a result, the temperature gradient is lower near the top wall for both τ = 0.1 and 1, which confirms the remarks made in the discussion of the streamline distributions.
The effects of the even frequency N (= 2, 4, 6) on the isotherms for different values of τ (= 0.1 and 1) are shown in Figure 12 for Gr = 10^6. The isothermal pattern is similar to that described in the earlier even-frequency passages with respect to the time variable. No noticeable variation is found in the distribution of the isotherms for τ = 0.1 at any of the even frequencies N, although for τ = 1 and smaller values of N a strong presence of conduction is found inside the cavity. As Gr is increased, both modes of heat transfer appear to become more effective.
Figure 13 displays the influence of the odd frequency N (= 1, 3, 5) on the isotherms for different values of τ (= 0.1 and 1) at Gr = 10^7. The behavior continues that described in the earlier odd-frequency passages with respect to the time variable. Although for τ = 1 and smaller values of N there is a strong presence of conduction inside the cavity, there is no recognizable variation in the isotherm distribution for τ = 0.1 at any of the odd frequencies N. Moreover, as Gr is increased, both modes of heat transfer appear to be effective.
The effects of the even frequency N (= 2, 4, 6) on the isotherms for different values of τ (= 0.1 and 1) are displayed in Figure 14 for Gr = 10^7. At N = 2, the isotherm patterns resemble a coconut-tree-leaf shape, showing the presence of convective heat transfer for τ = 0.1. For N = 4, fingerprint-shaped isotherms are tightly crowded along the bottom heated surface, and for N = 6 they are very densely packed at the sinusoidally heated bottom wall. Convection is found to be extremely strong in the bottom heated region.
Effect of Different Frequencies on Local Nusselt Number
Figure 15 illustrates the variation of the local Nusselt number for odd and even frequencies N. In each case, the magnitude of the local Nusselt number is greatest for Gr = 10^7. These results show that convective heat transfer is stronger at a higher value of Gr and a higher frequency.
Effect of Different Frequencies on Overall Nusselt Number for Different Grs
In Figure 16, the variation of the overall Nusselt number with time is presented for the odd and even frequencies N at different Gr. Figure 16a shows the effects of the odd frequency N for three Grashof numbers (Gr = 10^5, 10^6, and 10^7). In all cases, the magnitude of the overall Nusselt number is greatest for N = 1, and its value rises for all Gr as the odd frequency N decreases. These results indicate that convective heat transfer is stronger at a larger value of Gr and a lower value of the odd frequency, which strongly supports the remarks made in the discussion of the streamline and isotherm distributions. Figure 16b depicts the effects of the even frequency N for the same three Grashof numbers. Here the overall Nusselt number is greatest for N = 2 in all cases, and its value grows for all Gr as the even frequency N decreases. In general, the maximum average heat transfer rate occurs for a larger value of Gr combined with a smaller value of N.
Correlation of Overall Nusselt Number for Different Frequencies and Different Grs
In the current study, the correlation of the overall Nusselt number Nu_av with N and Gr has been found, for even frequency N, as: Nu_av = 0.0004 Gr − 0.0509 N + 0.2082.
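The fitted expression can be evaluated directly. A sketch (the paper states this correlation for even N only, so no odd-N counterpart is reproduced here):

```python
def nu_av_even(Gr, N):
    """Paper's fitted overall Nusselt number for even frequency N:
    Nu_av = 0.0004*Gr - 0.0509*N + 0.2082."""
    return 0.0004 * Gr - 0.0509 * N + 0.2082

print(round(nu_av_even(1e5, 2), 4))  # 40.1064
```

Consistent with Figure 16b, the correlation grows with Gr and falls as the even frequency N increases.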
Conclusions
In the present study, the flow of an Ag-water nanofluid confined within a lid-driven square cavity with a sinusoidally heated bottom surface was solved numerically. From the investigation, the following points may be drawn:
•
When the solid volume fraction is kept at 0.04, the convective heat transfer performance is improved.
•
Increasing the Grashof number Gr effectively enhances the convective heat transfer.
•
At larger values of Gr, convection is very strong for lower frequencies, for both even and odd values of N.
•
For smaller values of Gr, conduction is the dominant mode of heat transfer.
•
A higher value of Gr and lower values of N support enhanced heat transfer through convection and conduction.
•
The overall Nusselt number at the heated surface rises with increasing Gr.
The heat transfer rate from the heated wall improved by up to 90% as the Grashof number Gr increased from 10^5 to 10^7 for a low odd frequency at τ = 1.
Future studies may include experimental investigations whose results can be compared with the simulations of this study; such a comparison would provide further validation of the numerical model developed here.
Figure 1 .
Figure 1. Schematic view of the cavity with the boundary conditions.
Figure 3 .
Figure 3. Influence of the odd values of N on streamlines for the selected values of τ with Gr = 10^5.
Figure 4 .
Figure 4. Influence of the even values of N on streamlines for the selected values of τ with Gr = 10^5.
Figure 5 .
Figure 5. Influence of the odd values of N on streamlines for the selected values of τ with Gr = 10^6.
Figure 6 .
Figure 6. Influence of the even values of N on streamlines for the selected values of τ with Gr = 10^6.
Figure 7 .
Figure 7. Influence of the odd values of N on streamlines for the selected values of τ with Gr = 10^7.
Figure 8 .
Figure 8. Influence of the even values of N on streamlines for the selected values of τ with Gr = 10^7.
Figure 9 .
Figure 9. Influence of the odd values of N on isotherms for the selected values of τ with Gr = 10^5.
Figure 10 .
Figure 10. Influence of the even values of N on isotherms for the selected values of τ with Gr = 10^5.
Figure 11 .
Figure 11. Influence of the odd values of N on isotherms for the selected values of τ with Gr = 10^6.
Figure 12 .
Figure 12. Influence of the even values of N on isotherms for the selected values of τ with Gr = 10^6.
Figure 13 .
Figure 13. Influence of the odd values of N on isotherms for the selected values of τ with Gr = 10^7.
Figure 14 .
Figure 14. Influence of the even values of N on isotherms for the selected values of τ with Gr = 10^7.
Figure 15 .
Figure 15. Variation of the local Nusselt number for (a) even values of N and (b) odd values of N, when Gr = 10^7 and τ = 1.
Figure 16 .
Figure 16. Variation of the overall Nusselt number for (a) odd values of N and (b) even values of N.
SERVICES SECTOR IN LITHUANIA: LABOUR PRODUCTIVITY AS A FACTOR OF GROWTH
This paper examines the tendencies of the Lithuanian service sector's value added and labour productivity during 1995-2006. A comparative analysis of the average annual labour productivity growth in manufacturing and service industries reveals arguments supporting W. Baumol's contention that there can be sporadic productivity increases in nonprogressive sectors. During 1995-2000, labour productivity growth in services exceeded productivity growth in manufacturing. The paper offers an interpretation of the Verdoorn law for empirical regularities of the relationship between the cross-sectoral labour productivity growth rate and the value added growth rate.
Introduction
Economic growth and de-industrialisation are important topics discussed in the economic literature. Economic statistics provide empirical evidence of the changing structure of economies in many countries, characterised by a gradual decline in the share of agriculture and manufacturing and a rise in the share of services. The service sector produces the major part of gross output in modern economies and makes a substantial contribution to employment. According to A. H. G. M. Spithoven, "services are crucial for the functioning of a society and an economy. Nonetheless, they have not been given the attention they deserve and remain poorly understood by the economics profession. In many studies, services are taken to be technologically sluggish or stagnant, and this then, is regarded as an explanation for their rising share in overall employment" (Spithoven, 2000).
De-industrialization tendencies suggest that the growth of the service sector tends to be associated with negative effects on economic growth. Economic growth theory recognizes increasing returns as a factor generating economic growth, and labour productivity signifies the potential of growth. Labour-saving innovations associated with technological change vary among industries; manufacturing industries, however, move ahead more rapidly than services and agriculture. Capital intensity, research intensity, and workforce skills are variables related to productivity growth and therefore factors of the manufacturing sector's output growth. W. Baumol's theory stresses that productivity improvements in services appear to be occasional and that the labour-intensive nature of most services has become a constraint on productivity growth (Baumol, Towse, 1997). Therefore, an increase in the share of services implies a reduction in the rate of productivity growth of the national economy. Baumol's model of unbalanced growth points out the existence of progressive and nonprogressive sectors of the economy. Imbalances in productivity growth lead to expenditure shifts into sectors of lower productivity. Sectors of lower labour productivity will accept the level of wages set in the labour market: a higher overall wage level based on higher overall labour productivity signifies the potential for a relatively rapid increase in wages compared with the increase in productivity. Due to the slow rate of productivity growth in labour-intensive service industries compared with manufacturing, service-oriented economies tend to sag. However, many modern economies have undergone de-industrialization, have become service-oriented, and experience even a higher rate of productivity growth.
The evidence provided by productivity measurement research based on US statistical data suggests that labour productivity growth in the service industries after 1995 accelerated to the same level as the economy-wide rate (Triplett, Bosworth, 2003). The average labour productivity growth in the service-producing industries during 1995-2001 was even higher than in manufacturing. It follows that service industries experienced changes in the way of production. The level of productivity in an industry is determined by a number of factors; however, explanations of long-term productivity changes stress technological change, i.e. change in the quality and quantity of capital. The growth of output has been induced by a more intensive use of the capital factor in the production of services. In some service industries, labour productivity growth has been related to investments in technological innovations, and the increase in the share of such industries contributes to labour productivity acceleration in the overall economy.
The growth of the service sector suggests the idea of large-scale production. Large-scale production is supposed to be more efficient due to returns to scale. Economic growth theory recognizes increasing returns to scale as a factor generating economic growth through growth in productivity. P. J. Verdoorn's and N. Kaldor's empirical analyses of growth demonstrated the tendency of increasing returns to scale in the industrial sectors of the economy (Verdoorn, 1993). Named after its author, Verdoorn's law acknowledges the relationship between the rate of growth of output and the growth of productivity due to increasing returns. According to N. Kaldor, economies of scale are generated through technical change and the improvement of skills. Technological innovations cause an increase in labour productivity. P. J. Verdoorn and N. Kaldor stress the existence of increasing returns in the manufacturing industries. The particular attention to the manufacturing sector as the engine of economic growth is based on the positive causal relationship between output growth and productivity growth in manufacturing, established by empirical tests. Such empirical evidence could be weakened as economies undergo structural changes. P. Rayment suggests that structural changes in the manufacturing sectors are induced by labour migration from industries with relatively low skill intensities to industries with relatively high skill intensities (Rayment, 1981). Nelson and Winter identified capital intensity, research intensity and skills of the workforce as factors of labour productivity growth (Nelson, Winter, 1982). With the processes of service industries changing so rapidly, and capital intensity and technical innovations becoming features of service processes, the Kaldor-Verdoorn law should be reassessed in the context of service and manufacturing industries.
During the last decade Lithuania experienced rapid growth and restructuring. The shares of the service, construction and manufacturing sectors increased, while that of agriculture substantially decreased. A more detailed analysis of the largest sectors of the national economy provides empirical evidence of the changing nature of services, including trends in employment and productivity. The rapid growth of output and the limited supply of labour as a constraint could be recognized as specific characteristics of the national economy. Therefore, questions of the character of labour productivity growth in different sectors of the economy, and of the relationship between the rate of economic growth and labour productivity growth, are supposed to provide evidence in the discussion about the potential of growth in modern economies.
The aim of this paper is to assess Baumol's model of unbalanced growth in a rapidly growing economy by analysing the labour productivity growth pattern in manufacturing and service industries and to examine the relationship between the rate of growth in value added and the growth in labour productivity according to the Kaldor-Verdoorn law.
Examination of the relationship between the growth in value added and the growth in productivity provides arguments for the hypothesis that the Verdoorn-Kaldor law, estimated using cross-industry data, could be used to explain the disparities in growth rates among various sectors of the economy.
Method
The study method is based on a comparative analysis of national statistics data on labour productivity. The relationship between the rate of value added growth and the growth of labour productivity was estimated by a linear regression analysis. The significance of results was analysed by standard R-squared and F tests.
Labour productivity was calculated as value added per person engaged in production. Productivity was computed for the economic activities of the standard NACE statistical classification: industry; trade, hotels and restaurants, transport, storage; financial intermediation, real estate, renting; public administration, services for the social sphere. Productivity growth was estimated for the goods-producing industry and various service-producing industries as well as the aggregated service-producing industry. We compare productivity change during the period 1995-2006 and the average annual labour productivity growth rate during the periods 1995-2000 and 2001-2006. The relationship between the growth of labour productivity and the growth of output was formulated as the Kaldor-Verdoorn law and took the form

p = a1 + b1*q,

where p is the growth of labour productivity, q the growth of output, and a1, b1 are the regression coefficients. The slope coefficient b1 is commonly referred to as the Verdoorn coefficient.
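The two measures just described, value added per person engaged and the average annual growth rate, can be sketched in code. This is a minimal sketch: the function names are our own, the sample numbers are invented, and the compound (geometric) basis for the annual growth rate is an assumed convention, not necessarily the one used for the national statistics.

```python
# Sketch of the productivity measures described above.
# All names and numbers are illustrative assumptions, not the paper's data.

def labour_productivity(value_added, persons_engaged):
    """Labour productivity: value added per person engaged in production."""
    return value_added / persons_engaged

def avg_annual_growth_pct(start_value, end_value, years):
    """Average annual growth rate in %, on a compound (geometric) basis."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# e.g., a sector whose value added rises from 100 to 210 over 1995-2006 (11 years)
p0 = labour_productivity(200.0, 4.0)          # 50.0 per person
g = avg_annual_growth_pct(100.0, 210.0, 11)   # roughly 7% per year
```

The same growth-rate helper applies unchanged to value added, employment or productivity series, which keeps the sectoral comparisons in the following sections on a common footing.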
We have used annual time series value added data to estimate the relationship. Nevertheless, Verdoorn and Kaldor analysed the effect of the law on industry, while in this paper we provide interpretations of empirical relationship for various sectors of the economy.
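As a minimal sketch of the estimation just described, the Verdoorn coefficient can be obtained by ordinary least squares. The growth-rate data below are invented for illustration (they are not the national statistics used in the paper), so the coefficients are purely illustrative.

```python
# Estimate p = a1 + b1*q by OLS, where p is labour productivity growth and
# q is value added (output) growth; b1 is the Verdoorn coefficient.
# Data points are invented for illustration.

def verdoorn_ols(q, p):
    """Return (a1, b1, r_squared) for the regression p = a1 + b1*q."""
    n = len(q)
    mq, mp = sum(q) / n, sum(p) / n
    ss_qq = sum((x - mq) ** 2 for x in q)
    ss_qp = sum((x - mq) * (y - mp) for x, y in zip(q, p))
    b1 = ss_qp / ss_qq                 # slope: the Verdoorn coefficient
    a1 = mp - b1 * mq                  # intercept
    ss_res = sum((y - (a1 + b1 * x)) ** 2 for x, y in zip(q, p))
    ss_tot = sum((y - mp) ** 2 for y in p)
    r2 = 1.0 - ss_res / ss_tot
    return a1, b1, r2

q = [4.0, 6.5, 8.0, 10.0, 12.0, 11.0]   # output growth, % per year
p = [3.0, 5.5, 7.0, 9.5, 11.5, 10.0]    # productivity growth, % per year
a1, b1, r2 = verdoorn_ols(q, p)
```

A high R-squared together with a significant positive b1 is the pattern the paper reports for its industries; the subsequent F test on the regression decides whether that relationship is statistically significant.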
Trends in economic growth and employment
In recent years, Lithuania has experienced an increase in the shares of the service, manufacturing and construction sectors in total value added and a decrease in the share of agriculture. The share of services value added in the total economy accounted for 55.7% in 1995 and increased to 59.4% in 2006, and the share of manufacturing during this period increased from 19.9% to 22.0%. The share of agriculture in value added declined from 11.5% in 1995 to 5.5% in 2006. In 1990-2003, the share of services in value added increased and that of manufacturing declined in almost all industrial countries: manufacturing accounted for about 20%, and in some countries (Luxembourg, Greece, the United Kingdom, the United States) its share was less than 15% (Wolfl, 2005). The value added produced in the service sector accounts for the bulk of gross value added (the service sector accounted for 80% of value added in the EU-15) (Pilat, Wolfl, 2005).
Structural changes in the Lithuanian economy show the tendencies of post-industrialisation development.
Data of national statistics show a strong growth in value added in the service sector and manufacturing (Fig. 2). The annual growth rate of gross value added during the period 1995-2006 was 11%, and the growth of value added in service industries was even more rapid (12.1%) than in manufacturing (10.9%).
Changes in the volume of output reflect the rapid growth of the national economy, caused by the growth of investment, changes in employment and a considerable rise in demand. The supply of labour decreased in manufacturing and grew in traditionally less productive sectors such as trade, hotels and restaurants, financial intermediation and real estate. The decrease in the number of employed in manufacturing in 2006 versus 1995 reached 13.3%, while the increase in trade, hotels and restaurants, transport and storage was 19.1%, in financial intermediation and real estate 36%, and in public administration and services of the social sphere 4% (Fig. 4).
The increase of demand for products of service industry, determined by the development of tourism and the growth of income, was the reason for employment growth in this sector. Globalisation of the labour market after the accession of Lithuania to the EU and movement of workforce was the other factor of changes in employment.
The tendency of rapid economic growth and the shift of labour to the less productive service-producing industries suggests a situation called in the economic literature Baumol's disease. It is argued that, because of the natural constraint on productivity in labour-intensive industries, productivity growth in service-oriented economies tends to slow down. Therefore, economic growth tends to decelerate.
Labour productivity growth in manufacturing and service industries
During 1995-2006, annual labour productivity growth in manufacturing averaged about 12.3% and was higher than in services. This rapid growth of labour productivity was more intensive during 1995-2000 and slower in the next five years. The period of rapid growth in labour productivity shows evidence that labour productivity growth in services exceeded labour productivity growth in manufacturing (Table 1).
The situation when labour productivity growth in services exceeds labour productivity growth in manufacturing contravenes W. Baumol's theory of unbalanced growth and of services as nonprogressive sectors of the economy. Since labour productivity growth in the U.S. service industries after 1995 was higher than in the goods-producing industries, Triplett and Bosworth asserted that Baumol's disease has been cured in the U.S. (Triplett, Bosworth, 2003). As the example of the Lithuanian economy shows, Baumol's disease appears to have been cured here as well. However, our data show that the increase in labour productivity growth in services was confined to one industry: financial intermediation, real estate, renting. It can be explained by W. Baumol's consideration that there can be sporadic increases in productivity in a nonprogressive sector (Baumol, 2002). It could be related to the increase in investments in the technology of service production and therefore to changes in labour use patterns. Information and communication technologies usually prevail in financial enterprises. The rapid growth in productivity in the financial intermediation, real estate and renting industry could be related to the rise of financial markets where financial enterprises make their earnings. This explanation could be based on the dramatic changes in labour productivity growth: from 20.8% annual average growth in 1995-2000 down to 4.8%. In our opinion, the rapid growth of labour productivity in the finance sector could be explained by investments in information technologies as well as by a rapid increase in credit markets.
Relationship between the rate of growth in value added and the growth in labour productivity
Results of linear regression of labour productivity growth on value added growth are summarized in Table 2.
The result of linear regression is significant for all the industries under analysis. The R-squared indicator shows a strong linear relationship of productivity growth to value added growth in manufacturing as well as in service industries. Kaldor explained the effect of output growth on productivity by the benefit of returns to scale: by such factors as increasing specialization due to an increase in output, the introduction of technical innovations into the process of production, and product differentiation due to the increase in product output. As the results of the regression show, a strong relationship between labour productivity growth and growth in value added is not specific to the manufacturing industry.
The Verdoorn coefficient is positive in every industry and is statistically significant at the 0.05 level. The Verdoorn coefficient b1 implies a constant value of returns to scale (Roberts, 2007). Therefore, productivity was determined by technological factors of production rather than by the growth in output. Our analysis does not provide enough arguments to confirm the hypothesis about the appropriateness of the Verdoorn-Kaldor law for cross-sectorial analysis.
Conclusions
Data of the national statistics show a significant growth in value added in the services sector and manufacturing: the annual growth rate of gross value added during the period 1995-2006 was 11%, the growth of value added in service industries being even more rapid (12.1%) than in manufacturing (10.9%).
Tendencies in labour supply show that the share of the number of employed in services grew during the last decade and almost reached 60% in 2006, whereas the number of employed in manufacturing slightly decreased to 20% in 2006. The supply of labour decreased in manufacturing and grew in traditionally less productive sectors such as trade, hotels and restaurants, financial intermediation and real estate. Tendencies in services sector labour productivity growth show that services are a "nonprogressive" sector of the economy. Labour productivity growth in manufacturing exceeded the growth in services during 1995-2006, the average annual growth of labour productivity in manufacturing being 12.3% and in the services sector 10.8%. Comparative analysis revealed a period when annual labour productivity growth in services was higher than in manufacturing: during 1995-2000, productivity growth in manufacturing averaged about 13.8% and in service industries 14.3%. In our opinion, this supports W. Baumol's consideration that there can be sporadic increases in productivity in the nonprogressive sector, rather than J. E. Triplett's and B. Bosworth's idea about the changing pattern of services production and therefore a nonconformity with the Baumol's disease conception. However, our data show that the relative increase in labour productivity growth in services was confined to one industry: financial intermediation, real estate, renting. In our opinion, the rapid growth of labour productivity in the finance sector could be related to an increase in credit markets and investment in information technologies.
Rapid growth, especially in service industries, suggests that large-scale production provides more output due to returns to scale. The Verdoorn-Kaldor relationship between the growth of labour productivity and the growth of output has been assessed. Results of linear regressions show a statistically significant relationship and constant returns to scale for various industries. Constant returns to scale reflect substantial cross-sectorial externalities; therefore, growth rate disparities among various sectors of the economy cannot be explained by disparities in returns to scale as the Verdoorn-Kaldor law implies. Our analysis does not provide enough arguments to confirm the hypothesis about the appropriateness of the Verdoorn-Kaldor law for cross-sector analysis.
Precocious maturation in male tiger pufferfish Takifugu rubripes: genetics and endocrinology
Testes of the tiger pufferfish Takifugu rubripes are a delicacy in Japan, and selective breeding for a male precocious phenotype, i.e., with early initiation of testes development, is desirable. However, it is unknown if precocious gonad development in this species is under genetic control. Here, we investigated genetic involvement in precociousness by using progeny tests with sires from two cultured populations, including a family line anecdotally known for its precociousness, and a wild population. Progeny derived from the “precocious” line consistently had greater testes weight than that from the other lines, even after accounting for effects of body weight, which indicates that precociousness is truly heritable. We also compared chronological changes in plasma steroid hormones between progenies sired by males from the precocious line and a wild population, and found that the precocious family line had higher levels of plasma estradiol-17β (E2) prior to the initiation of testicular development. Our findings suggest that selective breeding for testes precociousness in the tiger pufferfish is feasible, and that plasma E2 may be an indicator of this phenotype, which would allow for phenotype evaluation without the need to sacrifice specimens.
Introduction
The tiger pufferfish Takifugu rubripes is one of the most valuable aquaculture fish species in Japan (Hamasaki et al. 2017). Since the 1990s, when techniques for the broodstock management and artificial induction of the maturation and insemination of this species were developed (Miyaki et al. 1992;Chuda et al. 1997;Matsuyama et al. 1997), its aquaculture production has been maintained at about 4000 tons/year [Ministry of Agriculture, Forestry and Fisheries (MAFF), Japan: http://www.maff.go.jp/j/tokei /, accessed 25 April 2019].
In the tiger pufferfish, testes size is an important economic trait, since mature testes are regarded as a delicacy, and cost approximately 10,000 JPY/kg, three times the price of the fillet (Hamasaki et al. 2013). In general, tiger pufferfish testes increase to commercial size (larger than 100 g) about 2 months prior to the spawning season (March-April) off the western coast of Japan (Fujita 1962; Hattori et al. 2012). However, some individuals show a precocious phenotype, i.e., early initiation of the development of testes, which are not yet functionally mature, with testes weight (TW) exceeding 100 g in early December, the peak of yearly market demand. In Nagasaki Prefecture, which produced the greatest yield of cultured tiger pufferfish (accounting for 53.8% of total production) in 2017 (MAFF, Japan), there is a precocious family line (line A) favored by the market. This line is now recognized as one of the three major lines in production, due to its higher economic value (S. Yoshikawa, unpublished data). However, it is not known if the precociousness trait is truly heritable. If this precociousness were genetic in origin, this phenotype would be a valuable target for future aquaculture improvement, although selective breeding of this species is still in its infancy.
In many animals, gonad and body size, and especially body weight (BW), are positively correlated; this is known as allometric scaling (Kenagy and Trombulak 1986;Oikawa et al. 1992;Gage 1994;Jonsson et al. 1996;Fairbairn 1997). Thus, the precocious phenotype seen in line A could be a byproduct of early growth, in which case the differences in precociousness among family lines could be simply explained by body size differences. However, early maturation often has negative impacts on growth performance in aquaculture species (Okuzawa 2002;Taranger et al. 2009), and selection for a precocious phenotype may result in smaller body size. However, it is not known if there is a correlation between the precocious phenotype and growth in tiger pufferfish.
Reproductive physiology has been intensively investigated in teleosts, and the importance of androgens in male maturation recognized (Miura and Miura 2003;Schulz et al. 2010). For example, 11-ketotestosterone (11-KT) induced spermatogenesis and spermatogonial proliferation in the testes of Japanese eel Anguilla japonica in vitro (Miura et al. 1991). While there is some knowledge of the roles of steroid hormones in oocyte maturation in tiger pufferfish (Matsuyama et al. 2001;Lee et al. 2009), hormonal changes relevant to testicular development have not been identified. If precocious males show characteristic patterns in plasma steroid hormones during early development, these hormones can be used as indicators of precocious maturation. This would assist in the selection of precocious phenotypes, since selection could be done at early stages of production without sacrificing individuals for the measurement of TW. For example, precociousness in Chinook salmon Oncorhynchus tshawytscha can be predicted by a high level of plasma 11-KT 8 months prior to final maturation (Larsen et al. 2004), and androgen levels are useful in identifying precocious individuals in masu salmon Oncorhynchus masou (Ota et al. 1999), amago salmon Oncorhynchus rhodurus (Ueda et al. 1983) and Atlantic halibut Hippoglossus hippoglossus (Norberg et al. 2001).
In this study, we tested if the precociousness seen in line A has a genetic basis. We utilized progeny tests using maternal half-sib families to assess the possibility of selective breeding for this phenotype. We then compared the correlation patterns of TW and BW among test families to examine the impact of selection for precociousness on growth performance. We also investigated the endocrinology of the precocious line from chronological changes of plasma estradiol-17β (E2), 11-KT and testosterone (T) in line A and a family derived from a wild male, with the expectation that plasma steroids can be used as indicators of precocious phenotypes.
Materials and methods
All experiments were carried out in accordance with the Guidelines for Animal Experimentation of Nagasaki Prefectural Institute of Fisheries.
Test families for progeny tests
We conducted four progeny tests to examine paternal genetic effects on the precocious phenotype seen in line A. We produced maternal half-sib families using wild individuals and broodstock raised at Nagasaki Prefectural Institute of Fisheries (Nagasaki, Japan) or private hatcheries in Nagasaki Prefecture (Table 1; Fig. 1). Maternal half-sib families allow the significance of genetic effects on the target trait to be assessed from paternal effects alone, which is sometimes more practical than integrating both paternal and maternal genetic effects. The broodstock was derived from line A and line B, two of the three major family lines used in Nagasaki Prefecture (S. Yoshikawa, unpublished data).
Progeny tests were conducted once a year from 2011 (test 1) to 2014 (test 4) at Nagasaki Prefectural Institute of Fisheries. In test 1, we compared TW as well as body size among progeny of three males from line A (A1), line B (B1) and a wild population (W1). In test 2, we produced test families using three sires, i.e., a descendant of A1 (A2), a male from line B (B2) and a wild male (W2). If the precocious phenotype seen in line A has a genetic basis, progenies sired by individuals of line A should outperform the others. In tests 3 and 4, we used A2, a paternal half-sib of A2 (A3 or A4; both are descendants of A1) and wild males (W3 or W4) as sires to confirm the superiority of line A by comparing its performance with that of the wild sires, and to address the phenotypic variation among progeny derived from line A.
All test families were produced by artificial insemination. Eggs obtained from each female were divided into three subgroups prior to fertilization, and each group was inseminated with sperm of one of the three males (Fig. 2). After fertilization, eggs were treated with 0.05% tannic acid for 15 s to reduce egg adhesiveness (Miyaki et al. 1998) and incubated per full-sib family in 1-kl tanks with flow-through seawater and aeration. Hatched larvae were transferred and reared in 2-kl tanks for 1 month and then in 6- or 8-kl concrete square tanks per full-sib family until cultures were started in communal tanks. Fish were reared following Miyaki et al. (1998) and fed nutrient-enriched live L-type rotifers, Artemia nauplii and commercial pellets, according to their developmental stage. Tanks were supplied with ultraviolet (UV)-sterilized seawater. The water temperature was kept at 21.0 °C during larval rearing and henceforth was uncontrolled (range 17.1-28.9 °C). The density of fish in each tank was adjusted four or five times in each progeny test.
Fig. 2 Crossing and rearing scheme of the progeny tests. In each test, three males were crossed with a female to produce a half-sib family, and each half-sib was reared in a separate tank until fish reached approximately 150 mm standard length. Specimens were then tagged individually and transferred to a communal tank.
Culture in communal tanks
When fish reached a mean standard length (SL) of approximately 150 mm, they were individually tagged with passive integrated transponder tags (Bio Mark, ID) and transferred to a communal tank. The number of fish, average SL and BW at transfer are given in Table 2. The capacity of the communal tank was increased from 6-kl to 50-kl according to the size of the fish from 8 to 17 months of age (MOA). Test fish were cultured until 20.5 MOA in test 1, 20.1 MOA in test 2, 21.1 MOA in test 3 and 21.7 MOA in test 4, when specimens reached harvest size of approximately 1 kg BW (early December). Fish were fed commercial pellets 3-5 times a week until satiation. Tanks were supplied with UV-sterilized seawater, and the water temperature was not controlled (range 12.7-27.8 °C throughout the experimental period). We did not observe mass mortality (Online Resource Fig. S1), but the survival rate in test 4 was relatively low (48.8-86.3%) because of heterobothriosis, a parasitic disease caused by Heterobothrium okamotoi (Ogawa 1991).
Evaluation of traits
In early December (20.1-21.7 MOA), fish were euthanized with an overdose (> 600 p.p.m.) of 2-phenoxyethanol (Fujifilm Wako Pure Chemical, Osaka, Japan), and SL and BW were measured. Testes were excised and weighed. The gonadosomatic index [GSI = 100 × gonad weight (g)/total BW (g)] was calculated for each specimen. In each test, ten to 45 fish were not sampled to maintain broodstock. There were no significant differences in SL and BW between males and females in any of the tests.
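The index defined above is a one-line calculation; the helper below is a sketch whose function name and sample weights are illustrative, not taken from the study.

```python
def gonadosomatic_index(gonad_weight_g, total_body_weight_g):
    """GSI = 100 * gonad weight (g) / total body weight (g)."""
    return 100.0 * gonad_weight_g / total_body_weight_g

# Illustrative values: a fish of 1000 g total body weight with 110 g testes
gsi = gonadosomatic_index(110.0, 1000.0)  # -> 11.0
```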
Changes in sex steroid hormones
A portion of the full-sibs sired by A2 and W3 produced for test 3 were utilized for physiological studies. At 6.9 MOA, 145 individuals from each family were transferred to 8-kl concrete square tanks, one tank per family. Fish were fed commercial pellets three to five times a week until satiation. Each tank was supplied with aeration and UV-treated seawater at ambient temperature (12.0-26.6 °C). Five to ten individuals were sampled monthly, or every other month, from 7.3 to 28.6 MOA, and immediately euthanized with an overdose of 2-phenoxyethanol (> 600 p.p.m.). Each individual was visually sexed and males alone were used for the following analyses. SL, BW and TW of each male were measured. GSI was calculated as described above. Blood samples were collected from the hepatic artery using a heparinized 5-ml syringe (21G needle) and kept on ice until centrifugation. Plasma was separated by centrifugation (644 g for 5 min at 4 °C) and stored at − 30 °C until further analysis. Plasma levels of E2, 11-KT and T, extracted with diethyl ether, were determined using an enzyme immunoassay kit (Cayman Chemical, Ann Arbor, MI) according to the manufacturer's instructions.
Statistical analysis
Statistical analyses were performed using R (R 3.4.4) (R Core Team 2018, accessed 15 March 2018). We first tested the sire effects on TW in each progeny test under a generalized linear model (GLM) using the glm function. Model comparison was done among four models, i.e., models with and without paternal effects assuming either a Gamma or a Gaussian distribution, based on Akaike's information criterion (AIC) and Akaike weight (w) (Akaike 1973;Burnham and Anderson 2002), and the model with the highest w was selected. Sire effects on SL, BW and GSI were also tested. When more than one model had large w (i.e., > 0.4), we first selected the model with the lowest df. If the df did not vary, we selected the lowest AIC value. Inverse and Identity were used as the link functions for the Gamma and Gaussian distribution models, respectively. When the model with paternal effects was selected, post hoc pairwise comparisons were performed (P < 0.05, adjusted using Tukey's method) using the lsmeans function of the emmeans package (version 1.3.4) (Lenth 2019). The least square mean and the 95% confidence interval (CI) of the mean were also estimated by using the lsmeans function. Note that the least square mean is the group mean adjusted for the other factors in the model. The mean values with 95% CI in the following sections are least square means, unless otherwise noted. To investigate the interrelationship between precociousness and growth phenotype, we tested for a correlation between TW and BW among families. Since BW includes TW, we used corrected BW (CBW), i.e., TW subtracted from BW, instead of BW. In this analysis, we examined the effects of CBW, sire and the interaction between CBW and sire on TW in the GLM. When the model comparison supported the model without the interaction term, this indicated that the correlation between the two phenotypes did not differ between families. 
In this case, we further tested sire effects on TW, eliminating the effects of CBW (i.e., the intercept of the linear model) as described above.
To compare the chronological changes in growth traits and sex steroid levels between the two families, we included sire, MOA and the interaction between sire and MOA as the fixed effects in the GLM. When the model with the interaction between sire and MOA was supported, pairwise comparisons using the lsmeans function were done for the post hoc significance test as described above.
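The AIC-based model comparison used throughout the statistical analysis can be sketched as follows. The Akaike weight formula w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), with Δ_i = AIC_i − min AIC, is standard (Burnham and Anderson 2002), but the AIC values and model labels below are invented for illustration; they are not the values in the paper's tables.

```python
import math

# Sketch of model comparison via Akaike weights: w_i is the weight of
# evidence in favor of model i among the candidate set. Illustrative only.

def akaike_weights(aic_values):
    """Return Akaike weights for a list of AIC values (they sum to 1)."""
    min_aic = min(aic_values)
    deltas = [a - min_aic for a in aic_values]
    rel_likelihoods = [math.exp(-d / 2.0) for d in deltas]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

# Four hypothetical candidate GLMs (with/without sire effects, Gamma vs Gaussian)
aics = {"gamma_sire": 812.4, "gamma_null": 830.1,
        "gauss_sire": 815.9, "gauss_null": 833.7}
weights = dict(zip(aics, akaike_weights(list(aics.values()))))
best = max(weights, key=weights.get)
```

With these numbers the Gamma model with sire effects dominates the weight; when two models carry comparably large w (the > 0.4 situation described above), the tie-break on degrees of freedom and then raw AIC is applied outside this sketch.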
Progeny tests
We carried out four progeny tests to evaluate the genetic superiority of line A as a precocious phenotype at harvest size (early December; 20.1-21.7 MOA). MOA at sampling, sample number, and average SL, BW, TW and GSI of male progeny are summarized in Table 3. The 95% CI of mean TW and the observed values of each test are shown in Fig. 3. Results of female progeny are summarized in Online Resource, Table S1.
Using GLM analysis, we examined paternal effects in tests 1 and 2 on TW using half-sib families, descendants of sires derived from line A, line B and wild populations. The model with sire effects was supported by model comparison based on AIC and w in both tests 1 and 2 (Table 4). In test 1, the mean TW of progeny of A1 was 148.0 g (95% CI = 130.1-165.8), which was significantly greater than that of B1 (mean = 60.8 g, 95% CI = 33.1-88.6; P < 0.0001) and W1 (mean = 92.1 g, 95% CI = 69.5-114.8; P = 0.0004). This was also confirmed in test 2, as the TW of A2 progeny (mean = 108.5 g, 95% CI = 94.1-122.9) was greater than that of B2 (mean = 25.7 g, 95% CI = 9.1-42.3; P < 0.0001) and W2 (mean = 35.8 g, 95% CI = 19.7-51.9; P < 0.0001). The model with sire effects was also supported for SL, BW and GSI, and progeny sired by line A individuals also performed better with regard to these phenotypes (Table 3; Online Resource Fig. S2-S4).
In tests 3 and 4, we compared the performance between line A and wild sires and among sires from line A. In these tests, models with sire effects were again supported for all phenotypes. In test 3, we used A2, a half-sib of A2 (A3) and a wild male (W3) as sires. A2 repeatedly outperformed the wild sire (W3) (P < 0.0001), as TW was 116.9 g (95% CI = 105.1-128.7) in the A2 progeny, while W3 was 38.6 g (95% CI = 26.5-50.7). In contrast, TW of A3 (56.0 g, 95% CI = 41.8-70.2) was not significantly greater than that of W3 (P = 0.1582), and was significantly lower than that of A2 (P < 0.0001). In test 4, we replaced A3 with A4, a full-sib individual of A3. The descendants of A2 and A4 outperformed those of W4 in terms of TW (A2-W4, P < 0.0001; A4-W4, P < 0.0001). The mean TW of each family was: A2 = 116.8 g (95% CI = 103.7-129.9), A4 = 84.6 g (95% CI = 71.5-97.7) and W4 = 25.1 g (95% CI = 10.4-39.8). A2 progeny were also larger than the other progeny in terms of SL, BW and GSI (Table 3; Online Resource Fig. S2-S4). We then focused on the correlation between the precocious phenotype and growth performance, since the precocious phenotype could have been a by-product of early growth. We therefore tested differences in the correlation between TW and CBW among families using GLM, including CBW, sire and the interaction between them as fixed effects. When patterns in the correlation between the two phenotypes did not differ among families, the model without an interaction term was selected. In tests 1, 2 and 3, the model including CBW and sire, but not the interaction term, was supported, suggesting that the correlation patterns were not different among families (Table 5). Estimated correlation coefficients (or regression slope) between TW and CBW were positive in these tests (test 1, r = 0.13; test 2, r = 0.10; test 3, r = 0.17) ( Fig. 4; Online Resource Table S2). 
In test 4, the model with the interaction term gave the highest w value, but the w of the second-best model (the model including CBW and sire) was > 0.4. Therefore, we selected the model without the interaction term because it had the lowest df. Under this model, a positive correlation was also observed between TW and CBW (r = 0.17). We further tested the differences in TW between families using the selected model (i.e., including CBW and sire as fixed effects), eliminating the effect of CBW (Table 6). In test 1, the mean TW of A1 (95% CI = 127.8-155.6) was significantly greater than those of B1 (95% CI = 30.3-73.4; P < 0.0001) and W1 (95% CI = 90.0-126.3; P = 0.0131). In test 2, we repeatedly observed the superiority of line A; the mean TW of A2 (95% CI = 78.8-109.2) was significantly greater than those of B2 (95% CI = 20.5-52.7; P < 0.0001) and W2 (95% CI = 28.6-58.8; P = 0.0001). In tests 3 and 4, A2 again outperformed the others. The mean TW of A2 (95% CI = 90.7-113.2) was superior to those of A3 (95% CI = 40.0-64.6; P < 0.0001) and W3 (95% CI = 45.0-69.0; P < 0.0001) in test 3, and in test 4 the mean TW of A2 (95% CI = 91.8-112.3) was significantly greater than those of A4 and W4.
Note to Tables 3 and 4. Model 1: TW = BW × sire + error, family = Gaussian; Model 2: TW = BW + sire + error, family = Gaussian; Model 3: TW = BW × sire + error, family = Gamma; Model 4: TW = BW + sire + error, family = Gamma. a In test 4, we selected model 2 because models 1 and 2 had large w, but the latter had the lowest df.
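The interaction test above (TW modeled with and without a CBW × sire term) can be sketched with ordinary least squares and a Gaussian AIC; every number below is simulated for illustration and is not a measurement from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated half-sib data: three families, a body-weight covariate, and
# additive sire effects with NO true interaction (hypothetical values).
n = 150
sire = np.repeat([0, 1, 2], n // 3)
cbw = rng.normal(500.0, 60.0, n)
effect = np.array([40.0, 10.0, 15.0])[sire]
tw = 5.0 + 0.15 * cbw + effect + rng.normal(0.0, 12.0, n)

def design(cbw, sire, interaction):
    """Build a design matrix with dummy-coded sire (level 0 = baseline)."""
    cols = [np.ones_like(cbw), cbw]
    for level in (1, 2):
        d = (sire == level).astype(float)
        cols.append(d)
        if interaction:
            cols.append(d * cbw)  # sire-specific slopes
    return np.column_stack(cols)

def gaussian_aic(y, X):
    """AIC (up to an additive constant) for an OLS fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1  # coefficients plus residual variance
    return len(y) * np.log(rss / len(y)) + 2 * k, rss

aic_add, rss_add = gaussian_aic(tw, design(cbw, sire, False))
aic_int, rss_int = gaussian_aic(tw, design(cbw, sire, True))
```

Because the models are nested, the interaction model always achieves an equal or lower RSS; AIC then penalizes its extra parameters, mirroring the df-based tie-break used in the text.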
Changes in sex steroid hormones
The progeny tests consistently demonstrated the precociousness of line A. We further investigated changes in plasma sex steroid (E2, 11-KT and T) levels to assess the suitability of these hormones as indicators of individual precociousness.
Chronological changes in the patterns of these steroids were compared between the progeny of A2 and W3 using GLM. We also compared changes in SL, BW, TW and GSI. An interaction between sire and months of age (MOA) was supported for TW, GSI and each steroid, but not for SL and BW (Table 7). Significant differences between MOA within each family are summarized in Online Resource, Tables S3-S9. Peak TW was observed at 24.0 MOA in W3 and 26.1 MOA in A2. The mean TW peak for A2 (mean = 219.3 g, 95% CI = 142.7-471.7) was higher, but not significantly so, than that of W3 (mean = 163.1 g, 95% CI = 111.5-304.9; P = 1.0000) (Fig. 5a). Significant differences between the two families appeared from 19.0 MOA (early October). TW was significantly greater in A2 than in W3 from 19.0 MOA (A2, mean = 4.7 g, 95% CI = 3.2-8.8; W3, mean = 1.4 g, 95% CI = 1.1-2.1; P = 0.0001) to 21.1 MOA (A2, mean = 138.9 g, 95% CI = 94.8-259.7; W3, mean = 34.3 g, 95% CI = 23.4-64.1; P = 0.0021). Similar trends were observed for changes in GSI (Fig. 5b). For both families, all individuals were fully mature and milt could be stripped at 24.0 MOA. GLM analysis of SL and BW supported the model without the interaction term (Table 7). The trends of these two traits were similar between the two families, but both traits were significantly greater in A2 than in W3 throughout the rearing period (P < 0.0001) (Fig. 5c, d).
Discussion
We investigated whether the precociousness, i.e., early initiation of testes enlargement, seen in a family line (line A) of the tiger pufferfish has a genetic basis, and whether this phenotype is merely a by-product of early growth. We further provide the first data showing characteristic changes in plasma sex steroids associated with male precociousness in this species.

In the first two progeny tests (1 and 2), we compared precociousness among progeny derived from line A, line B and a wild population. Progeny of line A (A1 and A2) showed earlier testicular development than the other two lineages. The precocious phenotype of this line was repeatedly observed in the later progeny tests (3 and 4). These results suggest that the precociousness of line A is controlled by genetic factors. We also observed variation in the precocious phenotype among individuals from line A, i.e., A2 and its half-sibs (A3 and A4). TW of A4 progeny was significantly lower than that of A2, but greater than that of a wild male, W4. However, the TW of A3 was significantly lower than that of A2, but did not significantly differ from that of a sire from the wild population (W3). These variations indicate that the precocious phenotype is not dominant but additive. The differences among the progeny tests may also have been partly due to genotype-by-environment interactions (G × E). Our results suggest strong genetic effects on the precociousness of the tiger pufferfish, but we were not able to evaluate the effects of G × E on this trait. Maturation timing is influenced by G × E in salmonid species, including Atlantic salmon Salmo salar (Wild et al. 1994) and rainbow trout Oncorhynchus mykiss (Kause et al. 2003). As these studies showed the importance of G × E effects on selective breeding programs, further investigation of these effects on the precociousness of the tiger pufferfish is needed.
While a positive correlation between gonad and body size is often seen in many animals (Kenagy and Trombulak 1986;Oikawa et al. 1992;Gage 1994;Jonsson et al. 1996;Fairbairn 1997), early maturation adversely affects growth performance in some aquaculture species including S. salar (McClure et al. 2007), H. hippoglossus (Imsland and Jonassen 2005) and O. tshawytscha (Campbell et al. 2003). In our testing of tiger pufferfish families, a positive correlation between TW and CBW was observed, and the correlation coefficients did not differ among families. However, individuals sired by line A had larger testes compared to the other two lineages, even after the effects of CBW had been eliminated. These results indicate that the precociousness of line A is not a by-product of early growth. Moreover, we suggest that selection for greater TW in early December can indirectly improve BW. Further genetic studies are needed to estimate the heritability of precociousness and the genetic correlation between TW and BW, to allow the assessment of possible simultaneous selection for these traits.
In contrast to phenotyping BW, phenotyping TW is currently difficult without dissecting out the testes at harvest. Therefore, we used plasma steroids as indicators of precocious maturation. Our chronological experiment revealed clear differences between the precocious and wild families in patterns of body size change and steroid levels. Interestingly, significant differences in the plasma E2 level appeared before evidence of testes enlargement, while 11-KT and T levels increased with TW. In teleosts, E2 is one of the major female sex hormones (Devlin and Nagahama 2002), but it is also present at low levels in males (Miura et al. 1999; Chaves-Pozo et al. 2007; Shahjahan et al. 2010). Although the function of E2 in males is not clear, administration of E2 to the sea bream Sparus auratus leads to the expression in the testes of various genes involved in different biological processes, such as cell proliferation, lipid metabolism and cell communication (Pinto et al. 2006). Furthermore, in A. japonica, E2 regulates spermatogonial stem cell renewal both in vivo and in vitro (Miura et al. 1999). Thus, our results suggest that E2 has a key role in the precocious phenotype of the tiger pufferfish, and that a high plasma level of E2 about 6 months prior to harvest in early December can be an indicator of individual male precociousness. Additionally, 11-KT is the major teleost androgen and is involved in spermatogonial proliferation and spermatogenesis in A. japonica (Miura et al. 1991), Hucho perryi (Amer et al. 2001), the goldfish Carassius auratus (Kobayashi et al. 1991), and the yellowtail Seriola quinqueradiata (Higuchi et al. 2017). Our results are consistent with these previous studies and suggest a similar function for 11-KT in the tiger pufferfish. In contrast, the role of T in tiger pufferfish testicular development is somewhat unclear, as it is in other fishes, because T and 11-KT contribute to testicular development in a similar fashion (Rodríguez et al. 2000; de Waal et al. 2008; LeGac et al. 2008). T can also act as a precursor of E2 (Tanaka et al. 1992; Nagahama and Yamashita 2008). Our data show that plasma T increased more rapidly than 11-KT after TW started to increase; T may thus be more important for testicular development than 11-KT. However, we are currently unable to assign functional roles to these steroid hormones in testicular development. Further studies are needed for more insight into the endocrinological system (e.g., the gonadotropin-releasing hormone pathway) that controls the initiation of testicular enlargement. This would clearly help our understanding of the physiological mechanisms underlying the precocious phenotype of the tiger pufferfish.
In conclusion, by using progeny tests with several maternal half-sib families, we showed that the precociousness of line A is heritable and has an additive nature. The precocious phenotype is not simply a by-product of early growth, as individuals sired by line A had larger testes than other families with the same body size. We also identified physiological characteristics of the precocious line, including a high concentration of plasma E2 just before TW increased. Our findings suggest that selective breeding for this precocious trait is possible for tiger pufferfish culture, and that plasma E2 can be used as an early indicator of this, which can be measured without sacrificing individuals. Current breeding populations of the tiger pufferfish are missing precise family history records, a drawback when starting a new selective breeding program using these populations. However, recent advances in molecular tools, such as genomic selection, now allow for selective breeding programs even where pedigree information is lacking (Meuwissen et al. 2001). The rich genomic resources developed for this species will enable great advances in the development of its genome-based selective breeding programs (Kai et al. 2011; Kamiya et al. 2012; Matsunaga et al. 2014; Sato et al. 2019; Kim et al. 2019).
Probing the causes of thermal hysteresis using tunable N agg micelles with linear and brush-like thermoresponsive coronas
Self-assembled thermoresponsive polymers in aqueous solution have great potential as smart, switchable materials for use in biomedical applications.
Representative polymer characterization data
1H NMR and SEC data for the mCTA and the 50 mol% nBA diblock copolymer in each series are shown below.
Figure S2. 1H NMR spectrum of mCTA1, analyzed at 400 MHz in CDCl3.
Figure S3. 1H NMR spectrum of polymer 1, analyzed at 400 MHz in CDCl3.
Figure S4. SEC molecular weight distributions of mCTA1 and polymer 1, using 2% TEA in THF as the eluent and calibrated against PMMA standards. In each case, the distributions were calculated using the RI traces.
Figure S7. SEC RI chromatograms of mCTA2 and polymer 6, using 5 mM NH4BF4 in DMF as the eluent and calibrated against PMMA standards. In each case, the distributions were calculated using the RI traces.
Figure S10. SEC RI chromatograms of mCTA3 and polymer 11, using 2% TEA in THF as the eluent and calibrated against PMMA standards. In each case, the distributions were calculated using the RI traces.
Figure S13. SEC RI chromatograms of mCTA4 and polymer 17, using 2% TEA in THF as the eluent and calibrated against PMMA standards. In each case, the distributions were calculated using the RI traces.
Figure S14. SEC RI chromatograms of polymer 17 before (black dashed line) and after (red solid line) three heating and cooling cycles from 50-95 °C. 2% TEA in THF was used as the eluent and the instrument was calibrated against PMMA standards. In each case, the distributions were calculated using the RI traces.
Representative light scattering data
DLS and SLS data for the 100 mol% nBA diblock copolymer micelles in each series are shown below.
Figure S15. Multiple angle dynamic (above) and static (below) light scattering data of micelles comprised of polymer 5 at 1 mg mL⁻¹.
Representative turbidimetry data
Turbidimetry data for the 50 and 100 mol% nBA diblock copolymer micelles in each series are shown below.
Figure S20. Variable temperature turbidimetry analysis of micelles comprised of polymers 6 (above) and 10 (below) at 1 mg mL⁻¹. In each case, the solid trace represents the heating cycle and the dashed trace represents the cooling cycle.
Additional calculations and discussions
Calculation of core composition for polymers 1-5
Since the 1H NMR peaks from mCTA1 overlapped with the peaks from the pnBA-co-DMA core-forming block of polymers 1-5, it was necessary to subtract the 1H NMR spectrum of mCTA1 from each of the spectra obtained for polymers 1-5. This was achieved by normalizing the intensity of both spectra to a peak that remained unchanged by the chain extension: the signal at 3.44 ppm corresponding to the methylene protons in the mCTA's side chain. Following subtraction, the integrals of the peaks at 4.00 and 3.22-2.77 ppm, corresponding to pnBA and pDMA respectively, were used to determine the core composition using the known DP of mCTA1.
Figure S24. 1H NMR spectra of mCTA1 (top) and polymer 3 (middle). Below is the spectrum of mCTA1 subtracted from that of polymer 3, used to calculate the core composition; the peaks at 4.00 and 3.22-2.77 ppm are clearly resolved.
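The subtraction-and-integration procedure above can be sketched as follows. The per-unit proton counts (2H for the nBA OCH2 signal at 4.00 ppm, 6H for the DMA N(CH3)2 signals at 3.22-2.77 ppm) are assumptions of this sketch, not values stated in the text:

```python
def normalize(spectrum, ref_integral):
    """Scale a spectrum so the unchanged reference peak (3.44 ppm) integrates to 1."""
    return [y / ref_integral for y in spectrum]

def subtract(polymer_spec, mcta_spec):
    """Point-by-point subtraction of the normalized mCTA spectrum."""
    return [a - b for a, b in zip(polymer_spec, mcta_spec)]

def core_composition(int_nba, int_dma, protons_nba=2, protons_dma=6):
    """Mole fraction of nBA in the core from the two region integrals.

    int_nba: integral at 4.00 ppm (assumed 2H per nBA unit)
    int_dma: integral at 3.22-2.77 ppm (assumed 6H per DMA unit)
    """
    n_nba = int_nba / protons_nba
    n_dma = int_dma / protons_dma
    return n_nba / (n_nba + n_dma)
```

For example, equal molar amounts of nBA and DMA would give raw integrals in a 2:6 ratio, and `core_composition(2.0, 6.0)` returns 0.5.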
Definitions and calculations regarding the light scattering data
The scattering wave vector, q, is defined in equation (1):

q = (4πn/λ) sin(θ/2)   (1)

where n is the refractive index of the solvent, λ is the wavelength of the incident beam and θ is the angle of measurement.
The contrast factor, K, is defined in equation (2):

K = 4π²n_standard²(dn/dc)² / (N_A λ⁴)   (2)

where n_standard is the refractive index of the toluene standard, dn/dc is the refractive index increment of the sample, N_A is Avogadro's number and λ is the wavelength of the incident beam.
The Rayleigh ratio of the sample, R_θ, is defined in equation (3):

R_θ = [(I_sample − I_solvent)/I_standard] R_θ,standard   (3)

where I_sample, I_solvent and I_standard are the intensities of scattered light of the sample, solvent and standard respectively, detected at each angle of interest, and R_θ,standard is the Rayleigh ratio of the toluene standard.
R_core was calculated from N_agg using equation (4):

R_core = [3 N_agg M_w,core / (4πρN_A)]^(1/3)   (4)

where ρ is the composition-weighted density of the two monomers in the core-forming block and M_w,core is the weight average molecular weight of the core-forming block, calculated as the number average molecular weight, M_n, determined by 1H NMR, multiplied by Đ determined by SEC analysis.
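The four relations above can be collected into a short script. These are reconstructed standard light-scattering forms, and the example values used to exercise them (solvent refractive index, laser wavelength, dn/dc, toluene Rayleigh ratio) are generic assumptions, not the paper's measurements:

```python
import math

N_A = 6.02214076e23  # Avogadro's number, mol^-1

def wave_vector(n, wavelength_m, theta_rad):
    """Equation (1): scattering wave vector q (m^-1)."""
    return (4 * math.pi * n / wavelength_m) * math.sin(theta_rad / 2)

def contrast_factor(n_standard, dndc, wavelength_m):
    """Equation (2): optical contrast factor K."""
    return (4 * math.pi**2 * n_standard**2 * dndc**2) / (N_A * wavelength_m**4)

def rayleigh_ratio(i_sample, i_solvent, i_standard, r_standard):
    """Equation (3): sample Rayleigh ratio from measured intensities."""
    return (i_sample - i_solvent) / i_standard * r_standard

def core_radius(n_agg, mw_core_g_mol, density_g_cm3):
    """Equation (4): micelle core radius (cm) from the aggregation number."""
    volume_cm3 = n_agg * mw_core_g_mol / (density_g_cm3 * N_A)
    return (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)

# Illustrative use: q at 90 degrees for water-like solvent and a 633 nm laser
q_90 = wave_vector(1.33, 633e-9, math.pi / 2)
# Illustrative core radius: 100 chains of a 10 kg/mol core block at 1 g/cm^3
r_core = core_radius(100, 10_000.0, 1.0)
```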
The average chain density within micelles comprised of pDEGMA (11-15) was calculated using equation (5).
Discussion of the effect of chain density on reversibility for pDEGMA micelles
The chain densities of the micelles with pDEGMA coronas (11-15), calculated using equation (5), are plotted as a function of core composition in Fig. S25. There are two clear regions, which correspond to micelles whose transitions were reversible and irreversible, respectively.
Figure S25. Chain density of micelles comprised of polymers 11-15. Error bars represent 10% error. The two distinct regimes of reversible and irreversible phase transitions are marked with dashed lines. Note that polymer 14 (88% nBA in the core-forming block) shows the highest chain density because its R_H is smaller than that of polymer 15 (100% nBA in the core-forming block).
PSR J2150+3427: A Possible Double Neutron Star System
PSR J2150+3427 is a 0.654 s pulsar discovered by the Commensal Radio Astronomy FAST Survey. From the follow-up observations, we find that the pulsar is in a highly eccentric orbit (e = 0.601) with an orbital period of 10.592 days and a projected semimajor axis of 25.488 lt-s. Using 2.7 yr of timing data, we also measured the rate of periastron advance, ω̇ = 0.0115(4) deg yr⁻¹. An estimate for the total mass of the system using ω̇ gives M_tot = 2.59(13) M⊙, which is consistent with most of the known double neutron star (DNS) systems and with one neutron star (NS)-white dwarf (WD) system, B2303+46. Combining ω̇ with the mass function of the system gives masses of M_p < 1.67 M⊙ and M_c > 0.98 M⊙ for the pulsar and the companion star, respectively. This constraint, along with the spin period and orbital parameters, suggests that it is possibly a DNS system, although we cannot entirely rule out the possibility of an NS-WD system. Future timing observations will vastly improve the uncertainty in ω̇, and are likely to allow the detection of additional relativistic effects, which can be used to refine the values of M_p and M_c. With a spin-down luminosity of Ė = 5.07(6) × 10²⁹ erg s⁻¹, PSR J2150+3427 is a very low-luminosity pulsar, with only the binary pulsar J2208+4610 having a smaller Ė.
Introduction
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is an ideal telescope for discovering pulsars (Nan et al. 2011; Qian et al. 2020). Pulsar searching is a key aspect of the Commensal Radio Astronomy FAST Survey (CRAFTS; Li et al. 2018), which samples the sky area in the range −14° < decl. < 66° in drift-scan mode (Cruces et al. 2021) using the FAST 19-beam receiver with a total bandwidth of 1.05-1.45 GHz and a center frequency of 1.25 GHz (Jiang et al. 2020). Currently, the CRAFTS survey has discovered about 179 new pulsars, 57 of which have their timing solutions reported from earlier studies (Cameron et al. 2020; Cruces et al. 2021; Miao et al. 2023; Wu et al. 2023). About 11% of the known pulsars are in binary systems (ATNF Pulsar Catalogue v1.69; Manchester et al. 2005), with the majority (∼84%) being millisecond pulsars (MSPs) with spin period P < 20 ms. MSPs are proposed to be formed from the evolution of low-mass X-ray binaries (Bhattacharya & van den Heuvel 1991). In the evolutionary scenario known as the "recycling" process (Alpar et al. 1982; Bhattacharya et al. 1992), the neutron star (NS) gains angular momentum from its companion via Roche lobe overflow and spins up to millisecond periods. The outcome of the process is typically an MSP with a helium white dwarf (WD) companion in an extremely circular (e < 10⁻³) orbit (Phinney 1992).
Some rare binary pulsars may experience a different evolutionary path. In the case when the companion star is massive enough to undergo a supernova explosion, and the binary system is not disrupted by the explosion, a double neutron star (DNS) system will be formed. In such a system, the first-born pulsar is a recycled pulsar, and the second-born NS will have spin characteristics consistent with those of canonical pulsars (Agazie et al. 2021).
The evolution of a DNS system and its progenitor plays an important role in many fields of astrophysics, including powerful gravitational wave emission (Wex 2014), modeling of X-ray binary accretion processes, formation of millisecond pulsars (Lewin & van der Klis 2006), and possibly gamma-ray bursts (Eichler et al. 1989; Cantiello et al. 2007). In addition, the ancestors of the detected DNS systems experienced multiple mass transfer stages, with one or more common envelope episodes, and two supernova explosions, which make their observed characteristics similar to fossil records that have stored their past evolutionary history (Tauris & van den Heuvel 2006; Tauris et al. 2017). Therefore, DNS systems can be used as key probes in binary stellar astrophysics. Furthermore, some DNSs in relativistic orbits are ultrastable clocks, allowing unprecedented tests of gravitational theory in the strong-field regime (Wex 2014). Finally, DNS systems help to constrain the equation of state of nuclear matter at high density (Özel & Freire 2016).
In this paper, we report the discovery and properties of PSR J2150+3427 based on FAST observations. It is a binary pulsar in a 10 day orbit (P_b ∼ 10.592 days) with eccentricity e ∼ 0.601 and a spin period P ∼ 654 ms. Examination of the orbital period, eccentricity, companion mass (M_c), and the total mass of the system (M_tot) suggests that this system is likely a DNS system.
Observations and Analysis
PSR J2150+3427 was discovered in an observation conducted on 2019 October 30, and confirmed in a drift scan using the L-band 19-beam receiver (see the CRAFTS pulsar list). Our observations were performed at 91 different epochs between 2020 December and 2023 April using the FAST 19-beam receiver at a center frequency of 1250 MHz and a bandwidth of 400 MHz (Jiang et al. 2020). The duration of most observations is 12 minutes, while three dozen epochs have a duration of 4 minutes and only one observation has an integration time of 30 minutes. A polarization calibration signal generated by a noise diode was recorded for 40 s before each pulsar observation. After the polarimetric calibration, the tool rmfit in the PSRCHIVE software package (Hotan et al. 2004; van Straten et al. 2011) was used to search for the Faraday rotation measure (RM). The polarization profile shown in Figure 1 is obtained from the observation with the highest signal-to-noise ratio.
We dedispersed and folded the data using DSPSR (van Straten & Bailes 2011). Radio frequency interference (RFI) was excised using the pazi and paz tools in PSRCHIVE. This was followed by adding all the phase-aligned profiles using the psradd tool. The pat tool from PSRCHIVE was used to compute the times of arrival (TOAs). The paas tool was used to generate a standard profile, which was then used to calculate the TOAs and refine the timing ephemeris. Finally, TEMPO2 (Edwards et al. 2006; Hobbs et al. 2006) was used to build a phase-connected timing solution.
Preliminary Orbital Analysis
Since timing analysis requires initial estimates of the Keplerian binary parameters, we first measured the barycentric spin period P_obs for every observation using the TEMPO2 package. After that, the series of P_obs measurements was fitted using the fitorbit program in order to derive first-order orbital parameters. As shown in panel (a) of Figure 2, the measured P_obs between MJD 59700 and MJD 59729 reveals a 10.592 day orbit with an eccentricity of e = 0.601.
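The orbital modulation of P_obs that fitorbit fits can be illustrated by computing the Doppler-shifted spin period over one orbit from the reported Keplerian parameters. The longitude of periastron below is an assumed placeholder, and the intrinsic period is taken as the catalog spin period:

```python
import math

P_SPIN = 0.654          # s, intrinsic spin period
P_B = 10.592 * 86400.0  # s, orbital period
ECC = 0.601             # eccentricity
X = 25.488              # s, projected semimajor axis a*sin(i)/c
OMEGA = 0.0             # rad, longitude of periastron (assumed placeholder)

def eccentric_anomaly(mean_anomaly, ecc, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = mean_anomaly
    for _ in range(50):
        dE = (E - ecc * math.sin(E) - mean_anomaly) / (1 - ecc * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def observed_period(t):
    """Doppler-shifted barycentric spin period at time t after periastron."""
    M = 2 * math.pi * (t % P_B) / P_B
    E = eccentric_anomaly(M, ECC)
    # true anomaly from eccentric anomaly
    nu = 2 * math.atan2(math.sqrt(1 + ECC) * math.sin(E / 2),
                        math.sqrt(1 - ECC) * math.cos(E / 2))
    # line-of-sight velocity in units of c for a Keplerian orbit
    beta = (2 * math.pi * X / (P_B * math.sqrt(1 - ECC**2))) * (
        math.cos(OMEGA + nu) + ECC * math.cos(OMEGA))
    return P_SPIN * (1 + beta)

periods = [observed_period(f * P_B / 200) for f in range(200)]
```

The fractional period swing is of order 2πx/P_b ∼ 10⁻⁴, i.e., a few hundred microseconds on a 0.654 s period, which is what the epoch-to-epoch scatter of P_obs in panel (a) of Figure 2 traces.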
Pulsar Timing
In this section, we outline the steps for improving the orbital ephemeris derived in Section 3.1 through a process known as pulsar timing. For observations with an integration time of either 4 or 12 minutes, we calculated one TOA for each epoch, whereas two TOAs were produced for the observation with an integration time of 30 minutes. We fitted the dispersion measure (DM) by dividing the bandwidth into two frequency subbands, and the TOA was calculated separately for each subband. A reduced χ² ∼ 1 was obtained by combining a scaling factor applied to all raw TOA uncertainties (called EFAC) and a term added in quadrature to the TOA uncertainties (called EQUAD).
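A common convention for the EFAC/EQUAD rescaling described above is sketched below; the exact convention differs slightly between timing packages, so this is illustrative rather than the paper's definition:

```python
import math

def rescale_toa_sigma(sigma_us, efac, equad_us):
    """Rescaled TOA uncertainty (microseconds).

    sigma_new^2 = (EFAC * sigma)^2 + EQUAD^2  -- one common convention.
    """
    return math.sqrt((efac * sigma_us) ** 2 + equad_us ** 2)

def reduced_chi2(residuals_us, sigmas_us, n_fit_params):
    """Reduced chi-squared of timing residuals against rescaled uncertainties."""
    chi2 = sum((r / s) ** 2 for r, s in zip(residuals_us, sigmas_us))
    return chi2 / (len(residuals_us) - n_fit_params)
```

EFAC and EQUAD are tuned until `reduced_chi2` of the post-fit residuals is ∼1, as stated in the text.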
Our total data set spanned roughly 2.7 yr (MJD 59216-60189). The resulting ephemeris and timing residuals are shown in Table 1 and Figure 2, respectively. Apart from the least-squares method, a Bayesian timing analysis was also conducted using TEMPONEST (Lentati et al. 2014), and both return consistent results. In Table 1 the ephemeris is reported in barycentric coordinate time (TCB) units, and is derived using the Damour & Deruelle (DD) binary model (Damour & Deruelle 1985, 1986) and the JPL DE438 solar system ephemeris. The rms of our timing solution is 147.086 μs.
From the measured orbital period (P_b) and the projected semimajor axis of the orbit (x), we obtain the mass function

f(M_p, M_c) = 4π²x³/(T⊙P_b²) = (M_c sin i)³/M_tot²,

where M_tot = M_p + M_c is the total mass of the system, M_c and M_p are the masses of the companion and the pulsar, respectively, and i is the angle between the orbital angular momentum vector and the line of sight. The mass of the Sun in time units is given by T⊙ = GM⊙/c³ = 4.925490947 μs, where G is Newton's gravitational constant, M⊙ is the mass of the Sun, and c is the speed of light.
Notes to Table 1. Timing results are obtained with TEMPO2 and reported in units of TCB. a The flux density at 1.25 GHz was estimated using the radiometer equation from Lorimer & Kramer (2004).
Assuming the observed periastron advance is purely relativistic, it yields the total mass of the system (Taylor & Weisberg 1982):

ω̇ = 3(P_b/2π)^(−5/3) (T⊙M_tot)^(2/3) (1 − e²)^(−1).

Using the measured ω̇ gives the total mass of the system M_tot = 2.59(13) M⊙, where the uncertainty is 1σ. The remaining PK parameters cannot be determined for this system at this stage, making it impossible to evaluate the individual masses in this binary. However, the pulsar and companion masses can be constrained with the condition sin i ≤ 1 from the mass function (f), which gives a lower limit for M_c. Panel (a) of Figure 3 shows the "mass-mass" diagram, which demonstrates the possible pulsar and companion masses allowed by ω̇ and those forbidden by the mass function. We obtain M_p < 1.67 M⊙ and M_c > 0.98 M⊙ (1σ error) for the pulsar and the companion, respectively. In other confirmed DNS systems with total system mass measurements only, such as PSRs J1325−6253, J1411+2551, J1759+5036, and J1811−1736, companion masses in a similar range to PSR J2150+3427 are observed.
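The mass estimates quoted above follow directly from the measured Keplerian parameters and ω̇ (assuming the periastron advance is purely relativistic); a sketch reproducing them:

```python
import math

T_SUN = 4.925490947e-6  # s, G*M_sun/c^3
P_B = 10.592 * 86400.0  # s, orbital period
X = 25.488              # s, projected semimajor axis
ECC = 0.601             # eccentricity
# omega-dot: 0.0115 deg/yr converted to rad/s
OMDOT = 0.0115 * math.pi / 180.0 / (365.25 * 86400.0)

# Mass function f = 4*pi^2*x^3 / (T_sun * Pb^2), in solar masses
f_mass = 4 * math.pi**2 * X**3 / (T_SUN * P_B**2)

# Invert omega-dot = 3 (Pb/2pi)^(-5/3) (T_sun*Mtot)^(2/3) / (1 - e^2)
m_tot = ((OMDOT * (1 - ECC**2) / 3) ** 1.5) * (P_B / (2 * math.pi)) ** 2.5 / T_SUN

# Minimum companion mass for an edge-on orbit (sin i = 1):
# (Mc)^3 / Mtot^2 = f  ->  Mc_min = (f * Mtot^2)^(1/3)
m_c_min = (f_mass * m_tot**2) ** (1 / 3)
m_p_max = m_tot - m_c_min
```

This recovers M_tot ≈ 2.59 M⊙; the published 1σ limits (M_p < 1.67, M_c > 0.98 M⊙) additionally fold in the uncertainty on ω̇, which this point estimate does not.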
Discussion
The spin period of PSR J2150+3427 is consistent with most canonical pulsars, with a period derivative as small as Ṗ = 3.60(4) × 10⁻¹⁸ s s⁻¹. This gives a characteristic age of 2.88(3) Gyr and a surface magnetic field strength of B_s = 4.90(2) × 10¹⁰ G. The measured P and Ṗ imply that PSR J2150+3427 possesses a small spin-down luminosity of Ė = 5.07(6) × 10²⁹ erg s⁻¹, and only 12 pulsars in PSRCAT (version 1.69) have smaller luminosity. The magnetic field strength and the small Ṗ suggest that PSR J2150+3427 may be slightly recycled. The high eccentricity is likely the aftermath of the supernova explosion of the companion star, which is consistent with the DNS systems showing high eccentricity (Tauris et al. 2017; Pol et al. 2019; Balakrishnan et al. 2023) and two NS-WD systems (Davies et al. 2002).
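The derived quantities above follow from the standard magnetic-dipole spin-down relations, assuming the fiducial moment of inertia I = 10⁴⁵ g cm²:

```python
import math

P = 0.654          # s, spin period
P_DOT = 3.60e-18   # s/s, period derivative
I_MOM = 1.0e45     # g cm^2, fiducial neutron-star moment of inertia

# Characteristic age: tau_c = P / (2 * P_dot), converted to Gyr
tau_c_gyr = P / (2 * P_DOT) / (3.156e7 * 1e9)

# Surface dipole field: B_s = 3.2e19 * sqrt(P * P_dot) gauss
b_surf = 3.2e19 * math.sqrt(P * P_DOT)

# Spin-down luminosity: E_dot = 4*pi^2 * I * P_dot / P^3 erg/s
e_dot = 4 * math.pi**2 * I_MOM * P_DOT / P**3
```

These reproduce the quoted τ_c ≈ 2.88 Gyr, B_s ≈ 4.9 × 10¹⁰ G, and Ė ≈ 5.1 × 10²⁹ erg s⁻¹.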
Intrinsic Spin-down Rate
In general, the observed spin-period derivative (Ṗ_obs) contains a kinematic (Shklovskii) contribution that depends on the pulsar's distance and transverse velocity; requiring the intrinsic spin-period derivative (Ṗ_int) to be positive implies that the actual distance is possibly less than 3 kpc. Due to the lack of a reliable distance, reliable values of Ṗ_int and the transverse velocity are not obtained.
System Origin
Comparing PSR J2150+3427 with other pulsars is beneficial for determining the type of companion star in this new binary. Most binary MSPs in our Galaxy are highly recycled (P ≲ 10 ms) and in highly circularized orbits (e ≲ 10⁻³). The intermediate spin-period pulsars (10 ms ≲ P ≲ 20 ms; Camilo et al. 2001; Balakrishnan et al. 2023) tend to have a WD companion (Ferdman et al. 2010), and their orbits have very low eccentricities. The rare DNS systems are survivors of two supernova explosions (Tauris et al. 2017). They tend to have higher orbital eccentricities, in the range 0.064 ≲ e ≲ 0.83, and orbital periods of 0.1 ≲ P_b ≲ 45 days (Tauris et al. 2017). Their eccentricities are much greater than those of the highly and intermediately recycled pulsars.
The spin period of PSR J2150+3427 is consistent with young pulsars, but larger than that of the recycled (old) NS in DNS systems. In addition, the P_b, e, and M_tot of PSR J2150+3427 are consistent with those of the known DNS systems. On the other hand, we also noticed that the M_p-M_c distribution (panel (b) in Figure 3) of PSR J2150+3427 is consistent with three NS-WD systems (PSRs J1141−6545, J2222−0137, B2303+46). This suggests that PSR J2150+3427 may possess a massive WD companion. To better determine whether it belongs to a DNS or a massive NS-WD system, we use the system mass ratio q (q = M_p/M_c) and define an orbital factor Q (Q = P_b/e, with P_b in days) to compare PSR J2150+3427 with known DNS and NS-WD systems.
The Q-q diagram for PSR J2150+3427 and the known DNS systems is shown in Figure 3. As shown in panel (c), the Q-factor of the known DNS systems is less than 37 days, and q is between 0.91 and 1.33 (∼1). Assuming orbital inclinations of i = 90° and 60°, J2150+3427 has q = 1.54(5) and q = 1.20(4), respectively, and the Q of PSR J2150+3427 is ∼17.61 days. Clearly, the Q and q values for PSR J2150+3427 are in good agreement with the DNS systems but differ significantly from most NS-WD systems. In the Q-q diagram, the position of this pulsar is also consistent with the locations of PSRs J1141−6545 and B2303+46, which implies that PSR J2150+3427 may instead have a massive WD companion.
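The inclination-dependent mass ratios quoted above follow from the mass function and the total mass; a sketch reproducing the two q values:

```python
import math

T_SUN = 4.925490947e-6  # s, G*M_sun/c^3
P_B = 10.592 * 86400.0  # s
X = 25.488              # s, projected semimajor axis
M_TOT = 2.59            # solar masses, from omega-dot

# Mass function in solar masses
f_mass = 4 * math.pi**2 * X**3 / (T_SUN * P_B**2)

def mass_ratio(inclination_deg):
    """Solve (Mc * sin i)^3 / Mtot^2 = f for Mc, then return q = Mp/Mc."""
    sin_i = math.sin(math.radians(inclination_deg))
    m_c = (f_mass * M_TOT**2) ** (1 / 3) / sin_i
    m_p = M_TOT - m_c
    return m_p / m_c

q_edge_on = mass_ratio(90.0)  # ~1.54, as quoted in the text
q_60deg = mass_ratio(60.0)    # ~1.20, as quoted in the text
```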
The noticeable differences between PSR J2150+3427 and J2222−0137 are the eccentricity, age, and spin period. PSR J2222−0137 has a shorter period (P ∼ 0.032 s), a more circularized orbit (e ∼ 3.8 × 10⁻⁴), and a younger age, which distinguish it from the DNSs in the Q-q diagram. However, PSR J2150+3427 is an older pulsar in an eccentric orbit, which suggests that it differs from recycled NS-WD systems (e.g., PSR J2222−0137).
PSRs J1141−6545 and B2303+46 are young nonrecycled pulsars (Davies et al. 2002), and their companions are confirmed WDs, which appear to have formed before the NS (van Kerkwijk & Kulkarni 1999; Kaspi et al. 2000; Davies et al. 2002). Their commonality with PSR J2150+3427 means that they cannot be distinguished from the DNSs through the companion star mass and orbital parameters alone. In particular, B2303+46 and J2150+3427 show very similar spin periods, orbital eccentricities, orbital periods, and magnetic fields. However, their characteristic ages differ by about 2 orders of magnitude. Furthermore, the difference in characteristic age between PSRs J2150+3427 and J1141−6545 is about 3 orders of magnitude. This suggests that PSR J2150+3427 does not belong to the young nonrecycled pulsars, unless it is very old.
Known NS-WD systems with orbital periods of 9-11 days have e < 6.6 × 10⁻³, P < 30 ms (MSPs), and characteristic ages between 58 Myr and 45 Gyr (mean value 9.3 Gyr). They have undergone a recycling process. In comparison, PSR J2150+3427 is also an old pulsar (2.86 Gyr) but not significantly recycled (P ∼ 0.654 s and e ∼ 0.601). Based on the fact that the companion star cannot have provided enough mass for the recycling process, PSR J2150+3427's companion star is probably an NS. Moreover, the consistency of M_tot, Q, and q suggests that PSR J2150+3427 likely belongs to a DNS system. The DNS evolution described by Tauris et al. (2017) may be applicable to PSR J2150+3427.
In a DNS system, the first-born NS is the A star (recycled), and the last-born is the B star (nonrecycled). As described by Tauris et al. (2017), the spin period of the recycled pulsar in the observed Galactic disk DNS systems roughly follows the relation P ≃ 44 ms × (P_b/days)^0.26. The P of PSR J2150+3427 is different from the value predicted for recycled DNS pulsars (∼81.27 ms) but similar to nonrecycled DNS pulsars (B stars), which suggests that it is likely to be an aged B star. Like most DNS systems, we did not detect any signal from the second NS. One possible reason is that its beam of radiation does not sweep past the Earth. Therefore, we cannot absolutely confirm whether it is an A star or a B star.
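Evaluating the Tauris et al. (2017) spin-orbit relation for this system's orbital period reproduces the ∼81 ms prediction quoted above (the normalization and exponent are taken to match that quoted value):

```python
P_B_DAYS = 10.592  # orbital period of PSR J2150+3427, days

def predicted_recycled_spin_ms(p_b_days):
    """P (ms) ~ 44 * (Pb/days)^0.26 for the recycled (A star) DNS pulsar."""
    return 44.0 * p_b_days ** 0.26

p_pred = predicted_recycled_spin_ms(P_B_DAYS)  # far below the observed 654 ms
```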
There is an observation of a potential companion of PSR J2150+3427 in Gaia Data Release 3 Part 1 (Gaia Collaboration 2022), with an offset of 0.″0118 from the pulsar's position. The source ID is 1948939718066589952, and the mean magnitude of the source in the G band is 20 mag. This indicates that better telescopes are needed to obtain more information on its distance, temperature, proper motion, and so on. This source could also be a foreground star or a background star.
Using the equations described by Bergeron et al. (1995) and Ruiz et al. (1995), the visual magnitude (m) of a WD can be estimated from its radius R, its distance d, and the monochromatic Eddington flux H_λ (in units of erg cm⁻² s⁻¹ Hz⁻¹ sr⁻¹), whose value is given in Table 1 of Bergeron et al. (1995). For B2303+46's WD companion, at a DM distance of 4.3 kpc and for a cooling timescale of ∼30 Myr, a 1.3 M⊙ WD counterpart would have m ∼ 25.8. This value is close to the measured value of 26.60(9) mag in the B band (van Kerkwijk & Kulkarni 1999). For PSR J2150+3427, we assume that the companion has an age close to the characteristic age of the pulsar (∼2.88 Gyr). When using the DM distance from the YMW16 model (4.77 kpc) and a WD mass of 1.02 M⊙ (i = 90°), the magnitude is estimated to be m ∼ 28.5. At the DM distance given by the NE2001 model, the value is estimated to be m ∼ 27.5. When using a WD mass of 1.18 M⊙ (i = 60°), the estimates are m ∼ 29.1 and m ∼ 28.1, respectively. The companion of PSR J2150+3427 is thus more difficult to observe than that of B2303+46. The pulsar could be nonrecycled and just very old; if that were the case, the companion could be a WD.
The Pulsar Death Line
Currently, PSR J2150+3427 is the binary pulsar with the lowest spin-down luminosity (PSRCAT version 1.69). Its rotational parameters imply Ė = 5.10(10) × 10²⁹ erg s⁻¹, and it is an old pulsar (τ_c = 2.86(6) × 10⁹ yr). This places it below the typical death line (black line in the P–Ṗ diagram shown in Figure 4), where few pulsars are known. Traditionally, radio-quiet pulsars are expected to be located below the death line. Various death lines have been presented in the P–Ṗ diagram by earlier investigators. In Figure 4, the black line is a typical death line based on curvature radiation (CR) in the vacuum gap (V) model, as proposed by Ruderman & Sutherland (1975). So far, 55 pulsars (PSRCAT version 1.69) have been found below this line, meaning that the model cannot explain the origin of radio emission from these sources. Chen & Ruderman (1993) defined the region between two death lines as the pulsar death valley (Equations (6) and (9) in their article). As shown in Figure 4, the black dashed line is the death line given by Equation (9) of Chen & Ruderman (1993); about 38 pulsars lie below it. Neither the typical death line nor the death valley can explain the emission of PSR J2150+3427.
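The quoted Ė and τ_c follow from the standard magnetic-dipole spin-down expressions, assuming the canonical moment of inertia I = 10⁴⁵ g cm²:

```python
import math

# Spin parameters of PSR J2150+3427 quoted in the text
P = 0.654            # spin period [s]
Pdot = 3.62e-18      # observed period derivative [s/s]
I = 1e45             # canonical NS moment of inertia [g cm^2] (assumption)

# Spin-down luminosity: Edot = 4 pi^2 I Pdot / P^3
Edot = 4 * math.pi**2 * I * Pdot / P**3

# Characteristic age: tau_c = P / (2 Pdot), converted to years
tau_c = P / (2 * Pdot) / 3.156e7

print(f"Edot  ~ {Edot:.2e} erg/s")   # ~5.1e29 erg/s
print(f"tau_c ~ {tau_c:.2e} yr")     # ~2.9e9 yr
```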
We have also explored the death-line models of Zhang et al. (2000) for PSR J2150+3427. In Figure 4, the death lines predicted by the CR-induced space-charge-limited flow (SCLF) model and by the inverse Compton scattering (ICS)-induced SCLF model are indicated by the blue and green dashed-dotted lines, respectively. The blue dashed line is the death line predicted by the CR-induced vacuum gap model of Zhang et al. (2000). These models cannot explain the radio emission of PSR J2150+3427, and we also noted that the CR-V and ICS-V models of Zhang et al. (2000) fail to do so. This suggests that the death-line models need improvement.
As described by Zhou et al. (2017), different equations of state (EOSs) for a pulsar result in different death lines. The mass of PSR J2150+3427 is <1.67 M⊙. The moment of inertia and radius for an NS of a specific mass, together with the associated EOSs, are given in Table 1 of Zhou et al. (2017). Like Zhou et al. (2017), we adopt the typical potential drop ΔV = 10¹² V in the polar-cap accelerating region. At an inclination angle of 90°, the death-line model with the EOS named wwf1 in Zhou et al. (2017) can explain the radio luminosity of PSR J2150+3427 (green solid line in Figure 4). Szary et al. (2014) proposed an alternative explanation for the cessation of pulsar emission: they suggested that radio emission has a maximum possible efficiency, ξ_max ≡ L/Ė = 0.01, where L is the radio luminosity, which can be obtained from Equation (2) of Szary et al. (2014). Assuming a spectral index of −1.6 (Lorimer et al. 1995), the radio efficiency of PSR J2150+3427 is ∼0.025 and 0.064, based on the distances predicted by the NE2001 and YMW16 models, respectively. Both values exceed the maximum possible efficiency proposed by Szary et al. (2014). Based on the maximum radio efficiency of pulsars (Szary et al. 2014) and the telescope sensitivity, the possible minimum Ė ("observation-limit line") at the distance of PSR J2150+3427 is ∼5.2 × 10²⁸ erg s⁻¹. The Ė of PSR J2150+3427 is greater than this Ė_min, which suggests that the "observation-limit line" is consistent with its observed flux density.
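Since Ė is fixed and the radio luminosity scales as L ∝ d² at fixed flux density, the two quoted efficiencies should differ by the squared ratio of the two DM distances; a quick consistency check:

```python
# Radio efficiency xi = L/Edot scales as d^2 at fixed flux density and Edot,
# so the two quoted efficiencies should differ by the squared ratio of the
# two DM distances (3 kpc from NE2001, 4.77 kpc from YMW16).
d_ne2001 = 3.0    # kpc
d_ymw16 = 4.77    # kpc

ratio = (d_ymw16 / d_ne2001) ** 2
print(f"(d_YMW16/d_NE2001)^2 = {ratio:.2f}")    # ~2.53
print(f"0.064 / 0.025       = {0.064/0.025:.2f}")  # ~2.56, consistent
```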
Conclusions and Future Work
In this paper, we have reported the discovery and timing campaign of PSR J2150+3427, a 654 ms binary pulsar in an eccentric orbit (e = 0.601) of 10.592 days around a possible NS. Using our 2 yr data set, we measured the rate of periastron advance, whose value suggests that the total mass of the PSR J2150+3427 system is consistent with the known DNSs in our Galaxy. Based on the orbital factor and mass ratio, PSR J2150+3427's companion star is probably an NS. Currently, it is the known pulsar with the smallest Ė in a binary system outside of globular clusters (PSRCAT version 1.69).
The ω̇ of all known DNSs are in the range 0.00078(4) to 25.6(3) deg yr⁻¹, with an average value of 5.4(3) deg yr⁻¹. The ω̇ of PSR J2150+3427 is in accordance with that of the DNS population. Continued follow-up observations may determine the masses of PSR J2150+3427 and its companion via detection of more relativistic effects (PK parameters). One of these parameters, known as the Shapiro delay, can be
observed in highly inclined (nearly edge-on) DNS systems (Shapiro 1964). As described by Freire & Wex (2010), a new PK parameter, h₃ = r ς³, can be used, where h₃ is the "orthometric amplitude" of the Shapiro delay, r is the "range" of the Shapiro delay, ς is the "orthometric ratio", expressed as the ratio of the amplitudes of successive harmonics of the Shapiro delay, and s is the "shape" of the Shapiro delay. For an inclination angle of 60°, the value of h₃ would be 1.2 μs, which is probably undetectable given our EQUAD of 9 μs. If the Shapiro delay cannot be detected, measuring the mass of the companion would require using the Einstein delay, which will take decades.
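A sketch of the quoted h₃ estimate, using the standard orthometric parametrization h₃ = r ς³ with ς = sin i/(1 + cos i) and assuming the 1.18 M⊙ companion mass quoted earlier for i = 60° (both inputs are taken from the text, not fitted here):

```python
import math

T_SUN = 4.9254909e-6    # GM_sun / c^3 [s]

i = math.radians(60)    # assumed orbital inclination
m_c = 1.18              # companion mass at i = 60 deg [Msun] (from the text)

r = T_SUN * m_c                           # Shapiro "range" [s]
sigma = math.sin(i) / (1 + math.cos(i))   # orthometric ratio
h3 = r * sigma**3                         # orthometric amplitude [s]

print(f"h3 ~ {h3 * 1e6:.2f} microseconds")   # ~1.1 us, near the quoted 1.2 us
```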
Figure 1.
Figure 1. Integrated polarization profiles of PSR J2150+3427. In the upper panel, the black dots with error bars are the PA points. The total intensity, linearly polarized flux, and circularly polarized flux are shown in black, red, and blue, respectively, in the lower panel.
Assuming a system inclination of 90° (Lorimer 2008), the mass function given in Equation (1) implies a minimum companion mass of M_c = 0.94 M⊙. The orbital eccentricity allows us to measure one post-Keplerian (PK) parameter, the rate of periastron advance, which is estimated to be ω̇ = 0.0115(4) deg yr⁻¹. If ω̇ is purely relativistic, the total mass M_tot of the binary system can be determined from ω̇ = 3 (P_b/2π)^(−5/3) (T⊙ M_tot)^(2/3) (1 − e²)^(−1), where T⊙ = GM⊙/c³ = 4.925490947 μs.
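Inverting the relativistic periastron-advance formula for M_tot with the measured orbital parameters gives a total mass typical of Galactic DNS systems:

```python
import math

T_SUN = 4.9254909e-6   # GM_sun / c^3 [s]

# Orbital parameters quoted in the text
omdot = 0.0115 * math.pi / 180 / 3.15576e7   # periastron advance [rad/s]
Pb = 10.592 * 86400                          # orbital period [s]
e = 0.601                                    # eccentricity

# Invert omdot = 3 (Pb/2pi)^(-5/3) (T_sun Mtot)^(2/3) / (1 - e^2):
m_tot = (omdot * (1 - e**2) / 3) ** 1.5 * (Pb / (2 * math.pi)) ** 2.5 / T_SUN

print(f"M_tot ~ {m_tot:.2f} Msun")   # ~2.6 Msun, typical of Galactic DNSs
```

With M_c = 0.94 M⊙ at i = 90°, this also bounds the pulsar mass near the <1.67 M⊙ limit quoted elsewhere in the text.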
Here Ṗ_int is the intrinsic spin-down rate and Ṗ_shk is the Shklovskii correction to the observed period derivative, Ṗ_shk = (V²/cd) P = (μ²d/c) P, where V = μd is the transverse velocity and d is the distance to the pulsar; the V²/cd and μ²d/c terms constitute the so-called Shklovskii effect (Shklovskii 1970). Hence, determination of Ṗ_int and V depends on reliable distances. According to the results of Deller et al. (2019) on parallax distances for 57 pulsars, both electron-density models (NE2001, Cordes & Lazio 2002; and YMW16, Yao et al. 2017) have significant shortcomings, especially in the high-latitude regions of the Galaxy. The Galactic latitude of PSR J2150+3427 is b = −15°.002, so its DM distance may not be reliable. At the DM distance derived from the NE2001 model (3 kpc), we obtain an upper transverse velocity limit of V < 392(4) km s⁻¹; with the distance derived from the YMW16 model (4.77 kpc), V < 495(5) km s⁻¹. The maximum Ṗ_shk obtained from both distances is greater than Ṗ_obs (3.62(7) × 10⁻¹⁸ s s⁻¹).
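Setting Ṗ_shk equal to the observed Ṗ reproduces the quoted velocity limits at the two DM distances:

```python
import math

# Shklovskii correction: Pdot_shk = (V^2 / (c d)) * P. Setting Pdot_shk
# equal to the observed Pdot gives the upper limits on the transverse
# velocity quoted in the text for each DM distance.
C = 2.99792458e10        # speed of light [cm/s]
KPC = 3.0857e21          # one kiloparsec [cm]
P = 0.654                # spin period [s]
Pdot_obs = 3.62e-18      # observed period derivative [s/s]

v_max = {}
for d_kpc in (3.0, 4.77):                                  # NE2001, YMW16
    v = math.sqrt(Pdot_obs * C * d_kpc * KPC / P) / 1e5    # km/s
    v_max[d_kpc] = v
    print(f"d = {d_kpc} kpc -> V < {v:.0f} km/s")
```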
Figure 3.
Figure 3. Panel (a): the mass–mass diagram of PSR J2150+3427 obtained from pulsar timing assuming general relativity. The gray region is excluded based on the mass function (black dotted line) and orbital geometry. The red line corresponds to ω̇ = 0.0115(4) deg yr⁻¹, and the dotted lines show the region consistent within the uncertainty in ω̇. Panel (b): mass–mass diagram for known pulsars and PSR J2150+3427. Panel (c): the Q–q diagram of PSR J2150+3427 and the systems with measured NS masses. Here, q = M_p/M_c is the system mass ratio and Q = P_b/e (day) is the orbital factor. The blue diamonds are DNS systems with individual NS mass measurements; the green dots are NS–WD systems with individual NS mass measurements. The black and red stars for the PSR J2150+3427 system are calculated assuming orbital inclinations of i = 90° and 60°, respectively.
Based on the maximum radio efficiency of pulsars (Szary et al. 2014) and the sensitivity of the radio telescope, Wu et al. (2020) proposed an "observation-limit line" to explain the low-Ė pulsars discovered in the "graveyard" of the P–Ṗ diagram, according to the sensitivity of FAST.
Figure 4.
Figure 4. The P–Ṗ diagram of the known pulsars, including PSR J2150+3427. The binary pulsars are indicated in black, DNS systems are represented by the blue circles, and PSR J2150+3427 is indicated by the red star. The blue and green dashed-dotted lines are the death lines predicted by the CR-induced space-charge-limited flow (SCLF) model and the inverse Compton scattering-induced SCLF model, respectively (Zhang et al. 2000). The blue dashed line is the death line predicted by the CR-induced vacuum gap model proposed by Zhang et al. (2000). The black line is a typical death line predicted by CR from the vacuum gap model (Ruderman & Sutherland 1975). The black dashed line is modeled by Equation (9) of Chen & Ruderman (1993). The green solid line is the death-line model from Zhou et al. (2017).
"Physics"
] |
Pro-Oxidant Activity of Amine-Pyridine-Based Iron Complexes Efficiently Kills Cancer and Cancer Stem-Like Cells
Differential redox homeostasis in normal and malignant cells suggests that pro-oxidant-induced upregulation of cellular reactive oxygen species (ROS) should selectively target cancer cells without compromising the viability of untransformed cells. Consequently, a pro-oxidant deviation well-tolerated by nonmalignant cells might rapidly reach a cell-death threshold in malignant cells already at a high setpoint of constitutive oxidative stress. To test this hypothesis, we took advantage of a selected number of amine-pyridine-based Fe(II) complexes that operate as efficient and robust oxidation catalysts of organic substrates upon reaction with peroxides. Five of these Fe(II)-complexes and the corresponding aminopyridine ligands were selected to evaluate their anticancer properties. We found that the iron complexes failed to display any relevant activity, while the corresponding ligands exhibited significant antiproliferative activity. Among the ligands, none of which were hemolytic, compounds 1, 2 and 5 were cytotoxic in the low micromolar range against a panel of molecularly diverse human cancer cell lines. Importantly, the cytotoxic activity profile of some compounds remained unaltered in epithelial-to-mesenchymal (EMT)-induced stable populations of cancer stem-like cells, which acquired resistance to the well-known ROS inducer doxorubicin. Compounds 1, 2 and 5 inhibited the clonogenicity of cancer cells and induced apoptotic cell death accompanied by caspase 3/7 activation. Flow cytometry analyses indicated that ligands were strong inducers of oxidative stress, leading to a 7-fold increase in intracellular ROS levels. ROS induction was associated with their ability to bind intracellular iron and generate active coordination complexes inside of cells. In contrast, extracellular complexation of iron inhibited the activity of the ligands. 
Iron complexes showed a high proficiency to cleave DNA through oxidation-dependent mechanisms, suggesting a likely mechanism of cytotoxicity. In summary, we report that, upon chelation of intracellular iron, the pro-oxidant activity of amine-pyridine-based iron complexes efficiently kills cancer and cancer stem-like cells, thus providing functional evidence for an efficient family of redox-directed anticancer metallodrugs.
Introduction
Cancer cells undergo metabolic adaptations to sustain their uncontrolled growth and proliferation. Diverse intrinsic and extrinsic molecular mechanisms contribute to this metabolic reprogramming to supply cancer cells with sufficient energy and biosynthetic capacity in the tumor environment [1,2]. Altered metabolism together with activated oncogenic signaling and deregulation of mitochondrial function typically results in an increase in the generation of reactive oxygen species (ROS) in cancer cells [3,4]. Interestingly, this phenomenon leads to a differential redox homeostasis in normal and malignant cells that is gaining ground as a promising target for the design of more selective and effective anticancer agents [5][6][7][8].
Highly reactive ROS are produced in cells by the incomplete reduction of molecular oxygen to water during aerobic metabolism. ROS are normally regulated by cellular defensive antioxidants [9,10] and participate in multiple cellular functions including signal transduction, enzyme activation, gene expression and protein post-translational modifications [11]. When generated in excess or when the efficiency of the cellular antioxidant system is submaximal, ROS accumulate and cause irreversible cellular damage through the oxidation of biomolecules such as lipid membranes, enzymes or DNA which generally leads to cellular death [12]. ROS can also promote cancer initiation and progression by inducing DNA mutations and pro-oncogenic signaling pathways [13,14].
Increased ROS in cancer cells upregulates the antioxidant response, resulting in a new redox balance that enables these cells to maintain higher ROS levels than normal cells. Consequently, cancer cells exhibit persistent oxidative stress, which promotes cell proliferation but is insufficient to cause cellular death [4,13]. This altered homeostasis renders cancer cells vulnerable to exogenous oxidizing agents that generate additional ROS, which are likely to increase oxidative stress levels above the cytotoxic threshold. This susceptibility is heightened by the restricted capacity of cancer cells to strengthen the antioxidant response to neutralize the oxidative insult [15]. In contrast, normal cells can tolerate higher levels of exogenous ROS stress since they exhibit lower constitutive ROS levels together with a superior responsiveness of antioxidant systems. In fact, it is well described that, in addition to their direct effects on DNA and cell division, the mechanism of action of many chemotherapeutic agents such as 5-fluorouracil, bleomycin, cisplatin, doxorubicin or paclitaxel also involves ROS-mediated apoptosis [13,[16][17][18][19].
While the biological effects of ROS and the mechanisms regulating ROS levels are well established in cancer cells, little is known about the role of ROS in the cancer stem cell (CSC) subpopulation, which displays a high capacity for self-renewal and differentiation and the potential to generate tumors with marked chemo-/radioresistance [20,21]. CSCs contain lower levels of ROS than non-CSCs, likely as a consequence of enhanced free-radical scavenging systems [22]. Low ROS levels might be related to the privileged status of this subset of cells, preserving DNA integrity and protein function, which is critical to maintain the potential for self-renewal and stemness [23,24]. Thus, exogenous ROS elevation might be an approach to kill the CSC subpopulation, which is normally enriched after conventional chemotherapy. Indeed, niclosamide and arsenic trioxide (As2O3), which are potent ROS inducers, have been shown to promote CSC death [25].
A number of anticancer agents that target the cellular redox balance are in different phases of preclinical and clinical development [5,6]. Mechanistically, these agents either inhibit the cellular antioxidant defense systems [27][28][29] or generate ROS [30][31][32]. In addition to these agents, transition metal-based compounds may be promising candidates for pro-oxidant therapies. When accumulated in cells, metals such as iron, manganese and copper, undergo cycling redox reactions that generate high levels of ROS, principally the highly-damaging hydroxyl radical species through the Fenton reaction. This metal-mediated form of oxidative stress is a well-known cause of cell death [33], and thus, an increasing number of investigations are exploring the potential of metallodrugs in redox-based anticancer therapies [34][35][36][37].
Transition metal complexes with aminopyridine-containing organic scaffolds have emerged as powerful catalysts for the oxidation of organic substrates. These complexes are also regarded as bioinspired catalysts since they reproduce structural and reactivity properties of oxidative enzymes. A key aspect of their activity is their strong binding to iron and manganese ions, generating powerful oxidants after reacting with peroxides [38][39][40][41][42][43]. These oxidant compounds function as catalysts to promote the oxidation of inert molecules such as alkanes, alkenes and even the challenging water molecule. The mechanism of action involves ferric-peroxide species, chemically reminiscent of activated bleomycin. In addition, these compounds are highly resistant to self-oxidation. With this background, we here assessed the antiproliferative and cytotoxic activity profiles of five amine-pyridine-based Fe(II) complexes, which have previously been shown to be particularly active in peroxide activation reactions [38][39][40][41][42][43], and the corresponding metal-free ligands, against a panel of diverse human cell lines including epithelial-to-mesenchymal (EMT)-induced stable populations of cancer stem-like cells and nonmalignant cells. The most active compounds were further analyzed for their ability to inhibit the clonogenicity of cancer cells, modulate the cell cycle and induce cell death. The capacity of the amine-pyridine-based iron complexes to generate ROS and cause DNA damage was evaluated together with the influence of the chelation of intracellular iron on their cytotoxic profile. Based on the lethal disruption in the redox balance caused by these complexes in cancer and ROS-resistant cancer stem-like cells, we provide strong functional evidence for an efficient family of redox-directed anticancer metallodrugs.
Cytotoxicity Assays
The cytotoxic activity of the compounds was determined by MTT reduction assay as described [45]. Compounds were diluted in Milli-Q water to obtain 1 mmol/L stock solutions. Appropriate aliquots of these solutions were diluted in the corresponding cell culture medium to obtain the final working concentrations. Aliquots of 5000 1BR3G cells, 6000 MCF-7, 6000 PC-3 cells, 10 000 CAPAN-1 cells, 4000 MCF 10A cells, 4000 HMLE cells or 4000 CCD-18Co cells were seeded in 96-well plates, 24 h prior to the treatments. Hematological cell lines were seeded at 400 000 cells/mL. Cells were treated with the corresponding compound at concentrations ranging from 0 to 100 μmol/L for 48 h. Three replicates for each compound were used. The IC 50 was established for each compound by standard non-linear regression and curve fitting using GraphPad Prism (Graph Pad software Inc., La Jolla, CA, USA).
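The IC50 estimation described above can be sketched as a nonlinear regression of dose-response data. The study used GraphPad Prism; here a four-parameter logistic model is fitted with SciPy to illustrative, simulated data (the model form and all numbers below are assumptions, not study data):

```python
# Minimal sketch of IC50 determination by nonlinear regression on simulated
# MTT dose-response data (illustrative only; not the study's data).
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (% viability)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.5, 1, 2.5, 5, 10, 25, 50, 100])   # umol/L
rng = np.random.default_rng(0)
# Simulate viability with a true IC50 of 6.5 umol/L plus small noise
viability = logistic4(conc, 5, 100, 6.5, 1.5) + rng.normal(0, 1, conc.size)

popt, _ = curve_fit(logistic4, conc, viability, p0=[0, 100, 5, 1])
print(f"fitted IC50 ~ {popt[2]:.1f} umol/L")
```

The fitted IC50 recovers the simulated value to within the noise, mirroring the curve-fitting step performed in Prism.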
Hemolytic assay
The hemolytic activity of the compounds at 100 μmol/L was evaluated by determining hemoglobin release from erythrocyte suspensions of fresh human blood (5% vol/vol) as described [46].
Colony formation assay
MCF-7 cells were seeded in 12-well plates. Twenty-four hours later, cells were treated with cisplatin or compound 1, 2 or 5 at 10 μmol/L, or vehicle alone as a control, for 3 or 24 h at 37°C. Additionally, cells were exposed to compound 1 for 3, 6, 12 and 24 h. Subsequently, cells were washed with PBS, collected with trypsin and plated at low density (3000 cells in a 360-mm plate). Cells were allowed to divide and form colonies for 7-10 days, after which colonies were fixed and stained with 2% methylene blue in 50% ethanol. The number of colonies in each plate was determined using the Alpha Innotech Imaging system (Alpha Innotech, San Leandro, CA).
Caspase activity analysis
Enzymatic caspase activity was determined after exposing the cells to compounds 1, 2 and 5 at 10 μmol/L for 48 h. Caspase 3/7 activity was measured with the luminometric Caspase-Glo 3/7 assay (Promega, Madison, WI, USA) using a Synergy HT multi-detection microplate reader (Bio-Tek).
Cell cycle analysis
Cell cycle profiles were analyzed by flow cytometry of PI-stained cells. Briefly, cells were collected by centrifugation, washed in ice-cold PBS and fixed for 30 min at 4°C in 70% ethanol. After washing twice with PBS, DNA was stained with 50 μg/mL PI in the presence of 50 μg/ml RNase A. Stained cells were then processed using a FACScan flow cytometer (Coulter Epics XL-MSL; Beckman Coulter, Fullerton, CA, USA) and winMDI software.
ROS measurement
Cellular ROS content was determined using the 2′,7′-dichlorodihydrofluorescein diacetate probe (H2DCFDA). Cells were seeded in 24-well plates (50 000 cells/well) in phenol red-free DMEM 24 h prior to treatments. Cells were treated with different concentrations of compound 1, 2 or 5 (2.5, 5 or 10 μmol/L), or vehicle alone as a control, for 5 or 24 h at 37°C. In some experiments, cells were co-treated with the compounds plus 5 mmol/L NAC. After treatments, cells were washed with PBS and incubated with 1 μmol/L H2DCFDA in PBS for 30 min in the dark. After washing, cells were collected with trypsin and analyzed by flow cytometry using a FACSCalibur flow cytometer (Becton Dickinson Immunocytometry Systems, Mountain View, CA, USA). The geometric mean fluorescence intensity of 10 000 cells was established using CellQuest software (Becton Dickinson). The fluorescence fold-increase versus untreated cells was determined for each treatment.
Determination of cellular labile iron pool
The cellular labile iron pool was determined with calcein-AM. CAPAN-1 cells (125 000 cells/well) were seeded in 24-well plates and incubated for 24 h. Then, cells were treated for 24 h with 10 μmol/L of compound 1, 2 or 5 at 37°C. In some experiments, cells were incubated for 2 h with 100 μmol/L DFO or 100 μmol/L FeCl2. Cells exposed to the vehicle alone were used as a control. After treatments, cells were washed with PBS and incubated with calcein-AM (0.25 μmol/L) for 30 min at 37°C in the dark. Subsequently, cells were washed and collected with trypsin, and the geometric mean fluorescence intensity of 10 000 cells was determined by flow cytometry as described.
Cellular DNA damage analysis
DNA damage was assessed by monitoring the intensity of p-H2A.X fluorescence using flow cytometry. Briefly, cells were collected with trypsin, washed in PBS and fixed in 3.7% formaldehyde for 15 min on ice. Cells were then permeabilised with 0.2% v/v Triton X-100 for 10 min and incubated with 1:400 rabbit anti-p-(S139)-H2A.X antibody (Cell Signaling Technology, Danvers, MA) for 30 min on ice. After washing in 0.1% Triton X-100 in PBS, cells were incubated with 1:400 anti-rabbit Alexa 555-conjugated antibody (Jackson ImmunoResearch, Newmarket, UK) for 20 min on ice. Analysis was carried out in a FACScan flow cytometer with Flowing software.
DNA cleavage analysis
DNA cleavage was monitored by agarose gel electrophoresis. A stock solution of pUC18 DNA was freshly prepared in Milli-Q water at a concentration of 0.5 μg/μL (1512 μmol/L nucleotides; 756 μmol/L bp). Reactions were performed by mixing 0.5 μL of pUC18 with appropriate aliquots of the compounds and 1 μL of activating agent solution (35% wt/vol H2O2 in H2O). Cacodylate buffer (0.1 M, pH 6.0) was added to the mixture to give a final volume of 20 μL. The final concentration of pUC18 DNA was 37.8 μmol/L in nucleotides (18.9 μmol/L bp). Samples were incubated for 1 h at 37°C; reactions were quenched by adding 6 μL of a buffer solution consisting of bromophenol blue (0.25%), xylene cyanol (0.25%) and glycerol (30%). Subsequently, the samples were subjected to electrophoresis in 0.8% agarose gels in 0.5× TBE buffer (0.045 mol/L Tris, 0.045 mol/L boric acid and 1 mmol/L EDTA) at 100 V for 1 h and 40 min. Gels were stained with ethidium bromide (10 mg/mL in TBE) for 15 min and visualized under UV transillumination. DNA bands were captured using the ProgRes CapturePro 2.7 system, and the intensity of each band was quantified with GelQuant version 2.7 software (DNR Bio-Imaging Systems, Jerusalem, Israel) using a correction factor of 1.31 to compensate for the reduced ethidium bromide uptake of supercoiled plasmid pUC18 DNA [47]. The proportion of the different forms of plasmid DNA was established for each treatment. To test the involvement of ROS in strand scission and possible complex-DNA interaction sites, various ROS scavengers and groove binders were added to the reaction mixtures. The scavengers used were Tiron (10 mmol/L), sodium azide (0.4 mol/L) and dimethyl sulfoxide (DMSO, 3 μL). The groove binders used were methyl green (20 mmol/L) and Hoechst (40 μmol/L).
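The band-quantification step with the 1.31 supercoiled-staining correction can be sketched as follows (the band intensities are illustrative values, not measurements from the study):

```python
# Sketch of the plasmid-form quantification described above: band
# intensities are corrected for the reduced ethidium bromide uptake of
# supercoiled DNA (factor 1.31 [47]), then expressed as fractions.
CORRECTION = 1.31   # supercoiled-band staining correction factor

def plasmid_fractions(supercoiled, nicked, linear=0.0):
    """Return the fraction of each plasmid form after correcting the
    supercoiled band intensity (inputs: raw band intensities)."""
    sc = supercoiled * CORRECTION
    total = sc + nicked + linear
    return {form: val / total for form, val in
            (("supercoiled", sc), ("nicked", nicked), ("linear", linear))}

# Illustrative raw intensities: 100 (supercoiled) and 69 (nicked)
fracs = plasmid_fractions(supercoiled=100.0, nicked=69.0)
print({k: round(v, 3) for k, v in fracs.items()})
```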
Statistical analysis
Statistical analysis was performed with SPSS statistical software for Windows (version 15.0; SPSS Inc., Chicago, IL, USA). Quantitative variables were expressed as mean and standard deviation (SD) of at least three independent experiments. The normality of the data was tested using the Shapiro-Wilk test. The differences between data with normal distribution and homogeneous variances were analyzed using the parametric Student's t test. A value of p<0.05 was considered significant.
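The workflow above (normality check, then Student's t test) can be reproduced with SciPy in place of SPSS; the data below are illustrative stand-ins for the study's measurements:

```python
# Sketch of the statistical workflow: Shapiro-Wilk normality test followed
# by a parametric Student's t test, on simulated (illustrative) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100, scale=5, size=30)   # e.g. untreated viability
treated = rng.normal(loc=80, scale=5, size=30)    # e.g. compound-treated

# Test normality of each group (p > 0.05 -> no evidence of non-normality)
for name, sample in (("control", control), ("treated", treated)):
    w, p_norm = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")

# Parametric comparison, valid when both groups are normally distributed
t, p = stats.ttest_ind(control, treated)
print(f"t test p = {p:.2e} -> significant at p < 0.05: {p < 0.05}")
```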
Compounds 1, 2 and 5 are highly cytotoxic against cancer and cancer stem-like cells
We first evaluated the antiproliferative activity of the compounds in two human cancer cell lines, MCF-7 and CAPAN-1. Compounds were tested at concentrations ranging from 0 to 100 μmol/L to determine the concentration required to inhibit cell growth by 50% (IC50). Compounds with IC50 values greater than 100 μmol/L were considered inactive. Only three of the five iron complexes (1-Fe, 3-Fe, 4-Fe) demonstrated a measurable antiproliferative effect in MCF-7 cells, while none of the iron complexes were active against CAPAN-1 cells (Table 1). The antiproliferative activity of 3-Fe and 4-Fe was rather modest (IC50 = 73.5±0.7 μmol/L and 63.5±2.1 μmol/L, respectively). In contrast, all iron-free ligands were cytotoxic in both cell lines analyzed, with IC50 values ranging from 3.7±0.4 to 88.5±0.7 μmol/L in MCF-7 cells and from 6.0±0.7 to 32.0±10.4 μmol/L in CAPAN-1 cells. These values are within the range of well-established anticancer agents such as cisplatin assayed under the same conditions (Table 1). Given the weak antiproliferative activity of the iron complexes, we focused on the metal-free organic compounds and evaluated their cytotoxicity against a selection of tumor (PC-3, Z-138 and JURKAT) and nonmalignant (HMLE, MCF 10A, 1BR3G and CCD-18Co) cell lines (Table 2). Compound 2 was the most active ligand against tumor cells, with IC50 values ranging from 3.8±0.2 to 7.2±1.9 μmol/L. Compounds 1 and 5 also exhibited low IC50 values (from 4.8±1.2 to 15.1±3.1 μmol/L and from 2.9±0.4 to 7.7±0.3 μmol/L, respectively), while a more moderate antitumor activity was obtained for ligands 3 and 4. Only in the normal colon cell line CCD-18Co was the activity of the compounds lower than in the tumor cell lines. Importantly, none of the ligands were hemolytic, even at 100 μmol/L (Table 2).
The antitumor properties of the ligands were further evaluated in a panel of cell lines, including human leukemia, lymphoma and glioma cancer cells, by analyzing their cytotoxic effects at 10 μmol/L. As anticipated, compounds 1, 2 and 5 also displayed high antiproliferative activity against these cell lines (Fig 2A), demonstrating their ability to be broadly active antitumor agents.
To gain insight into the cytotoxic potency of ligands 1, 2 and 5, their antiproliferative activity was evaluated in a stable breast cancer stem (CS)-like cell line (HMLER-shEcad). This cell line was originally established from triple oncogenic transformed and immortalized human mammary epithelial cells (HMLER), wherein knockdown of E-cadherin triggered an epithelial-mesenchymal transition (EMT) that resulted in cells with features characteristic of CSCs [44,48]. As expected, CS-like HMLER-shEcad cells were more resistant to the well-known chemotherapy agent doxorubicin than non-CS-like HMLER isogenic control cells [48] (IC50 = 0.3±0.02 μmol/L vs 0.10±0.02 μmol/L, respectively, representing a ~3-fold increase in IC50) (Fig 2B). In contrast, the cytotoxic profile of ligands 1 and 2 remained largely unaltered in CS-like HMLER-shEcad cells (IC50 = 5.3±0.7 μmol/L and 6.8±0.1 μmol/L, respectively) relative to HMLER cells (IC50 = 6.5±0.4 μmol/L and 6.6±0.3 μmol/L, respectively) (Fig 2B), indicating that these ligands induce cell death through a mechanism that cannot be repressed by the chemoresistant CS-like phenotype. Moreover, compound 1 displayed some selective cytotoxicity towards HMLER-shEcad cells. In contrast, HMLER-shEcad cells exhibited some resistance to ligand 5-induced cytotoxicity (IC50 = 8.6±0.5 μmol/L compared with 5.1±0.1 μmol/L in HMLER cells) (Fig 2B). The long-term activity of the ligands was determined by measuring their ability to inhibit the clonogenic potential of cancer cells. Thus, MCF-7 cells were treated for 3 or 24 h with 10 μmol/L of ligand 1, 2 or 5, or cisplatin as a positive control, followed by plating at low density. Analysis of colony numbers after 10 days revealed a marked inhibitory effect of compound 2 on colony formation; the number of colonies was significantly reduced, by 39% compared with control cells, after 3 h exposure to the ligand (Fig 3A).
Furthermore, the clonogenicity of MCF-7 cells was almost abolished after 24 h exposure to compound 2, revealing a greater inhibitory activity than cisplatin. At this time point, compounds 1 and 5 also significantly reduced colony numbers, by 57% and 53%, respectively, although their activity was lower than that of compound 2, in agreement with the antiproliferative activity of the ligands (Table 2). In contrast to cisplatin treatment, inhibition of cell growth by the ligands was time-dependent: exposure of MCF-7 cells to ligand 1 for 3, 6, 12 and 24 h reduced the number of colonies by 0%, 18.7%, 40.5% and 56.6%, respectively (Fig 3B). These results indicate that the ligands trigger a delayed cell death mechanism that requires several hours to take place.
Compounds 1, 2 and 5 promote cell cycle arrest and apoptosis
To determine whether the ligands induce cellular death through the activation of programmed cell death (apoptosis), the activation of the executioner caspases, caspase-3 and -7, was analyzed using a luminometric assay in a panel of human cancer cell lines. Cells were treated with the ligands at 10 μmol/L and caspase activity was monitored after 48 h. All three ligands activated caspase 3/7 to some extent (Fig 4). Compound 2 treatment clearly increased caspase 3/7 activity in all cell lines in comparison with untreated controls. Interestingly, treatment with compound 1 led to significant caspase 3/7 activation in lymphoma (Z-138, Jeko-1, Granta and SP-53) but not in leukemia (JURKAT) or glioma (LN229 and U87MG) cell lines. Compound 5 induced a broad pro-apoptotic effect, activating caspase 3/7 in most cell lines except PC-3, and was the most effective compound against JURKAT cells (Fig 4). Importantly, these results correlate strongly with the profile of cytotoxic effects induced by compounds 1, 2 and 5 (Fig 2A), and suggest that compounds 2 and 5 promote cell death chiefly by inducing apoptosis. These results were confirmed by analyzing the effect of caspase inhibition on the cytotoxic activity of the compounds. As shown in Fig 4B, the pan-caspase inhibitor Q-VD-OPh significantly reversed the cytotoxicity of compounds 1 and 5 in MCF-7 and CAPAN-1 cells, inducing a 2.5- to 4-fold increase in cell viability. Notably, in agreement with our previous observations, compound 2 displayed a very high cytotoxic activity, which may explain the lack of reversal in the presence of the caspase inhibitor under these experimental conditions. These findings support that the cytotoxic activity of these compounds involves caspase-dependent apoptosis.
To explore the effect of ligands on cell cycle progression, the cell cycle distribution of MCF-7 and LN229 cells was examined by flow cytometry after 24 and 48 h exposure to compounds 1, 2 and 5 (10 μmol/L). In agreement with its robust cytotoxic activity in both cell lines, compound 2 increased the proportion of cells in G1 at 24 h, followed by a dramatic induction of apoptosis at 48 h as indicated by the increase in the sub-G1 population (Fig 5). In contrast, compound 1 exerted only a modest effect on the cell cycle, which was apparent after 48 h as indicated by a small induction of apoptosis in MCF-7 cells and a reduction in the S-phase fraction in LN229 cells. Interestingly, in both cell lines, compound 5 treatment resulted in partial G2/M arrest at 24 h, followed by a marked induction of apoptosis at 48 h (Fig 5).
Compounds 1, 2 and 5 are inducers of oxidative stress
To determine whether the compounds induce oxidative stress, ROS accumulation was evaluated in CAPAN-1 cells using the non-polar, cell-permeable probe H2DCFDA. Once inside cells, the acetate groups of the probe are enzymatically cleaved, generating the non-fluorescent derivative H2DCF, which emits strong green fluorescence upon oxidation by ROS [49]. Exposure of CAPAN-1 cells to increasing concentrations of compound 1 for 24 h resulted in a dose-dependent induction of ROS, as measured by an increase in fluorescence from 5.8 (0 μmol/L) to 10.9 (2.5 μmol/L), 11.79 (5 μmol/L), and 32.6 (10 μmol/L) (Fig 6A). Exposure of CAPAN-1 cells to equal concentrations of 1, 2 and 5 for 5 and 24 h revealed that all three ligands could generate intracellular ROS in a dose- and time-dependent manner (Fig 6B). CAPAN-1 cells exposed to compound 1 for 5 h exhibited 1.73-, 2.01- and 2.61-fold increases in ROS levels at 2.5, 5, and 10 μmol/L, respectively. Equivalent concentrations of compound 2 resulted in 1.39-, 1.66- and 2.53-fold increases in ROS (Fig 6B). At this time point, compound 5 exhibited lower oxidative activity than 1 and 2 at all concentrations. Importantly, ROS continued to be produced and, after 24 h of treatment with the ligands at 10 μmol/L, intracellular ROS levels were 6.4-fold (2), 5.1-fold (1) and 2.4-fold (5) higher than in untreated cells (Fig 6B), pointing to a strong oxidative activity of the ligands in this cell line.
To explore the generality of ROS production, MCF-7 and JURKAT cells were likewise exposed to 10 μmol/L of compounds 1, 2 and 5 for 24 h. The results revealed a differential oxidative activity of the ligands in individual cell lines. In MCF-7 cells, compound 2 generated a significant 7-fold increase in ROS levels (Fig 6C), equivalent to the ROS induction detected in CAPAN-1 cells. The oxidative activity of compound 1 in MCF-7 cells was, however, lower than in CAPAN-1 cells. In JURKAT cells, ROS levels were significantly increased only by compound 2 (1.94-fold versus control cells; Fig 6C).
To assess the relationship between prooxidant properties and cytotoxic activity of the ligands, we studied whether the widely-used ROS scavenger N-acetylcysteine (NAC) could inhibit ligand-induced cytotoxicity. NAC treatment (5 mmol/L) reduced levels of ROS induced by compounds 1 and 2 in CAPAN-1 cells by 31.1% and 26.8%, respectively; however, the effect of NAC on the oxidative activity of 5 was more modest (Fig 7A). Furthermore, CAPAN-1 cell viability significantly increased from 23.9±2.9% when exposed to compound 2 (5 μmol/L) in the absence of NAC to 33.6±3.8% in the presence of 5 mmol/L NAC, representing a 40.6% increase in cell survival (Fig 7B). CAPAN-1 cell viability also increased from 40.7±6.9% with compound 1 (10 μmol/L) without NAC to 45.2±5.4% in the presence of NAC, while no protective effect of NAC was observed for compound 5 cytotoxicity (Fig 7B).
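The relative survival changes quoted above follow from a simple percent-change calculation. As a minimal illustration (the helper below is hypothetical, not part of the original analysis), the 40.6% figure can be reproduced from the compound 2 viability values:

```python
def relative_increase(before: float, after: float) -> float:
    """Percent change of `after` relative to `before` (not percentage points)."""
    return (after - before) / before * 100

# CAPAN-1 viability under compound 2 (5 umol/L): 23.9% without NAC,
# 33.6% with 5 mmol/L NAC. The gain is 9.7 percentage points, but a
# ~40.6% relative increase in survival, as reported in the text.
print(round(relative_increase(23.9, 33.6), 1))
```

The distinction matters when reading such figures: the percentage-point difference (9.7) and the relative increase (40.6%) describe the same data.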
Compounds 1, 2 and 5 chelate intracellular labile iron
We analyzed whether ligand-induced ROS generation was associated with their strong capacity to bind iron [38][39][40][41][42][43], forming iron coordination species inside cells. Thus, the effect of compounds 1, 2 and 5 on the intracellular labile iron pool of CAPAN-1 and MCF-7 cells was determined using the iron-sensitive probe calcein-AM, a cell-membrane-permeable molecule that is rapidly hydrolyzed in the cytosol to release the fluorescent probe calcein. Calcein fluorescence is quenched stoichiometrically upon binding to intracellular metals, mainly labile iron [50,51]. The classic iron chelator deferoxamine (DFO) was used to estimate the labile iron pool in CAPAN-1 and MCF-7 cells [52]. Treatment of CAPAN-1 cells with the ligands at 10 μmol/L for 24 h prior to incubation with calcein-AM significantly increased the fluorescence intensity of the probe to 141.4±19.5% (1), 136.8±11.7% (2) and 144.6±13.8% (5) of untreated cells (Fig 8A), revealing a decrease in the intracellular labile iron content. Incubation of the cells with DFO at 100 μmol/L for 24 h resulted in a similar increase in calcein fluorescence, indicating that the cellular chelatable iron was complexed by DFO to a similar extent as by the ligands (Fig 8A). In contrast, exposure of CAPAN-1 cells to 100 μmol/L FeCl2 for 24 h led to a quenching effect on calcein that reduced fluorescence to 74.2±15.5% of control cells (Fig 8A). A similar iron-binding capacity was detected for compounds 1 and 2 in MCF-7 cells, since pre-incubation with the ligands significantly increased calcein fluorescence to 131.4±13.2% (1) and 133.8±10.02% (2) of untreated cells (Fig 8B), equivalent to the fluorescence increase observed after DFO incubation (132.1±16.5%). However, only a moderate iron-chelating effect was detected for compound 5 in MCF-7 cells (Fig 8B).
The cytotoxicity of compounds 1, 2 and 5 is not associated with intracellular iron depletion
Given the above, we addressed whether the depletion of intracellular iron by the chelating activity of the ligands plays a role in their cytotoxicity. Thus, the antiproliferative activity of compound 2 (5 μmol/L) in CAPAN-1 and MCF-7 cells was determined after pretreatment of the cells with increasing concentrations of FeCl2 for 2 h, in order to increase the intracellular iron content and counterbalance the iron depletion provoked by the ligands. Cells were also exposed to equivalent concentrations of FeCl2 alone to exclude any iron-induced cytotoxicity. Treatment with FeCl2 alone did not affect the viability of MCF-7 or CAPAN-1 cells at any tested concentration (Fig 9A). In contrast, FeCl2 pretreatment increased the cytotoxicity of compound 2 in an iron concentration-dependent manner (Fig 9A), resulting in a significant reduction in MCF-7 and CAPAN-1 cell viability of 51.8% and 37.7%, respectively, in cells pretreated with 100 μmol/L FeCl2 (Fig 9B). Remarkably, when FeCl2 was co-incubated with compound 2, the cytotoxic effect of the ligand in both cell lines was clearly inhibited (Fig 9A and 9B), probably because the corresponding non-active iron complex (2-Fe) was rapidly generated in the cell culture medium. FeCl2 pretreatment also significantly enhanced compound 1 cytotoxicity in MCF-7 and CAPAN-1 cells (by 58.6% and 25.5%, respectively), while FeCl2 co-incubation inhibited the cytotoxicity of the ligand in CAPAN-1, but not in MCF-7 cells (Fig 9B). These findings are in agreement with our previous results showing that the corresponding iron complex (1-Fe) was cytotoxic against MCF-7 cells (IC50 = 17.5 μmol/L) but not against CAPAN-1 cells (IC50 > 100 μmol/L).
Neither pretreatment nor co-incubation with FeCl2 affected compound 5 cytotoxicity, particularly in MCF-7 cells, which may be explained by the reduced oxidative activity of this ligand in this cell line (Fig 9B).
The enhanced cytotoxicity of compound 2 in FeCl2-pretreated CAPAN-1 cells was associated with a significant increase in its oxidative activity, resulting in 1.5-fold higher ROS levels compared with non-pretreated cells. In contrast, extracellular complexation of compound 2 with iron by FeCl2 co-incubation abolished ROS induction (Fig 9C).
Compounds 1, 2 and 5 induce oxidative DNA damage
The ability of compounds 1, 2 and 5 to induce DNA damage was evaluated at the cellular level by analyzing the phosphorylation of histone H2A.X on serine 139, a well-established cellular marker of DNA double-strand breaks [53]. Exposure to the ligands (10 μmol/L) resulted in a time-dependent increase in phosphorylated H2A.X in MCF-7 cells, of between three-fold (1 and 2) and four-fold (5) (Fig 10A).
To complete the analysis, we evaluated the capacity of the ligands to interact directly with DNA using supercoiled pUC18 DNA and gel electrophoresis. The nuclease activity of ligands 1, 2 and 5 (25 μmol/L) and their respective iron complexes 1-Fe, 2-Fe and 5-Fe (25 μmol/L) was measured as the extent of conversion of supercoiled DNA (Form I) to open circular DNA (Form II) and/or linear DNA (Form III) in the presence and absence of hydrogen peroxide. As expected, 1-Fe, 2-Fe and 5-Fe displayed strong nuclease activity in the presence of hydrogen peroxide, leading to complete degradation of plasmid DNA under the assay conditions (Fig 10B lanes 6, 10 and 14, respectively; Table 3). When the concentration was decreased to 15 μmol/L, 1-Fe, 2-Fe and 5-Fe induced total conversion of the supercoiled DNA (Form I) to the nicked circular form (Form II) and linear form (Form III) (Fig 10C lane 3) through double strand breaks in the plasmid DNA. The relative proportions of the different forms of plasmid DNA after the treatments are detailed in Table 4. The DNA cleavage activity of the different compounds was also studied in the presence of Hoechst (a minor DNA groove blocker) [54,55] and methyl green (a major DNA groove blocker) [56,57]. As shown in Fig 10C (lanes 4 and 5) and Table 4, the amount of linear DNA was not reduced by the addition of specific DNA groove blockers, indicating that the nuclease activity takes place without any groove selectivity. The involvement of ROS in the nuclease mechanism was confirmed by monitoring the inhibition of DNA cleavage in the presence of ROS scavengers (Fig 10C lanes 6, 7 and 8; Table 4) [58]. Accordingly, the addition of tiron (a superoxide radical scavenger), sodium azide (a singlet oxygen scavenger) and dimethylsulfoxide (a hydroxyl radical scavenger) reduced the DNA cleavage activity of 1-Fe and 2-Fe, indicating the involvement of different ROS in the DNA cleavage reactions.
In contrast, 5-Fe activity was reversed only by the addition of sodium azide [59], suggesting that the nuclease activity of this complex is likely associated with singlet oxygen generation.
Discussion
Exploiting the differences between normal and cancer cells is an essential step toward developing innovative cancer therapies. In this regard, the distinction between the redox setpoints of these two cell types represents a valuable therapeutic window that might permit redox-targeting interventions to potently and selectively eliminate cancer cells with constitutively upregulated ROS levels [5]. Theoretically, a pro-oxidant shift that is well tolerated by non-malignant cells could rapidly push malignant cells, already at a high setpoint of constitutive oxidative stress, over a cell-death threshold. This hypothetical scenario prompted us to study the suitability of five highly oxidizing iron complexes of selected aminopyridine ligands (1-Fe, 2-Fe, 3-Fe, 4-Fe and 5-Fe), expected to be potent ROS inducers, together with the corresponding uncomplexed ligands, as potential antitumoral agents. Our results demonstrate that the iron complexes failed to display any relevant cytotoxic activity, whereas the iron-free organic counterparts were cytotoxic. In particular, compounds 1, 2 and 5 exhibited strong antiproliferative activity against a broad panel of molecularly diverse human cancer cells, with IC50 values in the low micromolar range. Importantly, the cytotoxic activity profile of compounds 1 and 2 remained unaltered in EMT-induced stable populations of cancer stem-like cells, which characteristically exhibit resistance to the majority of commonly employed anti-cancer agents, including the well-known ROS inducer doxorubicin [60,61].
The apparently counterintuitive cytotoxicity of the aminopyridine ligands can be explained by the studies of cellular Fe(II) chelation, which show that Fe(II) from the labile iron pool is efficiently chelated by the metal-free ligands. Thus, it appears reasonable to propose that the inactivity of the synthesized iron complexes is related to their inability to cross the cell membrane: iron conjugation confers a positive charge on the otherwise apolar ligands, producing less lipid-soluble molecules with impaired membrane permeability [62]. Conversely, the neutral organic ligands can readily traverse the cell membrane and form the highly oxidizing iron complexes in situ [26][27][28][30][31]. Ligands 1 and 2 were found to be strong inducers of oxidative stress, leading to a greater than 5-fold increase in ROS levels in CAPAN-1 cells. The oxidative activity of compound 5 was rather more modest. Interestingly, the kinetic profile of ROS accumulation paralleled the results obtained in clonogenic assays, indicating that prolonged exposure to the ligands is required to exceed the threshold of oxidative damage capable of compromising cell growth. ROS accumulation strongly correlated with the induction of oxidative DNA damage and preceded the activation of caspases and the onset of apoptosis, indicating that the ligands promote delayed cell death through oxidative mechanisms. Indeed, the results obtained with NAC demonstrated that ROS reduction enhanced cellular survival after treatment with compounds 1 and 2, confirming that oxidative stress is, in part, responsible for cell death. Nevertheless, NAC had little effect on the cytotoxicity of compound 5, suggesting that alternative mechanisms may be involved, in agreement with its unique ability to promote G2/M arrest.
Remarkably, a lower oxidative activity of the ligands was detected in MCF-7 and JURKAT cells compared with CAPAN-1 cells, despite similar cytotoxicity. The vulnerability of cancer cells to oxidative stress is greatly dependent on the particular pathways dysregulated in the cells as well as on their antioxidative capacities [6,63]. Consequently, different alterations in ROS levels may lead to similar cytotoxic outcomes in different tumors. For instance, chronic lymphocytic leukemia lymphocytes are reported to have a predominant oxidative stress status, which may favor an enhanced cytotoxic response to prooxidant interventions [64,65]. Nonetheless, it cannot be ruled out that other mechanisms may be contributing to the cytotoxic activity of the ligands, particularly for compound 5.
We examined whether the depletion of intracellular iron by the chelating activity of the ligands might be involved in their antitumor activity. Iron is essential for cell growth and DNA synthesis, and iron deprivation can lead to cellular death [66,67]. Different studies and clinical trials have demonstrated that iron chelators are effective anti-cancer agents [68][69][70]. Our results showed that iron overload with FeCl2 salts failed to reverse the cytotoxic activity of the ligands in cells. On the contrary, higher intracellular iron levels led to increased cytotoxicity of the compounds, presumably because the intracellular formation of the oxidizing iron complexes was enhanced, resulting in a significant increase in the amount of ROS. Hence, the antitumor effect of the ligands relies on their strong oxidative activity rather than their iron-chelating capacity. These experiments also confirmed that the extracellular generation of iron complexes by co-incubation of the ligands with FeCl2 salts clearly inhibits their cytotoxic activity.
Different anticancer compounds with metal binding properties can induce DNA strand breaks when binding redox-active metals in the presence of oxygen [71]. The bleomycin family of glycopeptide antibiotics constitutes a paradigmatic example with utility in current chemotherapy. It is well established that bleomycin cytotoxicity is founded on a metal-dependent prooxidant mechanism that leads to DNA fragmentation. Bleomycin binds ferrous iron and O2 and, after one-electron reduction in vivo, produces an activated intermediate, a ferric hydroperoxide species [BLM-Fe(III)-OOH], which cleaves DNA by hydrogen abstraction [5]. Similarly, ligands 1, 2 and 5 demonstrated DNA cleavage activity in cells, with kinetics that mirrored intracellular ROS accumulation. Analysis of the interaction of the ligands with naked DNA revealed that only the Fe-complexed ligands displayed nuclease activity, inducing double strand breaks in the DNA in the presence of hydrogen peroxide. The DNA cleavage activity was quenched by different ROS scavengers, revealing that the nuclease activity of 1-Fe and 2-Fe involves different ROS, while the activity of 5-Fe is likely associated with singlet oxygen generation. Further, the nuclease activity took place without any DNA groove selectivity. Collectively, these results indicate that, once bound to intracellular iron, the ligands induce strong oxidative DNA damage through ROS, resulting in double-stranded DNA breaks.
The anti-cancer activity of the aminopyridine-based iron complexes relies on several interdependent processes involving intracellular Fe(II) chelation, generation of ROS, DNA fragmentation through oxidative mechanisms, and induction of cell cycle arrest and apoptosis (Fig 11). This mode of action is clearly associated with the observed cytotoxic effects of 1 and 2. Additional mechanisms may be involved in the anticancer activity of 5, which displays similar antiproliferative and proapoptotic activities but limited ROS generation.
Cancer cells have increased steady-state ROS levels and are likely to be more vulnerable to damage by further ROS insults induced by exogenous agents [1,72]. Indeed, the cell-killing activity of the vast majority of currently used anti-cancer therapies is largely related to a shared ability, direct or indirect, to generate ROS [69]. Accordingly, drug resistance phenotypes, including those of multidrug-resistant tumor- and metastasis-initiating CS-like cellular states, can be explained in terms of resistance to ROS-induced apoptotic killing. In a call for a much faster timetable for developing new curative anti-cancer strategies, it has recently been proposed that greater efforts be made toward "oxidative therapy" as a strategy against the current incurability of metastatic cancers [7]. Although future studies will have to confirm any beneficial in vivo effects and the nature of the interaction as cocktail partners, either with current ROS-generating radio- and chemo-therapeutic regimens or with newer therapies that do not directly generate ROS, our current findings illustrate that, upon chelation of intracellular iron, the pro-oxidant activity of aminopyridine-based iron complexes efficiently kills cancer and ROS-refractory cancer stem-like cells. Thus, our study provides functional evidence for promising redox-directed anti-cancer metallodrugs.
"Biology",
"Chemistry"
] |
Multiple collapses of blastocysts after full blastocyst formation is an independent risk factor for aneuploidy — a study based on AI and manual validation
Background The occurrence of blastocyst collapse may serve as an indicator for preimplantation embryo quality assessment. It has been reported that collapsing blastocysts can show higher rates of aneuploidy and poorer clinical outcomes, but larger studies are needed to explore this relationship. This study explored the characteristics of blastocyst collapse identified and quantified by artificial intelligence and the associations between blastocyst collapse and embryo ploidy, morphological quality, and clinical outcomes. Methods This observational study included data from 3288 biopsied blastocysts in 1071 time-lapse preimplantation genetic testing cycles performed between January 2019 and February 2023 at a single academic fertility center. All transferred blastocysts were euploid. The artificial intelligence recognized blastocyst collapse in time-lapse microscopy videos and registered the number of collapses and the start time, recovery duration, and shrinkage percentage of each collapse. The associations between blastocyst collapse and embryo ploidy, pregnancy, live birth, miscarriage, and embryo quality were studied using available data from 1196 euploid embryos and 1300 aneuploid embryos. Results 5.6% of blastocysts collapsed at least once only before full blastocyst formation (tB), 19.4% collapsed at least once only after tB, and 3.1% collapsed both before and after tB. Multiple collapses of blastocysts after tB (≥ 2 times) were associated with higher aneuploidy rates (54.6%, P > 0.05; 70.5%, P < 0.001; 72.5%, P = 0.004; and 71.4%, P = 0.049 in blastocysts that collapsed 1, 2, 3, or ≥ 4 times, respectively), which remained significant after adjustment for confounders (OR = 2.597, 95% CI 1.464–4.607, P = 0.001). Analysis of the aneuploid embryos showed a higher proportion of collapses and multiple collapses after tB in monosomies and embryos with subchromosomal deletions of segmental nature (P < 0.001).
Blastocyst collapse was associated with delayed embryonic development and reduced blastocyst quality. There was no significant difference in pregnancy and live birth rates between collapsing and non-collapsing blastocysts. Conclusions Blastocyst collapse is common during blastocyst development. This study underlines that multiple blastocyst collapses after tB may be an independent risk factor for aneuploidy, which should be taken into account by clinicians and embryologists when selecting blastocysts for transfer. Supplementary Information The online version contains supplementary material available at 10.1186/s12958-024-01242-6.
Introduction
Choosing embryos with high developmental potential is crucial for ensuring higher implantation rates (IR) and live birth rates (LBR). Since the early days of in vitro fertilization (IVF), conventional morphological evaluation has been generally recognized as the mainstream non-invasive strategy for embryo evaluation by most clinicians and embryologists, consisting of only a few static microscopic observations taken at specific times during pre-implantation development [1,2]. Time-lapse microscopy (TLM) allows embryologists to track, record and assess embryonic morphology and developmental events through real-time images, addressing some of the limitations of static morphological assessment [3]. TLM can identify morphological phenomena such as irregular division and blastocyst collapse and re-expansion, which are often overlooked by static observation using conventional incubators [4,5]. The dynamic process of embryo development can also be assessed and summarized more comprehensively with distinct morphokinetic variables [6].
TLM also enables the observation of blastocyst collapse and re-expansion. These events have been observed in rabbits [7], bovines [8], mice [9] and domestic cats [10], and were detailed in humans by Marcos [11]. Many human blastocysts undergo one or more collapses of the blastocoel cavity, resulting in the separation of part or all of the trophectoderm (TE) cells from the zona pellucida (ZP) [11]. The occurrence of blastocyst collapse may become an indicator for preimplantation embryo quality assessment. It has been reported that collapsing blastocysts can show higher rates of aneuploidy and poorer clinical outcomes, as well as changes in morphological quality and morphokinetic variables [12][13][14][15][16][17].
This study explores the associations between blastocyst collapse and embryo ploidy, morphological quality, morphokinetic parameters, and clinical outcomes using artificial intelligence (AI), preimplantation genetic testing (PGT), and TLM. We explored the characteristics of blastocyst collapse identified and quantified by AI in a large cohort comprising 3288 TLM videos of embryos biopsied for PGT, and analyzed the associated developmental and genetic features in the hope of helping clinicians and embryologists to select embryos.
Study design and participants
Data from 1071 TLM-PGT cycles and 3288 biopsied blastocysts collected at the Reproductive Medicine Center, Huazhong University of Science and Technology Hospital, from January 2019 to February 2023 were included in this study. Images of embryos were collected using the Embryoscope Plus time-lapse microscopy system (Vitrolife, Denmark) from post-insemination to biopsy and cryopreservation. All patients signed written informed consent and underwent the routine clinical treatment performed in our center. No additional intervention was performed.
Embryo culture
All PGT cycles enrolled in this study were fertilized through intracytoplasmic sperm injection (ICSI), as described previously elsewhere [18]. All embryos were cultured in G1 Plus medium (Vitrolife, Sweden) in the Embryoscope Plus time-lapse microscope system (Vitrolife, Denmark) until biopsy on day 5 or day 6. At biopsy, a laser (HAMILTON THORNE) was used to make a 5 μm hole in the ZP, and 3-6 trophectoderm cells were obtained by mechanical dissection. The inner cell mass (ICM) and trophectoderm of blastocysts were graded according to the Gardner criteria [2]. Blastocysts were vitrified and warmed using the Kitazato kit.
The AI algorithm and measurement of the blastocyst collapse
We built end-to-end convolutional neural networks to detect collapse (Fig. 1). Several senior embryologists divided the embryo images in the data set into blastocyst-stage and non-blastocyst-stage images and then marked the blastocyst area and ZP of blastocysts in the figures. 51,252 pre-implantation embryo images (blastocyst stage : non-blastocyst stage = 2:1) were used to train prediction models for blastocyst-stage and non-blastocyst-stage images. 24,183 blastocyst-stage images (collapsing : non-collapsing = 1:3) were used to train the segmentation model of the blastocyst area and zona pellucida.
We set a successive, uninterrupted shrinkage of the blastocyst as a single collapse, and determined whether a blastocyst had collapsed according to the change in the blastocyst area (Fig. 2A). Before full blastocyst formation (tB), the ratio of the minimum blastocyst cavity area to the blastocyst cavity area before a collapse event was taken as the shrinkage percentage of the collapse. After tB, the ratio of the minimum blastocyst cavity area to the area within the zona pellucida at that time was taken as the shrinkage percentage. The interval between the time of the minimum blastocyst cavity area and the time of recovery from a collapse event was recorded as the recovery duration of the collapse. The preset imaging frequency of our Embryoscope imaging system is 10 min, which limits the ability to annotate the duration of blastocyst collapse more precisely. According to the literature and practical experience [11,13], blastocyst collapses that occur and fully recover within ten minutes are extremely rare, making this imaging frequency sufficient to identify the vast majority of blastocyst collapse events.
Fig. 1 (A) Network structure diagram of embryo region detection: we adopted the Unet [39] network to detect the blastocyst cavity area or the area within the zona pellucida during blastocyst collapse. We adjusted the parameters of the convolution layers of the original architecture and used 3*3 convolutions at the same level, which keeps feature maps at the same level the same size and extracts feature information more efficiently. (B) Detection network diagram of the area between TE and ZP: to detect blastocyst collapse, the embryo region detection network first processes the embryo sequence frame by frame, predicting the developmental period and size of the embryo; blastocyst-stage frames are then input into the blastocyst region detection network to obtain the blastocyst cavity area or the area within the zona pellucida, and whether the blastocyst is collapsing is determined from the area ratio. (C) Different states of TE and ZP during blastocyst collapse. (D) Blastocyst collapse detection process. TE, trophectoderm; ZP, zona pellucida
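The area-based event definition described above can be illustrated with a short pure-Python sketch that scans a per-frame area series for collapse events. This is a reconstruction for clarity, not the authors' model; the 10% shrinkage threshold, the function name, and the event-boundary rules are assumptions:

```python
def detect_collapses(areas, interval_min=10, threshold=0.90):
    """Detect collapse events in a per-frame blastocyst-area series.

    A collapse begins when the area falls below `threshold` times the
    running pre-collapse baseline and ends when it recovers to that
    level; consecutive shrunken frames count as one event, matching the
    'uninterrupted shrinkage = single collapse' rule in the text.
    Returns a list of (shrinkage_percent, recovery_duration_min) pairs,
    with shrinkage defined as min area / pre-collapse area * 100.
    """
    events = []
    baseline = areas[0]
    in_collapse = False
    min_area = min_idx = None
    for idx, a in enumerate(areas):
        if not in_collapse:
            if a < threshold * baseline:
                in_collapse = True          # collapse starts
                min_area, min_idx = a, idx
            else:
                baseline = a                # track the pre-collapse area
        else:
            if a < min_area:
                min_area, min_idx = a, idx  # deepest point of the collapse
            if a >= threshold * baseline:   # blastocyst has re-expanded
                shrinkage = round(min_area / baseline * 100, 1)
                recovery = (idx - min_idx) * interval_min
                events.append((shrinkage, recovery))
                in_collapse = False
                baseline = a
    return events

# One collapse: area dips from 100 to 60, then re-expands two frames later.
print(detect_collapses([100, 100, 70, 60, 80, 100, 100]))
```

With a 10-min frame interval, the returned recovery durations are multiples of 10 min, mirroring the annotation limit noted in the text.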
The results of blastocyst collapse identified by artificial intelligence were manually verified by two embryologists (Fig. 2B). Given the 28.1% collapse rate after tSB observed in the total study sample, we selected 223 blastocysts with a collapse rate of 30% for verification. In 10 (4.48%) blastocysts that collapsed once, the AI did not detect any collapse. In 14 (6.28%) blastocysts with multiple collapses, the AI detected fewer collapses than actually occurred. The main source of error was that cell debris and large extra-blastocyst cells between the TE and ZP affected the segmentation.
Statistical analysis
Statistical analyses were performed using the Statistical Package for the Social Sciences, version 13.0 (SPSS). Continuous variables were reported as mean ± SD and compared by the Mann-Whitney U test or ANOVA. Fisher's exact or chi-squared tests were used to compare categorical variables. The Mantel-Haenszel test was used to determine whether there was a linear trend between categorical variables.
To analyze the effects of confounders, we collected 1072 embryos with complete morphokinetic parameters and patient and cycle characteristics for multivariate analysis. Multilevel mixed-effects models account for the correlation among observations in the same cluster. Because embryos generated by the same patient do not provide independent information, a multilevel random-effects model (level one: embryo; level two: cycle), adjusted for confounding factors such as patient and cycle characteristics, was used to assess the effect of blastocyst collapse on embryo ploidy. Statistical significance was established at P < 0.05.
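For readers who want the mechanics behind odds ratios such as the adjusted OR = 2.597 (95% CI 1.464–4.607) reported in the Results, an unadjusted OR with a Wald confidence interval can be computed from a 2x2 table with the standard log-OR formula. The sketch below uses invented counts for illustration and does not reproduce the multilevel adjusted model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
        a = exposed cases,     b = exposed non-cases
        c = non-exposed cases, d = non-exposed non-cases
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return round(or_, 3), round(lo, 3), round(hi, 3)

# Hypothetical counts: aneuploid/euploid among multi-collapse vs. other blastocysts.
print(odds_ratio_ci(129, 54, 1171, 1142))
```

An adjusted OR from a multilevel model with confounders will generally differ from this crude table-based estimate, which is why the paper reports the model-based value.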
Ethical approval
All patients provided written informed consent. The study was approved by the Ethics Committee of the Reproductive Medicine Center of Tongji Hospital.
Results
Supplemental Table S1 summarizes the main descriptive features of the cycles included in this study. The mean age of the patients was 32.0 ± 4 years. Figure 3A shows the occurrence of blastocyst collapse in all, euploid and aneuploid embryos. In this study, 5.6% of blastocysts collapsed at least once (ranging from 1 to 3 times) only before the time of full blastocyst formation (tB), 19.4% collapsed at least once (ranging from 1 to 6 times) only after tB, and 3.1% collapsed both before and after tB.
Supplementary Table S2 describes the characteristics of the first blastocyst collapse before or after tB. The first collapse before tB started at 106.6 ± 8.2 hpi, with a shrinkage percentage of 23.9 ± 4.5% and a duration of 0.9 ± 0.9 h. For blastocysts that collapsed after tB, the first collapse started at 121.3 ± 11.1 hpi, with a shrinkage percentage of 25.0 ± 15.2% and a recovery duration of 1.0 ± 5.8 h. There was no significant difference in the characteristics of the first blastocyst collapse between euploid and aneuploid embryos.
Fig. 3 (A) Occurrence of blastocyst collapse in biopsied blastocysts. (B) Euploidy rates of biopsied blastocysts; the letters above each column display the results of pairwise comparisons between groups. (C) Euploidy rates of blastocysts that collapsed after tB; P values for each group were obtained by comparison with embryos without blastocyst collapse. (D, E) Relationship between blastocyst collapse after tB and type of aneuploidy; P values for each group were obtained by comparison with euploid embryos. BC, blastocyst collapse; vs, versus
Association between blastocyst collapse and embryo ploidy
Figure 3 shows the association between blastocyst collapse and embryo ploidy. In non-collapsing blastocysts, blastocysts that collapsed only before tB, blastocysts that collapsed only after tB, and blastocysts that collapsed both before and after tB, the euploidy rates were 50.0%, 53.2%, 41.0%, and 33.8%, respectively (Fig. 3B). We used partitioned chi-squared tests to compare the euploidy rates of the four types of blastocysts at a significance level of 0.008. The euploidy rates of blastocysts that collapsed after tB were significantly lower than that of non-collapsing blastocysts (P < 0.001 for blastocysts collapsing only after tB; P = 0.005 for blastocysts collapsing both before and after tB). The euploidy rate of blastocysts that collapsed both before and after tB was also significantly lower than that of blastocysts that collapsed only before tB (P = 0.006). In total, 560 blastocysts experienced at least one collapse after tB. Their euploidy rate was 40.0%, which was significantly lower than that of non-collapsing blastocysts (P < 0.001) (Fig. 3C). For blastocysts that collapsed 1, 2, 3, or ≥ 4 times, the euploidy rates were 45.4% (P > 0.05), 29.5% (P < 0.001), 27.5% (P = 0.004), and 28.6% (P = 0.049), respectively. The euploidy rates of blastocysts that collapsed 2, 3, and 4 or more times were significantly lower than that of non-collapsing blastocysts. The euploidy rate of the 183 blastocysts that collapsed more than once was 29.0%, significantly lower than that of blastocysts that collapsed only once (P < 0.001). We explored whether the type of aneuploidy affected the blastocyst collapsing rate after tB (Fig.
3D). Aneuploid blastocysts were divided into 4 subgroups according to their type of chromosome variation: the fragment deletion group, with subchromosomal deletion of segmental nature only; the fragment duplication group, with subchromosomal duplication of segmental nature only; the monosomic group, with one whole chromosome missing (possibly accompanied by subchromosomal deletion of segmental nature); and the trisomic group, with one whole chromosome repeated (possibly accompanied by subchromosomal duplication of segmental nature). In comparison with euploid embryos, the monosomic group had a higher proportion of blastocyst collapse (P < 0.001), and the fragment deletion group and the monosomic group had a higher proportion of multiple blastocyst collapses (P < 0.001).
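The partitioned chi-squared comparison among the four collapse groups can be sketched as follows. The counts below are hypothetical (illustrative only, not the study's contingency table); the Bonferroni-adjusted threshold 0.05/6 ≈ 0.008 matches the significance level used above:

```python
from itertools import combinations
from scipy.stats import chi2_contingency

# Hypothetical (euploid, aneuploid) counts for the four collapse groups;
# illustrative only, not the study's actual contingency table.
groups = {
    "no collapse":      (500, 500),
    "before tB only":   (50, 44),
    "after tB only":    (200, 288),
    "before and after": (24, 47),
}

n_groups = len(groups)
n_pairs = n_groups * (n_groups - 1) // 2        # 6 pairwise comparisons
alpha = 0.05 / n_pairs                          # Bonferroni threshold ~= 0.008

results = {}
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    # Each pairwise test is a 2x2 chi-squared test on the two groups' counts
    chi2, p, dof, expected = chi2_contingency([a, b])
    results[(name_a, name_b)] = (p, bool(p < alpha))
```

With these illustrative counts, the large "after tB only" group differs significantly from the non-collapsing group, while the small "before tB only" group does not, mirroring the pattern of the reported comparisons.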
Association between blastocyst collapse and embryo morphological quality
Supplementary Fig S2 shows the morphological quality of non-collapsing blastocysts, blastocysts that collapsed only before tB, and blastocysts that collapsed only after tB. The proportion of blastocysts biopsied on Day 5 was significantly lower in blastocysts with collapse (P < 0.05, Fig. 3A). The proportion of blastocysts rated as A for ICM or rated A or B for TE was significantly lower in blastocysts that collapsed only after tB than in blastocysts without collapse (P < 0.001, Fig. 3B, 3C). The number of blastocyst collapses after tB was negatively associated with morphological quality (P < 0.001 for biopsy time, ICM grade, and TE grade). A similar decline of quality in blastocysts that collapsed only after tB was also observed in euploid and aneuploid blastocysts (P < 0.01).
Association between blastocyst collapse and clinical outcomes
Figure 4 summarizes the clinical pregnancy, live birth, and miscarriage rates of 494 vitrified-warmed euploid single embryo transfers. Pregnancy was determined by serum β-HCG levels 9 days after the embryo transfer and by the presence of fetal heart activity or gestational sac formation 7 weeks later. There was no significant difference in clinical pregnancy rates or live birth rates between collapsing and non-collapsing blastocysts (Fig. 4A and 4B). The miscarriage rate of blastocysts that collapsed only before tB was significantly higher than that of blastocysts without collapse (P = 0.008, Fig. 4C).
Discussion
In most studies, blastocyst collapse is defined as a spontaneous separation of the ZP and TE in the blastocyst, in which the surface of the TE separates by > 50% from the inner side of the ZP [11,17]. This definition limits the study of blastocyst collapse to blastocysts at and after the third stage (after tB). Cimadomo et al. defined collapse events as an uninterrupted reduction in the ZP area lasting < 10 h with a final embryo:ZP ratio smaller than or equal to 90%, and reported a high incidence of collapse events, in over 50% of human embryos after tSB [13]. Using artificial intelligence, this study aimed to identify blastocyst collapse between tSB and tB, as well as blastocyst collapse after tB, and investigated whether collapse before or after tB has different effects on embryo ploidy and quality. In this study, 22.5% of biopsied blastocysts collapsed at least once after tB, which is similar to other studies [11,12,14,16,22]. In addition, 28.1% of biopsied blastocysts collapsed at least once after tSB, which is much smaller than the incidence reported by Cimadomo et al. [13]. Different definitions of blastocyst collapse and different thresholds used to determine aneuploidy and euploidy in the two studies limit their comparability.
Blastocyst collapse is associated with the ploidy level of the embryo. Several studies have found lower euploidy rates in collapsing blastocysts compared with non-collapsing blastocysts [13,15,16]. This study indicates that the euploidy rate of blastocysts with multiple collapses after tB decreases significantly, and that there is a negative correlation between the number of collapses and the euploidy rate. Of note, we found no decrease in the euploidy rate for blastocysts that collapsed only before tB or collapsed once after tB. Analysis of the aneuploid embryos showed a higher rate of collapses and multiple collapses in monosomies and in embryos with subchromosomal deletion of segmental nature. However, in previous studies, monosomies presented fewer collapses [16]. More research is needed to assess this association.
The occurrence of blastocyst collapse before and after tB is related to delayed blastocyst development and poor morphological quality. In blastocysts that collapsed after tB, the dynamic parameters (t8, tSB, tB, tB - tSB, ECC3, s3) and time of biopsy were significantly prolonged, and the quality of the ICM and TE declined significantly. There are negative correlations between morphological quality, prolongation of dynamic parameters, and the number of collapses after tB. Similarly, other studies found that as the number of collapses increases, the delay in tEB [14], tSB, and t-biopsy [13] gradually increases. Several studies have observed poorer morphological quality in collapsing embryos [12,13]. In blastocysts that collapsed before tB, the prolongation of dynamic parameters (tB, tB - tSB) was significant, and the proportion of blastocysts biopsied on Day 5 was significantly reduced.
In our study, no decrease in euploidy rate was found in blastocysts that collapsed only before tB, and the decrease in embryo quality was not as significant as in blastocysts that collapsed only after tB. This may be because contact between the TE and the ZP before tB is more likely to cause significant changes in the surface tension of the TE layer, leading to blastocyst collapse. Therefore, euploid embryos or embryos of better quality are also more likely to collapse before tB.
Some authors have reported a decrease in implantation rate [11,16,22] and pregnancy rate when collapsed blastocysts were transferred in IVF cycles [22]. In a multivariate analysis, blastocyst collapse was confounded by stronger predictors and was not considered a significant predictor of LBR [14]. However, the ploidy status of the transferred embryos in some studies was unknown [22]. In this study, collapsing blastocysts showed no significant difference in clinical pregnancy and live birth rates compared with non-collapsing blastocysts in euploid single embryo transfers. The miscarriage rate of blastocysts that collapsed only before tB was significantly higher than that of blastocysts without collapse. Likewise, Cimadomo et al. reported no significant difference in LBR and miscarriage rate between euploid collapsing and non-collapsing blastocysts [13]. Therefore, blastocyst collapse may affect clinical outcomes mainly through ploidy status, especially aneuploidy caused by the deletion of genetic material. A previous report showed that blastocyst collapse is associated with lower implantation and clinical pregnancy rates even when euploid [16]. More research is necessary to evaluate this association and to determine the impact of confounding factors on the results.
Research on the early development of mouse embryos has shown that the expansion of the blastocoel cavity is a complex process. The higher concentration of sodium ions (Na+) in the blastocoel cavity forms an osmotic gradient between the blastocoel cavity and the external environment, which promotes extracellular fluid to enter the blastocoel cavity through the aquaporins (AQPs), leading to an increase in hydrostatic pressure in the blastocoel cavity [23]. Dumortier et al. observed that the hydrostatic pressure in the blastocyst is comparable to pressures capable of inducing hydraulic fracturing of cell-cell contacts in vitro [24,25]. The high hydrostatic pressure promotes the establishment of paracellular sealing in the TE layer, which allows the blastocoel cavity to retain Na+ and water molecules that had entered the cavity, leading to the continuous accumulation of fluid inside the blastocyst [25-27]. During the process of blastocyst expansion, it is crucial to maintain a balance between the hydrostatic pressure in the blastocyst cavity and the surface tension of the TE layer. If the balance is well maintained, the blastocyst will progressively expand with oscillations. Some of the contraction that occurs at this time due to alteration in epithelial permeability may be a normal process of blastocyst development. For example, with the normal insertion of dividing cells during cytokinesis, cell rounding may lead to a transient loss of paracellular sealing, resulting in focal intercellular leakage [28].
Blastocyst collapse may be an acute failure of the TE in response to gradually increasing hydrostatic pressure during progressive expansion. According to our observations, the proportion of low-quality TE (Gardner's scheme grade C) is higher in collapsing blastocysts. We suggest that abnormal morphology and function of TE cells, such as abnormal paracellular sealing, abnormal contractility of TE cells, and abnormal activity of ion pumps and water channels, may render the TE layer unable to withstand excessive pressure in the blastocyst cavity, which may lead to transient physical separation of the paracellular seals. Additionally, mechanical obstacles encountered by the TE during expansion, such as excluded blastomeres or cellular debris within the perivitelline space, may also lead to significant changes in hydrostatic pressure. Fluid leakage from the blastocyst then relaxes the tension and induces the paracellular gaps to close, allowing the blastocyst to expand again [29]. Because low-viscosity liquids leak rapidly, the area of the blastocyst is reduced rapidly during blastocyst collapse, and most embryos gradually re-expand within 3 h after the collapse event.
Good intercellular connection is essential to maintain the integrity of the TE during blastocyst expansion. Significantly frequent collapse and developmental delay were observed in mouse embryos cultured with gap junction inhibitors [30]. Inhibiting cell contractility decreases the surface tension of the blastocyst [25,31]. The activities of ion pumps and aquaporins on the TE cell membrane, as well as the osmotic pressure of the culture medium, affect the hydrostatic pressure in the blastocoel cavity. Na+/H+ exchangers and Na+/K+-ATPase play key roles in Na+ influx at the apical membrane and Na+ outflow at the basal membrane of trophoblast cells, respectively [23]. Inhibiting NHE3, one of the Na+/H+ exchangers enriched in the TE apical membrane, can reduce the re-expansion rate of blastocysts collapsed by cytochalasin D [32]. Adverse factors in genes, culture medium, and culture environment may affect the quality of the TE through the aforementioned pathways, leading to blastocyst collapse. Viñals Gonzalez et al. found that chromosomes 1, 6, and 19 showed copy gain in collapsing blastocysts, and some of the gene families involved in blastocyst formation (i.e., Na/K-ATPase pumps, adherens junctions, and gap or tight junctions) were located on these chromosomes [16]. In our study, such differences were not significant. More research is needed to explore whether there is abnormal expression of related genes in aneuploid blastocysts, thereby leading to blastocyst collapse. Volatile organic compounds in the culture medium and culture environment, increases in the osmotic pressure of the culture medium, and increases in other solute concentrations in the culture medium may also have an impact on the quality of the TE [33-35].
The occurrence of severe or multiple blastocyst collapses can also have adverse effects on blastocyst development, for example, embryo dehydration and energy consumption [9,22]. Excessive tension can damage not only cell-cell and cell-matrix adhesions but also the cell membrane, causing cell death and the formation of cracks in the epithelium [24,36,37]. Delayed blastocyst expansion associated with multiple collapses (and possibly delayed hatching) may lead to blastocyst-endometrial asynchrony, which may decrease the LBR of the fresh cycle [38].
In summary, developmental defects in the blastocyst caused by genetic or other factors may make it difficult for the blastocyst to handle the gradually increasing pressure during expansion, leading to blastocyst collapse. The process of collapse and re-expansion may also cause embryo damage. The mechanism of blastocyst collapse and re-expansion remains to be elucidated. Related experiments in other mammalian embryos can provide a reference.
Conclusions
This study used artificial intelligence to analyze TLM videos and found that the incidence of multiple blastocyst collapses after tB was an independent risk factor for aneuploidy. In addition, there was a significant association between blastocyst collapse and delayed embryonic development and reduced morphological quality. Analysis of the aneuploid embryos showed a higher rate of collapses and multiple collapses in monosomies and in embryos with subchromosomal deletion of segmental nature. At present, we are unable to establish the causality and mechanism behind these associations. Further large-sample, multicenter studies and basic studies are needed to explore the relationship between blastocyst collapse, chromosomes, and IVF outcomes, and the underlying mechanism. In conclusion, we suggest that blastocyst collapse should be taken into account when clinicians and embryologists select embryos for transfer.
Fig. 2 (A) The segmental model of blastocyst collapse. (B) Factors interfering with AI recognition of blastocyst collapse. ZP, zona pellucida; TE, trophectoderm
In 1071 TLM-PGT cycles, 1051 cycles had blastocyst formation. Supplemental Fig S1 is a flowchart depicting the process of blastocyst screening. A total of 3,288 embryos were biopsied. Among them, there were 1,381 (42.0%) euploid embryos, 1,448 (44.0%) aneuploid embryos, and 416 (12.7%) mosaic embryos, and 43 (1.3%) blastocysts had no data due to amplification failure. After removing the embryos with poor TLM image quality (57 blastocysts with imaging abnormalities such as blurring or black images and 276 blastocysts moved out of view), we analyzed the data of 1,196 euploid embryos and 1,300 aneuploid embryos.
Fig. 4 Clinical pregnancy, live birth, and miscarriage rates of 494 vitrified-warmed euploid single embryo transfers. P values for each group were obtained by comparing to embryos without blastocyst collapse. (A) Clinical pregnancy rate per collapsed blastocyst. (B) Live birth rate per collapsed blastocyst. (C) Miscarriage rate per collapsed blastocyst. BC, blastocyst collapse; vs, versus
"Medicine",
"Computer Science"
] |
Numerical Simulation of Orographic Gravity Waves Observed Over Syowa Station: Wave Propagation and Breaking in the Troposphere and Lower Stratosphere
A high‐resolution model in conjunction with realistic background wind and temperature profiles has been used to simulate gravity waves (GWs) that were observed by an atmospheric radar at Syowa Station, Antarctica on 18 May 2021. The simulation successfully reproduces the observed features of the GWs, including the amplitude of vertical wind disturbances in the troposphere and vertical fluxes of northward momentum in the lower stratosphere. In the troposphere, ship‐wave responses are seen along the coastal topography, while in the stratosphere, critical‐level filtering due to the directional shear causes significant change of the wave pattern. The simulation shows the multi‐layer structure of small‐scale turbulent vorticity around the critical level, where turbulent energy dissipation rates estimated from the radar spectral widths were large, indicative of GW breaking. Another interesting feature of the simulation is a wave pattern with a horizontal wavelength of about 25 km, whose phase lines are aligned with the front of turbulent wake downwind of a hydraulic jump that occurs over steep terrain near the coastline. It is suggested that the GWs are likely radiated from the adiabatic lift of an airmass along an isentropic surface hump near the ground, which explains certain features of the observed GWs in the lower stratosphere.
Introduction
Gravity waves (GWs) play a critical role in transporting momentum from the troposphere to higher altitudes, where it is often deposited by turbulent or viscous mechanisms, driving meridional circulations in the middle atmosphere. Additionally, turbulence associated with GW breaking plays a role in the mixing of heat, momentum, and minor constituents (Fritts & Alexander, 2003). However, obtaining the global characteristics of GWs is challenging due to their small scales, short periods, and highly intermittent nature. GWs can arise from various sources, including flow lifting along mountains (R. B. Smith, 2019; and references therein) as well as other non-orographic processes such as convection and jet imbalance (e.g., Fovell et al., 1992; Grimsdell et al., 2010; O'Sullivan & Dunkerton, 1995; Plougonven & Snyder, 2007; Yasuda et al., 2015). Previous studies have identified GW generation near the ground, notably in association with undulations above the convective boundary layer (e.g., Kuettner et al., 1987), the leading edge of a gravity current (e.g., Ralph et al., 1993), cold fronts (e.g., Plougonven & Snyder, 2007; Ralph et al., 1999), and sea surface temperature fronts (Kilpatrick et al., 2014). Such phenomena are characterized by humps of an isentropic surface, which act as atmospheric obstacles. They can induce uplift in the airflow, facilitating GW generation (Plougonven & Zhang, 2014).
In general circulation models (GCMs) and numerical weather prediction, GW parameterizations are used to calculate momentum deposition due to subgrid-scale (unresolved) GWs from explicitly resolved fields.
Our goal in this paper is to simulate Antarctic mountain waves (MWs) with a horizontal resolution of 250 m in order to accurately depict their responses in the troposphere and stratosphere, and to compare these results with the radar observations at Syowa Station. The present paper is organized as follows: The radar observations at Syowa Station are introduced in Section 2. Model description and specification of the numerical experiments are described in Section 3. In Section 4, winds and momentum fluxes associated with the GWs observed on 18 May 2021 over Syowa Station are described. Turbulent energy dissipation rates estimated from radar Doppler spectral widths are also shown. Wave characteristics seen in the numerical experiments, including vertical propagation with directional shear, breaking around the critical level, and momentum fluxes, are described in Section 5. Section 6 discusses an interesting feature seen in the simulation, which is wave generation from the isentropic surface hump near the ground. Finally, Section 7 provides a summary and concluding remarks.
VHF Radar Observations
The present study used observations from the Program of the Antarctic Syowa MST/Incoherent Scatter radar (PANSY radar). The radar parameters are summarized in Table 1, and a detailed specification of the radar is described in Sato et al. (2014). The PANSY radar has provided near-continuous observations since 30 April 2012. In late September 2015, the system was upgraded to its current capabilities. The wind estimation method is described in Sato et al. (1997) and Fukao et al. (2014). Turbulent energy dissipation rates (ε) are estimated from the widths of the radar Doppler spectra following Kohma et al. (2019) and Nishimura et al. (2020). We estimated ε using measurements by four oblique (northward, eastward, southward, and westward) beams with a zenith angle of 10° in order to eliminate specular reflection, which affects the spectrum for the vertical beam (e.g., Tsuda et al., 1986). For calculating the spectral widths due to turbulent wind fluctuations, non-turbulent broadening effects need to be removed. Since the two-way beam pattern is not axially symmetric due to the irregular antenna distribution of the PANSY radar, the conventional formula for the beam broadening effect for a symmetric antenna distribution (Hocking, 1985) is questionable for this radar. In the present study, we extracted the turbulent velocity variance considering the antenna distribution following an algorithm developed by Nishimura et al.
(2020), in which the beam broadening component is subtracted via a deconvolution operation on the measured radar spectra. The velocity variance due to turbulence in a stably stratified flow is related to ε through ε = c_R N⟨v′²⟩, where ⟨v′²⟩ and N are the turbulent velocity variance and the buoyancy frequency, respectively (Hocking, 1983; Weinstock, 1981). In the present study, c_R was set to 0.45, while a value of 0.45-0.5 for c_R is typically used in previous studies (e.g., Hocking, 1999; Wilson, 2004). Temperature profiles from operational radiosonde observations are used for the calculation of N² = (g/θ)(dθ/dz), where θ and g are potential temperature and gravitational acceleration, respectively. The present study shows the average ε value from the four oblique beams.
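The ε estimate can be sketched numerically. The example below (synthetic profile, illustrative values only) builds a potential-temperature profile with constant N², recovers N² = (g/θ)dθ/dz, and applies ε = c_R N⟨v′²⟩ with c_R = 0.45 as in the text:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m s^-2]

def buoyancy_frequency_sq(theta, z):
    """N^2 = (g / theta) * dtheta/dz from a potential-temperature profile."""
    return (G / theta) * np.gradient(theta, z)

def dissipation_rate(variance, n2, c_r=0.45):
    """epsilon = c_R * N * <v'^2>, with c_R = 0.45 as in the text."""
    return c_r * np.sqrt(n2) * variance

# Synthetic profile with constant N^2 = 4e-4 s^-2 (the near-surface value
# quoted later in the paper) and a turbulent velocity variance of 1 m^2 s^-2.
z = np.linspace(0.0, 3.0e3, 301)            # height [m]
theta = 280.0 * np.exp(4.0e-4 * z / G)      # theta(z) giving constant N^2
n2 = buoyancy_frequency_sq(theta, z)
eps = dissipation_rate(1.0, n2)
# mid-profile: eps ~ 0.45 * sqrt(4e-4) * 1.0 = 9e-3 m^2 s^-3
```

In practice ⟨v′²⟩ comes from the deconvolved Doppler spectral widths, which is the part handled by the Nishimura et al. (2020) algorithm; here it is simply prescribed.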
Reanalysis Data
The present study used the fifth major global reanalysis produced by ECMWF (ERA5; Hersbach et al., 2020) with a 0.5° × 0.5° regular latitude-longitude grid to calculate the height of the dynamical tropopause (Hoskins et al., 1985). Here, the dynamical tropopause height is defined as the height with a potential vorticity (PV) of −2 × 10⁻⁶ K m² kg⁻¹ s⁻¹. The PV values over Syowa Station were obtained by linear interpolation.
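The interpolation step can be sketched as follows, with a synthetic Southern Hemisphere PV profile (the exponential shape is illustrative, not ERA5 data):

```python
import numpy as np

PV_TROPOPAUSE = -2.0e-6  # K m^2 kg^-1 s^-1 (dynamical tropopause, SH sign)

def dynamical_tropopause_height(z, pv):
    """Linearly interpolate the height at which PV crosses -2 PVU, scanning
    upward from the ground (z ascending; in the Southern Hemisphere PV
    becomes increasingly negative with height)."""
    for i in range(len(z) - 1):
        lo, hi = pv[i], pv[i + 1]
        if (lo - PV_TROPOPAUSE) * (hi - PV_TROPOPAUSE) <= 0 and lo != hi:
            frac = (PV_TROPOPAUSE - lo) / (hi - lo)
            return z[i] + frac * (z[i + 1] - z[i])
    return None

# Synthetic SH profile: PV grows more negative with height.
z = np.arange(0.0, 20.0e3, 500.0)        # height [m]
pv = -0.4e-6 * np.exp(z / 5.0e3)         # -0.4 PVU at the ground
zt = dynamical_tropopause_height(z, pv)
# crossing of -2 PVU at z = 5 km * ln(5) ~ 8.05 km
```

The same crossing-and-interpolate logic applies to the ERA5 profile over Syowa Station, only with reanalysis PV in place of the synthetic curve.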
Model and Computational Domain
For the numerical experiment, we used the Complex Geometry Compressible Atmospheric Model, which is a finite-volume code for the compressible Navier-Stokes equations (Fritts et al., 2021; Lund et al., 2020). The governing equations are the compressible Navier-Stokes equations for the density ρ and the momentum per unit volume ρu_i (see Fritts et al., 2021; Lund et al., 2020). The model uses low-storage, third-order Runge-Kutta time integration, and Δt is set to 0.2 s in the present experiment. The total integration time is 12 hr. Although the integration time is shorter than the spin-up time used in previous studies (e.g., Plougonven et al., 2013), the numerical simulation confirms that the primary wave patterns below an altitude of 10 km become approximately steady after 10 hr, while small-scale turbulent motion retains its transient nature.
Terrain
For the terrain around Syowa Station, we used the Radarsat Antarctic Mapping Project v2 (RAMPv2) data set with a horizontal resolution of 200 m (Liu et al., 2015). The terrain is shown in Figure 1. Note that in the model domain, the +x and +y directions are approximately eastward and northward, respectively. There is steep terrain near the coastline.
Initial Conditions and Forcing
The background condition for the numerical experiment was given by a single vertical profile composed of two kinds of reanalysis data: the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2; Gelaro et al., 2017) and the Japanese Atmospheric GCM for Upper Atmosphere Research-Data Assimilation System (JAGUAR-DAS; Koshin et al., 2020, 2022) data sets. Note that the top of the MERRA-2 data set (0.1 hPa) is lower than the model top of the present numerical experiment, whereas JAGUAR-DAS is not capable of realistically reproducing phenomena smaller than the synoptic scale in the troposphere due to its low horizontal resolution (T42). Vertical profiles from MERRA-2 and JAGUAR-DAS were smoothly connected around an altitude of 45 km; for example, the background zonal wind U₀ is given by U₀(z) = [1 − W(z)] U_MERRA2(z) + W(z) U_J-DAS(z), where U_MERRA2 and U_J-DAS are zonal wind vertical profiles at the grid point nearest to Syowa Station averaged over 18 May 2021 from MERRA-2 and JAGUAR-DAS, respectively. Here, W(z) = (1 + tanh[(z − 45 km)/4 km])/2. The background meridional wind V₀(z) and temperature T₀(z) were calculated similarly. Background vertical profiles are shown in Figures 2a-2c. The surface wind is from the ENE direction and its magnitude is 20.2 m s⁻¹.
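The blending can be sketched as below. The tanh weight W(z) is as given in the text; the convex combination U₀ = (1 − W)U_MERRA2 + W·U_JDAS is our reading of the (extraction-damaged) blending equation, so treat the exact weighting as an assumption:

```python
import numpy as np

def blend_weight(z_km):
    """W(z) = (1 + tanh[(z - 45 km)/4 km]) / 2, as given in the text."""
    return 0.5 * (1.0 + np.tanh((z_km - 45.0) / 4.0))

def blend_profiles(z_km, u_merra2, u_jdas):
    """Assumed blend U0 = (1 - W)*U_MERRA2 + W*U_JDAS: MERRA-2 dominates
    below ~45 km, JAGUAR-DAS above."""
    w = blend_weight(z_km)
    return (1.0 - w) * u_merra2 + w * u_jdas

# Constant dummy profiles make the transition easy to see.
z = np.linspace(0.0, 80.0, 161)                     # height [km]
u0 = blend_profiles(z, np.full_like(z, 10.0), np.full_like(z, 30.0))
# u0 ~ 10 m/s well below 45 km, ~30 m/s well above, exactly 20 m/s at 45 km
```

The 4 km tanh half-width makes the hand-over smooth over roughly 45 ± 10 km, which matches the "smoothly connected around an altitude of 45 km" description.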
The background zonal wind is westward below 10 km, while a strong westerly jet is present above z = 20 km. The meridional wind is southward from the ground to 65 km altitude. The tropopause height is approximately 9 km, where the buoyancy frequency squared, N₀² = (g/θ₀)(dθ₀/dz), has a local maximum (∼4.7 × 10⁻⁴ s⁻²). Note also that N₀² has another local maximum (∼4 × 10⁻⁴ s⁻²) at the ground. There is no altitude range where the Richardson number (Ri) is less than unity (not shown), indicating that local instability of the background fields is unlikely to occur.
Following Lund et al. (2020), non-physical starting transients are minimized by initially damping the mean background horizontal winds toward zero in the lower portion of the model domain. The background winds U(z, t) and V(z, t) near the surface are then increased gradually in time.
Boundary Conditions
A characteristic boundary condition with numerical sponge layers was used at the upper and horizontal boundaries to prevent reflection of GWs and acoustic waves at the boundaries. The implementation of the sponge layer is described in Lund et al. (2020). The sponge layer has a hyperbolic-tangent shape with a width of 80 km (4 km) at the lateral (upper) boundaries. The time constants for damping are 128 and 4 s for the lateral and upper boundaries, respectively. At the lower boundary, free-slip and adiabatic boundary conditions were employed. This neglects the strong radiative cooling over the surface of the Antarctic continent, which is known to be the main driver of the katabatic winds observed along the coast of Antarctica (e.g., Parish & Bromwich, 1987).
Winds and Momentum Fluxes
Figures 3a-3c show time-height sections of zonal, meridional, and vertical winds (u, v, w) from the PANSY radar on 16-20 May 2021. Southward winds dominate from the lowest level up to 25 km altitude, while the zonal winds are predominantly westward at 1.5-10 km altitude and eastward above 12 km. Strong vertical wind disturbances with amplitudes greater than 0.8 m s⁻¹ were observed below 8 km altitude on 18-19 May. The upper limit of the disturbances corresponds largely to the tropopause height (black curves). The time evolution of the wind fluctuations v′ ≡ (u′, v′, w′) is shown in Figures 3d-3f. Here, the background wind is defined as the wind components with a vertical wavelength longer than 6 km. On 18 May, a wave-like pattern was observed for u′, v′, and w′ at z = 10-15 km, and the phase of the wave was steady for about a day. The amplitude of v′ is 5-6 m s⁻¹ at z = 11-14 km, which is larger than that of u′ (<1.5 m s⁻¹). The negative maxima of v′ are observed at altitudes of 10 and 13 km, which suggests that the vertical wavelength is approximately 3 km. The steadiness of the wave phase indicates that the wave pattern is attributable to an orographic GW. In the troposphere, the wave pattern is not clear on 18 May. This is likely because the GW has a longer vertical wavelength in the troposphere, owing to the smaller background N² compared to the stratosphere. According to linear wave theory, when the background field is both steady and horizontally uniform, the ground-based frequency, ω, and horizontal wavenumber, k, remain constant along the ray, namely the path of wave packet propagation (e.g., Andrews et al., 1987). For internal hydrostatic GWs, the (local) dispersion relation is given by (ω − Uk)² = (Nk/m)², where U = U(z) is the background horizontal wind component along the horizontal wavenumber vector (N-S direction) and m is the vertical wavenumber. The buoyancy frequency N is 1.0 × 10⁻² s⁻¹ and 2.1 × 10⁻² s⁻¹ at altitudes of 4 and 11 km, respectively, while the background
meridional winds do not exhibit significant variation across the tropopause height (Figure 2). For upward-propagating wave packets to maintain constant ω and k, the vertical wavelength in the lower stratosphere should be approximately half of that in the troposphere. There are local minima of the unfiltered meridional wind (v) at 2 and 8.5 km on 18 May (Figure 3b), suggesting that the GW vertical wavelength in the troposphere is about 6.5 km. Since the background winds can also vary with a similar vertical scale, it is difficult to distinguish long-period waves with such a large vertical wavelength from the background using radar observations at a single location. Note that another wave pattern was observed in the lower stratosphere on 19 May. This wave pattern is presumably linked to the strong upward motion below an altitude of 8 km and the associated vertical displacement of the tropopause. Since the phase of this wave pattern descends with time, further examination may be necessary in order to determine the wave source.
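The wavelength argument above can be checked arithmetically. For a stationary mountain wave (ω = 0) the hydrostatic dispersion relation gives m = N/|U|, so λ_z = 2π|U|/N scales as 1/N when U is unchanged across the tropopause; the following sketch uses the N values quoted in the text and the inferred 6.5 km tropospheric wavelength:

```python
import numpy as np

# Buoyancy frequencies quoted in the text at 4 and 11 km altitude [s^-1]
N_TROP, N_STRAT = 1.0e-2, 2.1e-2

# Stationary mountain wave: (omega - U k)^2 = (N k / m)^2 with omega = 0
# gives m = N / |U|, hence lambda_z = 2*pi*|U| / N.
lambda_z_trop = 6.5e3                                # m, inferred from radar
u_implied = lambda_z_trop * N_TROP / (2.0 * np.pi)   # background wind magnitude
lambda_z_strat = 2.0 * np.pi * u_implied / N_STRAT
# lambda_z_strat = lambda_z_trop * N_TROP / N_STRAT ~ 3.1 km
```

The implied stratospheric wavelength of about 3.1 km is consistent with the ~3 km inferred from the v′ maxima at 10 and 13 km, which is the halving described above.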
Figure 4 shows time-height sections of zonal and meridional momentum fluxes. The estimation method proposed by Vincent and Reid (1983) has been used, and smoothing with a width of 6 hr and 6 km was applied for clear visualization. On 18 May, strong positive v′w′ with a maximum of 1.0 m² s⁻² is observed at altitudes of 8-15 km, while u′w′ is weak and does not show a systematic pattern in the lower stratosphere. Since the background meridional wind is southward, the sign of v′w′ is consistent with the linear theory of orographic GWs.
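A minimal sketch of the Vincent and Reid (1983) coplanar-beam estimate is given below. The formula ⟨u′w′⟩ = (⟨v′²⟩_E − ⟨v′²⟩_W)/(2 sin 2θ) is the standard dual-beam form; the synthetic radial velocities with a known covariance are purely illustrative:

```python
import numpy as np

def momentum_flux_dual_beam(var_east, var_west, zenith_deg=10.0):
    """Vincent & Reid (1983) coplanar-beam estimate:
    <u'w'> = (<v_E'^2> - <v_W'^2>) / (2 sin(2*theta)),
    where v_E', v_W' are radial-velocity perturbations on the eastward and
    westward beams and theta is the zenith angle (10 deg for PANSY)."""
    theta = np.radians(zenith_deg)
    return (var_east - var_west) / (2.0 * np.sin(2.0 * theta))

# Synthetic check: build u', w' with a known covariance <u'w'> = 0.5 and
# project them onto the two oblique beams.
rng = np.random.default_rng(1)
n = 200_000
w = rng.normal(0.0, 1.0, n)
u = 0.5 * w + rng.normal(0.0, 1.0, n)      # true <u'w'> = 0.5 m^2 s^-2
theta = np.radians(10.0)
v_e = u * np.sin(theta) + w * np.cos(theta)
v_w = -u * np.sin(theta) + w * np.cos(theta)
flux = momentum_flux_dual_beam(v_e.var(), v_w.var())
# flux recovers ~0.5 m^2 s^-2
```

The same expression with the northward/southward beam pair yields v′w′, the component that carries the signal in Figure 4.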
Turbulent Energy Dissipation Rates
Figure 5 shows a vertical profile of the daily-averaged ε on 18 May 2021. Below z = 3 km, a strong turbulent layer with ε larger than 10⁻³ m² s⁻³ was observed, which is an order of magnitude larger than the annual mean (Kohma et al., 2019). While ε is small in the 5-8 km altitude range compared with the low-level turbulent layer, layers of enhanced ε are also observed in the lower stratosphere.
Numerical Simulations
Horizontal maps of w at z = 15 km are shown in Figures 6c and 6d. Although wave patterns are observed northeast of Syowa Station at an altitude of 15 km, they differ significantly from those at 7.5 km. For example, wave structures with wavenumber vectors pointing in the E-W direction are not evident at 15 km. It is interesting to note that, while large-amplitude disturbances with horizontal wavelengths shorter than ∼15 km are observed southwest of Syowa Station at 9 hr, wave patterns with a horizontal wavelength of ∼30 km and a wavenumber vector directed in the N-S direction are evident in the regions west of Syowa Station at 12 hr.
To examine the temporal change in the horizontal structure of the MW with altitude, horizontal maps of w at 5, 8, 12, 16, 20, and 24 km at 12 hr are shown in the left column of Figure 7. The right column of Figure 7 shows two-dimensional horizontal power spectra P_w(k, l) calculated from w in the 100 km × 100 km horizontal domain at 12 hr, where k and l are the zonal and meridional wavenumbers, respectively. The power spectra are calculated from w fields vertically interpolated at an interval of 60 m and then averaged over a vertical width of about 4 km. Below 10 km, waves with horizontal wavelengths longer than 5 km are dominant, and horizontal wavenumber vectors k_h = (k, l) oriented in the N-S, NNE-SSW, NE-SW, ENE-WSW, and E-W directions are observed. It is interesting to note that a reduction of P_w for k_h oriented in the E-W and ENE-WSW directions is observed in the altitude range of 10-14 km. Furthermore, above 14 km, the amplitudes of waves with k_h oriented in the NE-SW direction are small compared with those below 14 km. In other words, the prevailing waves have NNW-SSE oriented k_h above 14 km, despite P_w for k_h oriented to NNW-SSE being smaller in the altitude range of 3-7 km than for other directions.
According to linear GW theory, the propagation characteristics of MWs are dictated by the vertical wavenumber m ≡ 2π/λ_z, where m² is given by the dispersion relation

m² = N²/U_h² − k_h² − 1/(4H²),

where U_h is the component of the background horizontal wind in the direction of k_h, k_h is the horizontal wavenumber, and H is the density scale height (e.g., Lund et al., 2020). Linear theory indicates that a large m² leads to a small upward group velocity and that m² becomes infinite at a critical level. Figure 8a shows the vertical profiles of m² calculated from the background wind profile (Figure 2) for a horizontal wavelength of 30 km with k_h oriented to the N-S and NE-SW directions. While there is a critical level for the MW with k_h oriented to the NE-SW direction in the altitude range of 15-18 km, for waves directed to N-S, m² has finite positive values throughout the altitude range of 1-30 km. Figure 8b shows the m² values calculated every π/48 rad. There is no critical level for waves directed to the N-S and NNW-SSE directions up to an altitude of 30 km. Thus, the altitudinal variation of P_w, namely the predominant horizontal structure of the GWs, is likely attributable to the critical-level filtering effect in the directional shear.
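The filtering argument can be illustrated with a short numerical sketch. The dispersion relation above is evaluated for a 30 km horizontal wavelength under a hypothetical directional wind shear (the wind profile below is illustrative, not the simulated one): the N-S oriented wave sees a meridional wind that never vanishes, while the NE-SW oriented wave meets a critical level (m² → ∞) where its along-k_h wind component crosses zero.

```python
import numpy as np

# m^2 = N^2/U_h^2 - k_h^2 - 1/(4H^2) for a stationary mountain wave,
# where U_h is the background wind component along k_h.
H = 7.0e3                       # density scale height (m), assumed
k_h = 2 * np.pi / 30.0e3        # horizontal wavelength 30 km (text)

z = np.linspace(0.0, 30.0e3, 300)          # altitude grid (m)
N = np.where(z < 10.0e3, 1.0e-2, 2.1e-2)   # buoyancy frequency (s^-1)

# Hypothetical directional shear: zonal wind increasing with height,
# meridional wind constant and southward.
U = 1.5e-3 * z - 12.0           # zonal wind (m/s), illustrative
V = -12.0 * np.ones_like(z)     # meridional wind (m/s), illustrative

def m2_profile(azimuth_deg):
    """m^2 for k_h at the given azimuth (0 deg = N-S axis)."""
    phi = np.deg2rad(azimuth_deg)
    U_h = U * np.sin(phi) + V * np.cos(phi)  # wind along k_h
    return (N / U_h) ** 2 - k_h ** 2 - 1.0 / (4.0 * H ** 2)

m2_ns = m2_profile(0.0)     # N-S oriented k_h: U_h = V, never zero
m2_nesw = m2_profile(45.0)  # NE-SW oriented k_h: U_h crosses zero

print("m^2 positive everywhere for N-S waves:", bool((m2_ns > 0).all()))
print("max m^2, N-S:  ", m2_ns.max())
print("max m^2, NE-SW:", m2_nesw.max())  # blows up near the critical level
```

The N-S profile stays finite and positive over the whole column, while the NE-SW profile spikes by several orders of magnitude near the level where U_h → 0, the signature of critical-level filtering.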
Figures 9a-9c show y-z sections of w along x = 0 km at 8, 9, and 12 hr. Above Syowa Station (black vertical lines), at t = 12 hr, positive values of w are observed at altitudes of 2.0, 8.5, 11.5, and 14.5 km. This suggests that a wave pattern with a vertical wavelength of ∼3 km is present at altitudes above 9 km, whereas the vertical wavelength is longer than 6 km in the troposphere. The wave pattern of w in the lower stratosphere resembles that observed on 18 May 2021 by the radar (Figure 3). For y < 0 km, strong vertical wind disturbances are observed in the troposphere at all times. Interestingly, above an altitude of 10 km, small-scale disturbances of w appear at y < 0 km at t = 12 hr. To examine turbulence generation at these altitudes, the same sections but for the vorticity magnitude |ζ| are shown in Figures 9d-9f. Movies of the time evolution of |ζ| in the same section are included for reference in the accompanying Supporting Information S1. A strong turbulent layer is observed near the surface south of Syowa Station (i.e., y < 0 km), which has been present continuously since 7 hr. The depth of the surface turbulent layer is about 1.5 km. Above Syowa Station, there are layers of large |ζ| at altitudes of 11-12 km and around 13 km, indicative of MW breaking. It should be noted that the multi-layer structure of strong ε in the lower stratosphere is also seen in the radar observations (Figure 5), although the heights of the turbulent layers are not exactly the same as those in the numerical simulation. For y < −20 km and z = 8-11 km, patches of large |ζ| are observed. Figures 9g-9i show |ζ| along x = +50 km, which is upwind of x = 0 km for the lower stratosphere. At t = 9 hr, turbulent billows tend to develop along the high-shear region associated with the GW phase in the altitude range of 9-11 km. The turbulent billows are advected westward and result in patches of large |ζ| in the section along x = 0 km (Figures 9b and 9c). It is worth
noting that the altitude range of 9-13 km includes the critical levels for stationary GWs like MWs with k h oriented to the E-W, ENE-WSW, and NE-SW directions (Figure 8), which should lead to GW breaking for these modes.
The three-dimensional structure of |ζ| above Syowa Station at 9-12 km altitudes is shown as the isovalue surface of |ζ| = 10⁻⁴ s⁻¹ in Figure 10. Note that the background wind in this altitude range is largely from the +y direction.
There are many horseshoe-shaped or hairpin-shaped vorticity tubes for y < 0 km, indicating streamwise-aligned counter-rotating rolls. Horseshoe-shaped vorticity tubes are known to be a typical characteristic of the early stage of GW instabilities (e.g., Andreassen et al., 1998; Fritts et al., 1998, 2009). Movies depicting the evolution of the isentropic surfaces are included for reference in the accompanying Supporting Information S1. At 300 K, small-scale turbulent disturbances are observed along the phase line extending southward, indicating GW breaking around the critical level (Figure 9). GWs with N-S phase lines can be attributed to downslope winds from the ENE along the coastal terrain that extends in the N-S direction (Figure 11a). After t = 8 hr, a drastic rise of the 260 K isentropic surface near the coast, by 0.5-0.8 km, suggests the presence of a hydraulic jump downwind of the steep slope. Figure 12 shows x-z sections of θ and u along y = −20 km. A sharp rise in the isentropic surfaces is observed on the downslope of the continent after t = 8 hr. East of the jump, strong downslope winds are observed, whereas west of the jump, the magnitude of u near the ground is quite small. These features are typical characteristics of a hydraulic jump (e.g., Durran, 1986, 1990). Additionally, at t = 8 hr, a low-level turbulent wake is observed, spreading downwind of the hydraulic jump (Figure 11b). At later times the turbulent wake front progresses in the +y direction (northward), resembling a bore (e.g., Rottman & Simpson, 1989). At t = 12 hr, the front appears steady, and the resultant phase lines are largely straight and extend in the E-W direction. Interestingly, the E-W extending turbulent wake front produces a structure similar to that of the 300-K isentropic surface with a horizontal wavelength of ∼30 km to the west of Syowa Station.
To investigate the relation between the low-level turbulent wake and upper-level wave structure, vertical sections of potential temperature θ and meridional wind disturbances v′ along x = −50 km are presented in Figure 13.
Here, v′ is defined as the departure from the large-scale field with meridional wavelengths longer than 60 km. Near the surface, a northward progression of the isentropic surface hump is observed at t = 7-10 hr. The vertical gradient of θ is small below the elevated isentropic surfaces, indicating strong vertical mixing within the bottom layer. Since sharp changes in θ across the front are evident near the surface, the propagation of the turbulent wake front is considered to be associated with a gravity current (or density current). Above the turbulent wake front, a wave structure in v′ is observed, with a vertical wavelength of ∼8 km in the troposphere decreasing to ∼3 km in the lower stratosphere. It should be noted that another wave pattern is observed on the windward side of the hump, which is associated with GWs generated along the Antarctic coast northeast of Syowa Station and advected by the background winds (Figure 11).
Figure 14 shows the same vertical sections but for the meridional momentum flux v′w′. Positive v′w′ is also prominent above the turbulent wake front. Notably, the lower ends of the v′ wave structure and of the positive v′w′ region move following the northward progression of the turbulent wake front. Since the background wind is from the ENE, it stands to reason that the adiabatic lift of an airmass along the isentropic surface hump results in GW generation.
Figure 15 displays zonal and meridional momentum fluxes associated with GWs. Here, GW components are defined as departures from large-scale fields with zonal and meridional wavelengths longer than 60 km. Spatial averaging is applied to the momentum fluxes in the zonal and meridional directions using a low-pass filter with a cutoff length of 60 km. While u′w′ shows positive values of ∼0.2 m² s⁻² at 12 km and small negative values at 18 km over Syowa Station, v′w′ exhibits large positive values at both altitudes. The height variation of the sign of u′w′ is consistent with the power spectra of w and the wave-filtering effect of the background winds (Figures 7 and 8).
At 12 km, the magnitude of positive v′w′ is up to 1.0 m² s⁻², which is as large as that observed by the radar (Figure 4b). The positive v′w′ extends to the west of Syowa Station, roughly aligned with the front of the isentropic surface hump near the ground (Figure 11d). These results indicate that the significant northward momentum fluxes observed over Syowa Station are likely due to GWs generated from the gravity current front.
Discussion
In the present simulation, GWs with a horizontal wavelength of ∼30 km are seen west of Syowa Station in the lower stratosphere, which explains the positive meridional momentum fluxes observed over Syowa Station. The meridional wavelength λ_y can be estimated from the radar observations. Using the continuity equation, λ_y is given by λ_y = λ_z |v′|/|w′|, where a term related to the density scale height is ignored and the horizontal wavenumber vector is assumed to be oriented in the N-S direction. The radar observations showed a wave structure in v′ and w′ with a vertical wavelength of 2-3 km and amplitudes of 5-6 m s⁻¹ and 0.2-0.3 m s⁻¹, respectively, at 11-14 km altitudes (Figures 3e and 3f). Thus, the meridional wavelength is estimated to be 38-69 km, which is slightly larger than but comparable to the wavelength seen in the numerical simulation.
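The λ_y estimate can be reproduced with a few lines. The sketch below assumes a plane wave v′, w′ ∝ exp[i(ly + mz)], for which the continuity relation above gives λ_y = λ_z|v′|/|w′|; the amplitude and wavelength ranges are the radar values quoted in the text.

```python
# Continuity (density scale-height term ignored, k_h along N-S):
#   dv'/dy + dw'/dz ~ 0  =>  |l v'| = |m w'|
#   =>  lambda_y = lambda_z * |v'| / |w'|
def meridional_wavelength(lambda_z_km, v_amp, w_amp):
    """Meridional wavelength (km) from radar amplitudes."""
    return lambda_z_km * v_amp / w_amp

# Radar-derived ranges at 11-14 km altitude (from the text):
# lambda_z = 2-3 km, |v'| = 5-6 m/s, |w'| = 0.2-0.3 m/s.
mid = meridional_wavelength(2.5, 5.5, 0.25)  # mid-range values
lo = meridional_wavelength(2.0, 5.0, 0.3)    # smallest combination
hi = meridional_wavelength(3.0, 6.0, 0.2)    # largest combination

print(f"lambda_y ~ {mid:.0f} km (extremes {lo:.0f}-{hi:.0f} km)")
```

The extreme combinations (about 33-90 km) bracket the 38-69 km range quoted above; the mid-range estimate of 55 km is comparable to, though larger than, the ∼30 km wavelength in the simulation.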
One interesting characteristic of the GWs radiated from the turbulent wake front is that the horizontal wavenumber vector is not aligned with the terrain slope east of Syowa Station, but with the isentropic surface hump near the surface. Figure 16 shows a horizontal map of θ at an altitude of 0.5 km at t = 12 hr with streamlines of the surface horizontal winds. The isentropic surface front extends almost straight in the E-W direction (indicated by a white broken line), and the surface wind (thick black arrow) crosses the front at a finite angle α (= 41°-50°, as shown in Figure 16), indicating adiabatic lift of an airmass across the front. If the elevated isentropic surface, caused by the hydraulic jump occurring at the steep terrain, were advected passively, the front would be aligned with the surface wind vector (i.e., α = 0°). In that case, the uplift of the airmass, and thus GW radiation, at the front would not occur. It is therefore interesting to consider the mechanism that determines α.
To continue the discussion, we assume that the propagation speed of the front relative to the background wind is determined by the propagation speed of a gravity current. Since the front does not move much after t = 10 hr (Figure 13), the ground-based speed of the front can be regarded as zero, and thus the front is in a steady state, while the turbulent wake downwind of the front shows a transient nature. By analogy to shock waves, α can be regarded as a Mach angle α_s, which is the half-angle of the shock cone radiating from the edge of an object in a flow moving at a velocity V greater than the speed of sound c_s (e.g., Landau & Lifshitz, 1987).
Since the Mach angle is given by α_s = asin(c_s/V), α is estimated by

α = asin(c_gc/U_surf), (11)

where c_gc is the propagation speed of the gravity current and U_surf is the horizontal surface wind upwind of the front. Following the layer theories of downslope winds (R. B. Smith, 2019), c_gc is related to the reduced gravity g′ = g(θ_down − θ_up)/θ_up (Benjamin, 1968), and thus

c_gc = √(2 g′ H_gc), (12)

where θ_up (θ_down) denotes the potential temperature upwind (downwind) of the front and H_gc is the depth of the gravity current (see Figure 17a). Figure 17b shows vertical profiles of θ at t = 12 hr at points A and B in Figure 16. Note that points A and B correspond to upwind and downwind of the front, respectively. The difference between θ_A and θ_B is larger than 1 K below an altitude of 1.0 km, while the vertical profiles are almost coincident in the altitude range of 1.1-2.5 km. Here, θ_up and θ_down are calculated as the vertical averages of θ_A and θ_B, respectively, below H_gc, where H_gc is set to 1.0 km. From Equation 12, the propagation speed of the gravity current is 12 m s⁻¹. The horizontal surface wind speed is 18 m s⁻¹ at point A, and thus, from Equation 11, α = 43°, which is consistent with the value observed in Figure 16 (α = 41°-50°). We also found that varying H_gc from 0.8 to 2.0 km changes α over a range of 40°-55°. Therefore, the angle of the surface wind across the front is determined by both the surface wind speed and the propagation speed of the gravity current.
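These estimates are easy to verify numerically. In the sketch below, the ~2 K potential-temperature difference is a hypothetical value consistent with the ">1 K below 1.0 km" stated above (chosen so that c_gc comes out near the quoted 12 m s⁻¹); H_gc and U_surf are the values from the text.

```python
import math

g = 9.81            # gravitational acceleration (m/s^2)
H_gc = 1.0e3        # gravity-current depth (m), from the text
theta_up = 262.0    # K; hypothetical upwind value
theta_down = 264.0  # K; hypothetical, ~2 K warmer downwind
U_surf = 18.0       # surface wind upwind of the front (m/s), from the text

# Equation 12: c_gc = sqrt(2 g' H_gc), with the reduced gravity
# g' = g * (theta_down - theta_up) / theta_up.
g_prime = g * (theta_down - theta_up) / theta_up
c_gc = math.sqrt(2.0 * g_prime * H_gc)

# Equation 11: alpha = asin(c_gc / U_surf).
alpha = math.degrees(math.asin(c_gc / U_surf))

# Cross-front wind component: equals c_gc, independent of U_surf.
U_perp = U_surf * math.sin(math.radians(alpha))

print(f"c_gc = {c_gc:.1f} m/s, alpha = {alpha:.0f} deg, "
      f"U_perp = {U_perp:.1f} m/s")
```

With these inputs α ≈ 43°, as in the text, and U_perp comes out equal to c_gc regardless of U_surf.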
One interesting implication concerns the component of the background wind perpendicular to the front, U⊥, which is generally a key factor in determining wave characteristics for orographic GWs. Since U⊥ = U_surf sin α = c_gc and c_gc = √(2 g′ H_gc), U⊥ does not explicitly depend on the total background wind U_surf. This implies that an increase in U_surf leads to a decrease in α, and consequently the resultant U⊥ does not change. Nonetheless, the total background wind U_surf plays a significant role in determining the wave characteristics, because the depth of the hydraulic jump occurring at the steep terrain of the continent, and thus the depth of the gravity current H_gc, will depend on U_surf.
In summary, the present simulation suggests the occurrence of GW radiation downwind of a hydraulic jump, which can be classified as a type of GW radiation resulting from the interaction between surface frontal structures and cross-front winds (Kilpatrick et al., 2014; Plougonven & Snyder, 2007; Ralph et al., 1999). As indicated in Figure 16, three-dimensional simulations, rather than two-dimensional ones, are necessary to reproduce such a GW radiation process. The following remarks can be made about this GW radiation:
• Supercritical downslope flow (Froude number greater than 1) is in general associated with hydraulic jump occurrence. The steep topography and frequent occurrence of strong surface winds on the coast of Antarctica (Parish & Bromwich, 1987) make it a potential hot spot for this type of GW radiation, while a shock-like structure along the coastal region has been reported in the midlatitudes (Burk & Thompson, 2004).
• The horizontal wavelength of the GWs is longer than that of the small-scale (turbulent) disturbances and should depend on the horizontal scale of the isentropic surface hump.
• The phase lines of the GWs are aligned with the isentropic surface hump near the surface, meaning that the horizontal wavenumber vector is not parallel to the coastal slope, as is typically observed for orographic GWs.
• Numerical models aiming to simulate GW radiation downwind of a hydraulic jump should explicitly resolve small-scale (turbulent) eddies or use boundary-layer parameterizations to capture small-scale (turbulent) disturbances near the surface.
Concluding Remarks
A numerical simulation of GWs observed by a radar at Syowa Station, Antarctica, on 18 May 2021 was conducted using a high-resolution model. The horizontal grid spacing is 250 m in the central domain and the vertical grid spacing is 60 m, both of which are much finer than those used in previous GW modeling studies over Syowa Station. The simulation successfully reproduced the observed features of the GWs, including the amplitude of the vertical wind disturbances in the troposphere and the vertical fluxes of northward momentum in the lower stratosphere. The modeling results include the following:
• In the troposphere, ship-wave responses are observed along the small coastal topography northeast of Syowa Station, while in the stratosphere, wave filtering in the directional vertical shear of the background winds causes a significant change in the wave pattern.
• A multi-layer structure of small-scale turbulent vorticity was simulated over Syowa Station in the lower stratosphere, consistent with the radar observations, and the simulated volume rendering of vorticity shows horseshoe-shaped vortex tubes, indicative of GW breaking.
• The simulation shows another wave pattern, with a horizontal wavelength of about 25 km, in the lower stratosphere west of Syowa Station, whose phase line is aligned with the turbulent wake front downwind of a hydraulic jump that occurs over the steep terrain.
• The observed GWs are likely radiated by the adiabatic lift of an airmass along the isentropic surface hump near the ground, which explains the northward momentum fluxes observed by the radar in the lower stratosphere.
The height variation of GW amplitude and phase correlates with the profile of background wind direction, as predicted by linear theory (e.g., Shutts, 1998) and other numerical simulations (e.g., Eckermann et al., 2007; Guarino et al., 2018). Eckermann et al. (2007) examined changes in wave patterns with altitude as observed from space and concluded that they are strongly related to the variation of the background winds with height. While the present simulation shows a similar change in the wave pattern with altitude, it also reveals small-scale turbulent vortex tubes around the critical level, which are indicative of GW breaking. A comparison between high-resolution simulations and vertical profiles of turbulent energy dissipation rates from the radar is promising for further case studies.
Finally, although the present study focused on the MWs and their responses in the troposphere and lower stratosphere, the simulation covers the troposphere through the mesosphere, and we are currently analyzing GW dynamics in the upper stratosphere and mesosphere. The results will be reported elsewhere.
Figure 1.
Figure 1. Terrain heights around Syowa Station in the Complex Geometry Compressible Atmospheric Model domain. Syowa Station is located on a small island at the center of the model domain. The contour interval is 150 m. The thick white contours indicate the coastline. A gray open rectangle indicates the central domain with a constant grid spacing of 250 m. The gray shaded area along the edge of the panel indicates the sponge layer.
Figure 2.
Figure 2. (a) Vertical profiles of U₀ (red) and V₀ (blue) over Syowa Station up to an altitude of 100 km. The broken curves indicate initial vertical profiles of U₀ and V₀. Panels (b, c) same as panel (a) but for (b) T₀ and (c) N₀². (d) The prescribed time variation of the ramping f(t) for low-level U₀ and V₀.
where t_m = 4 hr, z_c = 8 km, and z_w = 4 km. The background meridional wind V(z, t) is given similarly. The vertical profiles of the background winds at the initial time step are shown by the broken curves in Figure 2a. The ramp function f(t) gradually increases to unity by t = 8 hr (Figure 2d), and [U(z, t), V(z, t)] = [U₀(z), V₀(z)] after t = 8 hr.
Figures 6a and 6b show horizontal maps of w at an altitude of 7.5 km at t = 9 and 12 hr. Movies of the time evolution of w in the horizontal plane are included for reference in the accompanying Supporting Information S1. On the continental coast northeast of Syowa Station, ship-wave patterns with an amplitude of ∼0.5 m s⁻¹ are present at 9 and 12 hr. The phase and amplitude of the wave pattern in both plots are quite similar. The results suggest the generation of MWs from small-scale uneven terrain along the coast of the continent under background surface winds from the ENE. To the south of Syowa Station, there are vertical wind disturbances with amplitudes greater than 1 m s⁻¹. The wave phase lines are approximately aligned with the coast of the steep terrain. Furthermore, small-scale turbulent disturbances are prominent in the region from southeast to west of Syowa Station, particularly at t = 12 hr. Note that the magnitude of the vertical wind over Syowa Station is approximately 2 m s⁻¹, which is slightly larger than or comparable to the radar observation on 18 May (Figure 3c).
Figure 6.
Figure 6. (a, b) Horizontal maps of w at an altitude of 7.5 km at (a) t = 9 hr and (b) t = 12 hr. A black arrow at the upper-right corner of each panel indicates the direction of the surface wind. Gray contours indicate the terrain height with an interval of 150 m. Panels (c, d) same as panels (a, b) but for w at an altitude of 15 km.
Figure 8.
Figure 8. (a) Vertical profiles of m² for a horizontal wavelength of 30 km, for k_h oriented to N-S (red) and NE-SW (blue). (b) The values of m² for a horizontal wavelength of 30 km for different directions of k_h as a function of height. The calculation of m² is performed every π/48 rad. Red and blue vertical broken lines indicate the N-S and NE-SW orientations, respectively.
Figures 11b-11g show isentropic (potential temperature) surfaces for 260 and 300 K at t = 8, 10, and 12 hr.
Figure 10.
Figure 10. (a) Isovalue surfaces of |ζ| = 10⁻⁴ s⁻¹ at t = 12 hr at altitudes of 9-12 km for the central domain with a width of 20 km, viewed from the +x direction. Panels (b-d) same as panel (a) but with the surface rotated clockwise around the z-axis by 30° each.
Figure 11.
Figure 11. (a) Elevation of the terrain around Syowa Station. A white arrow shows the direction of the surface background wind (U₀(0), V₀(0)). (b-d) Isentropic surface for 260 K at (b) t = 8 hr, (c) 10 hr, and (d) 12 hr. The color indicates the height of the isentropic surface. Panels (e-g) same as panels (b-d) but for the isentropic surface for 300 K.
Figure 12.
Figure 12. Vertical sections of θ (contours) and u (color) at altitudes of 0-5 km along y = −20 km at t = 8, 10, and 12 hr. The contour interval is 2 K. The gray region indicates terrain.
Figure 15.
Figure 15. (a, b) Horizontal maps of (a) u′w′ and (b) v′w′ at an altitude of 12 km at t = 12 hr. Black contours indicate terrain elevation with an interval of 150 m. The color scale is approximately logarithmic. Panels (c, d) same as panels (a, b) but for an altitude of 18 km.
Figure 16.
Figure 16. A horizontal map of θ (color) at an altitude of 0.5 km at t = 12 hr with the elevation of the terrain (thin gray contours). The contour interval is 50 m. Blue curves with arrows indicate directions of surface winds at t = 12 hr. A broken white line indicates the isentropic surface hump. A black arrow indicates the direction of the surface wind leeward of the gravity current. The reference points upwind and downwind of the front are denoted by A and B (gray open circles), respectively.
Figure 17.
Figure 17. (a) A schematic of the gravity current with a depth of H_gc. θ_up and θ_down indicate the potential temperature upwind and downwind of the front, respectively. (b) Vertical profiles of θ at t = 12 hr at points A (red) and B (blue) shown in Figure 16.
Table 1
Parameters of the PANSY (Program of the Antarctic Syowa MST/Incoherent Scatter) Radar. Here, c_v is the specific heat at constant volume and δ_ij is the Kronecker delta; μ and κ are the …
Load Balancing for Future Internet: An Approach Based on Game Theory
In recent years, countries all over the world have treated the future internet as a strategic development direction, and many have launched future internet projects. Load balancing algorithms and job allocation are central research problems in the resource management of the future internet. In this paper, we introduce a load balancing model for the future internet. We formulate the static load balancing problem in the proposed model as a noncooperative game among users and a cooperative game among processors. Based on this model, we derive a load balancing algorithm for a computing center. Finally, we evaluate the presented algorithm against three other algorithms for comparison. The advantages of our algorithm are better scalability, improved system performance, and a low cost of maintaining system information.
Introduction
Due to its rapid development, the Internet has become one of the most important infrastructures of the information society. Countries worldwide regard the Internet's sustainable development as an important means of advancing information technology and enhancing international competitiveness, and have therefore launched research programs on the future Internet. FIND, under the US National Science Foundation, mainly develops the future Internet architecture and asks "what Internet will be in the next 15 years, and how it will run." The GENI project mainly studies the future Internet from the aspects of security, mobility, and sensor networks, and will build a large open experimental platform that can truly verify future Internet architectural designs. The European 4WARD and FIRE projects, as well as the Japanese AKARI project, have also carried out studies on the future Internet [1-5]. China has likewise launched the National Basic Research Program "Service-oriented Future Internet Structure and Mechanism Research." As one part of this program, this paper focuses on load balancing for computing resources of the future internet.
The concept of load balancing differs somewhat among researchers in different research areas, and there is no established definition of load balancing for the future internet. During our research on the future internet, we observed that future internet users may have only a screen used to access the internet, while all of the computation, storage, application services, and other services provided by today's PCs are offered by the internet itself. In this paper, we therefore define the load balancing of the future internet as a mechanism that aims to spread the internet's computing load, traffic load, and other resource-dependent loads self-adaptively and self-organizationally across resource centers; to spread the workload of each resource center evenly across its working nodes; to minimize the average task response time for users; to maximize the utilization of the whole internet; and to establish a green future internet. In this paper, we focus on the computing load in the computing centers of the future internet.
1.1. Load Balancing Algorithms. In traditional research, load balancing algorithms can be classified as centralized (e.g., [6, 7]) or decentralized (e.g., [8, 9]). In the centralized approach, only one node makes load balancing decisions, and all information must go through this node. All jobs in the system are allocated by this node to the other nodes to be processed, so the node may become a single point of failure. In the decentralized approach, all nodes are involved in the load balancing decisions. Though it is more robust than the centralized approach, it is costly for many nodes to maintain load balancing information for the whole system. Most decentralized approaches have each node obtaining and maintaining only partial information locally to make suboptimal decisions [10].
According to the stage at which the load balancing algorithm operates, algorithms can be divided into static (e.g., [6, 11, 12]) and dynamic (e.g., [13-15]). In a static load balancing algorithm, all information about the system is known in advance, and the load balancing strategy is fixed at compile time and kept constant during the runtime of the system. In contrast, a dynamic algorithm operates at runtime, and its load balancing strategy changes according to the actual state of the system. Though the dynamic algorithm has better adaptability, it is sensitive to the accuracy of the load information: even slightly inaccurate information can cause serious mistakes, and in a real system 100% accuracy is impossible to achieve.
In recent years, the so-called hybrid scheduling has been receiving some attention [16,17].It combines the advantages of static and dynamic algorithms and minimizes their relative inherent disadvantages.
1.2. Related Work.
A great deal of work has been done on the load balancing problem. For example, Bryhni et al. summarized load balancing algorithms in the area of scalable web services for comparison purposes [18], and Lu et al. proposed a novel load balancing algorithm for dynamic scalable web services [19]. Soror et al. considered a common resource consolidation scenario in which several database management system instances, each running in a virtual machine, share a common pool of physical computing resources [20].
Many researchers have addressed the load balancing problem using game theory in distributed computing [11], grid computing [6, 12, 21, 22], and cluster computing [23, 24]. Viscolani characterized the pure-strategy Nash equilibria in a game with two competing profit-maximizing manufacturers who have access to a set of several advertising media [25]. Nathani et al. proposed a dynamic-planning-based scheduling algorithm to maximize resource utilization [26]. Ye and Chen investigated noncooperative games on two resource allocation problems, server load balancing and VM placement, and proved the existence of a Nash equilibrium for both games [27]. Wu et al. modeled a cooperative behavior control game in which the individual utility function is derived from energy efficiency in terms of global max-min fairness under an outage performance constraint [28]. Kong et al. investigated virtual resource allocation in a noncooperative cloud environment, where computing resources are provided dynamically in a pay-as-you-go manner and virtual machines can selfishly request resources to maximize their own benefit [29].
Recently, with the development of cloud technology, many researchers have shifted their attention to cloud and data center environments [30-34]. Khiyaita et al. gave an overview of load balancing in cloud computing, covering systems such as DNS, ZXTM LB, and Amazon load balancing, and exposed the most important research challenges [35]. We also find that scalability, energy efficiency, and green computing are three further objectives for load balancing research now and in the future [28, 36-39]. As the future internet is a new research area, there is little work on load balancing using a game theory approach.
In this paper, we present a load balancing model for the future internet. We then propose a semidecentralized solution to the load balancing problem of the future internet. This solution is a hybrid approach that combines a noncooperative game among users and a cooperative game among processors (NOCOG). In this model, nodes need not maintain as much information as in traditional methods. The advantages of our algorithm are therefore better scalability of the model, improved system performance, better fairness between processor nodes, and low cost of maintaining system information.
In Section 2, we first discuss a load balancing model for the future internet. We then formulate the static load balancing problem in this model as a noncooperative game among users and a cooperative game among processors, and derive a load balancing algorithm for the future internet, described in Sections 3 and 4. Finally, in Section 5, we compare the proposed algorithm with three existing algorithms.
System Model for Load Balancing in Future Internet
During our research on the future internet from the viewpoint of management, we came to believe that the future internet is easily accessed, hierarchical, virtualized, perceptional, and personalized. As shown in Figure 1, the future internet is virtualized into three management layers. Level 1 is a national-layer resource control and processing center; level 2 is a region-layer resource control and processing center; level 3 consists of network routing nodes that can offer storage and computing services. Users obtain services by accessing the internet through their phones, pads, or other devices. We assume that all jobs sent by users can be considered as requests for network resources. When users send requests to level 3 and level 3 cannot handle them, these requests may be forwarded to the upper level. So no matter which layer sends requests, it acts in the role of a user, and the upper layer acts as the resource center. Based on this, we present a load balancing model for the future internet, as shown in Figure 2.
The load balancing model for the future internet proposed in this paper consists of users, load managers, and processors. Users generate jobs/requests and send them to the resource center. The jobs arrive at the resource center and are allocated to processors by the load managers. In this paper, we treat the resource center as a computing center offering computation services.
In this model, all users send jobs to the computing center, and the jobs are received by load managers randomly: a load manager can receive jobs from many users, and jobs from one user can be received by more than one load manager. A load manager dispatches the jobs it receives to the processors it manages as soon as it receives them; a job is executed by the allocated processor and is not dispatched again to another processor. Each processor maintains a queue holding the jobs to be executed; each job is processed on a first-come-first-served (FCFS) basis, and the results are then sent back to the users.
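As a sanity check of the queuing assumptions used below, the mean response time of a single FCFS M/M/1 queue can be simulated and compared against the textbook formula 1/(μ − λ). This is a minimal sketch; the rates and function names are illustrative, not taken from the paper's configuration:

```python
import random

def mm1_mean_response(lam, mu, n_jobs=200_000, seed=1):
    """Simulate an FCFS M/M/1 queue via the Lindley recursion and
    return the mean response (sojourn) time of the served jobs."""
    rng = random.Random(seed)
    wait = 0.0          # waiting time of the current job
    total = 0.0         # accumulated response time
    for _ in range(n_jobs):
        service = rng.expovariate(mu)
        total += wait + service                      # response = wait + service
        interarrival = rng.expovariate(lam)
        wait = max(0.0, wait + service - interarrival)
    return total / n_jobs

lam, mu = 0.5, 1.0
sim = mm1_mean_response(lam, mu)
theory = 1.0 / (mu - lam)   # M/M/1 mean response time = 2.0 here
```

With a long enough run, the simulated mean settles close to 1/(μ − λ), which is the per-queue response-time expression the model relies on.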
We call a load manager together with the processors it manages a cluster; the load manager is in charge of the cluster's information and job allocation. In this model, user j generates jobs and sends them to the computing center with an average job generation rate φ_j; load manager i, with average job/request arrival rate Φ_i, is in charge of dispatching the jobs/requests it receives to the processors it manages, sending jobs to processor k at a rate λ_ik; processor k managed by load manager i is characterized by an average processing rate μ_ik and sends the results back to users after execution. The vector S_i = [λ_i1, λ_i2, ..., λ_im_i] is called the load balancing strategy of the cluster managed by load manager i. The vector S = [S_1, S_2, S_3, ..., S_n] is called the load balancing strategy profile of the whole cloud center. The vector s_j is called the job allocation strategy of user j, and s = [s_1, s_2, s_3, ..., s_m] is called the load balancing strategy profile of the whole game.
Most of the previous work on static load balancing formulates the problem as a cooperative game among processors, whose main objective is to minimize the overall expected response time. Although a good fairness index among processors can be achieved this way, fairness between users is difficult to achieve: some users may still be waiting for the response to their jobs while other users' jobs have already been processed and returned, because this mechanism ignores the selfish character of the users. In the real world, users make their job allocation strategies so as to obtain the minimal response time, which motivated the noncooperative approach. By simulation, however, we find that the noncooperative approach yields poor fairness among processors. Accordingly, we formulate the load balancing problem in the future internet as a noncooperative game among users, and the load balancing problem within a cluster as a cooperative game among processors. Our algorithm improves both the fairness among processors and the expected system time.
Load Balancing Algorithm as Noncooperative Game among Users
A load manager dispatches jobs to the processors it manages as soon as it receives them; there is no waiting queue at the load manager. We therefore consider the cluster managed by load manager i as a super-processor with an average job processing rate μ_i = Σ_{k=1}^{m_i} μ_ik, where m_i is the number of processors managed by load manager i. Each cluster is modeled as an M/M/1 (Poisson arrivals and exponentially distributed processing times) queuing system [40,41]. To ensure that the model is effective, the average total job arrival rate must be less than the total average processing rate of the clusters: Σ_j φ_j < Σ_i μ_i. The fraction s_ji (s_ji ≥ 0, Σ_i s_ji = 1) represents the share of its workload that user j sends to load manager i. As shown in Figure 2, the problem is to decide how to distribute the jobs received from a user among the clusters; once the s_ji are determined by the algorithm proposed in this paper, cluster i receives jobs at a rate λ_i = Σ_j s_ji φ_j. The completion time of a job in such a queuing system involves the transfer time and the residence time in the computing center, which consists of the executing time of the job and the waiting time in the queue. Since each cluster is modeled as an M/M/1 queuing system at the load manager, the expected response time of a job at cluster i is 1/(μ_i − λ_i). We introduce a variable t_ji for the transfer time from user j to cluster i; it may depend on the average size of a job, the distance between the user and the computing center, the bandwidth available to users, and other factors. Thus the overall expected response time of user j with job allocation decision s_j is D_j(s_j) = Σ_i s_ji (1/(μ_i − λ_i) + t_ji). The goal of a user is to find a load balancing strategy s_j such that D_j(s_j) is minimized; the decision of a user depends on the load balancing strategies of the other users. For this noncooperative game, a Nash equilibrium can be reached at which no user can decrease its average expected response time by unilaterally changing its strategy [42,43].
The problem can thus be translated into the following optimization problem [44]: minimize the expected response time D_j(s_j) of user j given by (7) under the constraints (2), (3), and (4). Since D_j is convex in s_ji and the constraints are linear, the first-order Kuhn-Tucker (KT) conditions are necessary and sufficient for optimality. We introduce a variable μ̂_ji representing the available processing rate at cluster i as seen by user j, i.e., the processing rate of cluster i minus the load placed on it by the other users: μ̂_ji = μ_i − Σ_{k≠j} s_ki φ_k. Note that the higher the available processing rate of a cluster, the higher the fraction of jobs assigned to it. Solving the KT conditions shows how negative values of s_ji could occur: they are due to clusters with low processing rates. Such clusters should be excluded, and the fraction of jobs assigned to them set to 0. We therefore sort the clusters in the order determined by (15), find the minimum index c for which the inequality (16) holds, set s_ji = 0 for the clusters beyond c, and obtain the remaining s_ji from the closed-form expression (18). Algorithm for the noncooperative game among users: see Algorithm 1.
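To illustrate the structure of the user-level game, here is a minimal best-reply sketch under the simplifying assumption that the transfer times are negligible; in that case each user's subproblem over the available cluster rates (the cluster rate minus the load placed by the other users) has the classical closed-form "square-root" solution with exclusion of slow clusters. All function and variable names are ours, not the paper's:

```python
import math

def square_root_allocation(mu_hat, phi):
    """Optimal split of workload phi over M/M/1 servers with available
    rates mu_hat, minimizing mean response time; servers that would
    receive a negative rate are excluded (set to 0). Assumes phi is
    feasible (phi < sum(mu_hat)) and all rates are positive."""
    active = sorted(range(len(mu_hat)), key=lambda i: mu_hat[i], reverse=True)
    while active:
        total_mu = sum(mu_hat[i] for i in active)
        total_sqrt = sum(math.sqrt(mu_hat[i]) for i in active)
        factor = (total_mu - phi) / total_sqrt
        lam = {i: mu_hat[i] - math.sqrt(mu_hat[i]) * factor for i in active}
        if lam[active[-1]] >= 0.0:   # slowest active server still gets work
            break
        active.pop()                 # exclude it and re-solve
    rates = [0.0] * len(mu_hat)
    for i, v in lam.items():
        rates[i] = v
    return rates

def best_reply_dynamics(mu, phi, rounds=50):
    """Users take turns playing a best reply against the current loads."""
    load = [[0.0] * len(mu) for _ in phi]   # load[j][i]: user j's rate at cluster i
    for _ in range(rounds):
        for j, phi_j in enumerate(phi):
            others = [mu[i] - sum(load[k][i] for k in range(len(phi)) if k != j)
                      for i in range(len(mu))]
            load[j] = square_root_allocation(others, phi_j)
    return load

mu = [10.0, 5.0, 1.0]    # illustrative cluster rates
phi = [4.0, 3.0]         # illustrative user demands
load = best_reply_dynamics(mu, phi)
```

Each best reply keeps the user's total allocation equal to its demand and never oversubscribes a cluster, which is the fixed-point structure a Nash equilibrium of the game must satisfy.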
Cooperative Game among Processors of a Cluster
A cluster with average job arrival rate Φ_i dispatches jobs to the processors it manages immediately upon receiving them from users; the processors then execute these jobs and send the results back to the users. As mentioned above, each processor maintains a waiting queue and executes one job at a time under FCFS, so we model each processor as an M/M/1 queuing system. Here we ignore the transfer time of a job from a load manager to a processor, as they are connected by an internal communication network in the computing center. The response time of a job in a cluster thus consists of its processing time (t_proc) and its waiting time (t_wait) in the queue: T = t_proc + t_wait. For an M/M/1 queue, the average response time of a job at processor k of cluster i is T_ik = 1/(μ_ik − λ_ik), where μ_ik is the average processing rate of processor k in cluster i and λ_ik is the sending rate of load manager i to processor k. Since a job is not transferred to another processor after being dispatched, λ_ik is also the job arrival rate at processor k of cluster i. All jobs dispatched by the load manager must be executed, i.e., Σ_{k=1}^{m_i} λ_ik = Φ_i. We now formulate this problem as a cooperative game among the processors of a cluster, with the response time (22) as the cost function of a processor. All processors work cooperatively to finish all jobs as fast as possible, so the Nash bargaining solution is determined by solving the corresponding optimization problem subject to λ_ik ≥ 0 and Σ_k λ_ik = Φ_i. Since the objective is convex in λ_ik and the constraints are linear, the first-order Kuhn-Tucker conditions are necessary and sufficient for optimality. Solving them shows how negative values of λ_ik could occur: they are due to processors with low processing rates. Such processors should be excluded, and their arrival rates set to 0. We sort the processors by the value defined in (31), find the minimum index c for which the inequality (32) holds, set λ_ik = 0 for the excluded processors, and obtain the remaining λ_ik from the closed-form expression (34).
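To make the structure of this derivation concrete, the following sketch reconstructs it under our simplifying assumptions: the objective is taken as the total expected number of jobs in the cluster, Σ_k λ_ik/(μ_ik − λ_ik), and all processors are assumed to remain active (the exclusion step is omitted). The Lagrangian method then yields a closed-form "square-root" allocation:

```latex
L \;=\; \sum_{k=1}^{m_i}\frac{\lambda_{ik}}{\mu_{ik}-\lambda_{ik}}
  \;-\; \alpha\Big(\sum_{k=1}^{m_i}\lambda_{ik}-\Phi_i\Big),
\qquad
\frac{\partial L}{\partial\lambda_{ik}}
  \;=\; \frac{\mu_{ik}}{(\mu_{ik}-\lambda_{ik})^{2}}-\alpha \;=\; 0
\;\Longrightarrow\;
\mu_{ik}-\lambda_{ik}=\sqrt{\mu_{ik}/\alpha}.
```

Summing over k and using Σ_k λ_ik = Φ_i fixes the multiplier, giving

```latex
\lambda_{ik} \;=\; \mu_{ik}
 \;-\; \sqrt{\mu_{ik}}\,
   \frac{\sum_{k'=1}^{m_i}\mu_{ik'}-\Phi_i}{\sum_{k'=1}^{m_i}\sqrt{\mu_{ik'}}},
```

which is nonnegative exactly when the processor's rate is high enough; this is the origin of the exclusion step for slow processors.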
Results and Discussion
We performed simulations to study the fairness between processors and between users at given utilizations of the computing center, the expected response time for users, and the effect of system load. In this simulation environment, there are 15 processors managed by 4 load managers, as shown in Table 1, and 7 users generating jobs and sending them to the computing center; the relative job generation rates are shown in Table 2.
Processors 1 to 4 are managed by load manager 1, processors 5 to 8 by load manager 2, processors 9 to 11 by load manager 3, and processors 12 to 15 by load manager 4. Each relative processing rate is a processing rate divided by the lowest processing rate in the center. A user can send its jobs to any load manager according to its job allocation policy, which is determined by gaming with the other users. Each relative job generation rate is a user's job generation rate divided by the total job generation rate of all users. A load manager allocates the jobs it receives from users to the processors it manages immediately. A processor maintains a waiting queue modeled as an M/M/1 queuing system, processes the jobs received from its load manager, and sends the results back to the users.
Since we assume that the total job generation rate cannot exceed the total processing rate of the center, the relative job generation rate q_j gives the proportion of user j's job generation rate to the total job generation rate of all users. We introduce a variable ρ representing the utilization of the computing center; the real job generation rate of user j is then given by (35): φ_j = q_j · ρ · Σ_{i=1}^{n} Σ_{k=1}^{m_i} μ_ik, where n is the number of load managers, m_i is the number of processors of cluster i, and q_j is the relative job generation rate. Throughout the simulation, we assume an average communication delay of 0.01 second.
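The mapping in (35) from relative rates and target utilization to absolute generation rates can be sketched as follows; the cluster and user values here are illustrative placeholders, not the contents of the paper's Tables 1 and 2:

```python
def real_generation_rates(relative_rates, utilization, processing_rates):
    """phi_j = q_j * rho * (total processing rate of the center)."""
    total_mu = sum(sum(cluster) for cluster in processing_rates)
    return [q * utilization * total_mu for q in relative_rates]

# Illustrative configuration: 2 clusters with per-processor rates,
# 3 users with relative generation rates summing to 1.
processing_rates = [[2.0, 1.0], [3.0, 1.0, 1.0]]   # total rate = 8.0
relative_rates = [0.5, 0.3, 0.2]
phi = real_generation_rates(relative_rates, 0.7, processing_rates)
```

By construction, the user rates sum to ρ times the total processing capacity, so the center runs at the chosen utilization.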
For comparison purposes, we also implemented three other algorithms during the simulation, presented in [9,10,15].
The algorithm in [9], labeled COG, is formulated as a cooperative game among brokers, but each broker needs to maintain the information of all providers. The algorithm in [10], labeled NOG, is formulated as a noncooperative game among users; jobs are sent directly to the processors according to the users' load balancing profiles. The algorithm in [15], labeled PS, is a proportional-scheme algorithm; each user maintains the information of all processors and allocates its jobs to processor i at a sending rate proportional to that processor's capacity: λ_ji = φ_j μ_i / Σ_{k=1}^{m} μ_k, where m is the number of processors in the system. None of these three algorithms is hierarchical; their jobs are sent directly to the processors to be processed. In contrast, the algorithm proposed in this paper is hierarchical: the computing center runs a noncooperative game among users and dispatches jobs to the load managers based on the resulting load balancing profile, and the load managers then allocate them to the processors.
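The PS baseline is the simplest of the three and can be sketched directly; names and rates are illustrative:

```python
def proportional_scheme(phi_j, mu):
    """PS: user j sends jobs to processor i at a rate proportional
    to that processor's processing rate mu[i]."""
    total = sum(mu)
    return [phi_j * m / total for m in mu]

rates = proportional_scheme(6.0, [3.0, 2.0, 1.0])   # -> [3.0, 2.0, 1.0]
```

Unlike the square-root allocation, PS never idles a slow processor, which is one reason for its higher average response time in the comparisons below.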
Fairness.
Fairness is an important quality measure for a load balancing algorithm. For users, fairness means that each user experiences the same response time; the scenario where one user is still waiting for its response while the others have finished cannot occur. For processors, fairness means that each processor has the same average job completion time. If a load balancing algorithm is 100% fair, the fairness index (FI) equals 1.0. Following [45], the fairness index over the average completion times T_ik of the processors is given by (37): FI = (Σ_ik T_ik)² / (N · Σ_ik T_ik²), where N is the number of processors and T_ik is the average completion time of processor k managed by load manager i. In this part of the simulation, the utilization of the computing center is varied from 10% to 90% in steps of 10%; Figure 3 presents the corresponding fairness indices. It can be observed that, for users, the NOCOG and PS algorithms maintain a fairness index of 1 over the whole range of computing center utilization. The NOG method has a fairness index close to 1 at low utilization and equal to 1 at high utilization. Conversely, the fairness index of the COG scheme is lower than that of the other three schemes. For processors, as the utilization increases, the fairness index of the NOCOG method grows from 0.58 up to 0.95, which is better than the PS and NOG algorithms, while the COG scheme has a fairness index of 1 over the whole simulation. In summary, compared with a traditional noncooperative method such as NOG, the proposed algorithm improves the fairness index of processors while guaranteeing the fairness of users; and at high utilization (70% to 90%), the processor fairness index of the NOCOG algorithm is very close to that of the COG algorithm.
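The fairness index of (37) has the form of Jain's index and is a one-liner to compute; the sample values are illustrative:

```python
def fairness_index(times):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1.0 when all values are identical, and decreases
    toward 1/n as the values become more unequal."""
    n = len(times)
    s = sum(times)
    return s * s / (n * sum(t * t for t in times))

print(fairness_index([2.0, 2.0, 2.0]))   # 1.0 (perfectly fair)
print(fairness_index([1.0, 2.0, 3.0]))   # about 0.857
```

Applied to the per-processor average completion times, this is exactly the quantity plotted against utilization in Figure 3.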
Response Time.
In this simulation, we use the same system configuration as before and set the utilization of the cloud to 70%. According to the fairness definition, we devise a job allocation method for each user in the COG algorithm (38) and a job allocation method for the users in the PS algorithm (39). The expected job response time for each user at equilibrium is shown in Figure 4. The average job completion time consists of the transfer time to the computing center, the waiting time in the queue, and the executing time of the job itself. Figure 4 shows that the NOCOG and PS algorithms guarantee an equal expected response time for all users, at the price of a higher expected response time, while the COG algorithm achieves a better expected response time for each user. This also confirms that the NOCOG and PS algorithms have a better fairness index.
Effect of System Load.
In this part, we again vary the computing center utilization from 10% to 90% in steps of 10%, with the same configuration as in the two simulations above. We obtain the real job generation rates of the users via (35), compute the load balancing profile of the cloud center and the average job completion time of each processor, and then obtain the average job completion time T_avg of the cloud center via (40). Figure 5 shows the average system time of each algorithm versus the utilization of the computing center. The average system time of the PS algorithm is always the highest of the four algorithms, and that of the COG algorithm is always the lowest. At low utilization (10% to 40%), our NOCOG algorithm outperforms NOG; at middle utilization (50% to 70%), the average system times of the two algorithms are nearly the same; and at high utilization, the NOCOG algorithm outperforms NOG again.
Figure 6 shows how many processors in the center participate in job processing for each algorithm. For the NOG and PS algorithms, all processors participate regardless of the system load. For the NOCOG and COG algorithms, the same number of processors participates at 10%, 20%, 40%, 80%, and 90% load; at the other load levels, fewer processors participate under the NOCOG algorithm than under the COG algorithm. Our algorithm can therefore contribute to the goal of energy conservation and emission reduction.
Conclusion
In this paper, we define the concept of load balancing in the future internet, discuss a probable architecture of the future internet, and present a new framework to solve the load balancing problem for its computing centers. A cooperative game among processors takes the minimal system execution time as its goal, so the fairness index of users is ignored; conversely, a noncooperative game among users takes the minimal job response time of users as its goal, so the fairness index of processors is ignored. At the same time, we believe that the future internet is service-oriented, so we approach the load balancing problem from the perspective of users. Based on the model we establish, we formulate the problem as a noncooperative game among users combined with a cooperative game among processors. From the simulation results, we conclude that the proposed algorithm improves the fairness index of processors while guaranteeing the fairness of users, improves the scalability and efficiency of the system compared with other noncooperative game methods among users, and serves the goal of energy conservation and emission reduction.
Figure 2 :
Figure 2: Load balancing model for future internet.
∂D_j/∂s_ji ≥ 0 and ∂²D_j/∂s_ji² ≥ 0; this means that D_j(s_j) is a convex function in s_ji and all the constraints (2), (3), and (4) are linear. The first-order Kuhn-Tucker (KT) conditions are necessary and sufficient for optimality; the optimal strategy is obtained from the Lagrangian of D_j with multipliers attached to constraints (3) and (4).
∂T_ik/∂λ_ik ≥ 0 and ∂²T_ik/∂λ_ik² ≥ 0; this means that the objective is a convex function in λ_ik, and the constraints (23) and (24) are linear. The first-order Kuhn-Tucker conditions are necessary and sufficient for optimality; the optimal allocation is obtained from the Lagrangian with multipliers attached to constraints (23) and (24).
Figure 3 :
Figure 3: Fairness index versus utilization of computing center.
Figure 4 :
Figure 4: Expected response time for each user.
Figure 5 :
Figure 5: Average system time versus computing center utilization.
Figure 6 :
Figure 6: Number of processors participating in job processing versus computing center utilization.
Table 1 :
Configuration information of the computing center.
Table 2 :
Relative job generation rates of users.
"Computer Science"
] |
Some Recent Results on High-Energy Proton Interactions
Recent experimental results about the energy behavior of the total cross sections, the share of elastic and inelastic contributions to them, the peculiar shape of the differential cross section and our guesses about the behavior of real and imaginary parts of the elastic scattering amplitude are discussed. The unitarity condition relates elastic and inelastic processes. Therefore it is used in the impact-parameter space to get some information about the shape of the interaction region of colliding protons by exploiting new experimental data. The obtained results are described.
Introduction
Precise studies of particle interactions have nowadays moved from cosmic rays to accelerator and collider data. Both elastic and inelastic processes are studied. There are numerous experimental results which are still waiting for their theoretical explanation. In spite of the extreme successes of the strong interaction theory QCD (Quantum Chromodynamics), it is still unable to describe many observational facts. The mathematical methods of QCD are not yet well enough developed to attack them directly.
The most widely used method in physics is the perturbative approach, with its power series expansion exploiting the smallness of the coupling constant. However, in QCD it can be applied only to rather rare collisions with large transferred momenta (or masses), where the coupling strength becomes small due to the asymptotic freedom property specific to QCD. It cannot be applied to the main bulk of "soft" hadronic interactions with low transferred momenta, because there the coupling constant becomes large. Therefore several phenomenological approaches and ad hoc assumptions have been attempted for describing the experimental characteristics and obtaining reliable physical conclusions. Usually these models require many adjustable parameters, so their predictions are very flexible and less definite. In particular, the detailed spatial features of the hadron interaction region are not clearly established. To get them, more elaborate theoretical conclusions should be derived from QCD about the energy and transverse momentum behavior of the amplitudes of particle interactions. Some (quite limited) help can be gained from the general principles of analyticity and unitarity of the scattering amplitudes.
The structure of the paper (my "would-be talk" to Gariy) is as follows. First, general formulae are introduced. The corresponding new experimental data are described and briefly commented upon. They are used for getting some information about the spatial image of the proton interaction region. Final conclusions are presented at the very end.
Some General Formulae
The experimental information about elastic scattering of protons is obtained from measurements of their differential cross section dσ/dt(s, t) as a function of two Lorentz-invariant variables: the transferred momentum −t = 2p²(1 − cos θ) at the scattering angle θ and momentum p in the center-of-mass system, and the total energy squared s = 4E² = 4(p² + m²), where m denotes the proton mass. In what follows, it is convenient to use the scattering amplitude f(s, t) normalized directly to the value of the differential cross section, such that dσ/dt = |f(s, t)|². The dimension of f is GeV⁻². It must satisfy the general rigorous statement of quantum field theory named the unitarity condition. The unitarity of the S-matrix, SS⁺ = 1, relates the amplitude of elastic scattering f(s, t) to the amplitudes of the inelastic processes M_n with n particles produced. In the s-channel they are subject to an integral relation (for more details see, e.g., [7,8]) which can be written symbolically as Im f(s, t) = I₂(s, t) + F(s, t). The non-linear integral term I₂ represents the two-particle intermediate states of the incoming particles. The second term F describes the shadowing contribution of inelastic processes to the imaginary part of the elastic scattering amplitude; following [9], it is called the overlap function. This terminology is ascribed to it because it defines the overlap, within the corresponding phase space dΦ_n, of the matrix element M_n of the n-th inelastic channel and its conjugated counterpart, if one takes into account that the collision axis of the initial particles must be deflected by an angle θ for proton elastic scattering. It is positive at θ = 0 (no deflection!) but can change sign at θ ≠ 0 due to the relative phases of the inelastic matrix elements M_n.
At t = 0 the relation (2) is known as the optical theorem and leads to the general statement that the total cross section is the sum of the cross sections of elastic and inelastic processes, σ_tot = σ_el + σ_inel, i.e., that the total probability of all processes is equal to one. The experimental data provide us with the absolute value of the amplitude f. However, protons possess electric charges, and the amplitude f should contain both nuclear and Coulomb terms. They become comparable and interfere at extremely low transferred momenta. Their interference helps to get some information about the real part of the nuclear amplitude at t ≈ 0, or about the ratio ρ(s, 0) = Re f(s, 0)/Im f(s, 0), where Im f(s, 0) is given by the optical theorem (3).
The Differential Cross Section
The differential cross section of elastic scattering of protons has a specific dependence on the transferred momentum t which evolves with the energy s, as measured by the TOTEM collaboration [10-12]. In Figures 1 and 2 it is shown for two energies of the LHC. Let us discuss four regions which can be noticed in these Figures.
1. The region of extremely small transferred momenta near t = 0 has been used for getting information about the real part of the nuclear amplitude, as described above. The value of the ratio ρ₀ was obtained in experiments at LHC energies to be about 0.1-0.14. These values are in agreement with earlier theoretical predictions [13,14] derived from dispersion relations using the analytical properties of the amplitude. At Intersecting Storage Rings (ISR) energies this ratio is very close to 0, both in experiment and in theory. It follows that the contribution of the real part to the differential cross section is very small (at a level of less than 1%). Thus, the extrapolation of the differential cross section to t = 0 determines the total cross section according to the optical theorem.
2. The second region, the diffraction peak at larger transferred momenta, contributes most to the elastic cross section. It is characterized by an approximately exponential decrease dσ/dt ∝ exp(Bt) up to the dip. The real part in the near-forward direction is small and, according to some reliable theoretical predictions [15], should even diminish, crossing 0 inside this region; thus it can at most produce slight violations of the purely exponential fall-off attributed to the imaginary part. Theoretically this region is described by the dominance of Regge behavior, with the slope B increasing as ln s (due to the s^α(t) behavior with linear Regge trajectories α(t) = α₀ + α′t). This is demonstrated in Figure 3 [16]. The shrinkage of the diffraction cone suggests that protons become larger at higher energies, since B is proportional to the squared radius according to the Fourier transform.
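In practice, the cone slope B is extracted by a linear fit of ln(dσ/dt) versus t inside the cone. A minimal sketch on synthetic, noise-free data (the slope and normalization are illustrative, chosen near typical LHC values):

```python
import math

def fit_slope(ts, dsigma_dt):
    """Least-squares slope of ln(dsigma/dt) versus t,
    i.e. the diffraction-cone slope B."""
    ys = [math.log(y) for y in dsigma_dt]
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

B_true, a0 = 20.0, 500.0                  # illustrative: B ~ 20 GeV^-2
ts = [-0.05 * k for k in range(1, 9)]     # t in GeV^2, inside the cone (t < 0)
data = [a0 * math.exp(B_true * t) for t in ts]
B_fit = fit_slope(ts, data)               # recovers B_true on exact data
```

On real data the fit window matters, since deviations from pure exponentiality appear toward the dip.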
3. The third region, near the dip, gives rise to the speculation that the imaginary part becomes 0 near the dip, so that the real part, although very small, contributes to the differential cross section there. The dip position moves to smaller transferred momenta at higher energies, in accordance with the shrinkage of the diffraction cone.
4. Finally, the largest momenta measured at 13 TeV (from about 0.7 GeV² to 3.5 GeV², see Figure 2) surprised us by demonstrating again an exponential decrease, with a slope about 4 times smaller than in the diffraction cone. At somewhat lower energies the so-called Orear regime dominated there, with a slower exp(−c√|t|) decrease [17,18]; it was interpreted as the byproduct of multiple rescattering on the same object. At 13 TeV the tail is damped more strongly than in the Orear regime and behaves analogously to the diffraction cone. This indicates that some new internal structure (à la the Rutherford finding!) of twice smaller size enters the game. One can speculate that smaller formations of quarks and gluons start playing a role there (diquarks, glueballs...?).
Energy Dependence of the Total, Elastic and Inelastic Cross Sections
The total cross section becomes known from the extrapolation of the differential cross section to the optical theorem point t = 0. The integration of the differential cross section over all transferred momenta gives the elastic cross section.
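Combining the optical-point relation dσ/dt|₀ = σ_tot²(1 + ρ²)/(16π) with a purely exponential cone dσ/dt = (dσ/dt)|₀ exp(Bt), the integral over t gives the often-used estimate σ_el ≈ σ_tot²(1 + ρ²)/(16πB). A quick numerical check with approximate, rounded 13 TeV values (σ_tot ≈ 110 mb, B ≈ 20.4 GeV⁻², ρ ≈ 0.1 — illustrative inputs, not a new measurement):

```python
import math

GEV2_PER_MB = 1.0 / 0.3894   # 1 GeV^-2 = 0.3894 mb (hbar*c conversion)

def elastic_from_cone(sigma_tot_mb, B, rho):
    """sigma_el = sigma_tot^2 (1 + rho^2) / (16 pi B), with mb <-> GeV^-2
    unit conversion; assumes a purely exponential diffraction cone."""
    sigma_tot = sigma_tot_mb * GEV2_PER_MB           # mb -> GeV^-2
    sigma_el = sigma_tot ** 2 * (1.0 + rho ** 2) / (16.0 * math.pi * B)
    return sigma_el / GEV2_PER_MB                    # GeV^-2 -> mb

sigma_el = elastic_from_cone(110.0, 20.4, 0.1)       # roughly 30 mb
```

The result lands near 30 mb, consistent with the measured elastic cross section at 13 TeV, showing the internal consistency of the cone parameters.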
The physics results obtained at fixed-target accelerators dominated until the 1970s. The proton-proton total cross section was steadily decreasing with increasing energy. Theorists believed that at higher energies it would decrease further, either in a way similar to the cross section of electron-positron annihilation or, at best, tend asymptotically to some constant value somehow related to the proton size of the order of 1 fm. This belief was first strongly shaken in 1971 [19] by measurements at the Serpukhov fixed-target accelerator (available energy √s about 12 GeV in the center-of-mass system (c.m.s.)). The measured cross section of the interaction of positively charged kaons (K⁺) with protons started to increase slightly at energies from 8 to 12 GeV. At the very beginning this effect was not taken seriously enough. However, it soon became well recognized at the ISR collider, being confirmed by the rise of the total cross section of proton-proton collisions by about 10% in the wider energy range from about 10 to 62.5 GeV [20]. Nowadays a much stronger effect is clearly seen at the LHC up to 13 TeV, as demonstrated in Figure 4 [16] for the total, inelastic and elastic cross sections. The total cross section increases more than 2.5 times from ISR to LHC!
Cosmic ray data are obtained by the two collaborations Auger and Telescope Array. They also support this tendency up to energies of almost 100 TeV, though with much less precision. Some of them are shown in Figure 4. Such a behavior tells us that the size of the interaction region of protons becomes larger at higher energies. An upper bound on the increase of the total cross section was theoretically imposed when it was shown that it cannot increase more rapidly than the logarithm of the energy to the second power (the "Froissart-Martin bound"). However, the theoretical coefficient in front of the logarithm happens to be very large. Therefore, phenomenologically, it does not exclude, over some interval of present energies, the use of a slow power-law energy dependence below this limit. Such a rise of hadronic cross sections is understood within scattering theory as being due to a virtual exchange of vacuum quantum numbers, known from Regge theory as a Pomeron. The power-like dependence can be ascribed to the exchange of the so-called "supercritical Pomeron", i.e., a pole singularity with intercept exceeding 1. The very existence of such a Pomeron, or of another suitable Reggeon singularity, as well as their dynamical origin, are still unclear.
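If σ_tot grows like s^Δ (a "supercritical Pomeron" with intercept 1 + Δ), two measurements suffice to estimate the effective exponent. A back-of-the-envelope sketch with approximate, rounded values (σ_tot ≈ 43 mb at √s = 62.5 GeV, ≈ 110 mb at 13 TeV — illustrative inputs):

```python
import math

def pomeron_delta(sqrt_s1, sigma1, sqrt_s2, sigma2):
    """Effective exponent Delta in sigma_tot ~ s^Delta from two points."""
    return math.log(sigma2 / sigma1) / math.log((sqrt_s2 / sqrt_s1) ** 2)

delta = pomeron_delta(62.5, 43.0, 13000.0, 110.0)   # roughly 0.09
```

The resulting Δ of about 0.09 is in the range commonly quoted for the supercritical Pomeron intercept and stays comfortably below the Froissart-Martin-allowed growth at present energies.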
Energy Dependence of the Ratio of Elastic to Total Cross Section
If the energy behavior of the total cross section can be phenomenologically interpreted in terms of Reggeon exchanges, a yet unsolved puzzle is provided by the completely unexpected energy dependence of the ratio of the elastic cross section to the total cross section. It is shown in Figure 5 [16] that this ratio also increases by more than 1.5 times from ISR to the LHC. Probably, a more impressive way to express this is by the comparison of inelastic to elastic cross sections. The inelastic cross section is about 5 times larger than the elastic one at ISR, while their ratio is less than 3 at LHC energies.
Figure 5. The energy dependence of the ratio of the elastic to total proton-proton cross sections (the survival probability) reproduced in [5].
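These numbers can be restated as elastic-to-total ratios. A minimal sketch (the factors 5 and 3 are the ones quoted above; everything else follows from σ_tot = σ_el + σ_inel):

```python
# Restating the quoted inelastic-to-elastic factors as sigma_el/sigma_tot.
# Since sigma_tot = sigma_el + sigma_inel,
#   sigma_el / sigma_tot = 1 / (1 + sigma_inel / sigma_el).

def el_to_tot(inel_over_el):
    """Elastic-to-total ratio from the inelastic-to-elastic factor."""
    return 1.0 / (1.0 + inel_over_el)

ratio_isr = el_to_tot(5.0)  # ISR: sigma_inel ~ 5 * sigma_el  -> ~0.17
ratio_lhc = el_to_tot(3.0)  # LHC: sigma_inel / sigma_el < 3  -> > 0.25
print(ratio_isr, ratio_lhc, ratio_lhc / ratio_isr)
```

The last printed number is 1.5, reproducing the "more than 1.5 times" increase of the ratio from ISR to the LHC stated in the text.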
The ordinate axis of Figure 5 tells us that the survival probability of protons, i.e., the probability to leave the interaction region intact, is high enough and, what is more surprising, increases at higher energies. In other words, even being hit harder, they do not break up producing secondary particles in inelastic collisions but try to preserve their integrity. That contradicts our intuition based on classical prejudices. Naively, one could imagine the protons as two Lorentz-contracted bags colliding with high velocities. The bag model was widely used for describing the static properties of hadrons, with quarks and gluons immersed in a confining shell. The color forces between the constituents are governed by QCD. Somehow Nature forbids the emission of colored objects (quarks and gluons) as free states because they have never been observed in experiment. Thus these constituents can be created only in colorless combinations, manifested in inelastic collisions as newly produced ordinary particles (mostly pions) and resonances. The dynamics of internal fields during collisions and color neutralization is yet unclear. However, the quantum origin of these fields must be responsible for the observed increase of the survival probability.
One could imagine a classical analogy to the bag model as a Kinder-surprise toy with many unseen pieces (quarks, gluons) hidden inside it. Their colorless blobs appear outside if two such toys are broken in a collision. These toys will never stay intact if hit strongly enough. Thus the increase of the survival probability of protons with increasing collision energy is a purely quantum effect.
At the same time, one should keep in mind that this is a temporary effect, because the elastic cross section can never exceed the total one, so their ratio must saturate asymptotically below 1.
The Spatial View of Interacting Protons
From the early days of Yukawa's prediction of pions, the spatial size of hadrons was ascribed to the pionic cloud surrounding their centers. The pion mass sets the scale of the size at about 1 fm = 10⁻¹³ cm. Numerous experiments using different methods confirmed this estimate, with values of the proton radius ranging from 0.84 fm to 0.88 fm. This 5% difference has been named the "proton radius puzzle". The different methods used in the various experiments could be responsible for this discrepancy; their sensitivity to the central and peripheral regions may differ. Among new experiments, it is worth mentioning recent results from the Jefferson laboratory [21] which reveal the internal gravitational forces inside the proton. It happens that they are repulsive at the center (up to 0.6-0.7 fm) and attractive (strongest at about 0.9 fm) at the periphery ("an extremely high outward-directed pressure from the center of the proton, and a much lower and more extended inward-directed pressure near the proton's periphery"). It is also interesting that lattice calculations [22] showed that "the gravitational strength" of a proton (its mass) receives only 9% from the Higgs mechanism, which provides the origin of the quark masses. It is almost equally shared in three parts (30 ± 5% each) by the kinetic energies of quarks and gluons and by their interactions. The three-quark content of the proton is crucial for its static properties, while the parton model is widely discussed for physics in collision. Surely, all these details of the proton substructure are important for their interaction.
Both central and peripheral regions play important roles in particle collisions. Traditionally, hadron collisions were classified according to our prejudices about the hadron structure. The outermost shell was considered to be formed by single pions as the lightest particle constituents. The deeper shells were constructed from heavier objects (2π, ρ-mesons etc.). When treated in quantum field theory terms, these objects contribute to the scattering amplitudes by their propagators, which are damped at transferred momenta of the order of the corresponding masses. According to the Heisenberg principle, the largest spatial extension is typical for single pion exchange. That is why the one-pion exchange model was first proposed in our early paper [23] for the description of peripheral interactions of hadrons. It initiated the multiperipheral approach with exchange of a chain of pions. The very first and simplest of them was the model of Amati et al. [24] for pion-pion interactions with production of ρ-mesons. It predicted a cross section decreasing at high energies. That deficiency could be cured by the creation of correlated groups of particles with larger masses (clusters), as described in the review papers [25][26][27]. Later, more central collisions with exchange of ρ-mesons and all other Regge particles were considered and included in the multiperipheral models of inelastic hadron interactions.
Meanwhile, the partonic description of inelastic processes, with quarks and gluons playing the role of partons, became well developed, in particular for electron-positron annihilation. Particle correlations were at the center of my studies of these processes [28] as well. Also, I speculated that massless gluons can be similar to photons and produce a hadronic Cherenkov effect when they pass through the nuclear medium [29]. Then they can generate some specific correlations resulting in peculiar bumps of the angular distribution of produced particles similar to Cherenkov rings. Such bumps were observed in heavy-ion collisions. Alternative explanations were proposed as well. Somewhat earlier, I got interested in the relation between elastic and inelastic processes imposed by the general principle of quantum field theory called the unitarity condition (2). The elastic and inelastic processes are united in this condition by the statement that their total probability is equal to 1. With its help, it turned out to be possible to show [18,30] that some special behavior of the elastic scattering at large transferred momenta, experimentally observed in some energy interval, directly follows from this condition. From time to time, I discussed that with Gariy. Now I have again become interested in this approach as applied to the spatial view of proton collisions. Almost four years after he passed away, I describe in this paper the new experimental data and theoretical topics of 2015-2018 which we had no chance to discuss together [1,3,5,6].
The Unitarity Condition and the Spatial Image of Proton Interactions
The spatial structure of the interaction region of colliding protons can be studied by using information about their elastic scattering with the help of the unitarity condition. For that purpose, the relation (2) must be transformed to the space representation. The whole procedure becomes simplified because in the space representation one gets an algebraic relation between the elastic and inelastic contributions to the unitarity condition in place of the more complicated non-linear integral term I₂ in Equation (2).
The Fourier transformation, which is at the origin of the Heisenberg principle relating space-time to energy-momentum characteristics, leads to power-like dependences if applied directly to the propagators. However, an exponential fall-off is more typical for the general functional behavior of hadronic interactions, as seen, for example, from the shape of the diffraction cone of their elastic scattering differential cross section. Therefore, one has to cure this deficiency and ascribe the exponents to the Regge-type behavior of the vertex functions. Thus the propagators lose their heuristic role, leaving it to phenomenological prescriptions. The simple spatial view is lost as well. To regain direct insight into it, one has to deal with the impact-parameter representation of the scattering amplitude. Its connection with experimental results on the transferred momentum dependence is established by the Fourier-Bessel transformation. The traditional prejudice is that large impact parameters contribute mostly to the cross section at small transferred momenta. However, even though the exponential shape of the Fourier transform favors that, this statement depends strongly on the shape of the amplitude itself in the impact-parameter representation.
It would be desirable to get some information about inelastic processes not from phenomenological models but from more general principles. At first glance, this way could be provided by one of them: the general unitarity condition, which directly relates elastic and inelastic amplitudes. The properties of elastic processes have been studied experimentally rather precisely. Using them, one can hope to learn some features of inelastic processes. This proposal leads to interesting results but requires additional assumptions, discussed below in detail.
To define the geometry of the collision we must express all characteristics presented by the angle θ and the transferred momentum t in terms of the transverse distance between the trajectories of the centers of the colliding protons, namely the impact parameter b. This is easily carried out using the Fourier-Bessel transform of the amplitude f. It retranslates the experimentally available momentum data into the corresponding transverse space features. The result is written as

iΓ(s, b) = (1/2√π) ∫₀^∞ dq q f(s, t) J₀(qb),  (6)

where J₀ is the Bessel function of 0-th order and q = √(−t). The unitarity condition (2) expressed in the b-representation reads

G(s, b) = 2ReΓ(s, b) − |Γ(s, b)|².  (7)

Thus some information about inelastic processes G(s, b) can be gained just from the elastic scattering amplitude f using Equations (6) and (7). The left-hand side (the overlap function in the b-representation) describes the transverse impact-parameter profile of inelastic collisions of protons. It is just the Fourier-Bessel transform of the overlap function g.
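To make Equations (6) and (7) concrete, here is a minimal numerical sketch. The toy amplitude (a purely imaginary exponential diffraction cone) and the values of B and ζ are my assumptions, not fitted data; the normalization is chosen so that the transform of the cone is the known Gaussian profile ζ exp(−b²/2B):

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

B = 21.0     # assumed diffraction-cone slope, GeV^-2 (roughly 13 TeV scale)
zeta = 1.12  # assumed profile height at b = 0; zeta > 1 gives a central dip

def im_f(q):
    """Toy purely imaginary exponential-cone amplitude (assumed form)."""
    return 2.0 * np.sqrt(np.pi) * zeta * B * np.exp(-B * q * q / 2.0)

def re_gamma_numeric(b):
    """Fourier-Bessel transform, Eq. (6), for the purely imaginary toy f."""
    integrand = lambda q: q * im_f(q) * j0(q * b) / (2.0 * np.sqrt(np.pi))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

def overlap_G(b):
    """Inelastic overlap function, Eq. (7), with Re f neglected."""
    g = re_gamma_numeric(b)
    return g * (2.0 - g)

# The numeric transform reproduces zeta*exp(-b^2/(2B)) analytically,
# and at b = 0 the overlap G = zeta*(2 - zeta) dips below 1 for zeta > 1.
print(re_gamma_numeric(0.0), overlap_G(0.0))
```

With ζ slightly above 1, G(s, 0) = ζ(2 − ζ) falls below 1, which is the central-dip (toroid-like) situation discussed later in the text.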
It is necessary to stress from the very beginning that the main difficulty in getting any information about G(s, b) lies in the calculation of Γ(s, b), which requires knowledge of both the real and imaginary parts of f at all transferred momenta t for a given energy s.
The integral contribution of the real part of the amplitude f to the unitarity condition should be small (see estimates in Ref. [2] and Figures 6 and 7 below). At ISR energies the maximum value of G at b = 0 is less than 1, especially at the lower ISR energies [31] (see Figure 6 [32]). It becomes close to 1 at 7 TeV.
Shapes of the Interaction Region
Let us compare two extreme assumptions about Im f(s, t): (1) it is given by +√(dσ/dt) at all t, or (2) it is positive inside the cone and negative, −√(dσ/dt), outside it (at |t| > |t₀|, where t₀ is the position of the minimum).
The shapes of the interaction region have been computed for these two assumptions, with a spline interpolation of the experimental data for dσ/dt used. They are shown in Figures 7 and 8 for 7 and 13 TeV. The region of small impact parameters is enlarged in Figure 9 because the most intriguing difference between the various assumptions is seen just there.
It is clearly seen in Figure 9 that the assumption of an everywhere positive imaginary part leads to the dip G(s, 0) < 1 for central collisions at b = 0 (easily noticed at 13 TeV). The maximum G(s, b_max) = 1 moves to b_max > 0, i.e., a toroid-like shape is formed. The possibility of a dip at b = 0 was first considered in Ref. [33]. If the imaginary part becomes negative at large transferred momenta, no dip appears and the BEL-shape is recovered.
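The role of the sign of Im f beyond the cone can be isolated at b = 0, where J₀(0) = 1 and Γ(s, 0) is simply the q-integral of the amplitude. The functional forms and numbers below (cone slope, tail shape, q₀) are illustrative assumptions only:

```python
import numpy as np
from scipy.integrate import quad

B, zeta = 21.0, 1.00  # assumed cone slope (GeV^-2) and cone-only height
q0 = 0.65             # assumed sqrt(|t0|), i.e. a dip near |t0| ~ 0.4 GeV^2

def gamma0(tail_sign):
    """Re Gamma(s, b=0): with J0(0) = 1 it is the q-integral of Im f,
    split into the diffraction cone and a large-|t| tail of chosen sign."""
    cone = lambda q: q * 2.0*np.sqrt(np.pi)*zeta*B*np.exp(-B*q*q/2.0) / (2.0*np.sqrt(np.pi))
    tail = lambda q: tail_sign * q * 0.5*np.exp(-3.0*(q - q0)) / (2.0*np.sqrt(np.pi))
    c, _ = quad(cone, 0.0, np.inf)
    t, _ = quad(tail, q0, np.inf)
    return c + t

# Positive tail: Gamma(0) > 1, so the maximum of G = Gamma*(2 - Gamma)
# moves away from b = 0 (toroid). Negative tail: Gamma(0) < 1 (BEL).
print("positive tail:", gamma0(+1))
print("negative tail:", gamma0(-1))
```

The design point is that the cone contribution is identical in both variants; only the sign of the assumed large-|t| tail pushes Γ(s, 0) above or below 1, which decides between the two shapes.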
Thus one concludes that the assumption of the positivity of the imaginary part of the amplitude at all available t-values leads to the validity of the earlier speculation about the toroidal shape of the interaction region (see the review paper [3]). In particular, this conclusion was supported if the purely exponential shape of the imaginary part in the diffraction cone, with experimental values for its slope B, was extended to all transferred momenta [34,35] in place of using its experimental form at large |t|. Then the positive exponential tail of the elastic amplitude with the rather large slope B/2 provides a slight dip at b = 0, albeit much smaller than for our first variant due to the lower tail and hard to resolve at the scale of Figures 7 and 8, where it is shown as ζexp(−x²)(2 − ζexp(−x²)). Analytically, for that case, the shrinkage of the diffraction cone at high energies (the energy increase of the slope B) is directly related to the energy increase of the ratio of elastic to total cross section (B ≈ σ_tot²/16πσ_el), which, in its turn, determines the dip at b = 0 if σ_el/σ_tot > 0.25, as happens at 13 TeV. Another shape of the interaction region, of the BEL-type, follows from the so-called kfk-model [36]. This phenomenological model exploits some QCD-inspired ideas and predicts both the real and imaginary parts of the amplitude using numerous parameters derived from precise fits of the measured cross sections. According to it, the real part becomes zero inside the diffraction cone region, while the imaginary part has a zero near the minimum of the differential cross section (it is seen in Figure 1). Its value at the minimum is filled by a small negative real part. The imaginary part also becomes negative at larger transferred momenta. This resembles the second variant. Therefore G(s, b) has no dip at the center b = 0. It is also demonstrated in Figures 7 and 8. Some analysis of the kfk-model showing its "anatomy" was done in Ref. [2]. It is possible to verify our assumption about the smallness of the real part of the amplitude for this model. The real part of the amplitude has been computed at 7 and 13 TeV. Its contribution to the shape of G(s, b) in Equation (7) happens to be extremely small (within the limits of experimental accuracy) and can be neglected (see Figure 8). The model predicts the BEL-shape of the interaction region even at asymptotically high energies.
It would be interesting to confront its prediction with new results obtainable with the help of the Levy-interpolation method [37], by which both the real and imaginary parts of the amplitude can be found. The model of Ref. [36] proposes a definite form of the elastic scattering amplitude inspired by QCD ideas. Its parameters are fitted to the existing experimental data and used for extrapolation to higher energies. In its turn, the Levy approach [37] aims at a direct interpolation of the differential cross section by a complete orthonormal set of complex functions suited to the exponential and power-like dependences on transferred momenta revealed in experiment. A comparison of the results obtained in these two approaches regarding their predictions for the dip at b = 0 would be very instructive.
A quite special feature of G(s, b) at ISR energies was noticed in [31], where genuine experimental data were used. At its tail, at large impact parameters from 2 fm to 2.5 fm, a slight bump was observed. No bump was obtained in [38], where some interpolation of the data was used. The results in Figures 7 and 8 do not show any indication of such a bump. The corresponding values of b = 2√(2B) ≈ 2.5 fm are similar to those in [31].
Conclusions
As described above, numerous new experimental data lead to many new puzzles.Among them the problem of the spatial view of protons asks for some additional information.
According to Equations (6) and (7), the spatial shape of the proton interaction region is determined by integrals of the elastic scattering amplitude over all transferred momenta. Knowledge of its modulus, obtainable from the measurable differential cross sections, is not enough to compute them. The prescription Im f ≈ |f| ≈ +√(dσ/dt) leads to the toroidal shape at the highest LHC energies.
In contrast, negative values of Im f at large transferred momenta recover the BEL-regime. Thus, the problem of the spatial shape of the proton interaction region cannot be solved rigorously unless the behavior (and, especially, the sign of the imaginary part!) of the elastic scattering amplitude is known. Unfortunately, there seems to be no way to get precise experimental or theoretical information about it now. Therefore, one has to rely on "reasonable" speculations and phenomenological models confronted with a wide spectrum of experimental data.
Figure 1. The differential cross section of elastic proton-proton scattering at the energy √s = 7 TeV measured by the TOTEM collaboration. (Left) The region of the diffraction cone with the |t|-exponential decrease. (Right) The region beyond the diffraction peak. The predictions of five models are demonstrated. These figures were reproduced in my review paper [1].
Figure 2. The differential cross section of elastic scattering of protons at 13 TeV reproduced in [5].
Figure 3. The energy dependence of the slope B of the diffraction cone reviewed in [5].
Figure 4. The energy dependence of the total, elastic and inelastic proton-proton cross sections reproduced in [5].
With the real part neglected, the unitarity condition is written as G(s, b) = ζ(s, b)(2 − ζ(s, b)) = ReΓ(s, b)(2 − ReΓ(s, b)). (8) Thus, according to Equation (8), the shape of G(s, b) is determined by the integral contribution over all transferred momenta from the imaginary part of the elastic amplitude. The sign of Im f cannot be determined from dσ/dt. The absolute maximum of G(s, b) is reached if ReΓ(s, b) = 1.
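For a Gaussian impact-parameter profile ReΓ(s, b) = ζ exp(−b²/2B), and assuming the standard normalization σ_tot = 2∫d²b ReΓ, σ_el = ∫d²b (ReΓ)², one finds σ_el/σ_tot = ζ/4, so the dip criterion σ_el/σ_tot > 0.25 is equivalent to ζ > 1. A quick check with rough, rounded cross-section values (my numbers, not taken from the paper's figures):

```python
import math

# Gaussian-profile dip criterion: zeta = 4 * sigma_el / sigma_tot, and the
# central value of Re Gamma exceeds 1 (so the maximum of G in Eq. (8)
# moves away from b = 0) exactly when sigma_el/sigma_tot > 0.25.
# Cross sections below are approximate, rounded values in mb (assumption).

def zeta_from_ratio(sigma_el, sigma_tot):
    """Profile height at b = 0 for the Gaussian ansatz."""
    return 4.0 * sigma_el / sigma_tot

def G(b, zeta, B):
    """Equation (8) for the Gaussian profile zeta*exp(-b^2/(2B))."""
    g = zeta * math.exp(-b * b / (2.0 * B))
    return g * (2.0 - g)

for label, s_el, s_tot in (("ISR ~62.5 GeV", 7.5, 43.0),
                           ("LHC 13 TeV", 31.0, 110.0)):
    z = zeta_from_ratio(s_el, s_tot)
    print(label, "zeta =", round(z, 3), "central dip:", z > 1.0)
```

With these rounded inputs the ISR case gives ζ < 1 (no dip), while the 13 TeV case gives ζ > 1, matching the statement in the text that the dip condition σ_el/σ_tot > 0.25 is first satisfied at 13 TeV.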
Figure 6. The proton profile G(s, b) at 7 TeV (upper curve) compared to those at ISR energies 23.5 GeV and 62.5 GeV (an interpolation procedure of the experimental data has been used) shown in [1].
Figure 7.
"Physics"
] |
The Ever Elusive, Yet-to-Be-Discovered Twist-Bend Nematic Phase †
The second, lower-temperature nematic phase observed in nonlinear dimer liquid crystals has properties originating from nanoscale, polar, and intermolecular packing preferences. It fits the description of a new liquid crystal phase discovered by Vanakaras and Photinos, called the polar-twisted nematic. It is unrelated to Meyer's twist-bend nematic, a meta-structure having a macroscale director topology consistent with Frank-Oseen elastic theory.
Are the N X and N TB Phases Equivalent?
Over the last decade or so, Robert Meyer's elegant conjecture in the early 1970s about a twist-bend nematic director topology based on elastic continuum theory [1] has been conflated with another observed nematic phase, the N X phase, so named because its molecular organization was unfamiliar. In the N X phase, nonlinear dimer liquid crystals exhibit a very tight 1D twist modulation of the orientational ordering with a nanoscale pitch of ~10 nm. Meyer envisioned the twist-bend nematic phase (N TB) specifically as a different kind of hierarchical meta-structure, a nematic having the apolar, uniaxial nematic director n (n ↔ −n, D ∞h symmetry) spontaneously bending on the macroscale (~1 µm). Nevertheless, Meyer's N TB phase has been used by some [2][3][4] to explain the nanoscale roto-translation modulation exhibited by bent or V-shaped dimer mesogens (e.g., CB-7-CB) in the N X phase, a second, lower-temperature nematic phase discovered in 1991 [5]. This explanation suggests that Meyer's twist-bend nematic phase has already been discovered. A growing body of literature has accumulated around the putative N X = N TB equivalence without questioning it [3,4,6].
However, unbeknownst to many researchers, another theory about the N X phase, published in 2016, provides a closer match to the experimental values than the original 2011 article [2] that claimed N X is the sought-after N TB phase. This new theory, proposed by Vanakaras and Photinos (VP), suggests that the N X phase is actually a novel liquid crystal (LC) phase: a new type of nematic that they call the polar-twisted nematic (N PT). Initially described within a mean-field approximation, the VP theory predicts spontaneous chiral symmetry breaking and the formation of chiral domains of opposite handedness [7]. Their predictions were corroborated by simulations using detailed modeling of dimer LCs [8]. Dunmur's 2022 article, Anatomy of a Discovery: The Twist-Bend Nematic Phase, declares an intention to provide a comprehensive review of the subject, in his words, "to present as fair and documented account as possible" [4]. However, it includes no references to publications by Vanakaras and Photinos about their alternative model or to critiques of the N X = N TB supposition [9][10][11]. Its sole reference to alternative views is to a critique of a critique [12], one that obscures Meyer's coherent conjecture, relegating it to a genealogical affiliation with an ill-defined "family" of twist-bend nematics. Dunmur cites only research based on the assumption that N X and N TB are one and the same, and, in this regard, it is one-sided, not a comprehensive historical account. Much of the cited research is solid and has advanced the field, but the work of VP demonstrates that its underlying equation of N X = N TB is erroneous. For that reason, the mass of papers (more than 600) citing the same 2011 base publication by Dunmur et al. [2] has had a twofold effect on the science of LCs: it has suppressed not only any investigation of the new polar-twisted nematic but also the search for Meyer's twist-bend nematic, via the claim that such a phase has already been found. While there have always been researchers who have been more circumspect about the N X = N TB supposition (e.g., Chen et al., 2013 [13]), publications after 2016 should reference and fully address the advances in understanding made by the VP theory. Given that the VP theory is not mere speculation but a precise analysis and explanation of experimental findings, a failure to address the way it differentiates the N X and N TB phases will in future retard advances in the understanding of the nematic phases exhibited by nonlinear, achiral mesogens.
Achiral Mesogens with Chiral Nematic Phases
Meyer was the first to predict that nematics comprising achiral molecules could exhibit form chirality: the spontaneous adoption of left- and right-handed helical supramolecular structures. At the time Meyer proposed the twist-bend nematic phase, it was envisioned for rodlike (calamitic) mesogens, not nonlinear molecules. (Since the N TB phase has not been observed to date, little can be said about prerequisite mesogen chemical structures other than that the molecular symmetry should be C 2v or lower.) Meyer's so-called twist-bend nematic N TB derives from the interplay between the bend elasticity of the nematic director field n(r) and its associated flexoelectric polarization P(r) [1]. A half century later, Vanakaras and Photinos' analysis of the nematic phase of nonlinear dimer molecules predicted spontaneous chiral symmetry breaking, the so-called polar-twisted nematic N PT [7]. Some have claimed that Vanakaras and Photinos' new understanding of the N X phase differs from prior interpretations only semantically [12]; this is not the case, as was explained in 2020 [11].
Appreciating the VP modeling of the N X phase requires an understanding of the limitations of Meyer's theory, namely, its foundation in the continuum elasticity theory of nematics; my colleagues and I have published articles explaining the constraints imposed on Meyer's conjecture by elasticity theory in 2020 [9] and 2021 [10].
There are two critical differences between Meyer's N TB theory and the N X phase, which Vanakaras and Photinos' N PT theory illuminates: (1) The pitch of the form chirality is measured in micrometers in N TB theory versus nanometers in the N X phase. (2) The twisting entity is qualitatively different in the two theories: a nematic director n in N TB theory versus a polar director m in the N X phase, as delineated by N PT theory.
The Polar-Twisted Nematic Phase
The polar-twisted nematic phase N PT is an orientationally ordered fluid phase without density modulations (i.e., it is nematic), and, like Marvin Freiser's 1970 predicted biaxial nematic phase (N B) [14], it is also derived from theory utilizing minimal molecular modeling [7]. The most striking property of the new N PT nematic is its nanoscale modulation of the local polar orientational order (Figure 1 Left), a prediction of the VP model that is in remarkable quantitative agreement with experimental observations in the N X phase (Figure 1 Center): the pitch of the nanoscale orientational modulation is ~3L (three dimer lengths) [13,15]. As Figure 1 emphasizes, the nominal pitch in the N TB phase (Right) is 100× larger than the observed pitch in the N X phase (Center); the difference is so large that the figure cannot accommodate it without the dotted blue lines leaving the page (Right).
The nanoscale modulation of orientational ordering predicted by the VP polar-twisted nematic model (L PT ~10 nm) is much lower because it derives from the local, polar, molecular packing of dimer LCs. The dimers displaying the N X phase have a bent or V-shaped contour with associated polarity (electrostatic and/or shape) and a C 2v average molecular symmetry. In the VP simulations, the dimer C 2 symmetry axes locally align (Figure 1 Left magnified inset), generating a polar phase director m that roto-translates about a local axis h, yielding a 1D modulation of orientational order with a pitch that is in agreement with experimental observations in the N X phase, i.e., L PT = L X. The polar molecular packing in the polar-twisted phase defining m is unconstrained by elasticity theoretical considerations, and m is free to tightly spiral spontaneously about h to circumvent low-entropy, ferroelectric polarity [7,8]. Polarity is averaged out over one pitch length in the direction of the modulation, and since both left- and right-handed chiral packing (spirals) are equally probable, in macroscopic regions, the N PT phase is predicted to be uniaxial with balanced chirality, as indeed observed experimentally in the N X phase.
The Twist-Bend Nematic Phase
The macroscopic director topology in Meyer's twist-bend nematic minimizes the elastic energy associated with gentle perturbations of a uniaxial nematic's director field; in the language of the Frank-Oseen elastic theory of nematics, the curvatures of the nematic director field are "soft" [16]. The director n tilts with a constant cone angle θ c and spirals about a macroscopic direction z in the N TB phase (Figure 1 Right). (The resulting N TB structure is asymptotically related to that of a traditional cholesteric or chiral nematic N* phase, where θ c = 90°.) Since the twist-bend structure must conform to the physics of Frank-Oseen elastic theory, continuum elasticity restricts the magnitude of such deformations. As a result, the lower bound on the pitch L TB of the spiraling nematic director in the twist-bend phase is in the order of microns, i.e., in order for the 1D modulations of the orientational order in the N TB phase to be consonant with Frank-Oseen theory, the L TB pitch must be ≥100 L X (Figure 1 Center).
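The length scales in this argument can be tallied in a few lines (the dimer length of ~3 nm is an assumption consistent with the ~9-10 nm pitch quoted elsewhere in the text):

```python
# Back-of-the-envelope comparison of the pitch scales quoted in the text:
# the N_X modulation pitch is ~3 dimer lengths (~9 nm), while a pitch
# consonant with Frank-Oseen theory must be at least ~100x larger,
# i.e. approaching the micron scale.

L_dimer_nm = 3.0                # assumed dimer length, nm
L_X_nm = 3.0 * L_dimer_nm       # N_X pitch: ~3 dimer lengths
L_TB_min_nm = 100.0 * L_X_nm    # lower bound quoted for the N_TB pitch

print(L_X_nm, "nm (N_X) vs >=", L_TB_min_nm / 1000.0, "micron (N_TB)")
```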
Meyer's N TB phase conforms to the limitations of Frank-Oseen theory. De Gennes carefully explained those limitations in his canonical text of 1974 [17], updated in 1993 [18]. Meyer certainly understood those limitations when he conjectured that (flexoelectric) polarization could spontaneously drive the formation of a space-filling director topology that is labeled the twist-bend nematic: "Although a state of uniform torsion is possible, a state of constant splay is not possible in a continuous three-dimensional object. A state of pure constant bend is also not possible, although a state of finite torsion and bend is possible. The latter is a modified helix in which the director has a component parallel to the helix axis. In laboratory coordinates, [...] The magnitude of the bend is t 0 sinϕcosϕ" [1] p. 320. The "finite torsion and bend" are crucial in understanding the role of Frank-Oseen elastic theory in Meyer's twist-bend conjecture for the twist and bend elastic deformations of the director n; t 0 is the helical wavenumber and, in Figure 1 Right, θ c = ϕ. In fact, he prefaces his description of the N TB phase with language that parallels that of de Gennes [17] pp. 58, 61; [18] p. 100: "Changes in the magnitude of the order parameters in a nematic phase are high energy local processes. However gradual changes in the orientation of the director are low energy processes capable of being induced by small external perturbations. A continuum elasticity theory has been developed to describe these curvature structures" [1] p. 291.
This quotation and the one above confirm that Meyer's formulation of the twist-bend topology of the director field was established within the Frank-Oseen theory of uniaxial nematics.
The Second Nematic Phase in Dimer LCs
Meyer's proposed director topologies are constrained to the class of deformations having macroscale strains: slow splay, twist, and bend deformations of n. Such restrictions apply to all continuum elastic theories of nematics, even those with extreme elastic constants [19]. Those limitations effectively exclude Dunmur and collaborators' original assumption that N X = N TB [2]; that equivalence requires the director field n(r) to twist through an angle of π over a distance of approximately three molecular lengths (~9 nm), a distance scale over which n itself is undefined. In other words, there are not enough molecules in a volume element around r to specify n(r). As emphasized before, "The issue is not the mere definition of the director in some volume v, but the deformation of n(r), described by the curvatures of the director field, i.e., it has to do with lengths. In order to describe the curvatures of n(r), the director has to be definable over a small volume v around r, and of course such a description is meaningful only if the length scale of the curvature of n(r) is much larger than the dimensions of v." [10]. Such constraints do not apply to the N PT phase, where the ferroelectric polarization associated with the molecular packing of V-shaped molecules is alleviated by nanoscale torsion-roto-translation of the polar director m on the scale of a few molecular dimensions [7,8].
Despite the clear differences between the molecular organization in the N X and N TB phases, recent citation practices continue to assume they are identical. A 2023 report reviews a variety of dimer LCs exhibiting the N X phase, calls it the N TB phase, and appears to be exhaustively documented (100 references) [20]; yet, it does not cite Meyer's original work. Instead, it relies on a 2001 elasticity-based model by Dozov [19], one sourced in visually inspiring simulations [21] that are, however, technically flawed [8]. Its claim that the Dozov model supports the N X = N TB proposition stretches the limits of continuum elasticity; that model computes a twist pitch "≤100 L ≅ 300 nm, rather small but still macroscopic" [19], a value 30× larger than the measured pitch in the N X phase. The VP model of the N PT phase requires no such gymnastics.
A New Heuristic
Perhaps the continuing propagation of the erroneous N X = N TB assumption can be rectified with a new heuristic, one that clearly differentiates the possible representations of nematics. To that end, we might consider a classification scheme delineating two length scales:
I. Local nematic ordering: local organization dictated directly by intermolecular attractive dispersion forces regulated by excluded volume considerations [22], e.g., the uniaxial, biaxial, and polar-twisted (N U , N B , N PT ) nematic phases.
II. Topological nematic ordering: defined on a larger length scale by soft trajectories of the (typically uniaxial) nematic director n, reflecting analogously "soft residual" molecular interactions, e.g., the chiral, splay-bend, and twist-bend (N*, N SB , N TB ) nematic phases.
There may be a third category as well, for disclination-mediated director topologies (e.g., the so-called blue phases [23]).
To make clear the meaning of topological nematic ordering, an exaggerated example is shown below for pedagogical purposes: a fictional "knot" nematic (N K ) phase comprising a contiguous (chiral) director field. It shows soft/slow changes in the trajectory of the uniaxial director n(r); its associated flexoelectric polarization P(r) is aligned along the bend vector n×(∇×n). It is devised to communicate the unique, hierarchical, macroscale meta-structure in the types of phases Meyer predicted a half century ago (i.e., twist-bend and splay-bend nematics). A failure to differentiate these two types of nematic organizations, local molecular (I) and topological (II), will continue to obfuscate distinctions between nematic phases and to confound interpretations of new experimental findings (e.g., NMR [24] and simulations [25]) in bent-core and dimer liquid crystals.
Concluding Remarks
Vanakaras' and Photinos' prediction of the polar-twisted nematic N PT phase is similar to Freiser's predicted biaxial nematic N B phase. The latter reaffirmed the notion that biased intermolecular interactions (correlated azimuthal angles among board-shaped mesogens) could manifest on a macroscale, and its alleged discovery was greatly acclaimed [26,27]. The organization in N PT similarly derives from unique intermolecular interactions (correlated polar ordering), but its predictions continue to be ignored [4,20] even though the N PT nematic phase perfectly describes the molecular organization in the N X phase and may even account for incompletely understood behavior in other bent-core mesogens [24]. The V-shaped CB-7-CB dimers are inherently biaxial and, in the N X phase, assume ferroelectric packing arrangements. The low entropy of such configurations is averted by winding them into tight, right- and left-handed helices having a nanoscale pitch.
In summary, two decades after the N X phase was discovered [5], some thought that it represented the long-lost phase postulated by Meyer [2]. Then, Vanakaras and Photinos showed that, on the contrary, the N X phase was a new phase, which they christened the N PT phase. The nanoscale modulation of the orientational order in dimer LCs unequivocally precludes the second, lower temperature N X phase from being Meyer's (or Dozov's) twist-bend nematic. Those topological nematic models diverge from the observations in the N X nematic; the N PT model reveals this divergence.
Figure 1. Molecular organization in nematic phases. Left: Schematic of the locally polar supramolecular structure of the polar-twisted nematic phase. The local polarization m spirals about a helix axis h and generates a 1D modulation of the polar orientational order with pitch L PT ~10 nm. Center: Freeze-fracture transmission electron microscopy image of the N X phase of CB-7-CB, illustrating a 1D modulation pitch L X = 8 nm, excerpted from Figure 4a in ref. [15]. Right: The apolar director n(r) has a heliconical trajectory about h in the twist-bend nematic phase with an anticipated pitch L TB of ~1000 nm. The magnified views in the insets show that the microscopic structural organizations of average mesogen shapes are approximately to scale.
| 5,040 | 2023-11-29T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Identification of Genomic Rare Variants by Whole Genome Sequencing in Primary Torsion Dystonia
Background: Primary torsion dystonia (PTD) is a group of related movement disorders characterized by abnormal repetitive, twisting postures due to the involuntary co-contraction of opposing muscle groups. The research is based on whole genome sequencing of PTD patients to analyze the pathogenic genes and mutation sites in patients with primary dystonia and the relationship among genotype, clinical phenotype, and prognosis. Methods: To investigate the association between the familial disease and its molecular mechanisms, 100 normal Han Chinese donors were also examined. The DNA of all the samples was sequenced using whole genome sequencing and submitted to the Macrogen Group (Seoul, Korea) for analysis. Results: The data output of the proband is 112.91 G, the throughput mean depth is 39.50X, the mappable mean depth is 35.70X, and the genome coverage ratio is 99.50%. A novel heterozygous missense variant of uncertain significance (VUS) in ANO3 was found in the primary torsion dystonia patient, but not in the healthy control groups. Conclusions: Together, our results report a new mutation that may be similar in phenotype to known pathogenic genes, which will lay the foundation for future work. More families will be sequenced to identify more information, which can help us to make the correct molecular diagnosis of the disease and to provide better genetic information.
Background
Primary torsion dystonia (PTD) is a disease of the extrapyramidal system characterized by abnormal posture and movement, which is caused by uncoordinated or excessive contraction of the active muscles and the antagonist muscles. The pathogenesis of PTD is not completely understood. Over the last few decades, several novel disease-associated genes (DYT1-27) have been identified in dystonic syndromes, but the underlying genetic diagnosis remains elusive in most patients [1]. However, almost all primary dystonias have a genetic basis [2][3].
At present, the study of genetic diseases mainly lies in genes, and the method used is gene sequencing. The development of whole genome sequencing, and especially the drop in the cost of sequencing a single human genome to about ten thousand yuan, has brought new opportunities for the study of genetic diseases. Many problems of traditional cloning technology, such as too few members in a family, sporadic cases, heterogeneity of gene loci, incomplete penetrance, and too many candidate clones in the targeted region, have been resolved [4]. Whole genome sequencing makes up for the inability of whole exome sequencing to detect disease-related structural variations and noncoding-region variation. It can detect genomic changes that cannot be detected in other ways, such as noncoding mutations, including promoters, enhancers, introns, and noncoding RNA (including microRNA). Chromosomal rearrangements can be detected, including inversions, tandem repeats, and deletions. A large number of genetic differences can be found, enabling genetic evolution analysis and the prediction of important candidate genes [5]. It involves many fields such as clinical medicine research, population genetics research, association analysis, and evolutionary analysis. Compared with exome sequencing, whole genome sequencing covers more of the genome, and the result analysis is more thorough for the study of genetic disease [6].
So far, the identification of PTD and its genetic risk factors has proved to be a difficult task, and the introduction of the latest genome-wide sequencing technologies could drive progress in these areas [7]. The advantages of the platform compared with the HiSeq 2000 can be found in Table 1. ANO3 encodes a structurally related homodimeric protein that forms a Ca2+-activated chloride ion channel and a membrane-phospholipid-associated protein having a different expression pattern. ANO3 consists of eight hydrophobic transmembrane helices that act as Ca2+ sensors for regulating calcium homeostasis [8].
The exact function of ANO3 is unclear; recent experiments have shown that it may not act as a Ca2+-activated chloride ion channel and may actually act as a Ca2+-dependent phospholipid scramblase [9]. ANO3 appears to play a role in the regulation of neuronal excitability and is highly expressed in the striatum, hippocampus, and cortex [10]. Mechanistically, pathogenic variants in ANO3 may lead to abnormalities of striatal neuronal excitability, which manifest as uncontrolled dystonic movement. The expression level of ANO3 mRNA is highest in the striatum, 5.30 times that in the frontal cortex and 70 times that in the cerebellum [11], and its abnormality can affect endoplasmic-reticulum-associated calcium-gated chloride channels, which leads to disease [12].
At present, treatment of the disease relies mainly on drugs and stereotactic surgery, but such treatment is only symptomatic and has many limitations, and the pathogenesis of dystonia is not completely clear. Therefore, it is necessary to screen new loci of DYT genes, discover new related genes, and study the mutant genes and related proteins. The research is based on whole genome sequencing of PTD patients to analyze the pathogenic genes and mutation sites in patients with primary dystonia and the relationship among genotype, clinical phenotype, and prognosis. Detecting genetic mutations in genetic diseases and discovering new genes or mutations can help us to make the correct molecular diagnosis of the disease and to provide better genetic information.
Human Samples
To investigate the association between the familial disease and its molecular mechanisms, 100 normal Han Chinese donors were examined. The diagnosis of primary torsion dystonia was based on typical clinical and laboratory measurements. Peripheral blood was collected in anticoagulation tubes from all study participants, and genomic DNA was extracted from leukocytes of family members and normal donors using the phenol-chloroform protocol, following standard procedures. The protocol of the present study was approved by the Ethics Committee of The Second Clinical Medical College, Jinan University, Shenzhen People's Hospital (Shenzhen, China), and written informed consent was obtained from all the participants.
Whole Genome Resequencing
The DNA of all the samples was sequenced using whole genome sequencing. Each sample was prepared according to the Illumina TruSeq DNA sample preparation guide to obtain a final library of 300-400 bp average insert size. The libraries were sequenced on an Illumina HiSeq X sequencer with 150 bp paired-end reads. One microgram (TruSeq DNA PCR-free library) or 100 nanograms (TruSeq Nano DNA library) of genomic DNA was fragmented on a Covaris system, and the fragments were converted into blunt ends using an End Repair Mix. Following end repair, the appropriate library size was selected using different ratios of the Sample Purification Beads.
PCR is used to amplify the enriched DNA library for sequencing, and we perform quality control analysis on the sample library and quantification of the DNA library templates. Illumina utilizes a unique "bridged" amplification reaction that occurs on the surface of the flow cell. Sequencing-by-Synthesis chemistry utilizes four proprietary nucleotides possessing reversible fluorophore and termination properties. Each sequencing cycle occurs in the presence of all four nucleotides at a time. This cycle is repeated, one base at a time, generating a series of images, each representing a single base extension at a specific cluster.
Variant Analysis
Genomic DNA isolated from the participants' blood was submitted to the Macrogen Group (Seoul, Korea) for analysis. After the samples passed quality inspection, shotgun libraries were constructed. By filtering the roughly three million SNPs obtained against public databases, we exclude common variation in the population and focus on variation that may change the protein's higher-order structure and affect gene function. We also mine candidate pathogenic variants in light of each sample's family situation. The public databases used for filtering include dbSNP, the 1000 Genomes Project, and ESP6500. Through structural annotations and database annotations, hundreds of rare variations in protein-coding amino acids are found in the samples, as shown in Table 2.
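The database-filtering step can be sketched as a simple allele-frequency filter. The function below is a hypothetical illustration: the variant identifiers, the database contents, and the 1% cutoff are assumptions for the example, not values stated in the text. A variant is kept as rare only if its frequency is below the cutoff in every database; absence from a database is treated as a frequency of zero (a novel variant).

```python
def filter_rare(variants, db_afs, max_af=0.01):
    """Keep variants whose population allele frequency is below max_af
    in every database (e.g. dbSNP, 1000 Genomes, ESP6500).

    variants : iterable of variant identifiers
    db_afs   : dict mapping database name -> {variant_id: allele_frequency};
               a variant absent from a database is treated as frequency 0.0
    """
    rare = []
    for v in variants:
        if all(db.get(v, 0.0) < max_af for db in db_afs.values()):
            rare.append(v)
    return rare

# Toy example with made-up frequencies: the first variant is a common SNP,
# the second is absent from every database
dbs = {
    "dbSNP":   {"chr1:123456:A>G": 0.30},
    "1000G":   {"chr1:123456:A>G": 0.25},
    "ESP6500": {},
}
print(filter_rare(["chr1:123456:A>G", "chr11:26556013:A>C"], dbs))
# -> ['chr11:26556013:A>C']
```

Real pipelines would operate on VCF records and annotate rather than drop variants, but the retain-if-rare-everywhere logic is the same.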
Results
In this report we describe a patient with primary torsion dystonia carrying a novel heterozygous missense variant of uncertain significance (VUS) in ANO3 (chr11: g.26556013 A > C; p.N294H; NM_031418.2). A total of 240 Gbp of data was obtained through sequencing; the overall data output and comparison are summarized below.
The sequencing data of precursor and healthy control groups
The data output of the proband is 112.91 G, the throughput mean depth is 39.50X, the mappable mean depth is 35.70X, the genome coverage ratio is 99.50%, the number of SNPs is 3,550,305, the number of indels is 558,038, the number of small insertions is 280,611, the number of small deletions is 277,427, the number of CNVs is 824 (542 copy number gains and 282 copy number losses), and the number of SVs is 8,222. The data output of the healthy control group is 127.90 G, the throughput mean depth is 44.70X, the mappable mean depth is 40.30X, the genome coverage ratio is 98.90%, the number of SNPs is 3,602,141, the number of indels is 587,127, the number of small insertions is 298,385, the number of small deletions is 288,742, the number of CNVs is 831 (577 copy number gains and 254 copy number losses), and the number of SVs is 8,431. The total amount of sequencing data exceeds 100 Gbp per sample in this experiment, the effective sequencing coverage is more than 30X, and the genome coverage rate is about 99%, which satisfies the requirements of subsequent gene mutation analysis. The numbers of detected SNPs (about 3.5 million) and InDels (about 550,000) are in line with routine tests.
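The stated acceptance criteria (effective depth above 30X, genome coverage around 99%) can be encoded as a simple quality gate. The metric values below are the ones reported for the two samples; the threshold values in the function are illustrative assumptions.

```python
# Reported sequencing metrics for the two samples
samples = {
    "proband": {"yield_gb": 112.91, "mean_depth": 39.5,
                "mappable_depth": 35.7, "coverage_pct": 99.5},
    "control": {"yield_gb": 127.90, "mean_depth": 44.7,
                "mappable_depth": 40.3, "coverage_pct": 98.9},
}

def passes_qc(m, min_depth=30.0, min_coverage=98.0):
    """QC gate on effective (mappable) depth and genome coverage."""
    return m["mappable_depth"] >= min_depth and m["coverage_pct"] >= min_coverage

for name, metrics in samples.items():
    print(name, "PASS" if passes_qc(metrics) else "FAIL")
```

Both samples pass this gate, matching the text's conclusion that the data are suitable for downstream mutation analysis.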
The pedigree analysis for the patient and healthy control samples
Based on the prevalence of detection across samples, we conducted a pedigree analysis. Comparing the patients and healthy controls, 96 mutations were found, in genes including SAMD11, CNR2, CYB5RL, CACHD1, and EVI5; the mutation types include missense, frameshift, protein-termination, splicing, and other abnormalities. Some of these genes are recorded in OMIM and ClinVar and have certain pathogenicity. Based on these 96 mutant genes and the clinical symptoms of the proband, we found that position 26556013 on chromosome 11 may be the proband's mutation site. The basic information can be seen in Table 4, and we did not find the rare coding variant in the healthy control groups. Table 1: the comparison of HiSeq X and HiSeq 2000. Dystonia involves co-contraction of active muscles and antagonists, and about 75% of patients with dystonia have PTD [13]. Of the primary localized dystonias in adults, 15-30% can spread to other parts of the body [14][15]. Dystonia is classified along two main lines: clinical features and etiology [16]. Clinical features include age of onset, body distribution, temporal pattern, and concomitant manifestations (manifestations of other dyskinesias or other neuropathies); etiology includes nervous-system pathology and inheritance pattern.
Human genome sequencing opens up a new way to improve human health
With the development and progress of gene sequencing technology, shorter sequencing times, and lower costs, scientists have used genome sequencing to obtain the genome sequences of large numbers of species. On this basis, whole-genome resequencing (WGR), which sequences different individuals of a species with a known genome sequence and compares the results, can reveal the genetic differences between individuals, including large numbers of single nucleotide polymorphisms (SNPs), copy number variations (CNVs), insertion-deletion loci (InDels), and structural variations (SVs), thereby capturing the genetic characteristics of a biological population. Gene sequencing technology was first applied to phages, bacteria, and viruses, then gradually to animals and plants, and finally to humans; it has achieved fruitful results, deepened the understanding of various organisms in many fields, and gradually created great scientific and social benefits. The sequencing of the human genome accelerates the understanding of human genetic characteristics, genetic diseases, and rare and common diseases, and opens up a new way to improve human health.
Relationship between ANO3 and gene expression in human PTD
Whole genome sequencing identified ANO3 as a candidate gene. The identification of the ANO3 gene needs to be confirmed by independent studies, which could also estimate its frequency relative to other dystonia genes. In addition, additional families are required to unravel the complete phenotypic spectrum of DYT24. Although the pathophysiology is not clear, the function of this gene is fascinating, because it is the first time that ion channel dysfunction has been implicated in the pathophysiology of dystonia [17]. In some patients with ANO3 mutations, tremor was the only initial manifestation, with no (or only later) mild dystonic posturing, leading to misdiagnosis as ET [18]. In 2012, Charlesworth [19] and others used a combination of linkage analysis and whole exome sequencing to carry out genetic analysis of an autosomal dominant craniocervical dystonia in England and found that ANO3 may be its causative gene. This study coincides with the results of that research.
In this case, we report a new mutation that may be pathogenic in a known gene with a similar phenotype. As we can see, clinical genetics studies may not be sufficient to confirm pathogenicity; other functional studies may be required. Taking this into account, it is essential to develop robust functional assays that truly reflect the underlying disease mechanisms, since not all functional effects are equal. When we better understand the pathways and mechanisms of the DYT24 gene and, more generally, clarify the dystonias, a rare variant will better guide targeted drug design and clinical trials. This will provide the basis for future work in which more families will be sequenced to identify more information.
Conclusions
Together, our results report a new mutation that may be pathogenic in a known gene with a similar phenotype, which will provide the basis for future work in which more families will be sequenced to identify more information. Detecting genetic mutations in genetic diseases and discovering new genes or mutations can help us to make the correct molecular diagnosis of the disease and to provide better genetic information. | 3,273.2 | 2020-05-22T00:00:00.000 | [
"Medicine",
"Biology"
] |
Favorable Propagation and Linear Multiuser Detection for Distributed Antenna Systems
Cell-free MIMO, employing distributed antenna systems (DAS), is a promising approach to deal with the capacity crunch of next generation wireless communications. In this paper, we consider a wireless network with transmit and receive antennas distributed according to homogeneous point processes. The received signals are jointly processed at a central processing unit. We study if the favorable propagation properties, which enable almost optimal low complexity detection via matched filtering in massive MIMO systems, hold for DAS with line of sight (LoS) channels and general attenuation exponent. Making use of Euclidean random matrices (ERM) and their moments, we show that the analytical conditions for favorable propagation are not satisfied. Hence, we propose multistage detectors, of which the matched filter represents the initial stage. We show that polynomial expansion detectors and multistage Wiener filters coincide in DAS and substantially outperform matched filtering. Simulation results are presented which validate the analytical results.
INTRODUCTION
In recent years, distributed antenna systems (DASs) have emerged as a promising candidate for future wireless communications thanks to their open architecture and flexible resource management [1,2]. A DAS involves the use of a large number of antennas, allowing the accommodation of more users, higher data rates, and effective mitigation of fading. Extensive studies indicate that besides lower path loss effects to improve the coverage, a DAS has many attractive advantages over its centralized counterpart such as macro-diversity gain and higher power efficiency [3,4]. Users' energy consumption is reduced and transmission quality is improved by reducing the access distance between users and geographically distributed access points (APs). DASs have been extensively studied in downlink, see, e.g., [5,6] and references therein. In uplink, results on the sum capacity of DAS can be found in [7][8][9]. In [8,9], a mathematical framework based on Euclidean random matrices was proposed to analyze the fundamental limits of DASs in terms of capacity per unit area in the large scale regime.
The concept of DASs has recently reappeared under the name cell-free (CF) massive MIMO [10,11]. The new terminology is used for networks consisting of a massive number of geographically distributed single-antenna APs, which jointly serve a much smaller number of users distributed over a wide area. CF massive MIMO should combine the mentioned benefits of DAS with the advantages of massive MIMO. In principle, an optimal utilization of a DAS requires joint multiuser detection at a central unit. However, optimum detection schemes such as maximum likelihood are prohibitively complex to implement for a large system, and low complexity linear multiuser detectors become appealing. Interestingly, in massive MIMO systems, as the number of antennas at the centralized base station increases, the channels of different users with the base station tend to become pairwise orthogonal and the low complexity matched filters become asymptotically optimum detectors [12]. This appealing phenomenon is referred to as favorable propagation [13]. Under the assumption that a similar property holds in DAS, CF massive MIMO systems have been studied in [14] adopting matched filters at the central processing unit. In this paper, our system model includes CF massive MIMO systems as a special case, when the intensity of receivers is much higher than the intensity of transmitters. We investigate the properties of channels in DAS through an analysis of the MIMO channel eigenvalue moments and analytically show that favorable propagation remains limited even asymptotically, as the APs' intensity tends to infinity while the users' intensity is kept constant. In this case, matched filtering is no longer near-optimal, and the use of linear multiuser detectors able to combat multiuser interference at an affordable computational cost becomes particularly attractive. Thus, we analyze the performance of multistage detectors that can be implemented with low complexity at the expense of a certain performance degradation.
We consider both polynomial expansion detectors [15] and multistage Wiener filters [16] and show their equivalence in DAS. Additionally, their performance analysis confirms that even low complexity multiuser detectors considerably outperform matched filtering.
The rest of the paper is organized as follows. Section 2 describes the system and channel model. A recursive expression for the eigenvalue moments of the channel covariance matrix of DASs is presented in Section 3. In Section 4, we analyze the conditions for favorable propagation and the performance of multistage detectors for DASs. Simulation results are illustrated in Section 5. Finally, Section 6 draws some conclusions.
Notation: Throughout the paper, i = √−1, and the superscripts T and H represent the transpose and Hermitian transpose operators, respectively. Uppercase and lowercase bold symbols denote matrices and vectors, respectively. The expectation and the Euclidean norm operators are denoted by E(·) and |·|, respectively. tr(·) denotes the trace, and diag(·) denotes the square diagonal matrix consisting of the diagonal elements of the matrix argument.
SYSTEM MODEL
We consider a DAS in uplink consisting of N T users and N R APs in the Euclidean space R. Each user and AP is equipped with a single antenna, and all are independently and uniformly distributed over A L = [−L/2, +L/2], a segment of length L. All the APs are connected to and controlled by a central processing unit through a backhaul network such that detection and decoding are performed jointly.
We denote the channel coefficient between the j-th user and the i-th AP by h(r_i, t_j), where r_i and t_j denote the Euclidean coordinates of AP i and user j, respectively. Furthermore, we assume line of sight (LoS) propagation and large scale fading such that

h(r_i, t_j) = (d_0/|r_i − t_j|)^α exp(−i 2π λ^{−1} |r_i − t_j|),    (1)

where d_0 is a reference distance, α is the path loss factor, and λ is the radio signal wavelength. Note that |r_i − t_j| is the Euclidean distance between the j-th user and the i-th AP, denoted in the following as d_{i,j}, and h(r_i, t_j) depends on r_i and t_j only via their distance. Then, when convenient, we denote h(r, t) as h(d). In (1), the phase rotation depends on the distance d_{i,j} and is given by exp(−i 2π λ^{−1} d_{i,j}). In (1), we ignore the shadowing effect and model the large scale fading as pure pathloss d_{i,j}^{−α}. It is well-known that the function d^{−α} models a LoS channel properly over a large range of distances when the plane wave approximation holds, i.e., when d is sufficiently large. At small distances, this decaying model introduces a clear artifact: for d < d_0, the transmit signal is amplified beyond the transmit signal level, and the amplification presents a vertical asymptote for d → 0. In order to remove this artifact while keeping the model simple, we assume the signal attenuation negligible in a close neighborhood of a transmitter and fix the attenuation equal to 1 for d ≤ d_0.
The transmitting users do not have any knowledge of the channel and transmit with equal power P. The receivers are impaired by additive white Gaussian noise (AWGN) with variance σ². The received signal vector at the central processor at discrete time instant m is given by

y(m) = Hx(m) + n(m),    (2)

where H is the N_R × N_T channel matrix with elements [H]_{i,j} = h(r_i, t_j), x(m) is the vector of symbols transmitted by the N_T users, and n(m) is the additive white Gaussian noise vector whose i-th component is the noise at AP i. For the sake of analytical tractability, as in [9], we assume that users and APs are located on a grid in A_L. Let τ > 0 be an arbitrarily small real such that L = θτ with θ a positive, even integer. We denote by A_L^# the set of points regularly spaced in A_L by τ, i.e., A_L^# ≡ {w | w = (−θ + 2k)τ/2, k = 0, 1, …, θ − 1}. We model the distributed users and APs as homogeneous point processes Φ_T and Φ_R in A_L^# characterized by the parameters β_T = ρ_T τ and β_R = ρ_R τ, where ρ_T and ρ_R are the intensities, i.e., the number per unit length, of transmitters and receivers, respectively. Observe that N_T = ρ_T L = β_T θ and N_R = ρ_R L = β_R θ.
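As a concrete illustration of this geometry, the sketch below draws one realization of the channel matrix H from the model above. The function name and all numeric values are illustrative choices, and user/AP positions are drawn from the grid with replacement as a simplification:

```python
import numpy as np

def das_channel(L=100.0, tau=0.5, rho_T=0.5, rho_R=2.0, alpha=2.0,
                wavelength=0.1, d0=1.0, rng=None):
    """One realization of the N_R x N_T LoS channel matrix H of (1)-(2).

    Users and APs sit on the regular grid A_L^# with spacing tau; the
    pathloss d^{-alpha} is clipped to 1 for d < d0, and the phase rotates
    as exp(-i 2 pi d / wavelength).
    """
    rng = np.random.default_rng(rng)
    theta = int(round(L / tau))                        # grid size
    grid = (-theta + 2 * np.arange(theta)) * tau / 2   # points of A_L^#
    n_T, n_R = int(rho_T * L), int(rho_R * L)          # N_T, N_R
    t = rng.choice(grid, size=n_T)                     # user positions
    r = rng.choice(grid, size=n_R)                     # AP positions
    d = np.abs(r[:, None] - t[None, :])                # distances d_ij
    gain = np.where(d < d0, 1.0, np.maximum(d, 1e-12) ** (-alpha))
    return gain * np.exp(-2j * np.pi * d / wavelength)

H = das_channel(rng=0)   # 200 x 50 matrix for the default intensities
```

Because the attenuation is clipped at 1, every entry of H has magnitude at most 1, and the phase carries the distance information that makes the LoS covariance structure differ from rich scattering.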
PRELIMINARY MATHEMATICAL TOOLS
In this section, we introduce mathematical tools for the analysis and design of DAS. Communication systems modeled by random channel matrices can be efficiently studied via their covariance eigenvalue spectrum [17,18]. Then, in order to analyze DASs, we characterize the spectrum of the channel covariance matrix C = H^H H in terms of its eigenvalue moments

m_C^{(n)} = E[∫ μ^n f_C(μ) dμ],    (3)

where μ and f_C(μ) denote the eigenvalue and eigenvalue distribution of the matrix C, respectively. The expectation is with respect to the two homogeneous point processes Φ_T and Φ_R. Following the approach in [8,9], we decompose H as

H = Ψ_R T Ψ_T^H,

where T is a θ × θ matrix depending only on the function h(d), and Ψ_R and Ψ_T are N_R × θ and N_T × θ random matrices depending only on the random AP and user locations, respectively. In order to define the matrices Ψ_T, Ψ_R, and T, we consider the θ × θ channel matrix H̄ of a system with θ transmit and receive antennas regularly spaced in A_L^#. It is easy to recognize that H̄ is a band Toeplitz matrix and, asymptotically, for θ → ∞, it admits an eigenvalue decomposition based on a θ × θ Fourier matrix F [19]. Then, we consider the decomposition H̄ = FTF^H, where the matrix T is a deterministic, asymptotically diagonal matrix depending on the function h(d) via its discrete-time Fourier transform. The random matrices Ψ_T and Ψ_R are obtained by extracting independently and uniformly at random N_T and N_R rows of F. For the sake of conciseness, we omit a detailed analytical definition of the three matrices, since it is not required for the further studies, and refer the interested reader to [8,9] for their detailed definition.
Further analysis requires m_T^{(n)}, the n-th order eigenvalue moment of T as L, θ → +∞. Let us consider the sequence {h(kτ)}_{k∈Z} obtained by sampling the function h(d) with period τ. Asymptotically, for L, θ → +∞, the eigenvalues of the matrix T are given by H(ω), with ω ∈ [−π, +π], the discrete-time Fourier transform of the sequence {h(kτ)}_{k∈Z} [19], and

m_T^{(n)} = (1/2π) ∫_{−π}^{+π} H^n(ω) dω.

In order to obtain the moments m_C^{(l)}, we follow the approach in [9,20] and approximate the random matrices Ψ_R and Ψ_T by the independent matrices Φ_R and Φ_T, respectively, consisting of i.i.d. zero mean Gaussian elements with variance θ^{−1}. This approximation enables the application of classical techniques from random matrix theory and free probability. In the following, we introduce an algorithm for the recursive computation of m_C̃^{(n)}, the n-th order eigenvalue moment of the channel covariance matrix C̃ = H̃^H H̃ = Φ_T T Φ_R^H Φ_R T Φ_T^H, and of [C̃^n]_{kk}, the diagonal elements of C̃^n. The derivation and proof are based on techniques similar to the ones utilized in [21,22] and are omitted due to space constraints.
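The moment estimator behind this machinery is easy to sanity-check numerically. The sketch below averages m^(n) = (1/N_T) tr(Cⁿ) over i.i.d. complex Gaussian matrices — the simplest instance of the Gaussian approximation, for which the limiting moments are the known Marchenko–Pastur values m^(1) = 1 and m^(2) = 1 + c with c = N_T/N_R (this is a check of the estimator, not the full DAS product model with T):

```python
import numpy as np

def eigenvalue_moments(H, n_max=2):
    """m^(n) = (1/N_T) tr(C^n), C = H^H H, for n = 1..n_max."""
    C = H.conj().T @ H
    N = C.shape[0]
    Ck = np.eye(N, dtype=C.dtype)
    out = []
    for _ in range(n_max):
        Ck = Ck @ C
        out.append(float(np.trace(Ck).real) / N)
    return out

# Monte Carlo average over i.i.d. complex Gaussian matrices with entry
# variance 1/N_R (Marchenko-Pastur limit: m1 -> 1, m2 -> 1 + N_T/N_R).
rng = np.random.default_rng(0)
N_R, N_T, trials = 100, 50, 200
acc = np.zeros(2)
for _ in range(trials):
    G = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
    acc += eigenvalue_moments(G / np.sqrt(2 * N_R), n_max=2)
m = acc / trials
```

With c = 0.5 the estimate should land near (1, 1.5), confirming that the empirical spectrum self-averages already at these moderate dimensions.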
The algorithm holds asymptotically for θ, N_R, N_T → +∞ with N_T/θ → β_T and N_R/θ → β_R, and it is based on recursive relations between the matrix C̃^l and the matrices T and D̃ = H̃H̃^H, expressed through the moments m_T^{(n)}: at step l, the moments of order l are computed from the moments of all lower orders, starting from m_T^{(1)}. By applying this algorithm we obtain the eigenvalue moments used in the following sections.
FAVORABLE PROPAGATION AND MULTIUSER DETECTION IN DAS
In this section, we analyze the property of favorable propagation in DAS through the characteristics of their channel eigenvalue moments. In a favorable propagation environment, where the users have almost orthogonal channels, the channel covariance matrix R satisfies the conditions

m_R^{(l)} / tr[diag(R)^l] → 1,  l = 2, 3, …,    (5)

where m_R^{(l)} denotes the l-th order eigenvalue moment of matrix R. These properties are asymptotically satisfied for centralized massive MIMO systems in rich scattering environments, when the number of users stays finite while the number of antennas at the central base station tends to infinity.
By making use of the observation that in large DAS the diagonal elements of C̃^l concentrate around deterministic values determined by the moments of T, condition (5) specializes for DAS and l = 2, 3. As β_R goes to infinity while β_T is kept constant, i.e., for β_T/β_R → 0 and β_T > 0, the ratios m_C^{(l)}/tr[diag(C)^l] do not tend to 1, and conditions (5) are not satisfied. Systems with favorable propagation can efficiently utilize the low complexity matched filter at the central processing unit, since it achieves almost optimal performance in such environments. However, when conditions (5) are not satisfied, even linear multiuser detectors are expected to provide substantial gains compared to the matched filter. In the following, we consider low complexity multistage detectors, including both polynomial expansion detectors, e.g., [15], and multistage Wiener filters [16], and we analyze their performance in terms of their signal to interference and noise ratio (SINR) by applying the unified framework proposed in [21,23]. In [21], it is shown that both design and analysis of multistage detectors with M stages can be described by an M × M matrix S(X) with elements of the form [S(X)]_{i,j} = X^{(i+j)} + σ² X^{(i+j−1)}
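The favorable-propagation diagnostic itself is straightforward to compute. As a sanity check of the metric (not of the DAS result), the sketch below evaluates the normalized ratio m^(2)_C / mean_k(C_kk²) for an i.i.d. rich-scattering channel: the ratio approaches 1 only when β_T/β_R is small, mirroring the centralized massive MIMO regime, whereas for the LoS DAS channel the paper shows the ratio stays away from 1 even in that limit. Function names and dimensions are illustrative:

```python
import numpy as np

def fp_ratio(H, l=2):
    """Ratio of the l-th eigenvalue moment of C = H^H H to the average
    l-th power of its diagonal; equals 1 under favorable propagation."""
    C = H.conj().T @ H
    m_l = float(np.trace(np.linalg.matrix_power(C, l)).real) / C.shape[0]
    return m_l / float(np.mean(np.diag(C).real ** l))

rng = np.random.default_rng(3)

def iid_channel(n_r, n_t):
    """Rich-scattering surrogate: i.i.d. CN(0, 1/n_r) entries."""
    Z = rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))
    return Z / np.sqrt(2 * n_r)

r_massive = fp_ratio(iid_channel(400, 8))     # small beta_T/beta_R: ratio ~ 1
r_loaded = fp_ratio(iid_channel(400, 200))    # beta_T/beta_R = 0.5: ratio ~ 1.5
```

For the i.i.d. model the ratio behaves as 1 + N_T/N_R, so it is the load β_T/β_R, not the absolute array size, that controls how close the system comes to favorable propagation.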
and a vector s(X) = (X^{(1)}, X^{(2)}, …, X^{(M)})^T, where X^{(l)} = m_C̃^{(l)} for polynomial expansion detectors and X^{(l)} = [C̃^l]_{kk} for multistage Wiener filters. From the asymptotic property that [C̃^l]_{kk} = m_C̃^{(l)} for any k and l, we can conclude that multistage Wiener filters and polynomial expansion detectors are equivalent in DAS. Additionally, we can determine the performance of a centralized processor implementing multistage detectors by applying the following expression [21]:

SINR = s^T(m_C̃) S^{−1}(m_C̃) s(m_C̃) / (1 − s^T(m_C̃) S^{−1}(m_C̃) s(m_C̃)).    (6)
It is worth noting that for M = 1 a multistage detector reduces to a matched filter, so (6) can also be applied for the performance analysis of matched filters.
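Since the moment-based expression (6) needs the recursively computed moments, a direct numerical stand-in helps build intuition: the sketch below computes the SINR of an M-stage detector as the best linear filter restricted to the Krylov subspace span{h, Rh, …, R^(M−1)h}, an equivalent per-realization characterization of multistage Wiener filtering (not the asymptotic formula of [21]); M = 1 recovers the matched filter, and once the subspace saturates the full LMMSE SINR is reached. All dimensions are illustrative:

```python
import numpy as np

def multistage_sinr(H, sigma2, M, user=0):
    """SINR of an M-stage detector for one user: the best linear filter
    in the Krylov subspace span{h, Rh, ..., R^(M-1) h}, R = H H^H +
    sigma2 * I. M = 1 is the matched filter; for M large enough the
    subspace contains the LMMSE filter Rint^{-1} h."""
    h = H[:, user]
    R = H @ H.conj().T + sigma2 * np.eye(H.shape[0])
    Rint = R - np.outer(h, h.conj())              # interference + noise
    K = np.column_stack([np.linalg.matrix_power(R, m) @ h for m in range(M)])
    Q, _ = np.linalg.qr(K)                        # orthonormal Krylov basis
    G = Q.conj().T @ Rint @ Q
    b = Q.conj().T @ h
    # max over w = Q a of |w^H h|^2 / (w^H Rint w) = b^H G^{-1} b
    return float(np.real(b.conj() @ np.linalg.solve(G, b)))

rng = np.random.default_rng(1)
N_R, N_T = 20, 5
H = (rng.standard_normal((N_R, N_T))
     + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2 * N_R)
sinrs = [multistage_sinr(H, 0.1, M) for M in (1, 2, 3, 5)]
```

The SINR is nondecreasing in M because the Krylov subspaces are nested, which is the per-realization counterpart of the gain that (6) quantifies asymptotically.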
SIMULATION RESULTS
In this section, we validate the analytical results of Sections 3 and 4. We consider systems with pathloss factor α = 2 and d_0 = 1. For Fig. 1, we consider a system with transmitters homogeneously distributed with intensity ρ_T = 20 over a segment of length L, while the receivers' intensity varies in the range ρ_R ∈ [20, 200]. Fig. 1 compares the fourth eigenvalue moments of LoS channels obtained analytically for L → ∞ by the algorithm in Section 3 with the fourth eigenvalue moments of systems with finite L, with and without the Gaussian approximation. The comparison shows that the asymptotic approximation matches practical systems very well. For Figs. 2 and 3, we assume L = 100, ρ_T = 0.5, and ρ_R ∈ [1, 5]. Fig. 2 shows the ratio m_C^{(l)}/tr[diag(C)^l] versus β_T/β_R for l = 2, 3 to corroborate the analytical result that the conditions for favorable propagation are not satisfied. In fact, the curves do not tend to 1 for small ratios β_T/β_R. Finally, we consider a system with average signal to noise ratio (SNR) at the transmitters equal to 20 dB and show the usefulness of multiuser detection in DAS. More specifically, Fig. 3 shows the SINR (dB) of matched filters (M = 1) and of multistage detectors with two and three stages versus the intensity of receive antennas. For increasing values of ρ_R, the performance gap between the matched filter and the multistage detectors is substantial and does not tend to vanish.
CONCLUSIONS
In this paper, we considered a system consisting of randomly distributed transmit and receive antennas and investigated to which extent the phenomenon of favorable propagation, widely exploited in massive MIMO systems, is present and can be utilized in DAS. The properties of DAS systems were analyzed using channel eigenvalue moments. We showed analytically that the conditions of favorable propagation are not satisfied. A final comparison between the performance of multistage detectors and matched filters corroborates the usefulness of multiuser detection in DAS.
Sine‐wave electrical stimulation initiates a voltage‐gated potassium channel‐dependent soft tissue response characterized by induction of hemocyte recruitment and collagen deposition
Abstract Soft tissue repair is a complex process that requires specific communication between multiple cell types to orchestrate effective restoration of physiological functions. Macrophages play a critical role in this wound healing process beginning at the onset of tissue injury. Understanding the signaling mechanisms involved in macrophage recruitment to the wound site is an essential step for developing more effective clinical therapies. Macrophages are known to respond to electrical fields, but the underlying cellular mechanisms mediating this response are unknown. This study demonstrated that low‐amplitude sine‐wave electrical stimulation (ES) initiates a soft tissue response in the absence of injury in Procambarus clarkii. This cellular response was characterized by recruitment of macrophage‐like hemocytes to the stimulation site, indicated by increased hemocyte density at the site. ES also increased tissue collagen deposition compared to sham treatment (P < 0.05). Voltage‐gated potassium (KV) channel inhibition with either 4‐aminopyridine or astemizole decreased both hemocyte recruitment and collagen deposition compared to saline infusion (P < 0.05), whereas inhibition of calcium‐permeable channels with ruthenium red did not affect either response to ES. Thus, macrophage‐like hemocytes in P. clarkii elicit a wound‐like response to exogenous ES and this is accompanied by collagen deposition. This response is mediated by KV channels but independent of Ca2+ channels. We propose a significant role for KV channels that extends beyond facilitating Ca2+ transport via regulation of cellular membrane potentials during ES of soft tissue.
Introduction
Repair of soft tissue damage is a critical need of living organisms that involves basic restoration of anatomical structures and functions to damaged tissue. This is a delicate process and its execution is not always optimal, as either excessive healing (e.g., fibrosis and adhesions) or inadequate healing (e.g., chronic wounds and ulcers) can lead to diminished or lacking restoration of function (Lazarus et al. 1994;Diegelmann and Evans 2004). An essential mediator of the wound repair process is the infiltration of the wound region with macrophages (Clark 1988). Disruption of normal macrophage activity in wound healing can contribute to decreased inflammatory cytokines, neutrophil removal, angiogenesis, fibroblast proliferation and collagen deposition (Gardner et al. 1999;Mirza et al. 2009;Koh and DiPietro 2011;Clark 1988). Conversely, activation or introduction of macrophages at the site of the wound accelerates wound healing in experimental models of impaired wound healing (Danon et al. 1989;Chen et al. 2008). Wound induced electrical fields (EFs) are an intrinsic property of damaged tissues that are vital to the wound healing process across different species (Chiang et al. 1991;Jenkins et al. 1996;Reid et al. 2005;Wang and Zhao 2010;Messerli and Graham 2011). Also, macrophages are sensitive to electrical fields and this property may be somewhat responsible for the positive effects of electrical stimulation (ES) on wound healing and repair of neural tissue (Moriarty and Borgens 1998;Cho et al. 2000).
These findings suggest that wound induced EFs direct macrophage migration to the wound site, but few studies identify the mechanisms responsible for facilitating this action. There are long-standing hypotheses that endogenous ionic currents act to control cell dynamics in development, wound healing and regeneration (Jaffe and Nuccitelli 1977; Borgens et al. 1979; Jaffe 1981; Özkucur et al. 2010). However, the mechanisms utilized by cells to detect the EF and translate it into a discernible message to drive specific cell behaviors, such as migration, proliferation, and differentiation, are not well understood. A better understanding of how cells are able to sense EFs and react to them is vital to understanding the global physiology involved in tissue repair. Ion channel signaling provides a reasonable cellular response for mediating these effects based on their documented involvement in cell proliferation, migration, and differentiation (Lang et al. 2005; Prevarskaya et al. 2007; Schwab et al. 2012). Specifically, voltage-gated and calcium-sensitive potassium channels are involved in regulating a variety of macrophage functions including activation, migration and cytokine secretion (Gallin 1984; Mackenzie et al. 2003; Dong et al. 2013).
This study investigated this phenomenon using an invertebrate homolog of macrophages, crustacean hemocytes, by measuring how they respond to exogenous ES and how this response is mediated by both potassium and calcium channel signaling. The response to ES was assessed using basic histological techniques and pharmacological antagonists were used to assess the role of K + and Ca 2+ channels in hemocyte recruitment during ES.
Animal preparation
Adult Procambarus clarkii ranging from 3 to 5 inches in length were procured from Atchafalaya Biological in Raceland, LA and allowed to acclimate to the laboratory aquatic living environment for at least 2 weeks. Animals that were in the process of molting or had recently molted were excluded from the study. Animals selected for this study were anesthetized in ice water for 30 min before being instrumented for the experiment. A small hole (2 mm × 2 mm) was made in the third segment of the dorsal tail carapace. Stainless steel electrode tips (4 mm × 2 mm) were implanted between the carapace and the tail muscle surface with the cathode on the left and the anode on the right. For experiments requiring infusion of pharmacological agents or vehicle, polyethylene tubing connected to an infusion pump was inserted into the same hole as the cathode (Fig. 1). Carapace opening closure and securing of the instrumentation were conducted using fast-drying cyanoacrylate (commercial superglue) applied to the openings. Animals then were placed in individual water-filled enclosures and allowed 2-4 recovery days before beginning the experiment.
Treatment protocol
Following the 4-day recovery period, drug or vehicle infusion was initiated. Acclimation for 24 h was allowed to ensure total blockade of targeted ion channels before initiating electrical stimulation. Tail muscles were intermittently stimulated with sine-wave impulses at 450 mV and 2 Hz (50 msec on, 450 msec off) continuously for 4 days. Immediately following the 4-day stimulation period, animals were anesthetized in ice water and killed by decapitation. Evans Blue dye was injected (10 μL) through the electrode opening of the carapace to mark the site of electrode implantation. The tail muscle then was exposed and a ~20 mm cross section containing the tissue subjected to ES was excised. Tail muscle was fixed by submersion in 4% paraformaldehyde for 12-18 h. Tissues were rinsed in PBS and cryoprotected in 15% sucrose for four hours followed by 30% sucrose overnight, frozen and stored at −20°C. Tissues were embedded in OCT compound and cut into 15-μm tissue sections by cryostat sectioning and mounted onto gelatin-coated slides. Tissue sections were stained with hematoxylin and eosin (H&E), Masson's trichrome, or picrosirius red according to standard protocols.
Ion channel inhibition
To examine the role of potassium and calcium channels, pharmacological antagonists were solubilized in standard crayfish saline and infused at 1.25 μL/min before and during the ES. To block voltage-gated potassium (KV) currents, 4-aminopyridine (4-AP) was infused (1.25 μL/min) at a concentration of 10 μmol/L and astemizole (AZ) was infused (1.25 μL/min) at a concentration of 5 μmol/L. Calcium (Ca2+) signaling was investigated by infusion of ruthenium red (RR) at 10 μmol/L (Hirano et al. 1998; Taglialatela et al. 1998; Nattel et al. 2000; Clapham et al. 2001; Wulff et al. 2009). Crayfish saline was infused as vehicle control. Infusion commenced 24 h prior to initiation of the experiment to ensure the entire tail muscle was thoroughly bathed in each compound preceding ES onset. When infusion was initiated, crayfish were observed out of their environments for 20 min to ensure there was no leakage of the infusate around the surgical sites or other areas. In total, there were five treatment groups: sham electrode implantation + saline infusion (Sham/Saline), ES electrodes + saline infusion (Stim/Saline), ES electrodes + 4-AP infusion (Stim/4-AP), ES electrodes + AZ infusion (Stim/AZ), and ES electrodes + RR infusion (Stim/RR). All groups were n = 6.
Analysis of hemocyte activation and collagen deposition
To measure the level of hemocyte activation, histological H&E stained sections were examined. Hemocytes within 500 μm of the electrode implantation site were morphologically identified and assigned to one of three groups: granulocytes, semigranulocytes, and hyaline cells. Hyaline hemocytes are characterized by their relatively small size, elongated oval shape, a centralized nucleus, high nuclear/cytoplasmic ratio, and an absence of cytoplasmic granules. Granulocytes are larger with an eccentric, oblong nucleus, lower nuclear/cytoplasmic ratio, and an abundance of eosinophilic granules in the cytoplasm. Semigranulocytes are similar in size and nuclear/cytoplasmic ratio to granulocytes with an eccentric, spherical nucleus, and a reduced number of eosinophilic granules in the cytoplasm (Lanz et al. 1993; Parrinello et al. 2015). Each group then was normalized to tissue section area. Values are expressed as hemocytes per 10,000 μm². Collagen deposition was assayed both qualitatively and semiquantitatively. Qualitative evaluation was achieved with Masson's trichrome staining, producing blue collagen, red cytoplasm/muscle fiber, and black nuclei. Semiquantitative assessment was achieved by measuring the percentage of tissue area, within 500 μm (a total tissue area of 0.5 mm²) of electrode implantation, staining positive for collagen by polarized light imaging of picrosirius red stained tissue sections.
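Both quantifications above reduce to simple normalizations; a minimal sketch (all counts, areas, and mask values are made-up illustrative numbers, not data from the study):

```python
import numpy as np

def hemocyte_density(counts, section_area_um2):
    """Normalize per-subtype hemocyte counts to cells per 10,000 um^2."""
    return {k: 1e4 * v / section_area_um2 for k, v in counts.items()}

def percent_collagen(birefringence_mask):
    """Percent of analyzed tissue area positive for collagen, from a
    boolean mask of birefringent pixels (picrosirius red, polarized light)."""
    m = np.asarray(birefringence_mask, dtype=bool)
    return 100.0 * float(m.mean())

# Illustrative example: 30 hemocytes counted in a 5,000 um^2 region,
# and a 100 x 100 pixel mask in which 16% of pixels are birefringent.
counts = {"granulocytes": 14, "semigranulocytes": 9, "hyaline": 7}
dens = hemocyte_density(counts, section_area_um2=5_000)
mask = np.zeros((100, 100), dtype=bool)
mask[:16, :] = True
pc = percent_collagen(mask)
```

Expressing density per 10,000 μm² makes sections of different sizes directly comparable, which is why the sham and ES groups can be contrasted despite variable excision areas.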
Statistical analysis
Statistical differences between or among groups were determined using Student's t-test or one-way ANOVA. Post hoc analysis was conducted using the Student-Newman-Keuls (SNK) method to determine statistical differences between mean values for each treatment and the control group. All data are presented as mean ± standard deviation. The 0.05 level of probability was utilized as the criterion for significance in all datasets.
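For reference, the one-way ANOVA F statistic used here is the ratio of between-group to within-group mean squares; a self-contained sketch (the SNK post hoc step is not shown, and the toy data are illustrative):

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for one-way ANOVA: between-group mean square divided
    by within-group mean square (df = k-1 and n-k, respectively)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy data with group means 2, 3, 4 gives F = 3 on (2, 6) df.
F = one_way_anova_F([1, 2, 3], [2, 3, 4], [3, 4, 5])
```

The computed F is then compared against the F distribution with (k−1, n−k) degrees of freedom at the 0.05 level before any pairwise post hoc comparisons are made.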
Results
Intermittent sine-wave ES elicits hemocyte activation and collagen deposition in crayfish tail muscle

Following continuous ES over a 4-day period, the tissue area directly under the site of electrode implantation exhibited an aggregation of hemocytes analogous to the first stage of the wound healing response as described in both penaeid shrimp and freshwater crayfish (Fontaine and Lightner 1973; Fontaine 1975). This hemocyte aggregation was not observed in sham-treated animals without ES (Fig. 2). Total hemocyte density in animals exposed to exogenous ES was increased (60.29 ± 23.73 hemocytes/10,000 μm²) compared to sham-treated animals (1.87 ± 0.59 hemocytes/10,000 μm², P < 0.05), but this effect was not specific to one hemocyte subtype population (Fig. 2). Another defining characteristic of the crayfish wound response is the deposition of collagen and subsequent tissue fibrosis (Fontaine and Lightner 1973; Fontaine 1975). Separate histological techniques (Masson's trichrome and picrosirius red staining) were employed to assay for collagen deposition and scarring. Masson's trichrome stain identifies tissue fibrosis and collagen deposition by red cytoplasm, black nuclei, and blue collagen fibers. Sham-treated animals had minimal collagen deposition adjacent to the electrode implantation site (Fig. 3A). In contrast, animals exposed to exogenous ES exhibited significant collagen deposition directly under the site of electrode implantation (Fig. 3B). These tissues were also stained with picrosirius red and imaged under both bright field (yellow cytoplasm and red collagen) for qualitative assessment and polarized light (yellow-orange birefringence for thick collagen fibers and green birefringence for thin collagen fibers) for a semiquantitative measurement of total collagen in the tissue adjacent to the site of electrode implantation. Figure 3C and D depict representative sham-treated (C) and ES (D) animals.
Significant collagen deposition is observed in ES animals but not in sham-treated animals. Using the picrosirius red images taken under polarized light, the percent area of fibrosis was measured. Exogenously stimulated animals exhibited fibrosis in a higher percentage (16.35 ± 5.20%) of tissue adjacent to the electrode implantation site compared with sham-treated animals (1.47 ± 1.03%, Fig. 4; P < 0.05).
ES-mediated hemocyte activation is dependent on KV channels
To assess the role of voltage-dependent potassium currents in mediating the response to ES, pharmacological modulators were infused continuously (1.25 μL/min) from 24 h before initiation of ES to the end of the experiment. Blockade of KV channels with either 4-AP (10.53 ± 6.65 hemocytes/10,000 μm²) or astemizole (8.64 ± 3.89 hemocytes/10,000 μm²) decreased the total hemocyte response to ES when compared with saline infusion (60.29 ± 23.73 hemocytes/10,000 μm², P < 0.05). This effect was not limited to any specific hemocyte subtype population (Fig. 4).
Discussion
The tissue response to low-amplitude, low-frequency sine-wave ES in the tail muscle of adult P. clarkii was characterized and compared to documented crustacean wounding responses (Fontaine and Lightner 1973; Fontaine 1975). The response of crayfish hemocytes was of particular interest considering the distinct similarities hemocytes share with macrophages and the documented role of macrophages in vertebrate wound healing (Danon et al. 1989; Hose et al. 1990; Moriarty and Borgens 1998; Söderhäll et al. 2003; Chen et al. 2008; Koh and DiPietro 2011). Second, the study identified KV channels as one of the molecular determinants responsible for interpreting the electrical signal into a discernible message to direct cell activity.
Several studies have indicated that wound healing is dependent on a wound-induced electrical field and that this electrical field can be modulated by exogenous ES (Chiang et al. 1991; Jenkins et al. 1996; Wang and Zhao 2010; Messerli and Graham 2011). This study demonstrated that exogenous ES is sufficient to elicit a soft tissue response in P. clarkii tail muscle in the absence of soft tissue injury, characterized by hemocyte accumulation and collagen deposition. Hemocyte/macrophage populations have been indicated as essential and early participants in the wound healing process. They play important roles in matrix degradation at the wound site, phagocytosis to remove debris, and cytokine secretion to attract other important cell types (Danon et al. 1989; Montagnani et al. 2001; Brancato and Albina 2011; Koh and DiPietro 2011; Clark 1988). In this study, an aggregation of hemocytes was observed in response to ES, indicating that this exogenous electrical field stimulates hemocyte recruitment as would happen early on in the wound response. This is not surprising, as in vitro and in vivo studies have shown that macrophages respond to ES. Whether ES is directly acting on hemocytes or influencing the action of other cells that cause hemocyte infiltration is unclear, but in vitro studies have established a relationship between macrophages and EFs (Orida and Feldman 1982; Cho et al. 2000; Hoare et al. 2016). Another typical and consistent characteristic of a wound response is the deposition of collagen (Clark 1988). As shown in Figure 3, the application of ES led to significant collagen deposition. Previously characterized models of crustacean wound responses in penaeid shrimp and P. clarkii describe a similar histological response to what has been described in these data (Fontaine and Lightner 1973; Fontaine 1975).
These results indicate that low-amplitude, low-frequency sine-wave ES of the crayfish tail muscle brings about some of the typical characteristics of crayfish wound healing in the absence of tissue insult. This provides a basis and rationale for the in vivo study of the molecular mechanisms involved in wound-induced EF mediated repair processes. Considering the documented role of potassium channels in both wound healing and macrophage activity, it was reasonable to suspect that they had a role in interpreting the electrical signal to elicit the results seen in this study (Gallin 1984; Blunck et al. 2001; Shin et al. 2002; Anderova et al. 2004; Kan et al. 2016). In other studies, KV channels have been shown to regulate proliferation in multiple tumor cell lines, with strong evidence specifically indicating KV10.1 and KV11.1 (EAG and hEAG) channels (Bianchi et al. 1998; Conti 2004). Multiple cell types have demonstrated a dependence on K+ channel signaling for proper direction of cell migration (Schwab and Oberleithner 1995; Da Silva-Santos et al. 2002; Dal-Secco et al. 2008; Jin et al. 2008; Silver et al. 2015). Recently, a role for K+ channel signaling has been demonstrated in regulating both the proliferation of macrophages and their ability to recruit the macrophage precursor Ly6C monocytes (Zhang et al. 2015). The fact that collagen synthesis and deposition is reduced is not surprising, considering that previous studies have found that macrophages stimulate collagen synthesis and scar formation and that specific ablation of macrophage populations before wounding results in reduced collagen deposition (Hunt et al. 1984; Portera et al. 1997; Mirza et al. 2009).
The data fall short of revealing the exact mechanism of KV channels' involvement in this response, and KV blockade does not completely inhibit the response. ES has been shown to manipulate multiple cellular behaviors including migration, proliferation, and cytokine production, and it could be that KV channels are only involved in some of these processes (Fitzsimmons et al. 1992; Li and Kolega 2002; Wang et al. 2003; Kim et al. 2009; Zhao 2009). Further research is needed to understand whether KV channels impact hemocyte proliferation, migration, cytokine secretion, or some other mechanism in the context of wound healing. This study has shown that hemocyte infiltration can be induced via ES and that this model of ES-induced hemocyte infiltration also produced collagen deposition. However, when hemocyte infiltration is blocked, ES alone is not sufficient to induce collagen deposition.
Ca2+ permeable channels have been shown to be critical regulators of cell function and are sensitive to potassium channel signaling (Lallet-Daher et al. 2009; Billeter et al. 2014; Schilling et al. 2014). The classic Ca2+ channel inhibitor ruthenium red had no effect on either hemocyte activation or collagen deposition (Figs. 4 and 5). This indicates that Ca2+ channel signaling is not a required mediator of the response to ES, although ruling out a complete role for Ca2+ signaling in facilitating this response may be premature. Chloride channels were not investigated in this study, but many of them (CFTR, glycine-gated and CLICs) have been indicated as regulators of macrophage and other immune cell function via phagosomal acidification, cytokine production, Ca2+ influx, and superoxide production (Ikejima et al. 1997; Wheeler and Thurman 1999; Wheeler et al. 2000; Di et al. 2006; Jiang et al. 2012). The results of this study clearly indicate that exogenous ES induces a response characterized by hemocyte aggregation and collagen deposition that closely resembles a documented crustacean wound response and that KV channels are a critical component of this response (Fontaine and Lightner 1973; Fontaine 1975).
Fragmentation of Fast Josephson Vortices and Breakdown of Ordered States by Moving Topological Defects
Topological defects such as vortices, dislocations or domain walls define many important effects in superconductivity, superfluidity, magnetism, liquid crystals, and plasticity of solids. Here we address the breakdown of the topologically-protected stability of such defects driven by strong external forces. We focus on Josephson vortices that appear at planar weak links of suppressed superconductivity which have attracted much attention for electronic applications, new sources of THz radiation, and low-dissipative computing. Our numerical simulations show that a rapidly moving vortex driven by a constant current becomes unstable with respect to generation of vortex-antivortex pairs caused by Cherenkov radiation. As a result, vortices and antivortices become spatially separated and accumulate continuously on the opposite sides of an expanding dissipative domain. This effect is most pronounced in thin film edge Josephson junctions at low temperatures where a single vortex can switch the whole junction into a resistive state at currents well below the Josephson critical current. Our work gives a new insight into instability of a moving topological defect which destroys global long-range order in a way that is remarkably similar to the crack propagation in solids.
Scientific Reports | 5:17821 | DOI: 10.1038/srep17821

THz radiation 9, or polycrystalline superconducting resonator cavities for particle accelerators 10, and have broader implications for other systems with long-range order.
We start with a standard theory of a Josephson vortex in a long junction described by the sine-Gordon equation for the phase difference of the order parameter θ(x, t) = φ₁ − φ₂ between two bulk electrodes 3,4:

θ'' − θ̈ − ηθ̇ = sin θ − β,    (1)

where the prime and the overdot denote partial derivatives with respect to the dimensionless coordinate x/λ_J and time ω_J t, ω_J = (2πJ_c/φ₀C)^{1/2} is the Josephson plasma frequency, J_c is the tunneling critical current density, C is the specific capacitance of the junction, λ_J = (φ₀/4πμ₀λJ_c)^{1/2} is the Josephson penetration depth, λ is the London penetration depth, η = 1/ω_J RC is the damping constant due to the ohmic quasiparticle resistance R, and β = J/J_c is the driving parameter controlled by a uniform transport current density J.
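A minimal explicit finite-difference integration of the perturbed sine-Gordon equation in these dimensionless units can be sketched as follows (grid sizes, damping value, and the Neumann boundary treatment are illustrative choices). At β = 0 the static 2π-kink 4 arctan eˣ is an exact stationary solution, which the scheme should preserve:

```python
import numpy as np

def evolve_sine_gordon(theta0, dt=0.01, dx=0.1, eta=0.05, beta=0.0, steps=2000):
    """Explicit leapfrog integration of the dimensionless equation
    theta_tt = theta_xx - eta*theta_t - sin(theta) + beta,
    with zero initial velocity and Neumann (free) boundaries."""
    th = np.array(theta0, dtype=float)
    th_prev = th.copy()                       # zero initial velocity
    for _ in range(steps):
        lap = np.empty_like(th)
        lap[1:-1] = th[2:] - 2.0 * th[1:-1] + th[:-2]
        lap[0] = th[1] - th[0]                # one-sided (Neumann) boundary
        lap[-1] = th[-2] - th[-1]
        vel = (th - th_prev) / dt
        acc = lap / dx**2 - eta * vel - np.sin(th) + beta
        th_prev, th = th, 2.0 * th - th_prev + dt**2 * acc
    return th

x = np.arange(-20.0, 20.0, 0.1)
vortex = 4.0 * np.arctan(np.exp(x))           # static 2*pi kink
out = evolve_sine_gordon(vortex)              # should stay put at beta = 0
```

A nonzero β tilts the washboard potential and sets the kink in motion, which is the regime studied below; the nonlocal dynamics of equation (2) would replace the local Laplacian with a kernel convolution.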
The sine-Gordon equation has been one of the most widely used equations to describe topological defects in charge and spin density waves 11, commensurate-incommensurate transitions 12-14, magnetic domain walls 15, dislocations in crystals 16,17, and kinks on DNA molecules 18. At η = 0 and β = 0, it has the Lorentz-invariant soliton solution θ₀(x, t) = 4 arctan exp[(x − vt)/λ_J(1 − v²/c_s²)^{1/2}] describing a vortex moving with a constant velocity v, where c_s = ω_J λ_J is the Swihart velocity of propagation of electromagnetic waves along the junction 3. As v increases, the vortex shrinks at η ≪ 1 and expands at η > 1 (ref. 4).
Unlike the sine-Gordon equation, the nonlocal equation (2) at η = 0 is not Lorentz-invariant, so a uniformly moving vortex can radiate Cherenkov waves δθ(x, t) ∝ exp(ikx − iω_k t) with phase velocities ω_k/k smaller than v 23,29. The condition of Cherenkov radiation at η = 0 is thus ω_k/k < v, (4) where ω_k is the frequency of small-amplitude waves obtained by linearizing equation (2), and G(k) is the Fourier image of G(x). Here G(k) decreases as 1/k at k > Λ⁻¹, so equation (4) is satisfied for k > k_c, where the maximum wavelength 2π/k_c increases with v 30. To address the effect of Cherenkov radiation on the moving vortex, we performed numerical simulations of equation (2) for SIS junctions of different geometries.
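The role of nonlocality in the Cherenkov condition can be illustrated with a short numerical check; the "nonlocal" dispersion below is a model form chosen only to mimic the stated G(k) ~ 1/k softening, not the paper's exact kernel:

```python
import numpy as np

# Dimensionless units: lengths in lambda_J, frequencies in omega_J, velocities in c_s.
k = np.logspace(-2, 3, 200)

# Local sine-Gordon dispersion: omega_k = sqrt(1 + k^2). Its phase velocity
# exceeds c_s (= 1) for every k, while a kink moves with v < 1, so the
# Cherenkov condition omega_k/k < v can never be met in the local model.
v_phase_local = np.sqrt(1.0 + k**2) / k

# Illustrative nonlocal dispersion with a G(k) ~ 1/k softening at large k
# (assumed model form): omega_k = sqrt(1 + k). Its phase velocity drops below
# any fixed vortex velocity v at large enough k, opening the Cherenkov channel k > k_c.
v_phase_nonlocal = np.sqrt(1.0 + k) / k

v = 0.8
print(np.all(v_phase_local > 1.0), np.any(v_phase_nonlocal < v))
```

This is the sense in which Cherenkov radiation "appears entirely due to the Josephson nonlocality": the local equation (1) leaves no phase-velocity window below the vortex speed.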
Shown in Fig. 1 are the numerical results for a planar bulk junction at η = 0.05 and the large ratio λ_J/λ = 10 usually described by the sine-Gordon equation (1). Yet the more general integral equation (2) reveals effects which are not captured by equation (1), particularly a trailing tail of Cherenkov radiation behind a vortex moving with a constant velocity 29. Moreover, as the amplitude and the wavelength of radiation increase with v, the vortex becomes unstable at β > β_s; the instability is triggered at the highest maximum of the Cherenkov wave, where θ_m reaches a critical value θ_c ≈ 8.65-8.84, depending on η, λ/Λ, and the junction geometry 30. Here θ_c is confined within the interval 5π/2 < θ_c < 3π in which a uniform state of a Josephson junction is unstable 3,4. As the velocity increases, the domain where 5π/2 < θ(x − vt) < 3π behind the moving vortex widens and eventually becomes unstable as its length exceeds a critical value. This suggests a qualitative picture of the vortex instability caused by the appearance of a trailing critical nucleus in the unstable π-junction state 3,4, produced by strong Cherenkov radiation. The latter appears entirely due to the Josephson nonlocality described by equation (2), which has no steady-state vortex solutions at J > J_s, where J_s can be well below the current J_c at which the whole junction switches into a resistive state.
The dynamic solutions of equation (2) at β > β_s change strikingly. Our simulations have shown that the instability originates at the highest maximum θ = θ_m of the trailing Cherenkov wave, which starts growing and eventually turns into an expanding vortex-antivortex pair 30, as shown in Fig. 1. As the size of this pair grows, it generates enough Cherenkov radiation to produce two more vortex-antivortex pairs, which in turn produce new pairs. Continuous generation of vortex-antivortex pairs results in an expanding dissipative domain in which vortices accumulate at the left side and antivortices at the right side, while dissociated vortices and antivortices pass through each other in the middle 30. As a result, θ(x, t) evolves into a growing "phase pile" with the maximum θ_m(t) increasing approximately linearly with time and the edges propagating with a speed which can be both smaller and larger than c_s; the phase difference θ(∞) − θ(−∞) = 2π between the edges remains fixed. We observed the phase pile dynamic state for different junction geometries and η ranging from 10⁻³ to 0.5 30. For instance, Figs 2 and 3 show 3D images of the initial stage of dynamic separation of vortices and antivortices calculated for a bulk junction and a thin-film edge junction. Here the local magnetic field B(x, t) oscillates strongly at the moving domain edges but becomes rather smooth away from them, as shown in Fig. 4. In most of the phase pile the overlapping vortices are indistinguishable, yet the net flux φ = φ_0 of this evolving multiquanta magnetic dipole remains quantized. Shown in Fig. 5 are the steady-state vortex velocities v(β) calculated for different junction geometries. The instability corresponds to the endpoints of the v(β) curves, which have two distinct parts. At small β ≲ η the velocity v(β) increases sharply, with a slope limited by a weak quasiparticle viscous drag.
At larger β ≳ η the increase of v(β) with β slows down, as the vortex velocities are mostly limited by radiation friction 29 and depend weakly on the form of dissipative terms in equation (2). For a low-J_c junction with λ_J/λ = 10, the effect of Cherenkov radiation on v(β) is weak, but for a high-J_c bulk junction with λ_J/λ = 0.1 and η ≪ 1, radiation friction dominates at practically all β, significantly reducing both v(β) and β_s. For thin film edge junctions, the critical splitting current density J_s gets reduced down to J_s ≈ 0.4J_c at η = 10⁻³, as shown in Fig. 5. In the extreme nonlocal limit described by equation (2), the dynamics of θ(x, t) at J > J_s is similar to that shown in Figs 1-3, except that the edges of the phase pile can propagate with "superluminal" speeds exceeding c_s 30. Once vortex-antivortex pairs start replicating, the speed of leading vortices at the edges gradually increases from v_s to a limiting value v_∞, for instance, from v_s ≈ 0.72lω_J to v_∞ ≈ 1.12lω_J for an edge junction with l = Λ/2 and η = 0.1 30.
The effects reported here are most pronounced in underdamped SIS junctions between s-wave superconductors at low temperatures, for which the viscous drag coefficient η ∝ exp(−Δ/T) due to thermally-activated quasiparticles 3 is small. Here η ≪ 1 also implies that a moving vortex does not generate additional quasiparticles because the induced Josephson voltage V = ℏvθ′_m/2e is smaller than Δ/e, where θ′_m is the maximum phase gradient. These conditions are satisfied for the parameters used in our simulations. Equation (5) shows that the dissipated power P is independent of J_c and is greatly reduced in the underdamped limit at low temperatures, as the quasiparticle resistance R of SIS junctions becomes exponentially large at T ≪ T_c. To estimate P, it is convenient to write equation (5) in terms of the characteristic line energy ε_0 of an Abrikosov vortex 31. For an edge junction in a Nb film with t = 1 nm, λ = 40 nm, ε_0 ~ 10⁴ K/nm, and ω_J = 100 GHz, much smaller than the gap frequency 2Δ/ℏ in the THz range 10, equation (5) yields P ~ 0.16 nW at η = 10⁻². Local overheating δT = P Y_K caused by vortex dissipation, where Y_K is the Kapitza interface thermal resistance 32, is further reduced in thin film junctions, for which energy transfer to the substrate by ballistic phonons is much more effective than diffusive phonon heat transport in thick samples. Such weak overheating caused by a moving vortex cannot result in thermal bistability and hysteretic switching due to hotspot formation 32.
Proliferation of vortex-antivortex pairs triggered by a moving Josephson vortex can be essential for the physics and applications of weak link superconducting structures, where the formation of expanding phase pile patterns can switch the entire junction into a normal state at currents J > J_s ≈ (0.4 − 0.7)J_c, well below the Josephson critical current. Such dynamic vortex instability can result in hysteretic jumps on the V-I curves which appear similar to those produced by heating effects 4,9, yet this instability is affected by neither cooling conditions nor the nonequilibrium kinetics of quasiparticles. Indeed, heating is most pronounced in overdamped junctions with η > 1, in which Cherenkov radiation is suppressed. By contrast, the Cherenkov instability is characteristic of the weakly-dissipative underdamped limit η ≪ 1, although Fig. 5 shows that this instability in thin film edge junctions can persist up to η = 0.5. Therefore, the crucial initial stage of the phase pile formation at η ≪ 1 is unaffected by heating, which may become more essential at the final stages of the transition of the entire junction into the normal state. At η ~ 1 the Cherenkov instability may be masked by heating effects, particularly in bulk junctions for which heat transfer to the coolant is less efficient than in thin films.
It should be emphasized that the instability reported here does not require special junctions with J_c ~ J_d; in fact, it occurs even for the seemingly conventional bulk junction with λ_J = 10λ considered above. Our results can be essential for other topological defects such as crystal dislocations or magnetic domain walls described by the generic nonlocal equation (2), in which the integral term results from a common procedure of reduction of coupled evolution equations for several relevant fields to a single equation. For Josephson junctions, such coupled fields are θ and B, but for domain walls in ferromagnets, the nonlocality can result from long-range magnetic dipolar interactions 35. For dislocations, the nonlocality and Cherenkov radiation of sound waves in equation (2) come from the discreteness of the crystal lattice 17 and long-range strain fields 16, although the dynamic terms in the Peierls equation 36,37 are more complex than those in equation (2). Dynamic instabilities of dislocations have been observed in lattice Frenkel-Kontorova models 17, in which sonic radiation can also result from periodic acceleration and deceleration of a dislocation moving in a crystal Peierls-Nabarro potential 16. The latter effect becomes more pronounced as the dislocation core shrinks at higher velocities and becomes pinned more effectively by the lattice. By contrast, the instability reported here results entirely from Cherenkov radiation; condition (4) can be satisfied for any system in which G(k) in equation (4) decreases with k. This instability can thus have broader implications: for instance, the phase pile dynamics of Josephson vortices appears similar to microcrack propagation caused by a continuous pileup of subsonic dislocations with antiparallel Burgers vectors at the opposite tips of a growing crack described by equations (2) and (3) 16.
Our results give new insight into the breakdown of global long-range order, which has usually been associated with either thermally-activated proliferation of topological defects (as in the Berezinskii-Kosterlitz-Thouless transition) or static arrays of quenched topological defects pinned by material disorder 2. Here we point out a different mechanism in which long-range order is destroyed as a single topological defect driven by a strong external force becomes unstable and triggers a cascade of expanding pairs of topological defects of opposite polarity.
Methods
We have developed an efficient MATLAB numerical code to solve the main integro-differential equation (2) using the method of lines 38. By discretizing the integral term in equation (2), it was reduced to a set of coupled nonlinear ordinary differential equations in time, which were solved by the multistep, variable order Adams-Bashforth-Moulton method 39. We have checked our numerical results using a slower iterative method to make sure that the logarithmic singularity of G(x − u) is handled properly; the absolute and relative error tolerances were kept below 10⁻⁶. The length L_b of the computational box x_1 < x < x_1 + L_b along the x-axis (either co-moving with the vortex or expanding with the phase pile) was taken large enough to assure no artifacts coming from possible reflected waves at x = x_1 and x = x_1 + L_b. We made sure that changing L_b does not affect the results; L_b was typically taken at least three times larger than the spatial extent of θ(x, t), be it a single vortex or an expanding phase pile. The steady state phase distribution θ(x − vt) in a uniformly moving vortex at a given β was computed by solving the full dynamic equation (2), using the single-vortex solution calculated at a smaller preceding value of β as an initial condition. The code then ran until the velocity of the vortex stabilized to an accuracy better than 0.1%.
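The workflow described above can be sketched as follows, in Python rather than the authors' MATLAB, and for the local sine-Gordon equation (1) instead of the nonlocal equation (2) (whose kernel integral would replace the θ″ term); all parameters here are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch for the damped, driven LOCAL sine-Gordon equation
#   theta_tt = theta_xx - sin(theta) - eta*theta_t + beta
# (a stand-in for the nonlocal equation (2); illustrative parameters only).
eta, beta = 0.1, 0.1
L_b, N = 80.0, 400                        # box length and grid points
x = np.linspace(-L_b / 2, L_b / 2, N)
dx = x[1] - x[0]

theta0 = 4.0 * np.arctan(np.exp(x))       # static kink as the initial condition
s0 = np.concatenate([theta0, np.zeros(N)])  # state vector = (theta, theta_t)

def rhs(t, s):
    th, v = s[:N], s[N:]
    th_xx = np.empty(N)
    th_xx[1:-1] = (th[2:] - 2.0 * th[1:-1] + th[:-2]) / dx**2
    th_xx[0] = 2.0 * (th[1] - th[0]) / dx**2       # Neumann (theta' = 0) edges,
    th_xx[-1] = 2.0 * (th[-2] - th[-1]) / dx**2    # consistent with flat tails
    return np.concatenate([v, th_xx - np.sin(th) - eta * v + beta])

sol = solve_ivp(rhs, (0.0, 30.0), s0, method="RK45", rtol=1e-6, atol=1e-8)
th = sol.y[:N, -1]
# Kink center: where theta crosses pi above the beta-shifted background arcsin(beta).
center = x[np.argmin(np.abs(th - (th[0] + np.pi)))]
print(center)  # the driven kink has drifted away from x = 0
```

In this local, underdamped run the kink simply accelerates to a steady drift velocity set by the balance of the driving force and the viscous drag; a production code for equation (2) would discretize the kernel integral in place of the second difference and use a co-moving or expanding box, as described above.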
"Physics"
] |
Recent Developments in Neutrino/Antineutrino - Nucleus Interactions
Recent experimental results and developments in the theoretical treatment of neutrino-nucleus interactions in the energy range of 1-10 GeV are discussed. Difficulties in extracting neutrino-nucleon cross sections from neutrino-nucleus scattering data are explained, and the significance of understanding nuclear effects for neutrino oscillation experiments is stressed. Detailed discussions of the status of the two-body current contribution in the kinematic region dominated by quasi-elastic scattering, and of specific features of partonic nuclear effects in weak DIS scattering, are presented.
Introduction
Recent interest in neutrino interactions in the few GeV energy region comes from neutrino oscillation experiments and their need to reduce systematic errors. Neutrino fluxes used in contemporary long and short baseline experiments (K2K, T2K, MINOS, NOvA, MiniBooNE) are peaked in the 1-5 GeV energy domain, and during the last ∼10 years there has been considerable theoretical and experimental activity in the investigation of neutrino cross sections in this domain, with reference [1] being a good summary of the lower-energy situation. Several new cross section measurements have been performed by neutrino oscillation collaborations, and two dedicated cross section experiments (SciBooNE and MINERvA) have been launched at Fermilab.
Even with this degree of activity, the precision with which the basic neutrino-nucleon cross sections are known is still not better than 20−30%. There are two main reasons for this: the poor knowledge of neutrino fluxes and the fact that all the recent cross section measurements have been performed on nuclear targets. It is important to recall that what current neutrino experiments measure are events that are a convolution of energy-dependent neutrino flux ⊗ energy-dependent cross section ⊗ energy-dependent nuclear effects. The experiments have, for example, measured an effective neutrino-carbon cross section, and extracting a neutrino-nucleon cross section from such measurements requires a separation of nuclear physics effects that can be done with only limited precision. For many oscillation experiments, using the same nuclear targets for their near and far detectors is a good start. However, even with the same nuclear target near and far, oscillations make the near and far neutrino energy spectra different, so the convolution of cross section ⊗ nuclear effects differs between the two detectors and there is no automatic cancellation. For a thorough comparison of measured neutrino-nucleon cross sections with theoretical models, these convoluted effects have to be understood.
Some of the new cross section measurements have raised doubts in areas which seemed to be well understood. The list of new puzzles is quite long and seems to be expanding. What is the value of the quasielastic axial mass? How large is the two-body current contribution that can mimic genuine quasielastic interactions? How large is CC (charged current) coherent pion production at a few GeV neutrino energies? What is behind the large discrepancy between MiniBooNE pion production measurements and theoretical model predictions? It can be seen as a paradox that the more than 30-year-old ANL and BNL low-statistics deuterium pion production data, with its minimal nuclear corrections, is still used as the best source of information about the nucleon-∆ transition matrix element.
Analysis of neutrino scattering data is certainly more complicated than the analysis of electron scattering data. In the electron case one knows exactly the initial electron energy and thus also the values of energy and momentum transfer. It is then possible to explicitly study separate interesting kinematical regions like the QE (quasielastic) peak or the ∆ peak. Neutrino scattering data is always flux (often wide band!) integrated. The interacting neutrino energy must be evaluated based on the kinematics of particles in the final state, taking into account detector acceptance and measurement accuracy.
For neutrino-nucleon interactions one can distinguish: Charged Current Quasielastic (CCQE), Neutral Current elastic (NCEl), Resonance production (RES), and more inelastic reactions up to the deep-inelastic domain (a rather misleading "DIS" term is often used to describe all the interactions which are neither CCQE/NCEl nor RES). Quite different theoretical tools are used to model each of them. The simplest neutrino hadronic reaction is the charged current quasielastic (CCQE) interaction ν_ℓ + n → ℓ⁻ + p, with two particles, a charged lepton and a proton, in the final state. One would like to extend this definition to neutrino-nucleus interactions occurring on bound neutrons. The obvious question arises: what is the experimental signature of CCQE on a nuclear target? The ejected proton is not necessarily seen in a detector because quite often its momentum is below the acceptance threshold. However, events with a single reconstructed charged lepton track can result from a variety of initial interactions, e.g. from a two-body current interaction or from real pion production and its subsequent absorption. Similar problems arise in other types of interactions. It is becoming clear that interpretation of neutrino-nucleus interactions must rely on a careful data/Monte Carlo (MC) comparison done with reliable MC neutrino event generators. This is why we decided to include in the review some information about the development of MC event generators.
From the experimental point of view it is natural to speak about events with no pions in the final state, with only one pion, etc. In fact, in several recent experimental measurements that investigated quantities defined in this way, the dependence on assumptions of Monte Carlo event generators was minimal. To compare with experimental data given in this format one must add contributions from various dynamical mechanisms and also model FSI effects. Several ingredients of the theoretical models are thus verified simultaneously. It is clear that in order to validate a model one needs many samples of precise neutrino-nucleus scattering measurements on a variety of nuclear targets with various neutrino fluxes.
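The topology-based bookkeeping described above ("no pions", "one pion", ...) amounts to counting visible pions in the final state; a toy sketch, where the event representation (lists of PDG codes) is hypothetical and for illustration only:

```python
from collections import Counter

def topology(final_state_pdg):
    """Classify an event by its visible pion count.
    PDG codes: 211 = pi+, -211 = pi-, 111 = pi0 (event format is a toy assumption)."""
    n_pi = sum(1 for p in final_state_pdg if abs(p) in (211, 111))
    if n_pi == 0:
        return "0pi (CCQE-like)"
    if n_pi == 1:
        return "1pi"
    return "Npi"

events = [
    [13, 2212],               # mu-, p            -> CCQE-like
    [13, 2212, 211],          # mu-, p, pi+       -> 1pi
    [13, 2212, 111, -211],    # mu-, p, pi0, pi-  -> Npi
    [13, 2112],               # mu-, n (e.g. after pion absorption) -> CCQE-like
]
print(Counter(topology(e) for e in events))
```

Note that the fourth event illustrates exactly the ambiguity raised above: a pion produced and then absorbed lands in the "0pi" class even though the primary interaction was not quasielastic.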
Our review is organized as follows: we first review recent inclusive measurements in the lower E_ν region and then concentrate on exclusive states in increasing W, the invariant mass of the hadronic system. Due to the limited length of this review, we have to limit our coverage to only the most recent developments.
Neutrino Charged Current and Neutral Current Inclusive Reactions
Recent measurements
There are four recent CC inclusive neutrino and antineutrino cross section measurements in the E_ν ≤ 10 GeV energy region [2]; see Fig. 1. We notice a mild tension between the SciBooNE and T2K measurements. In the following sections QE, RES and DIS contributions will be discussed separately.
Theory. General formulae: outgoing lepton differential cross sections
In this paper, we will discuss the neutrino CC or NC (neutral current) inclusive reaction ν_ℓ(k) + A → ℓ(k′) + X. (1) The generalization of the expressions to antineutrino induced reactions is straightforward. In the equation above, the outgoing lepton ℓ can be either a negatively charged lepton ℓ⁻ of flavor ℓ, or a neutrino ν_ℓ, for CC or NC processes, respectively. The double differential cross section with respect to the outgoing lepton kinematical variables for the process of Eq. (1) is given in the Laboratory (LAB) frame by the contraction of the leptonic and hadronic tensors, with k and k′ the LAB lepton momenta, E′ = (k′² + m²)^{1/2} and m the energy and the mass of the outgoing lepton, G_F = 1.1664 × 10⁻¹¹ MeV⁻² the Fermi constant, and L and W the leptonic and hadronic tensors, respectively. Besides, η takes the values 1 or 4 for CC or NC processes, respectively. The leptonic tensor is the standard one (in this convention, ε_{0123} = +1 and the metric is g^{µν} = (+, −, −, −)). The hadronic tensor includes a collection of non-leptonic vertices and corresponds to the charged or neutral electroweak transitions of the target nucleon or nucleus, i, to all possible final states. It is thus given by a sum over all hadronic final states f, with P^µ the four-momentum of the initial target, M_i² = P² the target mass squared, P_f the total four-momentum of the hadronic state f, and q = k − k′ the four-momentum transferred to the hadronic system. The bar over the sum denotes the average over initial spins.
The hadronic tensor is completely determined by six independent, Lorentz-scalar and real structure functions W_i(q², q·P). Taking q in the z direction and P^µ = (M_i, 0), it is straightforward to find the six structure functions in terms of the W^{00}, W^{xx} = W^{yy}, W^{zz}, W^{xy} and W^{0z} components of the hadronic tensor. After contracting with the leptonic tensor, one finds that for massless leptons only three of them are relevant, entering through the nuclear structure functions F^ν_{1,2,3}, with E_ν the incoming neutrino energy, M the nucleon mass, x = −q²/(2Mq⁰) and y = q⁰/E_ν. The cross section for the CC antineutrino induced nuclear reaction is easily obtained by i) changing the sign of the parity-violating term, proportional to F_3, in the differential cross section, Eq. (6) (this follows from the interchange L_{µσ} ↔ L_{σµ} in the leptonic tensor), and ii) using j^µ_{cc−} = (j^µ_{cc+})† in the definition/computation of the hadron tensor in Eq. (4). In the case of antineutrino NC driven processes, it is only necessary to flip the sign of the term proportional to F_3 in the differential cross section, since the hadron NC is not affected.
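For concreteness, the massless-lepton cross section and the F_3 sign flip for antineutrinos can be sketched with the textbook expression d²σ/dxdy = (G_F²ME_ν/π)[xy²F_1 + (1 − y − Mxy/2E_ν)F_2 ± xy(1 − y/2)F_3], with "+" for neutrinos; this is a standard form consistent with the description above, not the paper's exact Eq. (6), W-propagator effects are neglected, and the structure-function values below are arbitrary placeholders:

```python
import math

G_F = 1.1664e-11   # Fermi constant in MeV^-2, as quoted in the text
M   = 939.0        # nucleon mass, MeV

def d2sigma_dxdy(E_nu, x, y, F1, F2, F3, antineutrino=False):
    """Massless-lepton CC double differential cross section (units MeV^-2).
    Only the parity-violating F3 term changes sign for antineutrinos,
    as stated in the text."""
    sign = -1.0 if antineutrino else 1.0
    bracket = (x * y * y * F1
               + (1.0 - y - M * x * y / (2.0 * E_nu)) * F2
               + sign * x * y * (1.0 - 0.5 * y) * F3)
    return G_F**2 * M * E_nu / math.pi * bracket

# The nu / nubar difference isolates the F3 contribution:
E, x, y = 3000.0, 0.3, 0.5          # E_nu = 3 GeV; placeholder kinematics
s_nu  = d2sigma_dxdy(E, x, y, F1=1.0, F2=0.6, F3=0.9)
s_nub = d2sigma_dxdy(E, x, y, F1=1.0, F2=0.6, F3=0.9, antineutrino=True)
print(s_nu > s_nub)  # True for F3 > 0
```

Taking the difference s_nu − s_nub cancels the F_1 and F_2 terms exactly, which is the usual way the parity-violating structure function is isolated.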
The hadronic tensor is determined by the W or Z gauge boson selfenergy, Π^{µρ}_{W,Z}(q), in the nuclear medium. Evaluating this object requires a theoretical scheme in which the relevant degrees of freedom and nuclear effects can be taken into account.
In the next two sections we will discuss the CCQE and pion production reactions. The general formalism described above will be used in the section devoted to DIS.
Charged Current Quasielastic
As discussed in the Introduction, we define CCQE as the reaction on a free nucleon, or on a quasi-free nucleon inside a nucleus, yielding a muon and a nucleon. In the case of neutrino-nucleus scattering we also use the term CCQE-like reaction, defined as one in which there are no pions in the final state. It then includes events with real pion production followed by absorption. Such a definition may seem awkward but, as will be seen, it is close to what was experimentally measured by the MiniBooNE collaboration.
A theoretical description of the free nucleon target CCQE reaction is based on the conserved vector current (CVC) and the partially conserved axial current (PCAC) hypotheses. The only unknown quantity is the nucleon axial form factor G_A(Q²), for which one typically assumes a dipole form G_A(Q²) = g_A(1 + Q²/M_A²)⁻² with one free parameter, the axial mass M_A. Non-dipole axial form factors were investigated e.g. in [3].
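The dipole parametrization is a one-liner; the g_A value below is a standard number inserted for illustration, and the M_A values simply contrast the world-average-like and MiniBooNE-like fits discussed below:

```python
# Dipole parametrization of the nucleon axial form factor, as in the text:
#   G_A(Q^2) = g_A / (1 + Q^2/M_A^2)^2
g_A = 1.267  # axial coupling at Q^2 = 0 (standard value, used here for illustration)

def G_A(Q2, M_A=1.03):
    """Q2 in GeV^2, M_A in GeV."""
    return g_A / (1.0 + Q2 / M_A**2)**2

# M_A controls how fast the form factor falls with Q^2: a larger axial mass
# (e.g. ~1.2-1.35 GeV, as in some fits discussed below, vs ~1 GeV) gives a
# harder Q^2 dependence and hence a larger predicted CCQE cross section.
print(G_A(0.0), G_A(1.0, M_A=1.03), G_A(1.0, M_A=1.35))
```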
The difference between the MiniBooNE and NOMAD measurements could come from different definitions of the CCQE signal. In the case of MiniBooNE a sample of 2-subevent events (Cherenkov light from the muon and from the decay electron) is analyzed, and ejected protons are not detected. In the case of NOMAD 1-track (muon) and 2-track (muon and proton) samples of events are analyzed simultaneously. With a suitably chosen value of the formation zone parameter τ_0, the values of M_A extracted separately from both data samples are approximately the same; see Table 9 in [10]. We note that the procedure in which the formation zone concept is applied to nucleons that already exist may seem a little controversial. We would also like to mention CCQE data not yet published in peer-reviewed journals. MINOS tried to better evaluate the pion production background [11]. A function of Q² which corrects the Monte Carlo (NEUGEN) RES predictions was proposed. The shape of the curve is similar to MiniBooNE's DATA/MC correction function (see below), but in the case of MiniBooNE the correction factor is > 1 for Q² > 0.1 GeV². The new MINOS best fit value of M_A is 1.16 GeV, and the error was reduced by a factor of 3 with respect to [8]. SciBooNE showed partial results of its CCQE analysis [12]. Results are given in terms of fits for CCQE cross-section DATA/MC multiplicative factors a_j (j labels true neutrino energy bins) and a scaling factor F_N. The obtained best fit values in the neutrino energy region E_ν ∈ (0.6, 1.6) GeV are between 1.00 and 1.09, which with F_N = 1.02 and the value of the axial mass used in the NEUT Monte Carlo generator (1.2 GeV) should translate to an axial mass value M_A ∼ 1.25 − 1.3 GeV. In the SciBooNE analysis there are some instabilities in the wider region of E_ν (see Fig. 11.2 in [13]). The use of a universal background scaling factor a_bcg for three different event samples is perhaps not sufficient (its best fit value is as large as 1.37).
An important antineutrino CCQE measurement was reported by MiniBooNE [14]. The DATA/MC average cross-section ratio was reported to be 1.21 ± 0.12, which is a surprising result because in the NUANCE carbon CCQE computations the M_A value was set to 1.35 GeV. In the experimental analysis it was important to correctly evaluate the neutrino contamination in the antineutrino flux. Three independent measurements indicate that the ν_µ flux in the antineutrino beam should be scaled down by a factor of ∼0.8, with an obvious important impact on the final results.
The most recent MINERvA preliminary results for CCQE antineutrino reaction are still subject to large flux normalization uncertainties but they seem to be consistent with M A = 0.99 GeV [15].
MiniBooNE data
In recent discussions of CCQE, the MiniBooNE measurement plays a special role. For the first time the data was presented in the form of a double differential cross section in muon scattering angle and kinetic energy. Such data is the actual observable for the MiniBooNE experiment and is more complete than a distribution of events in Q², which is calculated assuming an obviously incorrect nuclear model (the nucleon is assumed to be at rest). The signal events form a subset of events with no pions in the final state. MiniBooNE subtracted as background events with real pion production and subsequent absorption, and also a contribution from pionless ∆ decays implemented in the NUANCE MC [16] as constant fractions of ∆⁺⁺ and ∆⁺ decays, following the approach of Ref. [17]. The background estimate, based on MC predictions, was later corrected by a Q²-dependent function which accounts for a data/MC discrepancy in the sample of events containing one π⁺ in the final state. The shape of the correction function is not well understood [18], but it has an important impact on the extracted value of M_A. The function quantifies a lack of understanding of processes like pion absorption and can have a significant effect on the understanding of both samples of events.
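The Q² (and neutrino energy) referred to here is reconstructed from muon kinematics alone under the at-rest-nucleon assumption; a minimal sketch of the standard reconstruction formulas (binding energy and the proton-neutron mass difference are neglected in this sketch), which is exactly the approximation criticized above once the nucleon is bound and has Fermi motion:

```python
import math

m_mu, M_n = 0.1057, 0.9396   # muon and neutron masses, GeV

def E_nu_qe(E_mu, cos_theta):
    """Reconstructed neutrino energy for nu_mu + n -> mu- + p, assuming the
    target neutron is at rest (no binding energy, m_p ~ m_n)."""
    p_mu = math.sqrt(E_mu**2 - m_mu**2)
    return (M_n * E_mu - 0.5 * m_mu**2) / (M_n - E_mu + p_mu * cos_theta)

def Q2_qe(E_mu, cos_theta):
    """Reconstructed Q^2 from the muon kinematics and E_nu_qe."""
    p_mu = math.sqrt(E_mu**2 - m_mu**2)
    return 2.0 * E_nu_qe(E_mu, cos_theta) * (E_mu - p_mu * cos_theta) - m_mu**2

# Example: a 0.6 GeV muon at cos(theta) = 0.9
print(E_nu_qe(0.6, 0.9), Q2_qe(0.6, 0.9))
```

For a bound, moving nucleon these formulas only smear and bias the true kinematics, which is why the double differential cross section in (T_µ, cos θ_µ) is the cleaner observable.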
MiniBooNE also provided data for the CCQE signal plus background together, as a measurement of the cross section of the process in which there are no pions in the final state, an observable which is maximally independent of MC assumptions.
Theoretical approaches to CCQE -generalities
Several approaches have been followed/derived to compute the relevant gauge boson absorption modes (self-energy) to describe the CCQE process. For moderate and intermediate neutrino energies, in the few GeV region, the most relevant ones are: absorption by one nucleon, by a pair of nucleons or even three-nucleon mechanisms, real and virtual meson (π, ρ, ···) production, excitation of ∆ or higher resonance degrees of freedom, etc. (some absorption modes are depicted in Fig. 2 for the case of neutrino CC processes). A review of theoretical model results can be found in [19]. Almost all approaches used at intermediate neutrino energies deal with hadronic, instead of quark and gluon, degrees of freedom. In addition they consider several nuclear effects such as RPA or Short Range Correlations (SRC). The free space couplings between hadrons and/or the weak W and Z bosons are parametrized in terms of form factors, which are fitted to the available data on electroweak scattering off free nucleons. In the few GeV energy region, theoretical models rely on the impulse approximation (IA), and neutrino-nucleus CCQE interactions are viewed as a two-step process: the primary interaction, and Final State Interactions (FSI), the propagation of the resulting hadrons through the nucleus. The validity of the IA is usually related to typical values of the momentum transfer q. Experience from electron scattering tells us that for q > 300−500 MeV/c IA-based models are able to reproduce the data well. Thus, the expectation is that for few-GeV neutrino interactions the IA is an acceptable approach and, if necessary, simpler nuclear model computations can be supplemented with RPA corrections for lower momentum transfers (see below). In neutrino-nucleus cross section measurements a goal is to learn about neutrino free-nucleon scattering parameters (an obvious exception is coherent pion production). Effective parameters like the sometimes-discussed quasielastic axial mass M_A^eff are of little use, as their values can depend on the neutrino flux, the target, and perhaps also on the detection technique/acceptance.
The definition of neutrino-nucleus CCQE scattering can be made more rigorous in the language of many-body field theory. The CCQE process originates from a first-step mechanism where the gauge boson is absorbed by just one nucleon. This corresponds to the first of the selfenergy diagrams depicted in Fig. 2 (contribution (a)). This contribution, which from now on we will call genuine QE, has been computed within different theoretical models and used to predict the corresponding outgoing lepton differential cross section.
The simplest model, commonly used in Monte Carlo event generators, is the relativistic Fermi gas (RFG) model proposed by Smith and Moniz more than 35 years ago [20], corresponding to only one many-body Feynman diagram. The model combines the bare nucleon physics with a model to account for Fermi motion and nucleon binding within the specific nucleus. The model can be made more realistic in many ways 1 to achieve better agreement with a broad range of electron scattering data. For example, the inclusion of a realistic joint distribution of target nucleon momenta and binding energies, based on short-range correlation effects, leads to the spectral function (SF) approach. Spectral functions for nuclei ranging from carbon (A = 12) to iron (A = 56) have been modeled using the Local Density Approximation (LDA) [21], in which the experimental information obtained from nucleon knock-out measurements is combined with the results of theoretical calculations in nuclear matter at different densities, and they have been extensively validated with electron scattering data. Calculations by Benhar et al. [22] and Ankowski et al. [23] show that the SF effects moderately modify the muon neutrino differential cross sections, and they lead to reductions of the order of 15% in the total cross sections. This is corroborated by the results obtained within the semi-phenomenological model (a density-dependent mean-field potential in which the nucleons are bound) [24] employed within the GiBUU model to account for these effects.
Inclusion of nucleon-nucleon long-range correlations leads to the RPA (Random Phase Approximation), which improves predictions at lower momentum transfers (and also low Q²). RPA corrections have been discussed by many authors in the past and were recently included in the computations of three groups (IFIC, Lyon and Aligarh 2) in Refs. [25,26], [27,28], and [29] respectively. When the electroweak interactions take place in nuclei, the strengths of electroweak couplings may change from their free nucleon values due to the presence of strongly interacting nucleons. Indeed, since the nuclear experiments on β decay in the early 1970s [30], the quenching of the axial current is a well-established phenomenon. The RPA re-summation accounts for the medium polarization effects in the 1p1h contribution (Fig. 2(a)) to the W and Z selfenergy by substituting for it a collective response, as shown diagrammatically in the top left panel of Fig. 3. Evaluating these effects requires an in-medium baryon-baryon effective force, which in both sets (IFIC and Lyon) of calculations was successfully used/tested in previous works on inclusive nuclear electron scattering. RPA effects are important, as can be appreciated in the top right panel of Fig. 3. In this plot, we show results from both the IFIC and Lyon models, presented in Refs. [31] and [32] respectively, for the CC quasielastic ν_µ−¹²C double differential cross sections convoluted with the MiniBooNE flux [33]. There, we also see that the predictions of both groups for this genuine QE contribution, with and without RPA effects, turn out to be in quite good agreement. Finally, it is important to stress that RPA corrections strongly decrease as the neutrino energy increases, while their effects should account for the low-Q² deficit of CCQE events reported by several experimental groups (see bottom panels of Fig. 3). Continuum RPA (CRPA) computations for neutrino scattering were performed by the Ghent group [34].
Other theoretical developments
In [35,36,37] the bound-state wave functions are described as self-consistent Dirac-Hartree solutions, derived within a relativistic mean field approach by using a Lagrangian containing σ and ω mesons [38].

Figure 3: Top left: RPA re-summation of the 1p1h contributions to the W or Z self-energies. Top right: MiniBooNE flux-averaged CC quasielastic νµ−12C double differential cross section per neutron for 0.8 < cos θµ < 0.9 as a function of the muon kinetic energy. Bottom: Different theoretical predictions for the muon neutrino CCQE total cross section off 12C, as a function of the neutrino energy (left) and q² (right), obtained from the relativistic model of Ref. [25]. In all cases MA ∼ 1.05 GeV.
This scheme also accounts for some SF effects. Moreover, these models also incorporate the FSI between the ejected nucleon and the residual nucleus. The final nucleon is described either as a scattering solution of the Dirac equation [36,37] in the presence of the same relativistic nuclear mean field potential applied to the initial nucleon, or by adopting a relativistic multiple-scattering Glauber approach [35].
The relativistic Green's function model [39] is also appropriate to account for FSI effects between the ejected nucleon and the residual nucleus in inclusive scattering, where only the outgoing lepton is detected. There, all final-state channels are included; the flux lost in each channel is recovered in the other channels through the imaginary part of an empirical optical potential, and the total flux is thus conserved.
Another interesting approach starts with a phenomenological model for the neutrino interactions with nuclei that is based on the superscaling behavior of electron scattering data. Analyses of inclusive (e, e′) data have demonstrated that for momentum transfers q ≳ 500 MeV/c at energy transfers below the QE peak, superscaling is fulfilled rather well [40]. The general procedure consists of dividing the experimental (e, e′) cross section by an appropriate single-nucleon cross section to obtain the experimental scaling function, which is then plotted as a function of a certain scaling variable for several kinematics and for several nuclei. If the results do not depend on the momentum transfer q, scaling of the first kind occurs; if there is no dependence on the nuclear species, one has scaling of the second kind. The simultaneous occurrence of scaling of both kinds is called superscaling. The superscaling property is exact in RFG models, and it has been tested in more realistic models of the (e, e′) reaction. The Super-Scaling approach (SuSA) is based on the assumed universality of the scaling function for electromagnetic and weak interactions [41]. The scaling function thus determined from (e, e′) data is then directly taken over to neutrino interactions [41,42]. There are no RPA correlations or SF corrections explicitly taken into account, but they may be contained in the scaling function. Nevertheless, such an approach is far from being microscopic. Moreover, it is difficult to estimate its theoretical uncertainties, for example to what extent the quenching of the axial current, which is due to RPA corrections, is accounted for by means of scaling functions determined in (e, e′) experiments, which are driven by the vector current.
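In the RFG limit the scaling function has the closed form f(ψ) = (3/4)(1 − ψ²) for |ψ| < 1 and vanishes otherwise; this is the reference curve against which the experimental scaling function extracted from (e, e′) data is usually compared. A small sketch in Python, checking that this function carries unit normalization in ψ:

```python
def f_rfg(psi):
    """RFG superscaling function: f(psi) = 3/4 (1 - psi^2) for |psi| < 1, else 0."""
    return 0.75 * (1.0 - psi * psi) if abs(psi) < 1.0 else 0.0

def integrate(f, a, b, n=100000):
    """Plain midpoint rule; good enough for this piecewise-smooth integrand."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

norm = integrate(f_rfg, -1.5, 1.5)  # should be very close to 1
```

When superscaling holds, the (e, e′) data divided by the single-nucleon cross section collapse, for all q and all nuclei, onto a single curve of this general shape (the measured curve is asymmetric, with a tail at positive ψ, which is one motivation for the phenomenological SuSA function).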
Theoretical models versus MiniBooNE 2D data
The MiniBooNE data [9] have been quite surprising. Firstly, the absolute values of the cross section are too large compared to the consensus of theoretical models [19,43]. Actually, the cross section per nucleon on 12C is clearly larger than for free nucleons. Secondly, their fit to the shape (excluding normalization) of the Q² distribution, done within the RFG model, leads to an axial mass, MA = 1.35 ± 0.17 GeV, much larger than the previous world average (≈ 1.03 GeV) [5,10]. Similar results have since been obtained analyzing the MiniBooNE data with more sophisticated treatments of the nuclear effects that work well in the study of electron scattering. For instance, Refs. [44,45], using the impulse approximation with state-of-the-art spectral functions for the nucleons, fail to reproduce the data with standard values of MA. Large axial mass values have also been obtained in Ref. [46], where the 2D differential cross section was analyzed for the first time using the RFG model and a spectral function. Similar results were obtained in Ref. [47], where the data were analyzed in a relativistic distorted-wave impulse approximation supplemented with an RFG model.
Multinucleon mechanisms
A plausible solution to the large axial mass puzzle was first pointed out by M. Martini et al. [27,28], and later corroborated by the IFIC group [31,49]. In the MiniBooNE measurement of Ref. [9], QE is related to processes in which only a muon is detected in the final state. As already discussed above, besides genuine QE events, this definition includes multinucleon processes (Fig. 2(e)), where the gauge boson is absorbed by two or more nucleons, and others like real pion production followed by absorption (Fig. 2(c) and (d)). The MiniBooNE analysis of the data attempts to correct (through a Monte Carlo estimate) for some of these latter effects, such as real pion production that escapes detection through reabsorption in the nucleus, leading to multinucleon emission. But it seems clear that to describe the data of Ref. [9], it is necessary to consider, at least, the sum of the self-energy diagrams depicted in Figs. 2(a) and (e). These correspond to the genuine QE (absorption by just one nucleon) and the multinucleon contributions, respectively. The sum of these two contributions makes up the CCQE-like cross section (footnote 5).
The inclusion of the 2p2h contributions enables [31,32] the double differential cross section d²σ/dEµ d cos θµ and the integrated flux unfolded cross section (footnote 6) measured by MiniBooNE to be described with values of MA (nucleon axial mass) around 1.03 ± 0.02 GeV [5,10]. This is reassuring from the theoretical point of view and more satisfactory than the situation envisaged by some other works that described the MiniBooNE data in terms of a larger value of MA, of around 1.3-1.4 GeV, as mentioned above.
Similarities and differences between multinucleon ejection models

Figure 4: MiniBooNE flux-averaged CC quasielastic νµ−12C double differential cross section per neutron for 0.8 < cos θµ < 0.9, as a function of the muon kinetic energy. Experimental data from Ref. [9] are multiplied by 0.9. In all the cases MA ∼ 1.05 GeV.
As shown in the top panel of Fig. 3, the IFIC group predictions [31,49] for QE cross sections agree quite well with those obtained in Refs. [27,28,32] (Lyon group). However, the two approaches presented above differ considerably (by about a factor of two) in their estimation of the size of the multinucleon effects, as can be appreciated in Fig. 4. IFIC predictions, when the 2p2h contribution is included, favor a global normalization scale of about 0.9 (see [31]). This is consistent with the MiniBooNE estimate of a total normalization error of 10.7%. The IFIC evaluation in [49,31] of multinucleon emission contributions to the cross section is fully microscopic and contains terms which were either not considered or only approximately taken into account in [27,28,32]. Indeed, the results of these latter works rely on a computation of the 2p2h mechanisms for the (e, e′) inclusive reaction [50], whose results are simply carried over to neutrino induced processes without modification. Thus, it is clear that these latter calculations do not contain any information on axial or axial-vector contributions (footnote 7). For antineutrinos the IFIC model predicts, contrary to the results of the Lyon group, also a sizeable effect of 2p2h excitations.

Footnote 5: Also for simplicity, we will often refer to the multinucleon mechanism contributions, though they include effects beyond gauge boson absorption by a nucleon pair, as 2p2h (two particle-hole) effects.
Footnote 6: We should warn the reader that, because of the multinucleon mechanism effects, the algorithm used to reconstruct the neutrino energy is not adequate when dealing with quasielastic-like events, and a distortion of the total flux unfolded cross section shape could be produced. We will address this point in Subsect. 0.3.5.
Footnote 7: The evaluation of the nuclear response induced by these 2p2h mechanisms carried out in Ref. [27] is approximate, as acknowledged there. Only the contributions in [27] that can be cast as a ∆-selfenergy diagram should be quite similar to those derived in [49] by the IFIC group, since in both cases the results of Ref. [17] for the ∆-selfenergy are used.
Another microscopic approach to 2p2h excitations was proposed by Amaro et al. These authors have used the empirical (e, e′) SuSA scaling function to describe the CCQE MiniBooNE data, including some 2p2h contributions due to MEC (meson exchange currents) [51,52]. The approach used in these latter works to evaluate the 2p2h effects, though fully relativistic, does not contain the axial contributions. The authors of [51,52] also find an increase of the inclusive cross section for neutrinos; at forward muon angles the calculations come close to the data, but the MEC contributions die out quickly with increasing angle, so that the cross section is significantly underestimated at backward angles. As a consequence, the energy-separated (flux unfolded) cross section obtained for the MiniBooNE experiment, while higher than that obtained from SuSA alone, still underestimates the experimental result even when 2p2h contributions are added. Recently, a strong difference between neutrino and antineutrino cross sections has been obtained within this model, with the 2p2h effects being significantly larger for antineutrinos than for neutrinos [52].
Two other effective models to account for MEC/2p2h effects have been proposed by Bodek et al. [53] [the transverse enhancement model (TEM)] and Lalakulich et al. [54]. The TEM can easily be implemented in MC event generators [55]. It assumes that it is sufficient to properly describe an enhancement of the transverse electron QE response function, keeping all other ingredients as in the free nucleon target case. Thus, some effective proton and neutron magnetic form factors are fitted to electron-nucleus data and later used, together with the free nucleon axial current, to study CCQE processes. That is to say, the TEM assumes that there are no nuclear medium effects (RPA, 2p2h mechanisms, etc.) affecting those nuclear response functions induced by the nucleon axial-vector current. Despite a certain phenomenological success in describing the MiniBooNE data [53,55], such an assumption seems quite unjustified.
In the model of Ref. [54], the multinucleon mechanism contributions are parametrized as phase space multiplied by a constant, which is fitted to the difference between the energy-separated MiniBooNE data and the calculated QE cross section. RPA effects are not taken into account in [54]. Since these tend to lower the cross section, in particular at forward muon angles, the model of [54] underestimates the contributions of 2p2h effects there. Indeed, the authors of this reference find that the shape and overall size of the 2p2h contribution turn out to be rather independent of the muon angle. This is in sharp contrast with the microscopic results obtained within the IFIC [49,31] and SuSA models [52], which find that the 2p2h contribution becomes significantly less important as the muon scattering angle increases.
Perspectives to measure the MEC/2p2h contribution
An unambiguous experimental measurement of the MEC contribution to the CC inclusive cross section can be made by detecting hadrons in the final state. Up to now, all the microscopic models provide only the MEC/2p2h contribution to the inclusive muon 2D differential cross section d²σν/dΩ(k′)dE′. Such models cannot describe detailed exclusive cross sections (looking at the nucleon side), as explicit FSI effects, which modify the outgoing nucleon spectra, have not yet been addressed in these microscopic models. It is reasonable to assume that at the level of the primary reaction mechanism they produce only slight changes in d²σν/dΩ(k′)dE′, leaving the integrated cross sections almost unchanged [22,23].
A model to describe hadrons in the final state was proposed in [55]. It was implemented in the NuWro MC event generator, and its predictions were used in the analysis of recent MINERvA antineutrino CCQE data.
In the papers [54,55] various observables are discussed which can be used to detect the MEC contribution. One option is to look at proton pairs in the final state. Another possibility is to investigate the distribution of visible energy, which allows one to include contributions from protons below the reconstruction threshold. The basic intuition from electron scattering is that MEC events populate the region between the QE and ∆ peaks. Typically, to produce a MEC event more energy must be transferred to the hadronic system than for a CCQE one. However, it should be stressed that the precision with which FSI effects are currently handled in MC codes can make such a measurement difficult. During the last few years FSI studies have focused on pions only [56], aiming at understanding recent pion production data on nuclear targets [57]. Nucleons in the final state were never studied with a similar precision, so there is less data to benchmark nucleon FSI effects.
Monte Carlo event generators
Monte Carlo codes (GENIE, NuWro, Neut, Nuance, etc.) describe CCQE events using a simple RFG model, with FSI effects implemented by means of a semi-classical intranuclear cascade. NuWro also offers the possibility to run simulations with a spectral function and an effective momentum-dependent nuclear potential. It is by now also the only MC generator with an implementation of MEC dynamics. Since the primary interaction and the final state effects are effectively decoupled, FSI do not change the total and outgoing lepton differential cross sections.
Neutrino energy reconstruction
Neutrino oscillation probabilities depend on the neutrino energy, which is unknown for broad fluxes and often estimated from the measured angle and energy of the outgoing charged lepton only. This is the situation of experiments with Cherenkov detectors, where protons in the final state are usually below the Cherenkov threshold. Then, it is common to define a reconstructed neutrino energy E_rec (neglecting binding energy and the difference of proton and neutron masses) as:

E_rec = (2 M E′ − m_µ²) / [2 (M − E′ + |p′| cos θ′)]   (7)

which would correspond to the energy of a neutrino that emits a lepton of energy E′ and three-momentum p′, with a gauge boson W being absorbed by a free nucleon of mass M at rest in a CCQE event. Each event contributing to the flux averaged double differential cross section dσ/dE′ d cos θ′ defines unambiguously a value of E_rec. The actual ("true") energy, E, of the neutrino that has produced the event will not be exactly E_rec. Actually, for each E_rec there exists a distribution of true neutrino energies that give rise to events whose muon kinematics would lead to the given value of E_rec. In the case of genuine QE events, this distribution is sufficiently peaked around the true neutrino energy (Fermi motion broadens the peak and binding energy shifts it a little) to make the algorithm in Eq. (7) accurate enough to study the neutrino oscillation phenomenon [58] or to extract neutrino flux unfolded CCQE cross sections from data (assuming that the neutrino flux spectrum is known) [59,60]. The effect of this assumption on the much more demanding measurement of CP-violation effects is currently being evaluated. However, due to the presence of multinucleon events, there is a long tail in the distribution of true energies associated with each E_rec that makes the use of Eq. (7) unreliable. The effects of the inclusion of multinucleon processes on the energy reconstruction have been noticed in [55] and investigated in Ref. [59], within the Lyon 2p2h model, and also estimated in Ref. [61], using the simplified model of Ref. [54] for the multinucleon mechanisms. This issue has more recently also been addressed in the context of the IFIC 2p2h model in Ref. [60], finding results in qualitative agreement with those of Refs. [59] and [61].
Ref. [60] also studies in detail the 12C unfolded cross section published in [9]. It is shown there that the unfolding procedure is model dependent. Moreover, it is also shown that the MiniBooNE published CCQE cross section as a function of neutrino energy differs from the real one. This is because the MiniBooNE analysis assumes that all the events are QE. The authors of [60] finally conclude that the MiniBooNE unfolded cross section exhibits an excess (deficit) of low (high) energy neutrinos, which is mostly an artifact of the unfolding process that ignores multinucleon mechanisms.
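The reconstruction algorithm of Eq. (7) is just the inversion of free-nucleon CCQE kinematics. A minimal sketch in Python (the numerical masses are standard values inserted for illustration), verifying that for E = E_rec the outgoing nucleon is exactly on shell:

```python
import math

M_N = 0.939    # GeV, nucleon mass (neglecting the n-p mass difference)
M_MU = 0.1057  # GeV, muon mass

def e_rec(e_mu, cos_theta):
    """Reconstructed neutrino energy, Eq. (7): free nucleon at rest, binding neglected."""
    p_mu = math.sqrt(e_mu ** 2 - M_MU ** 2)
    return (2.0 * M_N * e_mu - M_MU ** 2) / (2.0 * (M_N - e_mu + p_mu * cos_theta))

# Consistency check: with E = e_rec the outgoing nucleon is on shell,
# i.e. (E + M - E')^2 - |p_nu - p_mu|^2 = M^2.
e_mu, cos_t = 0.6, 0.85
e = e_rec(e_mu, cos_t)
p_mu = math.sqrt(e_mu ** 2 - M_MU ** 2)
w2 = (e + M_N - e_mu) ** 2 - (e ** 2 - 2 * e * p_mu * cos_t + p_mu ** 2)
```

For a genuine QE event this inversion is exact up to Fermi motion and binding; for a 2p2h or pion-absorption event, however, part of the transferred energy goes into unobserved nucleons, so e_rec systematically underestimates the true energy, which is the origin of the long tail discussed above.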
NC elastic
MiniBooNE has also measured the flux integrated NC elastic reaction cross section [62]. Using these data, the best fit value of the axial mass was found to be MA = 1.39 ± 0.11 GeV. The measurement was possible because the MiniBooNE Cherenkov detector can also observe scintillation light from low momentum nucleons. An attempt was made to measure the nucleon strange quark component using the proton enriched sample of events, with a result consistent with zero: ∆s = 0.08 ± 0.26.
Theoretical considerations
The MiniBooNE NCEl data were analyzed in [63]. The fit was done to the Q² distribution of events, with a best fit value of MA equal to 1.28 ± 0.05 GeV. Moreover, the authors of [64] concluded that an axial mass as large as 1.6 GeV is still too small to reproduce the MiniBooNE NCEl data. A critical discussion of this statement can be found in Ref. [65].
The Resonance Region
In the RES region the degrees of freedom are hadronic resonances, the most important being the ∆(1232). Typical final states are those with a single pion. During the last five years several new pion production measurements have been performed. In all of them the targets were nuclei (most often carbon), and interpretation of the data in terms of the neutrino-nucleon cross section needs to account for nuclear effects, which is impossible to do in a model-independent manner. Because of that, it has become standard that the published data include nuclear effects, with FSI being the most uncertain. Perhaps not surprisingly, in several papers the old deuterium ANL and BNL pion production data were re-analyzed, aiming to better understand the pion production reaction on free nucleons. Theoretical models have become more sophisticated, and the major improvement was the development of well-justified mechanisms for the non-resonant contribution in the ∆ region. Some papers addressed the problem of higher resonances, a topic which will be investigated experimentally with future MINERvA results. On the other hand, there has been a lot of activity in the area of coherent pion production, and this subject will be discussed separately.
Experimental Results
NCπ0 Neutral current π0 production (NCπ0) is a background to the νe appearance oscillation signal. One is interested in a π0 leaving the nucleus, and recent experimental data are given in this format, with all the FSI effects included. Signal events originate mostly from: NC1π0 primary interactions with a π0 not affected by FSI, and NC1π+ primary interactions with the π+ transformed into a π0 in a charge exchange FSI reaction. An additional difficulty in interpreting NCπ0 production comes from the coherent (COH) contribution. In the case of MiniBooNE flux neutrino-carbon reactions (<Eν> ∼ 1 GeV) it is estimated to account for ∼20% of signal events [66].
Four recent measurements of NCπ0 production (K2K [67], MiniBooNE neutrinos, MiniBooNE antineutrinos [68], SciBooNE [69]) are complementary. They use three different fluxes (K2K, Fermilab Booster neutrinos and anti-neutrinos) and three targets: H2O (K2K), CH2 (MiniBooNE) and C8H8 (SciBooNE). MiniBooNE presented the results in the form of absolutely normalized cross sections, while K2K and SciBooNE reported only the ratios σ(NC1π0)/σ(CC). There is an important difference in what was actually measured: K2K and MiniBooNE present their results as measurements of final states with only one π0 and no other mesons. SciBooNE defines the signal as states with at least one π0 in the final state, so that a contamination from 1π01π±, 2π0 and >2π (with >1π0) final states is included; its fraction can be estimated to be 17% [57]. Final results are presented as flux averaged distributions of events as a function of the π0 momentum, and in the case of MiniBooNE and SciBooNE also as a function of the π0 production angle.
CCπ+ MiniBooNE measured CC 1π+ production cross sections, where the signal is defined as exactly one π+ in the final state with no other mesons [70]. A variety of flux integrated differential cross sections, often double differential, were reported in Q² and in final-state particle momenta. Absolute π+ production cross sections as a function of neutrino energy are also provided in Ref. [70]. The cross section results are much larger than the NUANCE MC predictions, with an average difference of 23%. MiniBooNE also measured CC 1π0 production cross sections. As before, the signal is defined as exactly one π0 in the final state [71]. Various differential distributions are available. There is a dramatic discrepancy between the measured CC 1π0 production cross section as a function of neutrino energy and the NUANCE MC predictions in the region of lower energies. On average the data are larger by 56 ± 20%, but for Eν < 1 GeV the disagreement is as large as a factor of 2. In Fig. 5, on the right, GiBUU predictions for CCπ+ are compared to the MiniBooNE data.
Ratio σ(CC1π+)/σ(CCQE) Another useful MiniBooNE measurement was the ratio σ(CC1π+)/σ(CCQE) [72]. The ratio of CC1π+-like (one pion in the final state) to CCQE-like cross sections on CH2 as a function of neutrino energy was measured with an accuracy of ∼10% in the bins with highest statistics. This measurement puts constraints on the theoretical models which include QE, ∆ excitation and MEC/2p2h dynamics. But still, in order to compare theoretical model predictions to these data, FSI effects must be included. To make such a comparison easier, MiniBooNE also provided FSI-corrected data representing the ratio of CC1π+/CCQE cross sections at the primary interaction. The corrected results are biased by MC assumptions, and in particular they neglect most of the MEC/2p2h contribution, which is contained in the QE-like sample of events. Finally, MiniBooNE re-scaled their results in order to obtain data points for an isoscalar target and to enable comparison with the old ANL and also the more recent K2K data [73]. K2K measured a ratio of cross sections on bound nucleons inside the nucleus, corrected for FSI effects. CC1π+ events were not identified on an event-by-event basis.
Theoretical Considerations
Due to nuclear effects a comparison to the new data is possible only for MC event generators, sophisticated computation tools like GiBUU and also a few theoretical groups which are able to evaluate FSI effects.
Most of the interesting work was done within GiBUU. It turned out to be very difficult to reproduce the MiniBooNE CC1π+ and CC1π0 results: the measured cross section is much larger than the theoretical computations. In the case of CC 1π+ production the discrepancy is as large as 100%. It was also noted that the reported shape of the distribution of π+ kinetic energies differs from the theoretical calculations and does not show a strong decrease at Tπ+ > 120 MeV, the region of maximal probability for pion absorption.
The authors of [74] mention three possible reasons for the discrepancy between the data and the GiBUU predictions: (i) the fact that the ∆ excitation axial form factor was chosen to agree with the ANL data only, neglecting the larger cross section BNL measurements; (ii) a hypothetical 2p-2h-1π pion production contribution, analogous to the 2p-2h one discussed in Sect. 0.3.3; (iii) flux underestimation in the MiniBooNE experiment. For the last point, the argument gains support from the better data/theory agreement found for the ratio, as discussed below.
In the case of NCπ0 production, a systematic comparison was done with NuWro MC predictions with an updated FSI model for pions [57]. The overall agreement is satisfactory. The shapes of the distributions of final state π0's are affected by an interplay between pion FSI, such as absorption, and formation time effects, understood here as an effect of the finite ∆ life-time. It is argued that NCπ0 production data can be very useful for benchmarking neutrino MC event generators. Because of the apparent data/MC normalization discrepancy for CC π+ production, the interesting data is that for the ratio σ(CC1π+-like)/σ(CCQE-like). This observable is free from the overall flux normalization uncertainty. However, it is not a directly observable quantity, because in the experimental analysis it is necessary to reconstruct the neutrino energy, and the procedures applied for the denominator and numerator are different. Three theoretical predictions for the ratio were published. The Giessen group compared to the MiniBooNE ratio data using the model described in [75], with the FSI effects modeled by the GiBUU code [76]. There is a significant discrepancy between the model and the data points: the calculated ratio is smaller. For the K2K data, the GiBUU model computations are consistent with the experimental results.
The σ(CC1π+)/σ(CCQE) ratio was also analyzed in Ref. [77]. In this analysis many nuclear effects were included: the in-medium ∆ self-energy (both real and imaginary parts), FSI effects within the cascade model of Ref. [78], RPA corrections for the CCQE channel, etc. The computations did not include contributions from the non-resonant background or from higher resonances. The contribution from coherent pion production, evaluated with the model of Ref. [79] (about 5% of the π+ signal, a surprisingly large fraction), was also included. The model predictions agree with the MiniBooNE measurement for Eν < 1 GeV and are below the MiniBooNE data for larger neutrino energies.
Finally, NuWro MC results for the ratio given in Ref. [80] are slightly below the data points for larger neutrino energies.
Theoretical Analyses
It has been known since the ANL and BNL pion production measurements that, although it is the dominant mechanism, ∆ excitation alone cannot reproduce the data, and that nonresonant background terms must be included in the theoretical models. There were many attempts in the past to develop suitable models, but usually they were not very well justified from the theoretical point of view.
Nonresonant background
A general scheme to analyze weak pion production in the ∆ region, based on chiral symmetry, was proposed a few years ago in [81]. The model is supposed to work well in the kinematical region W < 1.3-1.4 GeV, i.e. in the ∆ region. The background contribution is particularly important at the pion production threshold, for values of W near M + mπ. Vector form factors are taken from the electroproduction data and fits to helicity amplitudes [82]. Although particularly important for the channels νµ n → µ− p π0 and νµ n → µ− n π+, the background terms also contribute to the channel νµ p → µ− p π+, changing the fitted values of the nucleon-∆ transition matrix elements. A comparison to existing NC pion production data was done as well, and good agreement was found there too. An interesting question raised by the authors of [81] is that of unitarity. Their approach does not satisfy the requirements of the Watson theorem, and this can have some consequences, e.g. worse agreement with the antineutrino pion production data.
The model of the nonresonant background was used by the Giessen group, which made several qualitative comparisons to both the ANL and BNL pion production data in the region W < 1.4 GeV, neglecting deuterium effects [83]. In the case of the neutron channels the model predictions are well below the BNL data points, and this is because the axial form factor parameters were optimized to the ANL data only. This choice goes back to the paper [82], where the authors came to the conclusion that the ANL and BNL data for the ∆++ excitation are not compatible.
Reanalysis of old bubble chamber data
The issue of the nucleon-∆ transition matrix element was also discussed in other papers. The questions are: what is the value of C_5^A(0)? How relevant are deuterium nuclear effects in dealing with the ANL and BNL data? How much tension is there between the two data samples?
In Ref. [81] a fit was done to the ANL data in the ∆++ channel only, with the results: C_5^A(0) = 0.867 ± 0.075 and M_A∆ = 0.985 ± 0.082 GeV. The obtained value of C_5^A(0) was very different from what follows from the off-diagonal Goldberger-Treiman relation (C_5^A(0) ≈ 1.15). The authors of [80] made a fit to both the ANL and BNL data, including in the χ² terms with the overall flux normalization uncertainties, separate for ANL and BNL. In the fit the deuterium nuclear effects were included as correction factors to the Q² distributions of events, using the results of [84]. The main conclusion was that the ANL and BNL data are in fact consistent. This statement was verified in a rigorous way using the parameter goodness-of-fit method [85]. In the dipole parameterization of the C_5^A(Q²) form factor the best fit values were found to be C_5^A(0) = 1.14 ± 0.08 and M_A = 0.95 ± 0.04 GeV. Only the ∆++ channel was analyzed, and as in Ref. [75] non-resonant background contributions were not included.
So far the most complete analysis of both the ANL and BNL data was performed in [86]: a nonresonant background was included, and deuterium effects were also taken into account in a systematic way. The authors made several fits with various assumptions (see Table I), and in fit IV they obtained C_5^A(0) = 1.00 ± 0.11.
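The dipole parameterization used in these fits is simple enough to state directly. A sketch in Python, with the best-fit values quoted above taken as illustrative defaults (the function name and defaults are this sketch's own choices, not from the cited analyses):

```python
def c5a_dipole(q2, c5a0=1.14, ma_delta=0.95):
    """N-Delta axial form factor in the dipole form:
    C5A(Q^2) = C5A(0) / (1 + Q^2 / M_Adelta^2)^2,
    with Q^2 in GeV^2 and the axial mass parameter in GeV.
    Defaults are the best-fit values quoted in the text for the ANL+BNL fit.
    """
    return c5a0 / (1.0 + q2 / ma_delta ** 2) ** 2
```

At Q² equal to the square of the axial mass parameter the form factor drops to one quarter of its Q² = 0 value; it is this fall-off, together with the normalization C_5^A(0), that the fits to the Q² distributions of ANL and BNL events constrain.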
Other theoretical approaches
In Ref. [87] the dynamical pion cloud effects are imposed on bare quark N-∆ transition matrix elements. The model is able to reproduce both the ANL and BNL weak pion production data. The authors of [88] focus on the consistent use of the ∆ propagator. They show that computations relying on the standard Rarita-Schwinger propagator could lead to an underestimation of the weak pion production cross section.
Coherent pion production
In coherent pion production (COH) the target nucleus remains in the ground state. There are four possible channels: CC and NC reactions, for neutrinos and anti-neutrinos. A clear experimental signal for the COH reaction at high energies was observed, and the aim of recent measurements was to fill a gap in the knowledge of COH cross sections in the region around ∼1 GeV. At larger neutrino energies a recent measurement was done by MINOS, which reported a NC reaction cross section at <Eν> = 4.9 GeV consistent with the predictions of the Berger-Sehgal model (see below).
Experimental Results
In the case of the NC reaction, MiniBooNE [66] and SciBooNE [89] searched for the COH component. SciBooNE [89] evaluated the ratio of the COH NCπ 0 production to the total CC cross-section as (1.16 ± 0.24)%.
For the NC reaction MiniBooNE evaluated the COH component (plus a possible hydrogen diffractive contribution, about which little is known) in NCπ0 production as 19.5% (at <Eν> ∼ 1 GeV), and then the flux averaged overall NC1π0 cross section as (4.76 ± 0.05 ± 0.76) · 10−40 cm²/nucleon. Unfortunately, it is difficult to translate both measurements into an absolutely normalized value of the NC COH cross section because of a strong dependence on the NUANCE MC generator used in the data analysis. In NUANCE, the RES, COH and BGR (nonresonant background) NCπ0 reactions are defined according to the primary interaction, and COH pions are also subject to FSI. In the MiniBooNE analysis a fit is done for the composition of the sample of NCπ0 events in terms of three components, and the COH fraction is defined as xCOH/(xCOH + xRES).
In the case of the CC reaction, K2K [90] and SciBooNE [91] reported no evidence for the COH component. For the K2K analysis, the 90% confidence level upper bound for the COH cross-section on carbon was estimated to be 0.6% of the inclusive CC cross-section. The SciBooNE upper limits (also for a carbon target) are: 0.67% at <E_ν> ∼ 1.1 GeV and 1.36% at <E_ν> ∼ 2.2 GeV. SciBooNE also reported a measurement of the ratio of CC COH π+ to NC COH π0 production, estimated as 0.14 +0.30 −0.28. This is a surprisingly low value, which disagrees with results from theoretical models that at SciBooNE energies typically predict values somewhat smaller than 2. For massless charged leptons isospin symmetry implies the value of 2 for this ratio, and finite-mass corrections make the predicted ratio smaller.
Theoretical developments
Higher neutrino energy (E_ν ≳ 2 GeV) COH production data (including the recent NOMAD measurement) were successfully explained with a PCAC-based model [92]. Adler's theorem relates σ_COH(ν + X → ν + X + π0) at Q² → 0 to σ(π0 + X → π0 + X). Subsequently, the model for the CC reaction was upgraded [93] to include lepton-mass effects important for low-E_ν studies. The new model predicts the σ_COH(π+)/σ_COH(π0) ratio at E_ν = 1 GeV to be 1.45 rather than 2. Another important improvement was to use a better model for dσ(π+ 12C → π+ 12C)/dt in the region of pion kinetic energy 100 MeV < T_π < 900 MeV. As a result, the predicted COH cross-section from the model was reduced by a factor of 2-3 [94]. The PCAC-based approach was also discussed in [95] and critically re-derived in Ref. [96].
At lower energies the microscopic ∆ dominance models for the COH reaction [97] are believed to be more reliable. Within microscopic models there are still various approaches, e.g., due to differences in the treatment of the nonresonant background. The absolute normalization of the predicted cross-section depends on the adopted value of the N → ∆ axial form factor C_5^A(0).
MC generators
Almost all MC event generators rely on the old Rein-Sehgal resonance model for pion production [98]. The model is based on the quark resonance model and includes contributions from 18 resonances covering the region W < 2 GeV. The model is easily implementable in MC generators and has only one set of vector and axial form factors. In the original model the charged lepton is assumed to be massless, and prescriptions to cope with this problem were proposed in Refs. [93,99]. It was also realized that the RS model can be improved in the ∆ region by modifying both vector and axial form factors using either old deuterium or new MiniBooNE pion production data [18,100]. As for coherent pion production, all the MCs use the Rein-Sehgal COH model [92]. The analysis of MC event generators and theoretical models done in [19] shows that in the 1-2 GeV energy region the Rein-Sehgal COH model predictions disagree significantly with all the recent theoretical computations and experimental results.
A crucial element of any MC is the FSI model. These are typically semiclassical intra-nuclear cascade models. The topic of FSI goes far beyond the scope of this review; we only note that progress in understanding the experimental data requires more reliable FSI models. The existing models should be systematically benchmarked with electro- and photoproduction data, as was done in the case of GIBUU.
Duality
Bridging the region between the RES and DIS dynamics (where, to a good approximation, interactions occur on quarks) is a practical problem which must be resolved in all MC event generators. In MC event generators "DIS" is defined as "anything but QE and RES", which is usually expressed as a condition on the invariant hadronic mass of the type W > 1.6 GeV or W > 2 GeV.
Notice, however, that such a definition of "DIS" contains a contribution from the kinematical region Q² < 1 GeV², which is beyond the applicability of the genuine DIS formalism. The RES/DIS transition region is not merely a matter of arbitrary choice but is closely connected with the hypothesis of quark-hadron duality.
Investigation of the structure functions introduced in the formalism of inclusive electron-nucleon scattering led Bloom and Gilman to the observation that the average over resonances is approximately equal to the leading twist contribution measured in the completely different DIS region. One can distinguish two aspects of duality: (i) resonant structure functions oscillate around a DIS scaling curve; (ii) the resonant structure functions for varying values of Q² slide along the DIS curve evaluated at fixed Q²_DIS. In order to quantify the degree to which duality is satisfied one defines the ratio of integrals over structure functions from the RES and DIS regions, Eq. (8). The integrals are taken in the Nachtmann variable ξ(x, Q²) = 2x/(1 + √(1 + 4x²M²/Q²)), and the integration region is defined as W_min < W < W_max. Typically W_min = M + m_π and W_max = 1.6, ..., 2.0 GeV. In the case of DIS the value of Q²_DIS is much larger and, as a consequence, the integral over ξ runs over a quite different region in W.
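The displayed equation labelled Eq. (8) appears to have been lost in extraction. A hedged reconstruction consistent with the surrounding description (integrals in ξ over the resonance window, with the DIS denominator evaluated at the fixed, larger Q²_DIS) would read:

```latex
% Reconstruction of the duality ratio, Eq. (8); a sketch consistent with the
% surrounding text, not necessarily the exact notation of the original review.
R\bigl(Q^2, Q^2_{\rm DIS}\bigr) =
\frac{\displaystyle\int_{\xi(W_{\max},\,Q^2)}^{\xi(W_{\min},\,Q^2)} d\xi\;
      F_2^{\rm RES}\bigl(\xi, Q^2\bigr)}
     {\displaystyle\int_{\xi(W_{\max},\,Q^2)}^{\xi(W_{\min},\,Q^2)} d\xi\;
      F_2^{\rm DIS}\bigl(\xi, Q^2_{\rm DIS}\bigr)},
\qquad
\xi(x,Q^2) = \frac{2x}{1+\sqrt{1+4x^2 M^2/Q^2}}\,.
```

Note that ξ decreases with W at fixed Q², so the lower limit of integration corresponds to W_max and the upper limit to W_min.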
Neutrino-nucleon scattering duality studies are theoretical in nature because precise data in the resonance region are still missing. The duality was studied in three papers: [87,101,102]. For neutrino interactions duality can be satisfied only for an isospin-averaged target. This is because the RES structure functions for the proton are much larger than for the neutron, while for the DIS structure functions the situation is the opposite.
Theoretical studies were done with a model which contains resonances from the first and second resonance regions but not the background contribution, and with the Rein-Sehgal model which is commonly used in MC event generators. If the resonance region is confined to W < 1.6 GeV, the duality as defined in Eq. (8) is satisfied at the 75-80% level. If the resonance region is extended to W < 2 GeV, the value of the integral in Eq. (8) is only about 50%. These results are to some extent model dependent, but a general tendency is that for larger W the DIS structure functions are much larger than the resonance contribution, as clearly seen from Fig. 3 in [101] and Fig. 7 in [102]. As shown in [102] there is also a 5% uncertainty coming from the arbitrary choice of Q²_DIS. The two-component duality hypothesis states that the resonance contribution is dual to the valence quarks and the nonresonant background to the sea. Investigation within the Rein-Sehgal model with W < 2 GeV revealed no signature of two-component duality. Quark-hadron duality was also investigated in the case of neutrino-nucleus interactions [103].
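To make the W window used in these comparisons concrete, the following short numerical sketch (our own illustration, not from the original text) maps the resonance-region limits W_min = M + m_π and W_max = 2 GeV onto the Nachtmann variable at fixed Q², using the kinematic relation x = Q²/(Q² + W² − M²):

```python
import math

M = 0.938272    # nucleon mass (GeV)
M_PI = 0.13957  # charged pion mass (GeV)

def nachtmann_xi(x, Q2):
    """Nachtmann variable xi(x, Q^2) = 2x / (1 + sqrt(1 + 4 x^2 M^2 / Q^2))."""
    return 2.0 * x / (1.0 + math.sqrt(1.0 + 4.0 * x**2 * M**2 / Q2))

def x_from_W(W, Q2):
    """Bjorken x at fixed invariant mass W: x = Q^2 / (Q^2 + W^2 - M^2)."""
    return Q2 / (Q2 + W**2 - M**2)

# Integration limits in xi for the resonance region at Q^2 = 1 GeV^2;
# note that larger W corresponds to smaller xi.
Q2 = 1.0
xi_hi = nachtmann_xi(x_from_W(M + M_PI, Q2), Q2)  # at W_min = M + m_pi
xi_lo = nachtmann_xi(x_from_W(2.0, Q2), Q2)       # at W_max = 2 GeV
print(f"xi range at Q^2 = 1 GeV^2: {xi_lo:.3f} < xi < {xi_hi:.3f}")
```

This makes explicit why, at the much larger Q²_DIS, the same ξ interval corresponds to a quite different region in W.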
As a practical procedure for addressing this region, Bodek and Yang [104] have introduced and refined a model that is used by many contemporary neutrino event generators, such as NEUGEN and its successor GENIE, to bridge the kinematic region between the Delta and full DIS. The model has been developed for both neutrino- and electron-nucleon inelastic scattering cross sections using leading-order parton distribution functions and introducing a new scaling variable they call ξ_w.
Non-perturbative effects that are prevalent in the kinematic region bridging the resonance and DIS regimes are described using the ξ w scaling variable, in combination with multiplicative K factors at low Q 2 . The model is successful in describing inelastic charged lepton-nucleon scattering, including resonance production, from high-to-low Q 2 . In particular, the model describes existing inelastic neutrino-nucleon scattering measurements.
Their proposed scaling variable ξ_w is derived using energy-momentum conservation and assumptions about the initial/final quark mass and P_T. Parameters are built into the derivation of ξ_w to account (on average) for higher-order QCD terms and dynamic higher twist, which is covered by an enhanced target-mass term.
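For reference, the published Bodek-Yang form of the scaling variable can be sketched as follows (quoted from memory of the Bodek-Yang papers and hedged accordingly: A, B and the final-state quark mass M_f are fitted parameters, with A and B of order a few tenths of a GeV² and M_f = 0 for light quarks):

```latex
% Bodek-Yang effective scaling variable; parameter values and exact
% convention should be checked against Ref. [104].
\xi_w = \frac{2x\,\bigl(Q^2 + M_f^2 + B\bigr)}
             {Q^2\Bigl[1+\sqrt{1+4M^2x^2/Q^2}\Bigr] + 2Ax}
```

At large Q² and with A, B, M_f → 0 this reduces to the ordinary Nachtmann variable ξ.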
At the juncture with the DIS region, the Bodek-Yang model incorporates the GRV98 [105] LO parton distribution functions, replacing the variable x with ξ_w. They introduce "K-factors", different for sea and valence quarks, to multiply the PDFs so that they are correct in the low-Q² photoproduction limit. A possible criticism of the model is the requirement of using the rather dated GRV98 parton distribution functions in the DIS region so that the bridge to the lower-W kinematic region is seamless.
ν-A Deep-inelastic Scattering: Introduction
Although deep-inelastic scattering (DIS) is normally considered to be a topic for much higher energy neutrinos, wide-band beams such as the Fermilab NuMI and the planned LBNE beams do have real contributions from DIS that are particularly important in feed-down to the background that must be carefully considered. In addition, there are x-dependent nuclear effects that should be taken into account when comparing results from detectors with different nuclei and even when comparing results from "identical" near and far detectors when the neutrino spectra entering the near and far detectors are different.
For this review, the definition of deep-inelastic scattering (DIS) is the kinematics-based definition with W ≥ 2.0 GeV and Q² ≥ 1.0 GeV². This is mostly out of the resonance production region and allows a fit to parton distribution functions. As noted in the Introduction, this is unfortunately not the definition used by several modern Monte Carlo generators, which do not differentiate between simply "inelastic" interactions and deep-inelastic interactions, calling everything beyond the Delta simply DIS. This is an unfortunate and confusing use of nomenclature by the generators.
In general, deep-inelastic scattering offers an opportunity to probe the partonic structure of the nucleon both in its free state and when the nucleon is bound in a nucleus. Descriptions of the partonic structure can include parton distribution functions (PDFs), giving the longitudinal, transverse and spin distributions of quarks within the nucleon, as well as, for example, the hadron formation zone, giving the time/length it takes for a struck quark to fully hadronize into a strongly-interacting hadron.
Neutrino scattering can play an important role in the extraction of these fundamental parton distribution functions (PDFs) since only neutrinos, via the weak interaction, can resolve the flavor of the nucleon's constituents: ν interacts with d, s, ū and c̄ while ν̄ interacts with u, c, d̄ and s̄. The weak current's unique ability to "taste" only particular quark flavors significantly enhances the study of parton distribution functions. High-statistics measurements of the nucleon's partonic structure using neutrinos could complement studies with electromagnetic probes.
In the pursuit of precision measurements of neutrino oscillation parameters, large data samples and a dedicated effort to minimize systematic errors could allow neutrino experiments to independently isolate all six of the weak structure functions F_i^{νN}(x, Q²) and F_i^{ν̄N}(x, Q²) (i = 1, 2, 3) for the first time. Then, by taking differences and sums of these structure functions, specific parton distribution functions in a given (x, Q²) bin can in turn be better isolated. Extracting this full set of structure functions will rely on the y-variation of the structure function coefficients in the expression for the cross-section written in the helicity representation, in which F_L is the longitudinal structure function representing the absorption of the longitudinally polarized intermediate vector boson. By analyzing the data as a function of (1 − y)² in a given (x, Q²) bin for both ν and ν̄, all six structure functions could be extracted. Somewhat less demanding in statistics and control of systematics, the "average" structure functions F_2(x, Q²) and xF_3(x, Q²) can be determined from fits to combinations of the neutrino and antineutrino differential cross sections and several assumptions. The sum of the ν and ν̄ differential cross sections yields F_2, given an input R_L = σ_L/σ_T; here F_2 is the average of F_2^ν and F_2^ν̄, and the last term is proportional to the difference in xF_3 for neutrino and antineutrino probes, ∆xF_3 = xF_3^ν − xF_3^ν̄. In terms of the strange and charm parton distribution functions s and c, at leading order, assuming symmetric s and c seas, this is 4x(s − c).
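The y-dependence referred to here follows from the standard form of the CC inclusive differential cross section (written below in the common F_1, F_2, F_3 basis rather than the helicity basis of the original, so this is a sketch of the underlying structure, not a verbatim reproduction):

```latex
% Standard CC inclusive cross section; + for neutrinos, - for antineutrinos.
\frac{d^2\sigma^{\nu(\bar\nu)}}{dx\,dy} = \frac{G_F^2 M E_\nu}{\pi}
\left[\, x y^2 F_1 + \Bigl(1 - y - \frac{M x y}{2E_\nu}\Bigr) F_2
\pm x y \Bigl(1 - \frac{y}{2}\Bigr) F_3 \right],
\qquad
F_L = \Bigl(1 + \frac{4M^2x^2}{Q^2}\Bigr) F_2 - 2xF_1 .
```

Fitting the y-dependence (equivalently, the (1 − y)² dependence) in each (x, Q²) bin for both ν and ν̄ beams then separates the three structure functions for each probe.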
The cross sections are also corrected for the excess of neutrons over protons in the target (for example the Fe correction is 5.67%) so that the presented structure functions are for an isoscalar target. A significant step in the determination of F_2(x, Q²) in this manner, one that affects the low-x values, is the assumed ∆xF_3 and R_L(x, Q²). Recent analyses use, for example, an NLO QCD model as input (TRVFS [106]) and assume an input value of R_L(x, Q²) that comes from a fit to the world's charged-lepton measurements [107]. This could be an additional problem since, as will be suggested, R_L(x, Q²) can be different for neutrino as opposed to charged-lepton scattering.
The structure function xF 3 can be determined in a similar manner by taking the difference in ν and ν differential cross sections.
The Physics of Deep-inelastic Scattering
There have been very few recent developments in the theory of deep-inelastic scattering; the theory has been well-established for years. The most recent developments in neutrino DIS involve the experimental determination of parton distribution functions of nucleons within a nucleus, so-called nuclear parton distribution functions (nPDFs). The more contemporary study of ν-nucleus deep-inelastic scattering using high-statistics experimental results, with careful attention to multiple systematic errors, began with the CDHSW, CCFR/NuTeV ν-Fe, the NOMAD ν-C and the CHORUS ν-Pb experiments. Whereas the NuTeV [108] and CHORUS [109] Collaborations have published their full data sets, NOMAD [110] has not yet done so. This short summary of DIS physics will concentrate on nuclear/nucleon parton distribution functions.
Low-and-High Q 2 Structure Functions: Longitudinal and Transverse
Since the current and future neutrino beams designed for neutrino oscillation experiments concentrate on lower-energy neutrinos (1-5 GeV), many of the interactions will be at the lower-Q² edge of DIS or even in the "soft" DIS region, namely W ≥ 2.0 GeV but with Q² ≤ 1.0 GeV². Understanding the physics of this kinematic region is therefore important.
Since both the vector and axial-vector parts of the transverse structure function F_T go to 0 at Q² = 0 (similar to vector-current scattering of charged leptons), the low-Q² ν and ν̄ cross sections are dominated by the longitudinal structure function F_L. The longitudinal structure function is composed of a vector and an axial-vector component, F_L^{VC} and F_L^{AC}, and the low-Q² behavior of these components is not the same as in the transverse case. The conservation of the vector current (CVC) implies that F_L^{VC} behaves as the vector current in charged-lepton scattering and vanishes at low Q². However, the axial-vector current is not conserved and is related to the pion field via PCAC, so there is a surviving low-Q² contribution from this component [111], and F_L^{AC} dominates the low-Q² behavior. Consequently, the ratio R = F_L/F_T is divergent for neutrino interactions. This is substantially different from the scattering of charged leptons, for which R vanishes as Q², and using a measurement of R from charged-lepton scattering to determine F_2 for neutrino scattering is therefore wrong at lower Q². In addition, this non-vanishing and dominant longitudinal structure function could be important for the interpretation of low-Q² nuclear effects with neutrinos, to be described shortly.
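Schematically, the divergence follows directly from the low-Q² limits quoted above:

```latex
% Low-Q^2 limits as described in the text; a schematic summary.
F_T \xrightarrow[Q^2\to 0]{} \mathcal{O}(Q^2), \qquad
F_L^{AC} \xrightarrow[Q^2\to 0]{} \text{const (PCAC)}
\quad\Longrightarrow\quad
R = \frac{F_L}{F_T} \sim \frac{1}{Q^2} \xrightarrow[Q^2\to 0]{} \infty ,
```

in contrast to charged-lepton scattering, where the longitudinal piece also vanishes and R → 0.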
Low-and-High Q² Structure Functions: 1/Q² Corrections
Using a notation similar to that of reference [112], the total structure function can be expressed in a phenomenological form, where i = 1, 2, 3 refers to the type of the structure function. Taking i = 2 as an example, F_2^{LT} is the leading-twist component that already includes target mass corrections (TMC), and C_4 is the coefficient of the twist-4 term, the first higher-twist term, proportional to 1/Q². There are, of course, further higher-twist terms proportional to ever-increasing powers of 1/Q²; however, for most phenomenological fits the dominant leading-twist plus twist-4 terms are sufficient to describe the data. The target mass corrections are kinematic in origin and involve terms suppressed by powers of M²/Q², while the higher-twist terms are dynamical in origin and are suppressed, as mentioned, by powers of 1/Q². These higher-twist terms are associated with multi-quark or quark-and-gluon fields, and it is difficult to evaluate their magnitude and shape from first principles. As with the kinematic target mass corrections, these must be taken into account in analyses of data at low Q² and especially at large x. At higher Q² the contribution of the HT terms is negligible, and there are various global fits [113,114] to the structure functions (among various scattering inputs) to determine the parton distribution functions (PDFs) that do not include any HT terms.
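The displayed phenomenological form appears to have been lost in extraction; reconstructing it from the description, analyses of this type typically write, for each i = 1, 2, 3,

```latex
% Reconstruction: leading twist (with target-mass corrections) plus the
% first higher-twist (twist-4) term; the exact convention (multiplicative
% vs. additive higher twist) should be checked against Ref. [112].
F_i(x,Q^2) = F_i^{\rm LT}(x,Q^2)
\left[\,1 + \frac{C_4^{(i)}(x)}{Q^2} + \cdots \right]
```

with the ellipsis standing for twist-6 and higher terms suppressed by further powers of 1/Q².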
The analysis of nuclear PDFs to be described shortly uses data from a Tevatron neutrino experiment at very high neutrino energies and thus is one of the analyses that does not need to be concerned with higher-twist corrections. However, the current neutrino-oscillation-oriented beamlines are not high-energy, and analyses of these data may indeed need to consider both target mass corrections and higher twist. If inclusion of higher twist in these analyses becomes necessary, the authors of [112] stress the importance of explicitly including both the target mass corrections and the higher-twist corrections, even though they have very different physical origins and can have very different x dependence. It is important to note, as mentioned, that there are both nucleon and nuclear PDFs, depending on the target. The relations between them, called nuclear correction factors, are currently being studied for both ν-A and ℓ±-A scattering. There are early indications that the nuclear correction factors for these two processes may not be the same.
Recent DIS measurements: Neutrino Iron Scattering Results
The difficulty, of course, is that modern neutrino oscillation experiments demand high statistics, which means that the neutrinos need massive nuclear targets to acquire these statistics. This, in turn, complicates the extraction of free-nucleon PDFs and demands nuclear correction factors that scale the results on a massive target to the corresponding result on a nucleon target. The results of the latest study of QCD using neutrino scattering come from the NuTeV experiment [108]. The NuTeV experiment was a direct follow-up of the CCFR experiment, using nearly the same detector as CCFR but with a different neutrino beam. The NuTeV experiment accumulated over 3 million ν and ν̄ events in the energy range of 20 to 400 GeV off a mainly Fe target. A comparison of the NuTeV results with those of CCFR and the predictions of the major PDF-fitting collaborations (CTEQ and MRST [113,114]) is shown in Figure 6.
The main points are that the NuTeV F 2 agrees with CCFR for values of x Bj ≤ 0.4 but is systematically higher for larger values of x Bj culminating at x Bj = 0.65 where the NuTeV result is 20% higher than the CCFR result. NuTeV agrees with charged lepton data for x Bj ≤ 0.5 but there is increasing disagreement for higher values. Although NuTeV F 2 and xF 3 agree with theory for medium x, they find a different Q 2 behavior at small x and are systematically higher than theory at high x. These results can be summarized in four main questions to ask subsequent neutrino experiments: • At high x, what is the behavior of the valence quarks as x → 1.0?
• At all x and Q 2 , what is yet to be learned if we can measure all six ν and ν structure functions to yield maximal information on the parton distribution functions?
• At all x, how do nuclear effects with incoming neutrinos differ from nuclear effects with incoming charged leptons?
This last item highlights an overriding question when trying to get a global view of structure functions from both neutrino and charged-lepton scattering data: how do we compare data off nuclear targets with data off nucleons and, the associated question, how do we scale nuclear-target data to the comparable nucleon data? In most PDF analyses, the nuclear correction factors were taken from ℓ±-nucleus scattering and used for both charged-lepton and neutrino scattering. Recent studies by a CTEQ-Grenoble-Karlsruhe collaboration (called nCTEQ) [116] have shown that there may indeed be a difference between the charged-lepton and neutrino correction factors.
The data from the high-statistics ν-DIS experiment NuTeV, summarized above, were used to perform a dedicated PDF fit to neutrino-iron data [117]. The methodology for this fit parallels that of the previous global analysis [118], but with the difference that only Fe data have been used and no nuclear corrections have been applied to the analyzed data; hence, the resulting PDFs are for a proton in an iron nucleus, i.e., nuclear parton distribution functions. By comparing these iron PDFs with the free-proton PDFs (appropriately scaled), a neutrino-specific heavy-target nuclear correction factor R can be obtained, which should be applied to relate these two quantities. It is also possible to combine these fitted nPDFs to form the individual values of the average of F_2(νA) and F_2(ν̄A) for a given (x, Q²) to compare directly with the NuTeV published values of this quantity. This was recently done, and the nCTEQ preliminary results [120] for low Q² are shown in Figure 7. Although the neutrino fit has general features in common with the charged-lepton parameterization, the magnitude of the effects and the x-region where they apply are quite different. The present results are noticeably flatter than the charged-lepton curves, especially at low and moderate x, where the differences are significant. The comparison between the nCTEQ fit, which passes through the NuTeV measured points, and the charged-lepton fit is very different in the lowest-x, lowest-Q² region and gradually approaches the charged-lepton fit with increasing Q². However, the slope of the fit approaching the shadowing region from higher x, where the NuTeV measured points and the nCTEQ fit are consistently below the charged-lepton fit, makes it difficult to reach the degree of shadowing evidenced in charged-lepton nucleus scattering at even higher Q².
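In our notation (a schematic definition for clarity, not a formula from the original), the neutrino-specific nuclear correction factor obtained this way is:

```latex
% Schematic definition of the heavy-target nuclear correction factor:
% fitted nuclear PDFs in the numerator, free-proton (reference-fit) PDFs,
% appropriately scaled to an isoscalar target, in the denominator.
R\bigl[F_2^{\nu}\bigr](x,Q^2) =
\frac{F_2^{\nu A}(x,Q^2)\big|_{\rm fitted\ nPDFs}}
     {F_2^{\nu N}(x,Q^2)\big|_{\rm free\mbox{-}proton\ PDFs}} .
```

This is the quantity plotted in Figure 7 for the average of the ν and ν̄ structure functions.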
The general trend is that the anti-shadowing region is shifted to smaller x values, and any turnover at low x is minimal given the PDF uncertainties. More specifically, there is no indication of "shadowing" in the NuTeV neutrino results at low-Q 2 . In general, these plots suggest that the size of the nuclear corrections extracted from the NuTeV data are smaller than those obtained from charged lepton scattering.
Comparison of the ℓ±A and νA Nuclear Correction Factors
For the nCTEQ analysis, the contrast between the charged-lepton (ℓ±A) case and the neutrino (νA) case is striking. While the nCTEQ fit to charged-lepton and Drell-Yan data generally aligns with the other charged-lepton determinations, the neutrino results clearly yield different behavior as a function of x, particularly in the shadowing/anti-shadowing region. In the ν̄ case, these differences are smaller but persist in the low-x shadowing region. The nCTEQ collaboration emphasizes that both the charged-lepton and neutrino results come directly from global fits to the data; there is no model involved. They further suggest that this difference between the results in charged-lepton and neutrino DIS reflects the long-standing "tension" between the light-target charged-lepton data and the heavy-target neutrino data in the historical global PDF fits [121,122]. Their latest results suggest that the tension is not only between charged-lepton light-target data and neutrino heavy-target data, but also between neutrino and charged-lepton heavy-target data: in other words, a difference between charged-lepton (ℓ±A) and neutrino (νA) scattering when comparing the same A.
Concentrating on this interesting difference found by the nCTEQ group: if the nuclear corrections for the ℓ±A and νA processes are indeed different, there are several far-reaching consequences. Considering this, the nCTEQ group has performed a unified global analysis [116] of the ℓ±A, DY, and νA data (accounting for appropriate systematic and statistical errors) to determine whether it is possible to obtain a "compromise" solution including both ℓ±A and νA data. Using a hypothesis-testing criterion based on the χ² distribution that can be applied to both the total χ² and the χ² of individual data sets, they found it was not possible to accommodate the data from νA and ℓ±A DIS in an acceptable combined fit.
That is, when investigating the results in detail, the tension between the ℓ±Fe and νFe data sets permits no compromise fit which adequately describes the neutrino DIS data along with the charged-lepton data; consequently, ℓ±Fe and νFe, based on the NuTeV results, have different nuclear correction factors.
A compromise solution between the νA and ℓ±A data can be found only if the full correlated systematic errors of the νA data are not used and the statistical and all systematic errors are instead combined in quadrature, thereby neglecting the information contained in the correlation matrix. In other words, the larger errors resulting from combining statistical and all systematic errors in quadrature reduce the discriminatory power of the fit such that the differences between the νA and ℓ±A data are no longer evident. This conclusion underscores the fundamental difference [116] between the nCTEQ analysis and other contemporary analyses.
On the other hand, a difference between νA and ℓ±A is not completely unexpected, particularly in the shadowing region, and has previously been discussed in the literature [123,124]. The charged-lepton processes occur (dominantly) via γ-exchange, while the neutrino-nucleon processes occur via W±-exchange. The different nuclear corrections could simply be a consequence of the differing propagation of the hadronic fluctuations of the intermediate bosons (photon, W) through dense nuclear matter. Furthermore, since the structure functions in neutrino DIS and charged-lepton DIS are distinct observables with different parton-model expressions, it is clear that the nuclear correction factors will not be exactly the same. What is unexpected, however, is the degree to which the R factors differ between the structure functions F_2^{νFe} and F_2^{ℓFe}. In particular, the lack of evidence for shadowing in neutrino scattering at low Q² down to x ∼ 0.02 is quite surprising.
Should subsequent experimental results confirm the rather substantial difference between charged-lepton and neutrino scattering in the shadowing region at low Q², it is interesting to speculate on the possible cause of the difference. A recent study of EMC, BCDMS and NMC data by a Hampton University / Jefferson Laboratory collaboration [125] suggests that anti-shadowing in charged-lepton nucleus scattering may be dominated by the longitudinal structure function F_L. As a by-product of this study, their figures hint that shadowing in the EMC, BCDMS and NMC µA scattering data was led by the transverse cross section, with the longitudinal component crossing over into the shadowing region at lower x compared to the transverse.
As summarized earlier, in the low-Q² region the neutrino cross section is dominated by the longitudinal structure function F_L via axial-current interactions, since F_T vanishes as Q² as Q² → 0, similar to the behavior of charged-lepton scattering. If the results of the NuTeV analysis are verified, one contribution to the different low-Q² shadowing behavior of νA and ℓ±A, in addition to the different hadronic fluctuations in the two interactions, could be the different mix of longitudinal and transverse contributions to the cross sections of the two processes in this kinematic region.
Figure 6: A comparison of the measurements of the F_2 structure function by NuTeV and CCFR and the predictions from the global PDF fits of the MRST and CTEQ collaborations [115], which do not use the NuTeV data points as input to their fits. The model predictions have already been corrected for target mass and, most significantly, nuclear effects, assuming these corrections are the same for charged-lepton and neutrino interactions.
Figure 7: Nuclear correction factor R for the average F_2 structure function in charged-current νFe scattering at Q² = 1.2, 2.0, 3.2 and 5.0 GeV², compared to the measured NuTeV points. The green dashed curve shows the result of the nCTEQ analysis of νA (CHORUS, CCFR and NuTeV) differential cross sections, plotted in terms of the average F_2^{Fe} divided by the results obtained with the reference-fit (free-proton) PDFs. For comparison, the nCTEQ fit to the charged-lepton data is shown by the solid blue curve.
The impact of corporate digital transformation on the export product quality: Evidence from Chinese enterprises
The digital economy has become a driving force in the rapid development of the global economy and the promotion of export trade. Pivotal in its advent, the digital transformation of enterprises utilizes cloud computing, big data, artificial intelligence, and other digital technologies to provide an impetus for evolution and transformation in various industries and fields. This has been critical for enhancing both quality and efficiency in enterprises based in the People’s Republic of China. Through the available data on its listed enterprises, this paper measures their digital transformation through a textual analysis and examines how this transformation influences their export product quality. We then explore the possible mechanisms at work in this influence from the perspective of enterprise heterogeneity. The results show that: (1) digital transformation significantly enhances the export product quality of an enterprise, and the empirical findings still hold after a series of robustness tests; (2) further mechanism analysis reveals that digital transformation can positively affect export product quality through the two mechanisms of process productivity (φ), the ability to produce output using fewer variable inputs, and product productivity (ξ), the ability to produce quality with fewer fixed outlays; (3) in terms of enterprise heterogeneity, the impact of digital transformation on export product quality is significant for enterprises engaged in general trade or high-tech industries and those with strong corporate governance. In terms of heterogeneity in the digital transformation of enterprises and the regional digital infrastructure level, the higher the level of digital transformation and regional digital infrastructure, the greater the impact of digital transformation on export product quality.
This paper has practical implications for public policies that offer vital aid to enterprises as they seek digital transformation to remain in sync with the digital economy, upgrade their product quality, and drive the sustainable, high-quality, and healthy development of their nation’s economy.
Introduction
As the world's leading exporter, China is accustomed to leveraging factor endowments like its abundance of labor and other comparative advantages as it engages in the international division of labor. The country's export products tend to be low in price, low in quality, and high in quantity [1]. New trade theory (NTT), as represented by the enterprise heterogeneity model, argues that a firm's export product quality is directly related to its country's export performance and the gains, status, and upgrading of its trade. Export product quality offers a new advantage for countries as they compete internationally [2][3][4]. The report of the 20th Congress of the Communist Party of China also states the necessity of "accelerating the building of a strong trade country." At the same time, the existing scholarship indicates that while a series of initiatives have improved the quality of China's export products, it remains relatively low overall [1], and the phrase "made in China" still represents a lower price and quality [5]. Improving the export product quality of Chinese enterprises will be vital in China's construction of a large trade nation, high-quality economic development, and the transformation and upgrading of foreign trade.
The integration of digital technology and the real economy has formed a digital economy that is profoundly changing China's traditional economic landscape as it reshapes the allocation of resources and the production activities of traditional enterprises. The country's enterprises are also experiencing its multi-dimensional impact on their technological innovation, business management, and production methods [6]. The 14th Five-Year Plan (2021-2025) pledges to "develop the market for data factors, activate the potential of data factors, and drive changes in production, lifestyle and governance modes for digital transformation as a whole." China's government has proposed a series of policies to facilitate digital transformation, as it is well aware that the pace of the digital economy has made digitalization an inevitable choice for the transformation and reform of its enterprises [7,8]. This transformation, which has been called the most important strategic issue for businesses around the world, is also inherently disruptive [9,10] and can significantly affect the size, performance, and value added of a firm's exports [11][12][13]. As such, will digital transformation aid or harm the export product quality of China's enterprises, and how will it work?
To answer these questions, this paper explores the impact of digital transformation on the export product quality of enterprises from the macro background of the digital economy and the micro perspective of enterprises. Exploring this issue can enrich research on the economic effects of digital transformation in foreign trade and provide a theoretical basis for enhancing export product quality that reflects current circumstances. This is instrumental in both guiding the digital transformation of enterprises and sustaining the robust development of the Chinese economy. Finally, it serves as a valuable reference for developing countries as they seek to become more competitive in trade.
Literature review
Present research on digital transformation and export product quality has focused on the factors that influence export product quality, the economic effects of digital transformation, and the impact of digital development on exports.
As for what factors influence export product quality in an enterprise, scholars have conducted studies either from the perspective of the market environment or of internal factors within the enterprise. In terms of the market environment, factors such as FDI, OFDI, trade liberalization, minimum wage standards, government subsidies, industrial agglomeration, and the protection of intellectual property can all influence export product quality [14][15][16][17][18][19][20][21][22][23]. While minimum wage standards can harm export product quality [18], the other aforementioned factors can significantly enhance it. Among the internal factors within enterprises, technological innovation, total factor productivity, input servitization, and enterprise listing have a significant positive impact on export product quality [24][25][26][27].
This paper also considers the literature on the economic effects of digital transformation in enterprises. The existing research defines digital transformation as the process of change in which firms use digital technologies to reduce the proportion of duplicated labor in production, operations, and services, or use advanced digital technologies to replace traditional ones [28]. Digital transformation can have a positive and beneficial impact in a variety of ways, including enhancing total factor productivity [29], fostering firm innovation [30], improving organizational efficiency [31], enhancing performance in capital markets [8], improving input-output efficiency [32], and promoting specialization and the division of labor [33].
With respect to the impact of digital development on enterprise exports, scholars argue that information technologies and the internet, as the foundation of digital development, can reduce costs related to information, transactions, and risks in the process of trade [2,34,35] to form a new comparative advantage [36]. This can in turn boost exports [37][38][39] while improving export performance [40] and moving firms up the value chain [41,42]. The information disclosure offered by information technology is one reason for an increase in quality [43]. Yi and Wang (2021) [12] find that digital transformation helps firms expand their exports, while Du et al. (2022) [44] hold that digital transformation has upgraded the quality of China's export products by increasing innovation capacity, transforming products, and improving the quality of intermediate inputs. Hong et al. (2022) [45] use principal component analysis to measure the digital transformation indexes of firms, analyzing the U-shaped mediating role that innovation plays between digital transformation and export product quality.
In summary, the existing studies provide valuable reference information but are not without their limitations. First, while they confirm that digital technology can promote the upgrading of exports, they largely base their studies on the digital economy at the national and provincial levels [46]. Second, the existing literature at the micro level has studied the link between digital transformation and export trade but has yet to offer any theories as to its inner workings. This paper investigates the impact of digital transformation on the export product quality of enterprises and how this takes place from the perspective of the digital economy. Its marginal contributions are as follows: First, it explores the impact of digital transformation on firms at the micro level, which helps to enrich existing scholarship on the digital transformation of firms and international trade. Second, it examines precisely how digital transformation affects export product quality from the perspective of enterprise heterogeneity, helping to provide firm-level evidence for the contributions of the digital economy in China's endeavors to cultivate a large trade nation.
Theoretical mechanisms and research hypotheses
The advent of the digital economy has provided new avenues for enterprises of diverse scale to achieve success in export trade. Concurrently, digital transformation within firms has emerged as a crucial impetus for the advancement of export product quality. This paper amalgamates concepts from existing pertinent analyses, delving primarily into both direct and indirect effects. It scrutinizes the precise means through which digital transformation affects export product quality while presenting complementary research hypotheses.
The direct effects of digital transformation on export product quality
With the rapid development of the digital economy, digital innovation and the application of big data have subverted the business management models of traditional enterprises and compelled them to transition into digital management [47]. However, digital transformation is not simply the combination and application of digital technologies; it regards data as an equally important factor of production alongside labor and capital [48]. On the one hand, export enterprises strengthen their real-time monitoring and supervision of production processes by applying big data, cloud computing, and other digital technologies. While endeavoring to supervise the quality of their export products, enterprises continuously elevate and enhance their production processes, driving the development of products toward mechanization, intelligence, and automation [49,50] to optimize product quality even further. On the other hand, export enterprises can avail themselves of big data to rapidly analyze and apprehend the dynamics of the international market and synchronize with demand in their target market. As they optimize and upgrade their products in response to customer feedback, they also manage to keep pace with demand in the international market. Simultaneously, the application of cross-border e-commerce, the Internet of Things, and other digital technologies has condensed the landscape of international trade, optimized the exchange of information between buyers and sellers, reduced trade costs, and galvanized enterprises to participate in the export trade. It has also led to increasingly fierce competition in export markets and stimulated enterprises to continuously innovate and improve their product quality. Based on the above analysis, this paper proposes the following first hypothesis: H1: Digital transformation can improve the export product quality of an enterprise.
The indirect effects of digital transformation on export product quality
In this section, we draw on the theoretical framework of Hallak and Sivadasan (2013) [3] and Shi and Shao (2014) [25] to discuss the endogenous determinants of export product quality, and then analyze the theoretical mechanisms by which digital transformation affects the export product quality of enterprises.
Analysis of the endogenous determinants of export product quality
Assuming that the consumer's utility function is one of constant elasticity of substitution (CES), the function is as follows:

U = [ ∫_{j∈Ω} (λ_j q_j)^{(σ−1)/σ} dj ]^{σ/(σ−1)}    (1)

where λ_j is the quality of product j, q_j is the demand for product j, Ω denotes the mix of goods purchased by consumers, and σ denotes the elasticity of substitution between products, with σ > 1.
The price index corresponding to the above utility function is

P = ∫_{j∈Ω} (p_j/λ_j)^{1−σ} dj,

defined as the quality-adjusted price aggregate so that market demand is E/P. We can then obtain the demand equation for product j:

q_j = λ_j^{σ−1} p_j^{−σ} (E/P)    (2)

where E is the total consumer expenditure, E/P denotes the size of market demand, and p_j denotes the price of product j. Eq (2) indicates that the demand for product j depends on its price (p_j) and quality (λ_j). We then introduce the production behavior of enterprises into the model. Building on the firm productivity heterogeneity of Melitz [4], Hallak and Sivadasan (2013) [3] introduce two heterogeneous attributes into the model: "process productivity" (φ) is the ability to produce output using fewer variable inputs; "product productivity" (ξ) is the ability to produce quality with fewer fixed outlays. These two heterogeneous attributes affect the variable costs and fixed costs of enterprises, respectively, which in turn affect product quality. The variable costs (C) and fixed costs (F) are expressed in Eqs (3) and (4):

C(q, λ) = (λ^β/φ) q    (3)
F(λ) = f + λ^α/ξ    (4)

where β denotes the quality elasticity of variable production costs, α denotes the quality elasticity of fixed costs, 0 < β < 1, and α > 0. φ is the process productivity that captures differences in variable costs between enterprises; ξ is the product productivity that captures heterogeneity in quality production capability, reflecting the diverse fixed-input efficiencies of enterprises, namely their ability to improve product quality under given fixed expenditures [3].
Given the demand and cost functions, the profit function of the enterprise is obtained as follows:

π = p q − (λ^β/φ) q − λ^α/ξ − f − f_x    (5)

where f_x is the fixed trade cost. Using the firm's profit-maximizing conditions for price and quality, its optimal product quality can be obtained as follows:

λ* = (Γ ξ φ^{σ−1})^{1/[α−(1−β)(σ−1)]}    (6)

where Γ = [(1−β)(σ−1)/α] · (1/σ) · ((σ−1)/σ)^{σ−1} · (E/P) collects the demand-side constants, and α > (1−β)(σ−1) ensures an interior optimum. According to Eq (6), the optimal product quality of an enterprise is endogenously determined by the firm's process productivity (φ) and product productivity (ξ). Taking the partial derivatives of Eq (6) with respect to φ and ξ, we obtain ∂λ*/∂φ > 0 and ∂λ*/∂ξ > 0. This indicates that the higher a firm's process productivity and product productivity, the higher the quality of its products. In other words, an enterprise can improve its export product quality by increasing its process productivity (φ) and product productivity (ξ).
Mechanism analysis of how digital transformation influences export product quality
Based on the above analysis of the endogenous determinants of export product quality, it is evident that process productivity (φ) and product productivity (ξ) are its two major determinants. In this section, we further investigate the mechanisms through which digital transformation affects export product quality from the perspectives of these two factors.
Process productivity (φ)
Digital transformation offers a comprehensive optimization and upgrading of production in an enterprise. The application of digital technologies can directly reduce the waste of production factors, enhance the unit factor output of enterprises, and invigorate overall process productivity [51]. Simultaneously, the application of digital technologies such as artificial intelligence and big data ensures the efficient transmission of information among departments within enterprises, which reduces information asymmetry, saves on various costs in the production process, and augments production efficiency [2]. The above analysis indicates that digital transformation can either directly or indirectly improve a firm's process productivity φ, i.e., ∂φ/∂dig > 0 (where dig denotes the degree of digital transformation of an enterprise). According to Eq (6), the higher the process productivity of a firm, the higher the quality of its products: ∂λ/∂φ > 0. By the chain rule, ∂λ/∂dig = (∂λ/∂φ) · (∂φ/∂dig) > 0. That is to say, the digital transformation of an enterprise can elevate its export product quality by increasing its process productivity. Accordingly, this paper proposes the following second hypothesis: H2: Digital transformation can improve the export product quality of an enterprise by improving its process productivity.
Product productivity (ξ)
Product productivity ξ primarily reflects the ability of a firm to improve product quality under given fixed expenditures [25]. The enhancement of product quality in an enterprise is closely associated with the innovation accomplished through research and development (R&D) activities [2,25,52,53]. The existing research indicates that the continuous integration of digital technology and the real economy stimulates enterprises to increase investment in R&D and innovation [54], which contributes to enhancing a firm's product productivity (ξ) [21,55]. First, the inherent innovative elements of digital products are assimilated into an enterprise when they are incorporated as a factor input in the production process. This enables enterprises to efficiently absorb external knowledge, reduce transaction costs, enhance their R&D capabilities, and promote the continuous optimization and refinement of their products, thereby enhancing export product quality. Second, the digital industry itself is highly capable of the efficient transmission of information and real-time feedback. Digital transformation assists enterprises in gaining an in-depth and timely understanding of the production process and market responses of their products [21]. This enables enterprises to promptly identify the strengths and weaknesses of their products, motivating them to engage in product upgrades and innovation activities to meet the demands of consumers and ultimately enhance their product productivity. In conclusion, digital transformation can to some extent facilitate the enhancement of product productivity, i.e., ∂ξ/∂dig > 0. As indicated by Eq (6), the higher the product productivity of a firm, the higher the quality of the products it produces, i.e., ∂λ/∂ξ > 0. By the chain rule, ∂λ/∂dig = (∂λ/∂ξ) · (∂ξ/∂dig) > 0.
This suggests that digital transformation can drive the improvement of export product quality through the mechanism of product productivity. This paper therefore proposes a third hypothesis: H3: Digital transformation can positively impact the export product quality of an enterprise by improving its product productivity.
Selection and measurement of variables
Selection and measurement of export product quality. This paper refers to Khandelwal [56], Shi and Shao [25], and Xu and Wang [18] to measure export product quality. We estimate China's export product quality in the four dimensions of firm, product, importing country, and year using the demand-information regression inference method. The quantity of product j exported by enterprise i to country m in year t is:

q_ijmt = λ_ijmt^{σ−1} p_ijmt^{−σ} (E_mt/P_mt)    (7)

where λ_ijmt is the quality of product j exported by enterprise i to country m in year t, σ is the elasticity of substitution for the product type, P_mt is the price index of the importing country, and E_mt is its market size.
Taking the natural logarithm of both sides of Eq (7), we obtain:

ln q_ijmt + σ ln p_ijmt = χ_mt + ε_ijmt    (8)

where χ_mt = ln E_mt − ln P_mt is the importing country-year dummy variable that controls for variables such as import distance that vary only with the importing country, variables such as exchange rates that vary only with time, and variables such as GDP that vary with both time and importing country. ε_ijmt = (σ−1) ln λ_ijmt is a residual term that contains the information on product quality. Considering that the quality contained in this residual is correlated with price p_ijmt, which can lead to problems of endogeneity, this paper draws on the study of Shi and Shao [25] by choosing the average price of a firm's exported products in other markets as the instrumental variable for the price of its exported products in country m.
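The price instrument described above is a leave-one-out group mean: for each firm-product-year, the average price the firm charges in all destination markets except the current one. A minimal pandas sketch, with illustrative column names that are my own assumption rather than the paper's:

```python
import pandas as pd

# Toy customs records: one row per firm-product-year-destination.
df = pd.DataFrame({
    "firm": ["A", "A", "A", "B", "B"],
    "product": [1, 1, 1, 1, 1],
    "year": [2010] * 5,
    "country": ["US", "DE", "JP", "US", "DE"],
    "price": [10.0, 12.0, 14.0, 8.0, 10.0],
})

# Leave-one-out mean within each firm-product-year cell:
# (group sum - own price) / (group count - 1).
g = df.groupby(["firm", "product", "year"])["price"]
df["iv_price"] = (g.transform("sum") - df["price"]) / (g.transform("count") - 1)
```

For firm A's US shipment the instrument is the mean of its DE and JP prices, (12 + 14) / 2 = 13, so the own-market price never enters its own instrument.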
After addressing the endogeneity problem, this paper estimates Eq (8). From the results, the exported product quality can be obtained as follows:

quality_ijmt = ε̂_ijmt/(σ−1)    (9)

where quality_ijmt is the quality of product j exported by enterprise i to country m in year t. We further standardize the product quality of Eq (9) as follows:

r_quality_ijmt = (quality_ijmt − minquality_j)/(maxquality_j − minquality_j)    (10)

where maxquality_j and minquality_j denote the maximum and minimum values of product quality for a given product j across all years, all enterprises, and all importing countries. The standardized product quality lies in the range of 0 to 1.
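The residual-based quality measure of Eqs (8)-(10) can be sketched in a few lines. The toy example below absorbs the importer-year term χ_mt by demeaning, takes the residual divided by (σ−1) as quality, and min-max normalizes within product. The IV step for price endogeneity is deliberately omitted, and σ and all data are illustrative assumptions:

```python
import numpy as np
import pandas as pd

sigma = 5.0  # assumed elasticity of substitution
df = pd.DataFrame({
    "importer_year": ["US2010", "US2010", "DE2010", "DE2010"],
    "product": [1, 1, 1, 1],
    "q": [100.0, 50.0, 80.0, 60.0],   # export quantity
    "p": [2.0, 3.0, 2.5, 2.8],        # export price
})

# Eq (8): ln q + sigma * ln p = chi_mt + eps; absorb chi_mt by group-demeaning.
y = np.log(df["q"]) + sigma * np.log(df["p"])
resid = y - y.groupby(df["importer_year"]).transform("mean")

# Eq (9): quality is the residual scaled by (sigma - 1).
df["quality"] = resid / (sigma - 1)

# Eq (10): min-max standardization within product.
grp = df.groupby("product")["quality"]
df["quality_std"] = (df["quality"] - grp.transform("min")) / (
    grp.transform("max") - grp.transform("min"))
```

In a real implementation the demeaning would be a regression on importer-year dummies (with the price instrumented), but the mechanics of Eqs (9) and (10) are as shown.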
Selection and measurement of digital transformation in an enterprise
This paper draws on the latest research [8,29,33] to measure the degree of digital transformation in an enterprise. First, we refer to the work of Wu et al. (2021) [8] to establish a relatively complete corpus for digitization. The specific corpus used to analyze the frequency of digitization-related words is shown in Table 1. Second, we use Python to analyze the Management Discussion and Analysis (MD&A) section of the annual reports of listed companies by extracting the frequency of digitization-related words. Finally, the frequency of these words in the annual reports of listed companies (with 1 added before taking the logarithm) is used as an indicator to measure the degree of digital transformation in an enterprise.
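The word-frequency index described above can be sketched as follows. The keyword list is a small illustrative subset standing in for the Table 1 corpus (the actual corpus is in Chinese and much larger), and the index is ln(1 + total frequency):

```python
import math
import re

# Illustrative stand-in for the Table 1 digitization corpus.
KEYWORDS = ["artificial intelligence", "big data", "cloud computing",
            "blockchain", "e-commerce", "internet of things"]

def dig_index(mdna_text: str) -> float:
    """Count digitization keyword occurrences in an MD&A section and
    return ln(1 + frequency), the paper's transformation indicator."""
    text = mdna_text.lower()
    freq = sum(len(re.findall(re.escape(kw), text)) for kw in KEYWORDS)
    return math.log(1 + freq)

sample = ("The company deepened its use of big data and cloud computing, "
          "and launched an e-commerce platform driven by big data.")
```

Here `sample` contains four keyword hits ("big data" twice, "cloud computing" once, "e-commerce" once), so the index is ln(5). A production version would also handle negations and Chinese-language matching.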
Artificial intelligence, blockchains, cloud computing, and big data form the basis of digital technology usage in Table 1. The underlying vocabulary serves as the statistical vocabulary for specific digital word frequencies, and the high-frequency vocabulary comprises the words with the most occurrences among all words. The terms artificial intelligence, digital currency, the Internet of Things, electronic credit, and e-commerce occur most frequently under the categories of artificial intelligence, blockchains, cloud computing, big data, and digital technology usage, respectively.
In accordance with the results measured from the export product quality indexes and the level of digital transformation among enterprises, this paper divides digital transformation into high and low categories according to the mean value for a stylized-facts analysis. The kernel density curve for the export product quality of highly digitized enterprises lies further to the right than that of enterprises with low digitization, meaning that highly digitized enterprises also have a higher export product quality. Hypothesis 1 is thus preliminarily verified: the export product quality of enterprises improves with their digital transformation.
The selection and measurement of mechanism variables
Mechanism analysis suggests that the digital transformation of an enterprise can improve export product quality through the two mechanisms of process productivity (φ) and product productivity (ξ). Among these, process productivity (φ) is the ability to produce an output using fewer variable inputs and is measured by total factor productivity. Considering the availability and completeness of the data, this study employs the Levinsohn-Petrin (LP) and fixed effects (FE) methods to estimate total factor productivity, in reference to Chen [57]. The other mechanism variable is product productivity (ξ), the ability to produce quality with fewer fixed outlays. The existing databases, however, lack the information needed to directly ascertain product productivity (ξ). Considering that R&D expenditure is a vital component of fixed expenditures and that higher R&D efficiency is associated with stronger product productivity, there is a strong positive correlation between the two [4,25,52]. Enhancing the product productivity of a firm therefore requires continuous R&D and innovation activities to improve its product quality. This paper measures product productivity (ξ) by the quantity of patent innovation (R&D_q) and innovation efficiency (R&D_e), calculated with reference to He et al. [58] as follows:

R&D_q_{i,t} = ln(1 + AppPatent_{i,t}),
R&D_e_{i,t} = GranPatent_{i,t}/Research_{i,t},

where AppPatent_{i,t} represents enterprise i's quantity of patent applications in year t, GranPatent_{i,t} represents enterprise i's quantity of granted patents in year t, and Research_{i,t} represents enterprise i's total R&D investment in year t.
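The two product-productivity proxies reduce to simple transformations of firm-year data. The sketch below follows the text's definitions directly; the exact functional forms in He et al. [58] may differ, so treat this as a hedged reconstruction:

```python
import math

def rd_quantity(app_patents: int) -> float:
    """Patent innovation quantity R&D_q = ln(1 + patent applications)."""
    return math.log(1 + app_patents)

def rd_efficiency(granted_patents: int, rd_investment: float) -> float:
    """Innovation efficiency R&D_e = granted patents / total R&D investment.
    rd_investment must be positive; units (e.g., millions of RMB) are an
    assumption left to the data source."""
    return granted_patents / rd_investment
```

The log in `rd_quantity` handles firm-years with zero applications gracefully (the proxy is 0), while `rd_efficiency` scales patent output by the fixed outlay that produced it, matching the "quality with fewer fixed outlays" interpretation of ξ.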
The selection and measurement of control variables
This paper selects a series of control variables that can impact the export product quality of an enterprise with reference to relevant studies [1,38,[59][60][61]: enterprise scale (lnes), described by the natural logarithm of the firm's total annual assets; the rate of return on total assets (rrt), represented by the net profit of the firm divided by the average balance of total assets; the management fee rate (mfr), measured by dividing the firm's management fee by its operating revenue; Tobin's Q (tq), which measures a firm's performance and growth by dividing the sum of the firm's total market value and total liabilities by its total assets; and government subsidies (lngs), represented by the amount of government subsidies received by an enterprise. The descriptive statistics of the specific variables are shown in Table 2.
Data sources
The data concerning export product quality in this paper were obtained from the China Customs Database. This paper follows Shi and Shao's [25] method of data processing to exclude samples of enterprises with incomplete information or unreasonable data (e.g., export amounts under 50 or export quantities less than 100) and samples of intermediary traders (enterprises whose names include words like trade, commerce, science and trade, economic and trade, or import and export). The elasticity of substitution between products must be estimated, and this paper uses the method of Broda and Weinstein [62].
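The cleaning rules above can be expressed as a short filter. The column names and English trader keywords below are illustrative assumptions (the actual firm names are Chinese); the thresholds follow the text:

```python
import pandas as pd

# English stand-ins for the Chinese intermediary-trader name keywords.
TRADER_WORDS = ["trade", "commerce", "science and trade",
                "economic and trade", "import and export"]

def clean_customs(df: pd.DataFrame) -> pd.DataFrame:
    """Drop records with export value < 50 or quantity < 100, and drop
    firms whose names flag them as intermediary traders."""
    keep = (df["export_value"] >= 50) & (df["export_qty"] >= 100)
    is_trader = df["firm_name"].str.lower().apply(
        lambda name: any(w in name for w in TRADER_WORDS))
    return df[keep & ~is_trader].reset_index(drop=True)

sample = pd.DataFrame({
    "firm_name": ["Acme Manufacturing", "Global Trade Co", "Beta Electronics"],
    "export_value": [500.0, 800.0, 30.0],
    "export_qty": [1000, 2000, 500],
})
cleaned = clean_customs(sample)
```

In the toy sample, the second row is dropped as an intermediary trader and the third for an export value under 50, leaving only the manufacturer.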
The financial data of listed companies used in this paper were obtained from databases such as CSMAR, INCOPAT, and CNRDS. Data regarding digital transformation were obtained from the annual reports of each company.
After matching the data concerning a firm's export product quality with the financial data of listed enterprises, we obtain a total study sample of 4,403 listed enterprises containing 111,345 observations at the firm-product-importing country-year level for the period of 2000 to 2015. With reference to common practices in the existing literature, the data were screened and the following were excluded: (1) samples with seriously missing core variables; (2) ST and *ST samples; and (3) samples of financial firms.
Model setting
The following benchmark model was constructed to examine the impact of digital transformation on the export product quality of enterprises:

quality_ijmt = α_0 + α_1 Dig_it + α_2 Controls_it + Year_t + Indus_i + u_ijmt    (11)

where subscript i represents different enterprises, j represents different products, m represents importing countries, and t represents time; quality_ijmt denotes the quality of product j exported by enterprise i to country m in year t, Dig_it denotes the degree of digital transformation of enterprise i in year t, and Controls_it is the control variable group that specifically contains enterprise scale, the rate of return on total assets, the management fee rate, Tobin's Q, and government subsidies. Year_t is the year fixed effect, Indus_i is the industry fixed effect, and u_ijmt is the random disturbance term. This paper draws on the studies of Zhou [63] and deHaan [64] to compare industry fixed effects with firm fixed effects, and the results suggest that industry fixed effects are superior to firm fixed effects.
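A minimal numpy sketch of estimating α_1 in Eq (11) on synthetic data. To keep it short, the two sets of fixed effects are absorbed by demeaning within combined year-industry cells (an assumption that approximates separate year and industry dummies); the true coefficient is set to 0.3 so the within estimator can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
year = rng.integers(0, 5, n)            # 5 synthetic years
indus = rng.integers(0, 10, n)          # 10 synthetic industries
cell = year * 10 + indus                # combined year-industry cell id

dig = rng.normal(size=n) + 0.1 * cell   # Dig correlated with the cells
fe = 0.05 * cell                        # cell-level fixed effect
quality = 0.3 * dig + fe + rng.normal(scale=0.1, size=n)  # true alpha_1 = 0.3

def demean(x, groups):
    """Subtract each observation's group mean (within transformation)."""
    sums = np.zeros(groups.max() + 1)
    np.add.at(sums, groups, x)
    counts = np.bincount(groups, minlength=groups.max() + 1)
    return x - (sums / counts)[groups]

y_t, d_t = demean(quality, cell), demean(dig, cell)
beta = (d_t @ y_t) / (d_t @ d_t)        # within estimator for alpha_1
```

Because the fixed effect is constant within each cell, demeaning removes it and `beta` recovers the true 0.3 up to sampling noise; the paper's actual estimation would add the control vector and cluster-robust standard errors.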
Baseline analysis
This paper examines the impact of digital transformation on the export product quality of firms, and the specific test results are displayed in Table 3, which presents the baseline regression results for the benchmark model (Eq 11). The results in column (1) of Table 3 indicate that the digital transformation of a firm can indeed improve its export product quality when year and industry fixed effects are not considered. The results in columns (2) and (3) of Table 3 demonstrate that digital transformation still has a significant positive impact on export product quality when each of these two effects is considered. The results in column (4) of Table 3 show that the explanatory variable coefficients remain significantly positive, which indicates that the digital transformation of enterprises can improve export product quality when year and industry fixed effects are both included in the model. This is consistent both with the above theoretical analysis of direct effects and with the findings of similar existing studies [45], thereby supporting Hypothesis 1.
Robustness test
To further confirm that the results of the benchmark regression are robust and credible, this paper conducts robustness tests in five respects: replacing the explanatory variable, adjusting the export product quality measure, considering regional effects, eliminating the impact of annual variation in industry product quality, and considering endogeneity.
Replacing explanatory variables
The explanatory variable in this paper is expressed through the frequency of specific words relating to digitization in the statements of firms. This method of measurement does not, however, consider a firm's investment in digital transformation. This paper therefore draws on the study of Song et al. [7] to measure digital transformation by the proportion of total intangible assets that are related to digitization. The test results are displayed in column (1) of Table 4.
Adjusting export product quality measures
The elasticity of substitution between products must be estimated in the process of measuring export product quality. This paper uses the method of Broda and Weinstein to calculate the elasticity of substitution between products in the baseline regression. Following the approach of Fan and Guo (2015) [65], an alternative elasticity of σ = 5 is employed to recompute the export product quality (A-quality) for robustness testing and to ensure the robustness of the estimation results. The results of the tests are presented in column (2) of Table 4.
Regional effects
Considering the variations in economic development and policy regulations across the different provinces where firms are located, this study tests robustness by incorporating controls for provincial and industry influences. The results are presented in column (3) of Table 4. Additionally, this study further controls for city and industry effects, given the possibility of city-level characteristics remaining constant over time. The results are presented in column (4) of Table 4.
Eliminating the impact of annual variations in industry product quality
Considering that trends in product quality may vary across different industries over time, it is possible for some industries to experience a decline in product quality and a larger proportion of poorly digitized firms while others see the reverse. While this study has already controlled for industry fixed effects in product quality, it is necessary to further account for annual changes in industry product quality. This paper therefore incorporates controls for year-industry interaction effects and regional effects to eliminate the impact of annual variations in industry product quality. The results of this analysis are presented in columns (5)-(7) of Table 4.
Considering endogeneity
This study investigates how the digital transformation of enterprises enhances export product quality, and a bi-directional causality may exist between digital transformation and export product quality. Specifically, enterprises with a higher export product quality may also have a stronger willingness to adopt digital technologies. This bi-directional causality may pose endogeneity challenges to the results of the benchmark model. Drawing on the relevant research [66], this paper employs fiber optic cable length as an instrumental variable for digital transformation and uses the two-stage least squares method for estimation. The primary reasons for selecting this instrumental variable are as follows: (1) Fiber optic cable length is the most vital aspect of digital infrastructure in the digital era [67]. The length of fiber optic cables represents the capacity for the high-speed and stable transmission of digital information. The demand for fiber optic cable routes increases in tandem with the demand for larger network bandwidth as enterprises undergo digital transformation, and the fact that cable length is so closely intertwined with the progress of this transformation satisfies the relevance condition. (2) Since China's fiber optic cables are primarily controlled and maintained by its four major state-owned network operators, this instrumental variable is unrelated to export product quality, satisfying the exogeneity assumption for the instrumental variable. The results of the endogeneity test in this paper are shown in column (8) of Table 4.
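The 2SLS logic can be illustrated on synthetic data: an unobserved confounder biases OLS upward, while instrumenting the endogenous regressor with an exogenous shifter (here a synthetic stand-in for fiber optic cable length) recovers the true effect. All variable names and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
fiber = rng.normal(size=n)                 # instrument: cable length (synthetic)
u = rng.normal(size=n)                     # unobserved confounder
dig = 0.8 * fiber + 0.5 * u + rng.normal(size=n)            # endogenous Dig
quality = 0.3 * dig + 0.5 * u + rng.normal(scale=0.5, size=n)  # true effect 0.3

def ols_slope(x, y):
    """Univariate OLS slope cov(x, y) / var(x)."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

beta_ols = ols_slope(dig, quality)         # biased upward by u
dig_hat = ols_slope(fiber, dig) * fiber    # stage 1: fitted Dig from instrument
beta_2sls = ols_slope(dig_hat, quality)    # stage 2: consistent for 0.3
```

Because `fiber` is uncorrelated with `u`, the fitted values `dig_hat` purge the endogenous variation and `beta_2sls` lands near 0.3, while `beta_ols` overshoots; the paper's actual estimation would include the control variables and fixed effects in both stages.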
Other robustness tests
This study also conducts a series of other robustness tests. Considering that factors such as the destination country and the trade mode adopted by the enterprise may also have an impact on the explained variable, we further control for these two effects to minimize any problems caused by the omission of important variables. We also consider that the export product quality of an enterprise may change over time, particularly in terms of trends like increasing product standards in the destination country. We therefore further control for the interaction effects of time-industry-export destination country to comprehensively address issues concerning omitted variables. The results are shown in columns (9) and (10) of Table 4.
The results in Table 4 reveal that the explanatory variable coefficients are significantly positive after a series of robustness tests, offering evidence that the primary findings in this paper are indeed robust.
Further analysis
Mechanism analysis. After empirically verifying the effect of digital transformation on export product quality, we further test the underlying influence mechanisms in this section. Based on the above theoretical analysis, the impact of digital transformation on export product quality can be realized through the two mechanisms of process productivity (φ) and product productivity (ξ).
To further ascertain whether process productivity (φ) and product productivity (ξ) present a mediation effect between digital transformation and quality, this paper draws on the logic of mechanism analysis [68][69][70][71][72], establishing Eq (12) to test the mechanisms by which the digital transformation of an enterprise improves its export product quality:

M_it = β_0 + β_1 Dig_it + β_2 Controls_it + Year_t + Indus_i + u_it    (12)
Where M denotes the mechanism variable and the meaning of the remaining variables is consistent with Eq (11). Considering the problem of endogeneity between the mechanism variables and the explanatory variable, the two-stage least squares method is used to empirically test the impact of digital transformation on the mechanism variables. This paper refers to recent research [73,74] in selecting the number of internet broadband access ports and domain names as instrumental variables, the primary reasons being as follows: (1) internet broadband access ports and domain names are two crucial elements in the integration of the real and digital economies and intuitively reflect the maturation of digital transformation, which satisfies the relevance condition; (2) internet broadband access ports and domain names are not directly correlated with export product quality, which satisfies the exogeneity assumption in the selection of instrumental variables. The specific test results are displayed in Table 5.
Table 5. Results of mechanism test.

The results in columns (1) and (2) of Table 5 show that the coefficients of the explanatory variable (Dig) are significantly positive, indicating that the digital transformation of an enterprise significantly improves its process productivity (φ). The results in columns (3) and (4) of Table 5 show that the coefficients of the explanatory variable (Dig) are significantly positive, which indicates that the digital transformation of an enterprise also has a significant positive impact on its product productivity (ξ). Numerous studies have also illustrated the pivotal role played by process productivity (φ) and product productivity (ξ) in improving export product quality [25,52,53,[75][76][77][78][79]]. With reference to these works, we can conclude that digital transformation does indeed improve the export product quality of an enterprise through the two mechanisms of process productivity (φ) and product productivity (ξ), proving Hypotheses 2 and 3.
Heterogeneity analysis. Enterprise heterogeneity. Considering that the heterogeneity of an enterprise is an integral component of its export product quality [1], it is reasonable to take this into account when studying how it is affected by digital transformation. We define enterprise heterogeneity in this study in terms of a firm's trade mode, corporate governance, and the industry to which it belongs. We then analyze this heterogeneity through a sub-sample test. First, digital transformation has different effects on the improvement of product quality under different trade modes. To test heterogeneity, the total sample in this study is divided into two subsamples of general and processing trade modes. Second, firms with different levels of corporate governance have different requirements concerning their management, leading to different applications and absorption capacities for digital technology and different levels of production and operation. Digital transformation can therefore affect the improvement of product quality in a variety of ways. This paper draws on the study of Zhou et al. [80] to test heterogeneity by dividing the sample of enterprises into strong and weak governance categories. Finally, products with different technical attributes contain different technologies, and digital transformation will improve product quality differently depending on the technical level of the industry to which the enterprise belongs. Drawing on the research of Brockman et al. [81], this study also tests heterogeneity by categorizing industries as either high-tech or low-tech intensive based on the median intensity of R&D in each industry. The specific test results are shown in Table 6.
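The subsample construction behind these heterogeneity tests amounts to a median split on a grouping characteristic. A minimal sketch with illustrative field names (not the paper's actual variables):

```python
def split_by_median(firms, key):
    """Split dict-like firm records into above-median and at-or-below-median
    subsamples on the attribute `key` (e.g. industry R&D intensity)."""
    values = sorted(f[key] for f in firms)
    n = len(values)
    median = (values[n // 2] if n % 2
              else 0.5 * (values[n // 2 - 1] + values[n // 2]))
    high = [f for f in firms if f[key] > median]
    low = [f for f in firms if f[key] <= median]
    return high, low

# Example: four firms with hypothetical R&D intensities
firms = [{"id": 1, "rd": 0.02}, {"id": 2, "rd": 0.08},
         {"id": 3, "rd": 0.15}, {"id": 4, "rd": 0.30}]
high_tech, low_tech = split_by_median(firms, "rd")
```

The regression of interest is then estimated separately on each subsample, and the coefficients on the digital transformation variable are compared across the two groups.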
The results in columns (1) and (2) of Table 6 indicate that the impact of digital transformation on export product quality is significantly higher in general trade enterprises than in those engaged in processing trade. This is because general trade enterprises are engaged in the complete range of activities along the value chain from R&D to sales. These firms have abundant resources in reserve and are highly capable of transformation, which leads to a greater demand for improvements in product quality. Processing trade enterprises are instead engaged mainly in low value-added activities like processing and product assembly where they are tasked with fulfilling orders. These enterprises have a relatively lower demand to improve production capacity and a limited means to transform, leading to a lower demand for improvements in product quality [2]. Digital transformation therefore has a significant effect on improving the export product quality of general trade enterprises. The results in columns (3) and (4) of Table 6 show that digital transformation also has a more pronounced effect on enterprises with strong corporate governance. One possible reason is that corporate governance can directly reflect a firm's capacity for technological integration and the allocation of resources. The stronger its capacity for corporate governance, the more a firm is able to integrate technology and resources to improve its export product quality. The results in columns (5) and (6) of Table 6 indicate that digital transformation affects export product quality more significantly in high-tech industries. This is because enterprises engaged in high-tech industries are technology-rich and have a stronger demand for high-end services such as R&D, design, and information, all of which are conducive to the improvement of their export product quality.
Heterogeneity in the digital transformation of enterprises. Enterprises vary in their endogenous motivations and capabilities concerning digital transformation. This study divides digital transformation into different levels relative to the average and examines their different effects on export product quality. An enterprise is considered to have a high level of digital transformation if it is above the average, and vice versa. The specific test results are shown in columns (1) and (2) of Table 7, underscoring that higher digital transformation does indeed result in greater improvements in export product quality.
Heterogeneity at the level of regional digital infrastructure. Considering that the current level of digital development varies greatly between regions in China, is the impact of digital transformation on a firm's export product quality also heterogeneous across regions? To this end, this paper refers to the work of Pan et al. [82] to construct indicators for the infrastructure of the digital economy and calculates the level of digital infrastructure in each province. A province is considered to have a high level of digital infrastructure if it is above the median level of all provinces, and vice versa. The specific test results are shown in columns (3) and (4) of Table 7. They indicate that a higher level of digital infrastructure in a province indeed corresponds to digital transformation having a greater effect on the export product quality of its firms.
Conclusions
Harnessing the power of digital technology has become the central driving force for the evolution and growth of different industries in the midst of the digital revolution. Leveraging the dividends of the digital economy to enhance export product quality and become more competitive in foreign trade will be crucial in reshaping the image of Made in China. This paper makes use of the data of listed companies in China from 2000 to 2015 to examine the influence of digital transformation on export product quality. It analyzes the endogenous determinants of export product quality from the theory of enterprise heterogeneity to explore the influence mechanisms of digital transformation. The main conclusions are as follows: First, the digital transformation of an enterprise significantly improves export product quality, and the empirical findings still hold after a series of robustness tests. Second, digital transformation improves export product quality through process productivity (φ) and product productivity (ξ). Third, the heterogeneity analysis finds that digital transformation affects export product quality the most for firms that are engaged in general trade, those in high-tech industries, and those with a stronger capacity for corporate governance. At the same time, differences in the surrounding digital infrastructure and in the level of digital transformation among enterprises also affect the manner in which digital transformation improves quality.
Policy implications
Based on the above findings, this paper points out the following policy implications concerning how firms may enhance the quality of their export products and help to foster a large trade nation: First, government should accelerate the digital transformation of enterprises and provide an environment that guarantees its realization. The research in this paper shows that a higher level of digital transformation corresponds to greater improvements in a firm's export product quality. The government should provide financial and policy support for firms that have already undergone digital transformation to promote further improvements while providing subsidies and underwriting measures for those that have not. This will promote digital transformation and improve the efficiency and capacity for quality production among enterprises, thereby upgrading the quality of their products. In addition, the analysis of the level of digital infrastructure across regions of China shows that this also affects how digital transformation improves export product quality. The government should gather investment and promote the construction of big data centers, 5G, and other forms of digital infrastructure regionally to bolster the digital transformation of enterprises and improve the quality of their export products. Second, enterprises should seize the opportunity for digital transformation to improve their efficiency and ability to innovate, making their products more competitive in overseas markets. The research in this paper shows that total factor production efficiency and the capacity for technological innovation are the primary means by which digital transformation improves the export product quality of an enterprise. Firms should therefore seize the opportunity provided by digital transformation to reorganize and optimize how innovation is managed, cultivating their ability to absorb and integrate various technical resources. They should also strengthen their use of digital technology in 
production, management, and sales while investing in artificial intelligence, robotics, interconnection platforms, intelligent sensors, and other intelligent equipment, as doing so will improve precision, efficiency, and the quality of their products.
Third, digital transformation should be targeted at improving the export product quality of different types of firms. Considering that differences in trade modes, technical attributes, and governance capabilities among firms change the way that digital transformation improves their export product quality, both the government and enterprises should take targeted measures to maximize these effects. The government should prioritize financial support to enterprises engaged in general trade and high-tech industries, and enterprises that have already realized digital transformation should strengthen their corporate governance. Low-tech enterprises should strengthen their expenditures in technological R&D and innovation, applying digital transformation to all aspects of their production and operations and seizing the opportunity to improve their export product quality.
Shortcomings and outlook
First, this paper has measured the export product quality of firms from an industry-wide perspective. As industries are highly varied, future research can further explore the factors influencing export product quality within each industry. Second, there may be differences in the mechanisms through which the various dimensions of digital transformation affect product quality, so future research could further explore the impact on export product quality at the level of each of these dimensions.
Table 1. Digitization dictionary (core vocabulary, high-frequency vocabulary, underlying vocabulary): Internet finance, Internet healthcare, financial technology, open banking, quantitative finance, digital finance, digital marketing, netlink, unmanned retail, mobile Internet, mobile payment, voice recognition, smart agriculture, smart wear, smart grid, smart contract, smart environmental protection, smart home, smart transportation, smart customer service, smart energy, smart investment, smart travel, smart medical, smart marketing, autonomous driving. Note: The ranking of vocabulary word frequency is in parentheses. https://doi.org/10.1371/journal.pone.0293461.t001

Fig 1. Kernel density map of export product quality for digital transformation. https://doi.org/10.1371/journal.pone.0293461.g001
"Business",
"Computer Science",
"Economics"
] |
Structural properties of high density lipoprotein subclasses homogeneous in protein composition and size.
We isolated native high density lipoprotein (HDL) subclasses homogeneous in size and in their protein content with the objective of investigating the differences and similarities in their apolipoprotein A-I (apoA-I) structures. Defined particles were isolated from ultracentrifugally prepared HDL by immunoaffinity and gel-filtration chromatography. The isolated 88-Å LpAI, 106-Å LpAI, and 96-Å LpAI/AII particles (LpAI, particles containing only apoA-I; LpAI/AII, particles containing apoA-I and apoA-II), together with a 93-Å reconstituted HDL, were analyzed for purity, composition, and content of apolipoprotein molecules per particle, and were examined by far- and near-ultraviolet circular dichroism and intrinsic fluorescence spectroscopic methods, as well as by reaction kinetics with lecithin:cholesterol acyltransferase. The spectroscopic analyses indicated that the secondary structures and three-dimensional arrangements of apoA-I in all these particles are remarkably similar: their tryptophan residues are located in similar nonpolar environments and become exposed to increasing concentrations of guanidine hydrochloride in comparable denaturation steps; the 60-65% alpha-helical structures in apoA-I are denatured in similar patterns at 0-5 M denaturant concentrations. However, increasing surface lipid contents and the presence of apoA-II stabilize apoA-I on the HDL particles. The reaction kinetics with lecithin:cholesterol acyltransferase are similar and slow for the isolated HDL particles, reflecting product inhibition and/or an apoA-I conformation that is unfavorable for the activation of the lecithin:cholesterol acyltransferase reaction.
High density lipoproteins (HDL) consist of a heterogeneous population of particles containing different types and amounts of apolipoproteins and lipids. They are mostly spherical particles ranging in diameter from 70-120 Å. Two main subclasses of HDL can be isolated by immunoaffinity chromatography; one of the subclasses contains both apolipoprotein A-I (apoA-I) and apolipoprotein A-II (apoA-II) (LpAI/AII), whereas the other subclass contains apoA-I, but no A-II (LpAI) (1)(2)(3)(4)(5)(6). Both of these subclasses of HDL include small amounts of other proteins and are heterogeneous in size. Several laboratories have investigated the metabolic behavior of these two heterogeneous HDL subclasses (7-11). In vitro studies of the binding of LpAI and LpAI/AII to various cells, and their ability to promote cholesterol efflux from cells enriched in cholesterol, have yielded conflicting results. For example, Barbaras et al. (8) found that LpAI and LpAI/AII particles bound equally well to preadipocytes loaded with cholesterol, but that only the LpAI particles promoted cholesterol efflux from the cells. In contrast, Johnson et al. (10), using different cells and experimental conditions, were unable to find a significant difference in the behavior of the two HDL subclasses as cholesterol acceptors. Metabolic studies in vivo by Rader et al. (11) have shown that the catabolic rate of apoA-I on the two types of particles is markedly different. Thus, because of the possible functional differences between the LpAI and LpAI/AII particles it is important to delineate their structural differences, particularly differences in the structure of apoA-I; however, to obtain unambiguous structural information, homogeneous particles are required.
During our investigations of reconstituted HDL (rHDL), we have found that apoA-I can exist in a few well defined conformational states in distinct rHDL particles (12)(13)(14). In discoidal rHDL containing apoA-I and palmitoyloleoylphosphatidylcholine (POPC), each apoA-I is arranged into 6 to 8 antiparallel α-helical segments joined by β turns or sheets (12, 15). The nonpolar side of the helices covers the edge of the lipid disc. The discrete diameters of these discoidal rHDL particles are defined by the number of apoA-I molecules per particle (two or more) (12, 13) and by the number of helices in each apoA-I that are in contact with the lipid. The structural differences in the apoA-I molecules were measured by fluorescence and circular dichroism spectroscopic methods, and the distinct structures of apoA-I were correlated with up to 15-fold differences in the reactivity of the rHDL with lecithin:cholesterol acyltransferase (LCAT) (12)(13)(14). We have also prepared a spherical rHDL of defined composition and size by reacting the discoidal rHDL with LCAT in the presence of low density lipoprotein (14). The 93-Å rHDL product of this reaction, which contains large amounts of cholesterol ester and appears round by electron microscopy, has three apoA-I molecules in a structure comparable to that in the discoidal rHDL precursors.
Since we have firmly established, by using the rHDL models, that apoA-I can exist in well defined conformational states which can determine the functional properties of the rHDL, we set out in this work to isolate native LpAI and LpAI/AII particles of uniform size, by immunoaffinity chromatography and gel filtration, with the objective of characterizing their structures and establishing whether apoA-I exists in different conformations in the different native particles.
MATERIALS AND METHODS
Apolipoprotein A-I, LDL, and LCAT were prepared by routine methods (16-18) from human plasma donated by the Champaign County Blood Bank-Regional Health Resource Center. The purity of apoA-I and LCAT was checked by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (PAGE), and the gels were stained with either Coomassie Blue or silver stain using the Phast System (Pharmacia LKB Biotechnology). Sodium cholate, crystalline cholesterol, POPC, and bovine serum albumin (BSA) were purchased from Sigma; [4-¹⁴C]cholesterol was purchased from Du Pont-New England Nuclear; guanidine hydrochloride (GdnHCl) was the electrophoresis-grade product from Fisher; bis(sulfosuccinimidyl)suberate (BS3) was purchased from Pierce.
Preparation of Anti-AI and Anti-AII Immunosorbents-Monoclonal antibodies against apoA-I and apoA-II were prepared by intraperitoneal immunization of BALB/c mice with intact HDL3 and characterized as previously described (19). A mixture (100 mg) of three different monoclonal antibodies (A05, A17, and A30), which recognize all forms of plasma apoA-I, was covalently coupled to CNBr-activated Sepharose 4B (Pharmacia LKB) as instructed by the manufacturer, except that a ratio of 3.5 mg of antibodies/g of gel was used.
In an identical fashion, a mixture (100 mg) of three different monoclonal antibodies (G03, G05, and G11), which recognize all forms of plasma apoA-II, was coupled to CNBr-Sepharose (3.5 mg/g gel). The immunosorbents were packed into borosilicate glass columns of 5-cm internal diameter.
Preparation of Native HDL and rHDL-LpAI and LpAI/AII were isolated using a modification of the Cheung and Albers method (3). Human HDL (d = 1.063-1.21 g/ml) obtained by ultracentrifugal flotation was used in a two-step immunoaffinity chromatography procedure at 4 °C. Typically, 2.5 mg of HDL protein was injected on the anti-apoA-II column connected in series with an anti-apoA-I column at a slow flow rate (0.2 ml/min) during a 15-h period. Then the individual columns were extensively washed (2 ml/min) and each retained fraction, LpAI and LpAI/AII, was eluted with 20 ml of 3 M NaSCN. The eluted fractions were immediately desalted on Sephadex G-25 columns at a flow rate of 2 ml/min in order to minimize the time of contact with NaSCN. In all cases, more than 75% of the protein applied to the columns was retained by the immunoaffinity columns, and essentially all of the retained HDL protein was eluted with the NaSCN and recovered after the desalting step. The LpAI and LpAI/AII were concentrated using Centriprep 10 concentrators (Amicon) to 1 mg/ml and finally applied to a Superose 6 gel filtration column for further fractionation by size using a Pharmacia FPLC system. The losses of HDL protein on the Superose 6 column were insignificant, but each of the fractions with uniform sizes used in the subsequent experiments represented only 10-20% of the HDL protein applied to this column. The samples were stored in a standard 10 mM Tris-HCl, pH 8.0, buffer containing 0.15 M NaCl, 0.01% EDTA, and 1 mM NaN3. This buffer was used for all the experiments performed in this study unless otherwise specified.
In a separate experiment, we tested the effects of NaSCN on HDL particle size distribution and reactivity with LCAT. High density lipoprotein (2.5 mg of protein) was incubated with 3 M NaSCN for 3 h at 4 °C. After removal of the NaSCN by dialysis, the particle size distribution for exposed and unexposed HDL was shown to be essentially identical by gradient gel electrophoresis (method described below). Also, the reaction kinetics with LCAT (described below) were found to be the same for HDL which had been exposed or not exposed to NaSCN. Therefore, we concluded that the effects of this salt on the structural and functional properties of HDL are insignificant as judged by these sensitive tests.
The 93-Å spherical reconstituted HDL (rHDL) was synthesized using the sodium cholate dialysis method (20, 21). An rHDL mixture containing 5 mg of apoA-I was prepared from a molar ratio of 80:8:1:80 of POPC/cholesterol/apoA-I/sodium cholate. This mixture of rHDL was incubated with LDL (1:2 weight ratio of rHDL to LDL protein) in the presence of 12 mg/ml BSA, 4 mM β-mercaptoethanol, and 20 µg of LCAT at 37 °C for 48 h. Ultracentrifugal flotation was performed first at a density of 1.070 g/ml for two 24-h periods in order to remove the LDL, and then at a density of 1.21 g/ml for 24 h to float the rHDL. Residual LDL and small amounts of a 78-Å particle were removed by gel filtration on a Superose 6 column as was previously described (14).
Electrophoretic Analyses of the rHDL Particle and the Native HDL Subfractions-The Stokes diameters of the rHDL and native HDL subfractions isolated after immunoaffinity and gel filtration chromatography were determined by nondenaturing gradient polyacrylamide gel electrophoresis (GGE) on Pharmacia precast PAA 4/30 gels as described (22). The following proteins were used as standards for the Stokes diameters of the HDL particles: BSA, 71 Å; lactate dehydrogenase, 82 Å; horse ferritin, 122 Å; and thyroglobulin, 170 Å.
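The Stokes diameter determination from these standards rests on the roughly linear relationship between log(diameter) and migration distance on a gradient gel. A sketch of that calibration; the migration distances below are made-up illustrative values, since the real ones would be read off the gel:

```python
import numpy as np

# Protein standards from the text (Stokes diameters in Å) paired with
# hypothetical migration distances in arbitrary units (assumed values).
standard_diameters = np.array([71.0, 82.0, 122.0, 170.0])  # BSA, LDH, ferritin, thyroglobulin
migration = np.array([10.0, 9.1, 6.2, 3.8])                # assumed example readings

# Linear fit of log10(diameter) against migration distance
slope, intercept = np.polyfit(migration, np.log10(standard_diameters), 1)

def stokes_diameter(m):
    """Interpolated Stokes diameter (Å) for a band migrating a distance m."""
    return 10.0 ** (slope * m + intercept)
```

An unknown HDL band's migration distance is then converted to a diameter by interpolation on this calibration line; smaller particles migrate farther, so the fitted slope is negative.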
The apolipoprotein composition of the native HDL subfractions was determined by SDS-PAGE and silver staining on 20% gel slabs.
Lipoprotein Analysis-The protein concentration was determined using the method of Lowry et al. (23) and the absorbance at 280 nm with an extinction coefficient of 1.13 ml/mg·cm for the rHDL and LpAI particles and an estimated extinction coefficient of 0.96 ml/mg·cm for the LpAI/AII particles. The last extinction coefficient was estimated by assuming that the LpAI/AII particle contains an equimolar quantity of apoA-I and apoA-II and using molar extinction coefficients of 31,700 M⁻¹ cm⁻¹ and 12,000 M⁻¹ cm⁻¹ (24) and molecular weights of 28,000 and 17,500 for apoA-I and apoA-II, respectively. The content of apoA-I and apoA-II in LpAI/AII was also determined by an enzyme-linked immunosorbent assay (25). The Chen et al. method (26) was used to determine the phospholipid content, extending the standard curve down to 0.16 µg of inorganic phosphate. Total cholesterol was determined using the enzymatic assay of Heider and Boyett (27).
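The estimated LpAI/AII extinction coefficient follows directly from the equimolar assumption: the mass extinction coefficient of the particle is the sum of the molar coefficients divided by the combined molecular weights. A quick check of that arithmetic:

```python
# Molar extinction coefficients (M^-1 cm^-1) and molecular weights (g/mol)
# for apoA-I and apoA-II, as given in the text.
eps_AI, eps_AII = 31700.0, 12000.0
mw_AI, mw_AII = 28000.0, 17500.0

# Equimolar mixture: mass extinction coefficient in ml/(mg·cm)
eps_mass = (eps_AI + eps_AII) / (mw_AI + mw_AII)
# eps_mass ≈ 0.96, matching the value used for the LpAI/AII particles
```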
The number of apoA-I and apoA-II molecules per particle for the native HDL subfractions was determined by cross-linking with 10 mM BS3 for 3.5 h in a modified version of the Staros technique (28).
The cross-linking reaction mixture was quenched using 250 mM ethanolamine and subsequently run on SDS-PAGE (10-30%). Free apoA-I and free apoA-II, as well as a 1:1 weight ratio of these two apolipoproteins, were also subjected to cross-linking with BS3 and SDS-PAGE in order to serve as standards. In addition, mixtures of apoA-I and apoA-II in different molar ratios were run on the SDS-PAGE gel along with the 96-Å LpAI/AII particle. The intensity of the stained bands was quantitatively assessed using a Pharmacia-LKB Ultro Scan XL laser densitometer. The observed intensities indicated that apoA-I and apoA-II were present in a 1:1 molar ratio on the 96-Å LpAI/AII. The 93-Å spherical rHDL has been previously shown to contain 3 apoA-I molecules per particle (14).
Fluorescence Spectroscopy-Uncorrected tryptophan (Trp) fluorescence spectra were measured with a Perkin-Elmer MPF-66 fluorescence spectrophotometer at 25 °C. An excitation wavelength of 280 nm and 4-nm slit widths were used. The samples were adjusted to an absorbance value of approximately 0.05 at 280 nm with standard salt buffer, which was shown to contribute little to the spectral region of interest, 330-355 nm. The same samples were also used to measure the intrinsic fluorescence polarization values at 25 °C with an SLM model 400 fluorescence polarization instrument using 280-nm exciting light, 4-nm slit widths, and Corning glass 0-54 emission filters. Denaturation with solid GdnHCl was monitored by following the change in the wavelength of maximum fluorescence of the Trp spectra. Solid GdnHCl was added to 0.7 ml of free apoA-I, the 93-Å rHDL, or the native HDL subfractions directly in the cuvette in a sequential manner. The concentrations of GdnHCl ranged from 0.72 to 6.37 M. The time required for each addition of GdnHCl, mixing, and recording of spectra was approximately 3 min. This denaturation experiment was performed on two separate preparations of the 93-Å rHDL and the native HDL subclasses, giving very similar results both times.
Circular Dichroism Measurements-Circular dichroic spectra were recorded with a Model CD6 Jobin Yvon, ISA (Longjumeau, France) spectropolarimeter at the Laboratory for Fluorescence Dynamics, University of Illinois, Urbana. In the far-ultraviolet region (200-250 nm), measurements were made in a 1-mm path length quartz cell. The rHDL and HDL samples were adjusted to an absorbance value at 280 nm of 0.1 for the denaturation experiments and the determination of the percentage of α-helicity. The mean residue ellipticity, in units of deg·cm²·dmol⁻¹, was calculated from the following equation: [θ]λ = (θλ × MRW)/(10 × l × c), where θλ is the observed ellipticity in degrees at wavelength λ, l is the optical path in cm, c is the concentration in g/ml, and MRW is the mean residue weight. We used a value of 115 g/residue for the rHDL and LpAI subfractions and a value of 114.6 g/residue for the single purified subfraction of LpAI/AII (assuming that apoA-I and apoA-II are equimolar in this particle). The percentage of α-helicity was calculated from the empirical expression of Chen et al. (29). Denaturation with GdnHCl was also monitored using the change in the observed ellipticity at 222 nm. The conditions used were the same as those in the fluorescence denaturation experiment described above, except that the absorbance at 280 nm of the samples was adjusted to 0.1. This experiment was repeated on two to three separate preparations of particles with very similar results.
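These two CD calculations can be expressed compactly. The mean residue ellipticity formula is restated from the text; the exact constants of the Chen et al. expression are not reproduced in this excerpt, so the widely used form [θ]222 = -30,300·fH - 2,340 is assumed here:

```python
def mean_residue_ellipticity(theta_obs_deg, path_cm, conc_g_per_ml, mrw=115.0):
    """[theta] = theta_obs * MRW / (10 * l * c), in deg·cm²·dmol⁻¹."""
    return theta_obs_deg * mrw / (10.0 * path_cm * conc_g_per_ml)

def percent_alpha_helix(mre_222):
    """Helix estimate from the mean residue ellipticity at 222 nm, assuming
    the Chen et al.-style relation [theta]222 = -30300*fH - 2340."""
    return 100.0 * (-(mre_222 + 2340.0) / 30300.0)
```

With these assumed constants, an [θ]222 of about -19,000 deg·cm²·dmol⁻¹ corresponds to roughly 55% helix, consistent with the 55-65% range reported for free apoA-I and the rHDL below.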
For the CD measurements in the near-ultraviolet region (250-320 nm), a 10-mm path length quartz cell was used, and the optical density of all samples was adjusted to approximately 0.2 at 280 nm. The near-ultraviolet CD spectra are the average of eight scans. Base-line runs were made prior to each sample run, and the base-line was subtracted to obtain the final spectrum. The spectra were reproducible for at least two separate preparations of particles.
LCAT Reaction Kinetics-The rHDL and native HDL particles were incubated with aliquots of [¹⁴C]cholesterol dispersed in 2% BSA containing approximately 4 × 10⁷ cpm for 3 h at 37 °C, in order to label the substrate particles (30). The reaction mixtures for the kinetic analysis contained substrate concentrations ranging from 8 × 10⁻⁷ to 2 × 10⁻⁶ M apoA-I or apoA-I plus apoA-II, 2 mg of defatted BSA, 4 mM β-mercaptoethanol, and 0.4 or 0.5 µg of pure LCAT. The mixtures were incubated at 37 °C for 2 h. Lineweaver-Burk plots were constructed from data for four particle concentrations and the corresponding initial velocity results. The inverse slope of the Lineweaver-Burk plot gives Vmax(app)/Km(app), an indicator of the overall reactivity of the particles. The Vmax(app)/Km(app) was adjusted for any differences in enzyme concentration.
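The kinetic readout, Vmax(app)/Km(app) from the inverse slope of a Lineweaver-Burk plot, can be sketched as follows; the function name and the synthetic data are illustrative, not from the paper:

```python
import numpy as np

def vmax_over_km(substrate_conc, velocity):
    """Fit the Lineweaver-Burk line 1/v = (Km/Vmax)*(1/[S]) + 1/Vmax and
    return Vmax(app)/Km(app), i.e. the inverse of the fitted slope."""
    inv_s = 1.0 / np.asarray(substrate_conc, dtype=float)
    inv_v = 1.0 / np.asarray(velocity, dtype=float)
    slope, _intercept = np.polyfit(inv_s, inv_v, 1)
    return 1.0 / slope

# Synthetic Michaelis-Menten data (Vmax = 10, Km = 2, arbitrary units),
# four substrate concentrations as in the experiment described above
s = np.array([1.0, 2.0, 4.0, 8.0])
v = 10.0 * s / (2.0 + s)
ratio = vmax_over_km(s, v)   # recovers Vmax/Km = 5
```

Because the double-reciprocal transform makes Michaelis-Menten data exactly linear, the fit recovers the true Vmax/Km here; with real initial-velocity data the fit averages out measurement noise.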
RESULTS
This work represents one of the first attempts to isolate and characterize distinct size classes within the two main fractions of HDL, LpAI and LpAI/AII. Fig. 1 shows a photograph of a 4-30% polyacrylamide gradient gel containing the LpAI and LpAI/AII fractions isolated by gel filtration. Panel A shows that LpAI has been successfully separated into two discrete sizes of 106 and 88 Å as determined by gradient gel electrophoresis. These particles are free of contaminating apoA-I, which appears as a double band at the bottom of the gel. Panel B illustrates the fractionation of the LpAI/AII. There appear to be three main size classes in this fraction including 96-, 87-, and 80-Å particles. Lane F represents the 96-Å size class which was purified and used in subsequent studies. The smaller particles shown in lane E were not effectively separated. Similar size classes of HDL have been observed in other laboratories starting with either plasma or ultracentrifugally isolated HDL in the immunoaffinity chromatography separation procedure (3,6,(31)(32)(33)). Cheung and Albers (3) reported two mean Stokes diameters for LpAI of 10.8 and 8.5 nm and three mean Stokes diameters for LpAI/AII including 9.6, 8.9, and 8.0 nm, which agree very well with our results.
The photograph in Fig. 2 shows the apolipoprotein composition of the isolated subfractions. The minor apolipoproteins, which were largely eliminated by the gel filtration step, were not investigated, but should include the C apolipoproteins and apoE. Table I lists the sizes of the native HDL subfractions and the reconstituted rHDL sphere as determined by GGE. The 93-Å spherical rHDL has been previously synthesized and characterized in our laboratory (14). Table I also contains the composition of the native and synthetic particles, as well as the number of molecules of apoA-I and apoA-II determined by cross-linking with BS3.
The cross-linking results indicate that the LpAI 88-Å particle contains 3 apoA-I molecules while the 106-Å particle contains 4 apoA-I. The cross-linking of the 96-Å LpAI/AII gives a cross-linked protein with a migration equivalent to 2 apoA-I plus 2 apoA-II molecules or 3 apoA-I plus 1 apoA-II molecule on SDS-PAGE. In order to determine which of these ratios was correct, the intensity of protein staining for the 96-Å LpAI/AII particle versus that of standard molar ratios of apoA-I and apoA-II was determined on an SDS-polyacrylamide gel. A molar ratio of 1:1 (apoA-I:apoA-II) was found. Furthermore, enzyme-linked immunosorbent assays of this LpAI/AII fraction gave similar apoA-I and apoA-II concentrations: apoA-I = 3.21 µM and apoA-II = 2.94 µM. Thus, based on these results, it was concluded that the 96-Å LpAI/AII particle contains 2 apoA-I molecules and 2 apoA-II molecules per particle.

Table I footnotes: The diameters were obtained by GGE (±2 Å). Phospholipid (PL), total cholesterol (TC), and protein (Prot) were determined in duplicate for a representative preparation. The apolipoprotein stoichiometries were determined by cross-linking the apolipoproteins with bis(sulfosuccinimidyl)suberate (28) and analysis by SDS-PAGE. The previously determined composition for a similar 93-Å rHDL particle was 4412411 (14). An estimated extinction coefficient of 0.96 ml/mg·cm and the combined molecular weights of apoA-I and apoA-II were used in calculating the composition ratios, assuming two protein units (apoA-I plus apoA-II) per particle.
From the determined stoichiometries and the volumes of the components, the diameter of the particles as spheres was calculated. Ratios of cholesterol ester to free cholesterol were taken from James et al. (34), who subjected HDL2 and HDL3 to a similar immunoaffinity chromatography fractionation and determined these lipid ratios for LpAI and LpAI/AII found in both the HDL2 and HDL3 subclasses. A difference in composition from the previously characterized particle (14) probably accounts for the 3-Å difference in diameter. The diameters calculated for the native HDL subfractions are all smaller than the sizes determined by gradient gel electrophoresis. These differences in size could be due, in part, to the triglycerides present in the native HDL.
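The sphere-diameter calculation described above can be sketched as follows; the total component volume used here is an illustrative placeholder, not a value from Table I:

```python
import math

# Hedged sketch: summed component volumes (lipid + protein) are converted to a
# diameter by treating the particle as a sphere, V = (pi/6) d^3.
def sphere_diameter(total_volume_A3):
    return (6.0 * total_volume_A3 / math.pi) ** (1.0 / 3.0)

# e.g. ~421,000 Å^3 of summed component volume corresponds to a ~93 Å sphere
d = sphere_diameter(421_000)
```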
Apolipoprotein Conformation Based on Spectroscopic Results - Table II contains the results from the spectroscopic studies on the 93-Å rHDL, the native HDL subfractions, and free apoA-I. The wavelength of maximum intrinsic fluorescence indicates the relative polarity of the Trp residues in the apoA-I molecules. Since apoA-II does not contain any Trp residues, only the apoA-I Trp fluorescence is detected in the LpAI/AII particles. Because the wavelength values range from 331 to 334 nm, the Trp residues must reside in a fairly nonpolar environment. It seems that the native HDL subfractions, mainly the 106-Å LpAI and 96-Å LpAI/AII particles, may have their Trp residues in slightly more nonpolar environments, especially compared to free apoA-I. The intrinsic fluorescence polarization values reflect the segmental motions of the Trp residues and their fluorescence lifetimes. As previously observed, the free apoA-I has the highest polarization value. The 93-Å rHDL and the native HDL subfractions seem to be similar in the dynamic behavior of their tryptophan residues.
The α-helical content is 55% for the free apoA-I and 65% for the 93-Å rHDL. These values are within the expected error of the results published previously by our laboratory (12, 14). The LpAI subfractions are quite similar in their α-helical content, while the 96-Å LpAI/AII particle has much lower α-helicity. This lower α-helicity is due to the presence of apoA-II, since it is well established that the percentage of α-helix is lower in apoA-II than apoA-I (37, 38). Stoffel and Preissner (38) have shown that apoA-II in aqueous solution has 27% α-helical structure and, upon recombination with phospholipids, increases in helix content to 40%.
Denaturation Studies with Guanidine Hydrochloride - Fig. 3 shows the results of GdnHCl denaturation of free apoA-I, the 93-Å rHDL, and the native HDL subfractions, monitored by the change in the wavelength of maximum fluorescence. Free apoA-I denatures quite readily in 2 M GdnHCl, as expected (12, 14, 39, 40). The results in Table II indicate that 50% denaturation has occurred at a 1.06 M GdnHCl concentration. The 88-Å LpAI subfraction is more readily denatured than the 106-Å LpAI and 96-Å LpAI/AII subfractions. It is possible that the apoA-II on the LpAI/AII subfraction stabilizes the structure of apoA-I. The 93-Å rHDL is the most stable particle, requiring 4.8 M GdnHCl in order to attain 50% denaturation (Table II). From the plateaus in the denaturation curves one may speculate that the different regions of apoA-I containing the 4 Trp residues are denaturing independently from each other.
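Reading a 50% denaturation midpoint off a wavelength-of-maximum-fluorescence curve of this kind can be sketched as below; the data points are hypothetical, chosen only to land near the 1.06 M value quoted for free apoA-I, and are not the measured values behind Fig. 3:

```python
# Fraction denatured from the red shift of the fluorescence maximum, assuming a
# two-state transition between a folded and a fully unfolded wavelength.
def fraction_denatured(lam, lam_native, lam_unfolded):
    return (lam - lam_native) / (lam_unfolded - lam_native)

def midpoint(gdn, lam, lam_native, lam_unfolded):
    # linear interpolation of the [GdnHCl] at which the denatured fraction crosses 0.5
    fs = [fraction_denatured(x, lam_native, lam_unfolded) for x in lam]
    pairs = list(zip(gdn, fs))
    for (g0, f0), (g1, f1) in zip(pairs, pairs[1:]):
        if f0 <= 0.5 <= f1:
            return g0 + (0.5 - f0) * (g1 - g0) / (f1 - f0)

gdn = [0.0, 0.5, 1.0, 1.5, 2.0]        # M GdnHCl
lam = [331, 334, 341, 350, 353]        # nm, hypothetical free apoA-I curve
c_half = midpoint(gdn, lam, 331, 353)  # ~1.06 M
```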
The results of the denaturation with the same concentrations of GdnHCl, monitored by circular dichroism, are given in Fig. 4. It appears that the loss of secondary structure followed by the change in the ellipticity does not occur at the same rate as the change in the wavelength of maximum fluorescence. Fifty percent of the secondary structure is lost at GdnHCl concentrations less than 2 M for the native HDL subfractions. Again, the 93-Å rHDL appears more resistant to denaturation by GdnHCl, since it requires greater concentrations of GdnHCl to reach the 50% denaturation point.
Near-ultraviolet Circular Dichroism - The near-ultraviolet (250-320 nm) CD spectra (Fig. 5) were normalized to an optical density of 0.1 (at 280 nm) for the three native HDL subclasses, the total HDL, the 93-Å rHDL, and the free apoA-I. Two peaks at 284 and 291 nm are evident for the different native HDL and rHDL subfractions. The maximum at 284 nm for the 88-Å LpAI is weaker. One minimum at 296 nm is observed for all of these particles, and a broad negative band with several extrema between 260 and 280 nm is observed for the particles that contain only apoA-I. A strong minimum at 272 nm characterizes the spectrum of the 96-Å LpAI/AII particle. The spectra of the 96-Å LpAI/AII and the total HDL (1.063 g/ml < d < 1.21 g/ml) are very similar, except that the strong minimum is blue-shifted to 269 nm for the total HDL. The spectrum of the free apoA-I is quite different. The peaks observed for the HDL particles at 291 and 284 nm are replaced by a minimum at 293 nm and a shoulder at 287 nm, and the minimum at 296 nm is no longer present. Also, a positive plateau of low ellipticity replaces the negative band centered between 262 and 272 nm. As the extracted lipids of HDL have no optical activity between 250 and 320 nm, the observed differences in the spectra can be attributed to differences in protein conformation (37). Fluorescence polarization values were measured at 25 °C, with 280 nm as the excitation wavelength; the errors are ±0.003.
TABLE II. Fluorescence spectroscopy, circular dichroism, and GdnHCl denaturation results on free apoA-I, 93-Å rHDL, and native HDL subclasses
The % α-helix content was estimated from the empirical formula of Chen et al. (29) and molar ellipticity values at 222 nm. The Vmax(app)/Km(app) values were determined from the inverse slopes of Lineweaver-Burk plots. The values shown are similar for the different particles and are very low compared to discoidal rHDL substrates of LCAT (12, 14). The 106-Å LpAI particles appear to be the least reactive and the 96-Å LpAI/AII the most reactive particles in these preparations.
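The Chen et al. single-wavelength estimate cited in the footnote can be sketched as follows. A commonly quoted form of the relation is [θ]222 = -30300·fH - 2340 (deg cm² dmol⁻¹), so fH = -([θ]222 + 2340)/30300; the ellipticity value used below is illustrative, not a measured one:

```python
# Helix fraction from mean residue ellipticity at 222 nm (Chen et al. form,
# assumed here as [theta]222 = -30300*fH - 2340).
def helix_fraction(theta_222):
    return -(theta_222 + 2340.0) / 30300.0

fh = helix_fraction(-22000.0)   # ~0.65, comparable to the 65% quoted for the rHDL
```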
DISCUSSION
Prior to structural analysis of the three isolated subclasses of HDL, we needed to establish their homogeneity. They were shown (Fig. 1) to be quite homogeneous in size by nondenaturing GGE. Further proof of the homogeneity of the purified particles was the detection of a single band upon cross-linking of the proteins followed by analysis on SDS-PAGE. By SDS-PAGE we have also established that the LpAI fraction was completely depleted in apoA-II (Fig. 2, lane E) and that apoA-I represented greater than 95% of the proteins contained in the two size subclasses of LpAI (88 and 106 Å). By the same technique apoA-I and apoA-II have been shown to represent greater than 90% of the proteins in 96-Å LpAI/AII particles.
The sizes given in Table I are similar to those reported by other laboratories (3, 6, 31-33). However, comparisons between our compositional data and those of other laboratories are not easily accomplished, since we have isolated pure size classes within the LpAI and LpAI/AII fractions where most other research groups have not. Only Nichols et al. (3) have reported the composition of a purified 10.6-nm LpAI size subclass. Our 106-Å LpAI subfraction contains a similar amount of phospholipid, but less total cholesterol and more protein. These differences in composition may be due to differences in the homogeneity of the isolated particles and to the inherent ability of a fixed protein framework to contain more or less lipid, as shown for discoidal rHDL particles (14). The number of apoA-I molecules per LpAI 106-Å particle agrees with the results of Nichols et al. (41) showing that 4 apoA-I molecules per particle are present on adult HDL2b LpAI. We found 2 apoA-I and 2 apoA-II molecules per particle (i.e. a 1:1 molar ratio) within the 96-Å LpAI/AII subfraction. Other investigators have determined this molar ratio to range from 0.8:1 to 3:1 for ultracentrifugal fractions of HDL (31). The Vmax(app)/Km(app) was obtained from Lineweaver-Burk analysis of the initial velocity versus apolipoprotein concentration data. The results are the average of two experiments, each performed with four apolipoprotein concentrations ranging from 8 × 10⁻? to 2 × 10⁻⁷ M.
The correlation coefficients from linear regression analysis were between 0.987 and 0.999. The apolipoprotein concentration was calculated from the combined extinction coefficient and molecular weights for apoA-I and apoA-II, for a particle containing equimolar amounts of the two apolipoproteins.
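The Lineweaver-Burk analysis described above can be sketched as below: 1/v is linear in 1/[S] with slope Km/Vmax, so the inverse slope gives Vmax(app)/Km(app). The kinetic constants here are invented for illustration, not the paper's data:

```python
# Recover Vmax/Km from the inverse slope of a double-reciprocal plot.
vmax, km = 10.0, 2.0                     # hypothetical Michaelis-Menten constants
s = [1.0, 2.0, 4.0, 8.0]                 # four substrate concentrations
v = [vmax * si / (km + si) for si in s]  # Michaelis-Menten initial velocities

xs = [1.0 / si for si in s]              # 1/[S]
ys = [1.0 / vi for vi in v]              # 1/v
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# least-squares slope of 1/v versus 1/[S]
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
vmax_over_km = 1.0 / slope               # inverse slope recovers Vmax/Km = 5.0
```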
The well-defined, highly reproducible sizes of the isolated LpAI subclasses and the 96-Å LpAI/AII particles suggest the existence of fixed protein frameworks which determine the diameters of the different HDL subclasses. Close packing of the proteins on the surface of the particles is also suggested by the cross-linking results and the protein and lipid ratios on their surfaces. The efficient chemical cross-linking (with BS3) of the 3 and 4 apoA-I molecules on the 88-Å LpAI and 106-Å LpAI particles, respectively, and 2 apoA-I plus 2 apoA-II on the 96-Å LpAI/AII particles, indicates proximity between the protein monomers. Assuming a surface area around 4000 Å² for apoA-I (36), the percent surface area occupied by protein is about 50 and 45% for the 88- and 106-Å LpAI particles, respectively. This, together with the lipid/protein ratios, implies that protein-protein inter- and intramolecular contacts must be extensive. Assuming the extreme case where each α-helical segment of apoA-I is surrounded by phospholipids (8 α-helical segments, each with 90 Å of periphery in contact with phospholipids, which occupy a surface area of 68 Å² (36) and 8.2 Å of linear distance), one can calculate that each apoA-I would be surrounded by 88 molecules of phospholipid. Since the LpAI particles contain only 20-23 molecules of phospholipid per apoA-I, it is clear that most of the α-helical segments of apoA-I must be in contact with each other in the protein monomers, and that some regions of the protein monomers must be adjacent to other monomers to account for the small amounts of surface lipids.
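The surface-coverage and lipid-ring arithmetic above checks out numerically; a short sketch using the values cited in the text (4000 Å² apoA-I footprint, 90 Å helix periphery, 8.2 Å lipid width from ref. 36):

```python
import math

# Fraction of a spherical particle's surface covered by n apoA-I molecules.
def pct_protein_surface(diameter_A, n_apoa1, area_per_apoa1=4000.0):
    sphere_area = math.pi * diameter_A ** 2   # sphere surface = pi * d^2
    return 100.0 * n_apoa1 * area_per_apoa1 / sphere_area

pct_88 = pct_protein_surface(88, 3)    # ~49%, "about 50%" in the text
pct_106 = pct_protein_surface(106, 4)  # ~45%

# phospholipids needed to ring all 8 helical segments of one apoA-I
lipids = 8 * 90.0 / 8.2                # ~88 molecules, as stated
```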
The present 93-Å rHDL particle preparation contains a similar amount of phospholipid compared to our previous ones, but a substantially lower amount of cholesterol (14). Compared with the native HDL subclasses, the rHDL contains considerably more phospholipid and less total cholesterol. In the past, we considered the 93-Å rHDL to be a good model for mature HDL in the plasma compartment; however, due to this difference in lipid composition, the 93-Å rHDL probably represents an intermediate in the formation of a mature spherical rHDL. The synthesis of the 93-Å rHDL involves adding exogenous heat-inactivated LDL as a source of free cholesterol, LCAT, and BSA to an incubation mixture at 37 °C. It is possible that the absence of lipid transfer proteins and other sources of cholesterol prevent the full maturation of the rHDL.
The fluorescence spectroscopy results indicate no significant differences in the environment and the mobility of the tryptophan residues of the rHDL and native HDL subfractions, including the 96-Å LpAI/AII particle. Using Trp fluorescence quenching experiments with iodine, Talussot and Ponsin (42) also did not observe any differences between rHDL containing apoA-I and rHDL containing apoA-I and apoA-II. The circular dichroism spectra in the far-ultraviolet region showed ellipticity minima at 222 and 208 nm and maxima in the 190-195-nm region, indicating that α-helix is the major secondary structure of the 93-Å rHDL and the native HDL subfractions. The content of α-helical secondary structure did not differ markedly (61-65%) for the HDL particles, except for the 96-Å LpAI/AII particle which had an α-helix content of 52%. The lower percentage of α-helix in the 96-Å LpAI/AII particle is the result of the equimolar amounts of apoA-II and apoA-I in the particle. As the α-helicity of apoA-II complexed with phospholipid has been established at about 40% (38), we can estimate from the total α-helicity of the 96-Å LpAI/AII that the α-helix content of apoA-I is about 60%. Thus, the presence of apoA-II seems not to alter the secondary structure of apoA-I in the native 96-Å LpAI/AII.
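The back-calculation of apoA-I helicity from the particle-average value can be sketched as a mass-weighted average. The molecular weights used here (~28.1 kDa for apoA-I, ~17.4 kDa for the apoA-II dimer) are assumptions of this sketch, not values given in the text:

```python
# Solve the particle-average helicity for the unknown apoA-I contribution,
# assuming helicity averages in proportion to protein mass.
mw_a1, mw_a2 = 28100.0, 17400.0   # assumed Da per protein unit (apoA-I, apoA-II dimer)
h_total, h_a2 = 0.52, 0.40        # 52% particle helicity; 40% for lipid-bound apoA-II
# h_total = (mw_a1*h_a1 + mw_a2*h_a2) / (mw_a1 + mw_a2)  =>  solve for h_a1
h_a1 = (h_total * (mw_a1 + mw_a2) - mw_a2 * h_a2) / mw_a1   # ~0.59, i.e. about 60%
```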
In order to examine the structural stability of the apolipoproteins in the native HDL subfractions, the denaturation by GdnHCl was assessed by the change in the wavelength of maximum fluorescence and the ellipticity at 222 nm. The wavelength of maximum fluorescence monitors the NH2-terminal region of the apoA-I (up to residue 108) which contains the 4 Trp residues. Since apoA-II does not contain any Trp residues, only the structural changes of the apoA-I molecules are monitored in the 96-Å LpAI/AII particle in the fluorescence experiment. The ellipticity change at 222 nm follows the loss in secondary structure upon denaturation. The secondary structure, represented mainly by α-helical segments, is thought to start at residue 44 and to extend to the carboxyl terminus of the apoA-I. The apoA-I structure as previously described (12-14) and as shown in Figs. 3 and 4 is clearly stabilized by lipids, while free apoA-I completely denatures by 2 M GdnHCl. Both Figs. 3 and 4 show a general pattern in the stability of the 93-Å rHDL and the native HDL subclasses; the 88-Å LpAI particles are the least stable, followed by the 106-Å LpAI particles. The 96-Å LpAI/AII particles are more resistant to GdnHCl than the LpAI subfractions but less so than the 93-Å rHDL. Possibly the 93-Å rHDL are the most stable particles because of their higher content of surface lipids, and the absence of polyunsaturated lipids which may decrease the hydrophobic interactions within the particles. The apoA-II, even though it does not seem to alter the conformation of the apoA-I on the 96-Å LpAI/AII particle, may stabilize the apoA-I conformation and the particles in general. Nichols et al. (43) have reported that ultracentrifugation of HDL in GdnHCl causes the dissociation of apoA-I alone between 2 and 3 M GdnHCl and the dissociation of apoA-I together with apoA-II between 5 and 6 M GdnHCl.
Perhaps these separate pools of apoA-I represented LpAI and LpAI/AII and their relative stability due to apoA-II. In fact, Cheung and Wolf (32), investigating the stability of LpAI and LpAI/AII to ultracentrifugation, found that the LpAI fraction lost more protein than the LpAI/AII fraction and changed the relative proportions of the different size subclasses. The LpAI/AII remained essentially unchanged. These reports and our results with the 96-Å LpAI/AII particles support the hypothesis that the LpAI/AII particles are more stable than the LpAI subclasses.
The curves in Figs. 3 and 4 suggest a multiphasic denaturation. Clearly, the secondary structure is lost before the apoA-I is completely denatured according to the fluorescence wavelength experiments. The results show a similar pattern of denaturation for apoA-I in all of the particles suggesting once again a similar protein structure. The first step in the denaturation curves shown in Fig. 3 may represent the exposure of 1 Trp residue in a rather unstable region of the protein.
We propose that this region involves Trp-108, which is located in one of the α-helical segments of apoA-I. Recent work in our laboratory on the denaturation behavior of the Lys-107 deletion mutant of apoA-I in discoidal rHDL complexes indicated that this section of apoA-I is structurally flexible and is readily denatured (44, 45). The next denaturation step in Fig. 3, from about 2.5 to 4.5 M GdnHCl, is still accompanied by changes in secondary structure and represents a wavelength change that could account for the exposure of 2 Trp residues to solvent; therefore, we speculate that the Trp-50 and Trp-72 residues may be involved, since they are located in putative α-helical segments. Finally, we propose that the Trp-8 residue is exposed only at GdnHCl concentrations above 5 M. At this denaturant concentration all secondary structure is lost, yet one of the Trp residues is still protected, suggesting that the possible candidate is Trp-8 in a region of apoA-I which has no predicted α-helical structure. Furthermore, human apoA-IV, which has a linear sequence very homologous to apoA-I (46) and forms comparable rHDL particles to apoA-I (47), has a single Trp at position 12 which behaves in the presence of GdnHCl very much like the most stable Trp in apoA-I. While the secondary structure of apoA-IV in rHDL complexes is lost with 5.5 M GdnHCl, the single Trp residue in the NH2 terminus of the molecule is not yet exposed to solvent (47). These results strongly suggest a very stable NH2-terminal domain in apoA-IV rHDL complexes. Similarly, it is likely that the 93-Å rHDL and the native HDL subclasses contain apoA-I with a stable NH2-terminal domain.
The near-ultraviolet circular dichroism spectra of the HDL particles and free apoA-I are very similar to the spectra measured by Lux et al. (37). The spectra of the different HDL particles display maxima at 284 and 291 nm and a minimum at 296 nm (see Fig. 5). On the basis of their location and characteristic spacing, these maxima have been assigned to 1 or more of the Trp residues of apoA-I and correspond to two different vibrational states of the ¹Lb electronic transition of Trp (37, 48-50). From studies with Trp model compounds, the minimum at 296 nm in HDL has been attributed to the ¹La transition of Trp (50). The absence of 284- and 291-nm maxima (37, 51) in the free and in the lipid-bound apoA-II, which contains no Trp residues, confirms their assignment to the Trp residues. In free apoA-I, these two peaks are reversed in sign and red-shifted to 287 and 293 nm, showing that the Trp residues responsible for the spectra have a different average conformation and are in a more polar environment.
Tryptophans, tyrosines (37, 48-50), and disulfides (52) may all contribute to the circular dichroism signal in the 260-280-nm region. However, since Lux et al. (37) have shown that an increase in pH from 9.5 to 12.6 is accompanied by major changes of the ellipticity between 260 and 280 nm, we suggest that there is an important contribution of the tyrosine residues to the ellipticity in this region. Lux et al. (37) showed that the relipidation of HDL apolipoproteins with phosphatidylcholine alone was able to restore the 284- and 291-nm maxima which were inverted in the free apolipoprotein. The addition of cholesteryl esters intensified these bands and was required to reproduce the broad negative band between 260 and 280 nm present in native HDL. It is of interest to note that the intensity of the maxima at 284 nm for the HDL particles (Fig. 5) seems to be proportional to the total phospholipid content (Table I). The presence, in the total HDL and in the 96-Å LpAI/AII, of apoA-II containing 4 tyrosines per monomer contributes to the strong negative ellipticity observed at 269 and 272 nm, respectively. The addition of one part of the 88-Å LpAI, one part of the 106-Å LpAI, and two parts of the 96-Å LpAI/AII spectra between 250 and 315 nm approximately simulates the HDL spectrum, including the strong minima from 262 to 272 nm. The near-ultraviolet circular dichroism spectra of the 88- and 106-Å LpAI subfractions are very similar and indicate that the tertiary structure of the apoA-I in both these subclasses of LpAI is essentially identical. The section of the spectrum in Fig. 5 from 280 to 320 nm, due to the electronic transitions of the apoA-I Trp residues, is also similar for the 96-Å LpAI/AII as well as the 93-Å rHDL.
The shape, sign, characteristic wavelengths, and spacing of the spectrum for the 93-Å rHDL particles is identical to the native LpAI spectra; only the intensity of the 284-nm band for the 93-Å rHDL is somewhat higher, probably because the rHDL contains double the amount of phospholipid per apoA-I compared to the 88-Å LpAI particle. Our results suggest very similar conformations of apoA-I in all of these native subfractions and the 93-Å rHDL, including the same number of α-helical segments.
Finally, the LCAT reactivity studies indicate that the 93-Å rHDL and the native HDL subfractions are all poor substrates for LCAT compared to the 96-Å rHDL discs, which are at least 30-fold more reactive (14). Overall, this low reactivity may be due to product inhibition by the cholesterol esters in these particles and/or to an unfavorable apoA-I structure for the activation of the LCAT reaction. The 106-Å LpAI is somewhat less reactive than the 88-Å LpAI particle, which agrees with, but does not explain, the differential reactivity of HDL2 versus HDL3 with LCAT (53). At this point we have no explanation for the marginally higher reactivity of the 96-Å LpAI/AII particles.
From the above results, the major difference between the 93-Å rHDL and each of the native HDL subclasses is their phospholipid composition. Nevertheless, the 93-Å rHDL may still be a good model for native HDL since the apoA-I structure appears to be quite similar for all of the particles studied. The major difference found is the increased stability of the 96-Å LpAI/AII subfractions as compared to both LpAI subfractions. This property may underlie different functions of LpAI and LpAI/AII in the circulation and may be responsible for their different catabolism.
Biosynthesis of angelyl-CoA in Saccharomyces cerevisiae
Background The angelic acid moiety represents an essential modification in many biologically active products. These products are commonly known as angelates and several studies have demonstrated their therapeutic benefits, including anti-inflammatory and anti-cancer effects. However, their availability for use in the development of therapeutics is limited due to poor extraction yields. Chemical synthesis has been achieved but its complexity prevents application, therefore microbial production may offer a promising alternative. Here, we engineered the budding yeast Saccharomyces cerevisiae to produce angelyl-CoA, the CoA-activated form of angelic acid. Results For yeast-based production of angelyl-CoA we first expressed genes recently identified in the biosynthetic cluster ssf of Streptomyces sp. SF2575 in S. cerevisiae. Exogenous feeding of propionate and heterologous expression of a propionyl-CoA synthase from Streptomyces sp. were initially employed to increase the intracellular propionyl-CoA level, resulting in production of angelyl-CoA in the order of 5 mg/L. Substituting the Streptomyces sp. propionyl-CoA carboxylase with a carboxylase derived from Streptomyces coelicolor resulted in angelyl-CoA levels up to 6.4 mg/L. In vivo analysis allowed identification of important intermediates in the pathway, including methyl-malonyl-CoA and 3-hydroxyl-2-methyl-butyryl-CoA. Furthermore, methyl-malonate supplementation and expression of matB CoA ligase from S. coelicolor allowed for methyl-malonyl-CoA synthesis and supported, together with parts of the ssf pathway, angelyl-CoA titres of approximately 1.5 mg/L. Finally, feeding of angelic acid to yeasts expressing acyl-CoA ligases from plant species led to angelyl-CoA production rates of approximately 40 mg/L. Conclusions Our results demonstrate the biosynthesis of angelyl-CoA in yeast from exogenously supplied carboxylic acid precursors. This is the first report on the activity of the ssf genes. 
We envision that our approach will provide a platform for a more sustainable production of the pharmaceutically important compound class of angelates. Electronic supplementary material The online version of this article (10.1186/s12934-018-0925-8) contains supplementary material, which is available to authorized users.
Background
Esters of angelic acid ((Z)-2-methyl-2-butenoic acid), also known as angelates, are pharmacologically active natural products widely distributed in plants (Additional file 1: Figure S1). The best-known and most studied example is certainly represented by ingenol-3-angelate (also known as ingenol-mebutate), a topical chemotherapeutic recently approved by the FDA for the treatment of actinic keratosis, a pre-cancerous skin condition. This ester of the diterpenoid ingenol and angelic acid is derived from Euphorbia peplus. Another prominent angelate, thapsigargin from Thapsia garganica, is an inhibitor of the sarco-endoplasmic reticulum Ca2+-ATPase used in the treatment of solid tumors [5].
Recently, bacterial angelates have also been reported: SF2575 is a tetracycline polyketide-angelic acid ester produced by Streptomyces sp. SF2575 [6]. The compound exhibits not only weak antibiotic activity but also potent anti-cancer activity towards a broad range of cancer cell lines [7]. Trehangelins are trehalose angelates produced by the endophytic actinomycete Polymorphospora rubra K07-0510, displaying potent inhibitory activity against hemolysis of red blood cells [8].
Supply of angelates is currently based on extraction of the pure compounds from the species of origin. In some cases chemical synthesis has also been achieved [9][10][11][12]. However, both approaches are low yielding and have high environmental impact. In the case of ingenol-mebutate, direct isolation from the aerial tissue of E. peplus only yields 1.1 mg/kg of tissue [13]. Semi-synthesis, starting from the more abundant cognate compound ingenol, obtained an overall yield of around 31%, but relied on expensive catalysts [10]. Thapsigargin is isolated from wild plants of T. garganica, where it is present in minute amounts (1.2-1.5% of the dry weight, depending on the selected tissue) [5]. Its total synthesis in 42 steps from (S)-carvone had an overall yield of only 0.6% [9].
To make these compounds more accessible, microbial production certainly represents an interesting alternative route.
Unfortunately, biosynthesis of any of the plant angelates has not yet been elucidated and even the enzymes involved in the biosynthesis of the angelic acid moiety are unknown. In contrast, the gene clusters responsible for the bacterial synthesis of SF2575 ("ssf") and trehangelin A ("thg") have been identified and characterized [6,14]. This has led to the elucidation of the metabolic pathways responsible for the biosynthesis of these compounds and the identification of enzymes needed for assembly of their core structures and also for tailoring reactions, including angelyl-CoA (AN-CoA) formation and esterification. The latter is synthesized in both Streptomyces sp. SF2575 and P. rubra by enzymes resembling those found in fatty acid biosynthesis. In joint action, the identified beta-ketoacyl-(acyl-carrier-protein) synthase III (KAS III), the 3-ketoacyl-(acyl-carrier-protein) reductase and the enoyl-CoA hydratase may lead to AN-CoA biosynthesis starting from acetyl-CoA (Ac-CoA) and methyl-malonyl-CoA (MM-CoA) via the intermediates 2-methyl-acetoacetyl-CoA (MAA-CoA) and 3-hydroxyl-2-methyl-butyryl-CoA (HMB-CoA) (Fig. 1). The enzymes from P. rubra were characterized in vitro [14], whereas the enzymes from S. sp. SF2575 have been suggested to be involved in AN-CoA formation based on homology to functionally similar enzymes from other species [6]. Pickens and co-workers [6] proposed that in S. sp. SF2575 condensation of MM-CoA and Ac-CoA to MAA-CoA is catalyzed by SsfN, a KAS III homolog. SsfK, homologous to 3-oxoacyl-ACP reductases, enables keto-reduction of MAA-CoA yielding HMB-CoA. In the last step, stereospecific dehydration of HMB-CoA by SsfJ (a member of the enoyl-CoA hydratase/isomerase family) results in AN-CoA. Moreover, SsfE, homologous to biotin-dependent methyl-malonyl-CoA decarboxylases and propionyl-CoA carboxylases, was suggested to be involved in MM-CoA formation from propionyl-CoA (Pr-CoA).
In this study we describe production of angelyl-CoA in the yeast Saccharomyces cerevisiae. This production host has well-established advantages compared to other microorganisms, such as robustness and resistance under harsh industrial conditions, resistance to phages, and the ability to ferment sugars under acidic conditions. AN-CoA biosynthesis was achieved through expression of a heterologous pathway derived from the bacterial ssf cluster, allowing AN-CoA synthesis starting from propionyl-CoA. Upon engineering of the propionyl-CoA metabolism we reached maximum titres of approximately 6.4 mg/L AN-CoA. Moreover, AN-CoA represents an important building block for the synthesis of many plant secondary metabolites with biological activity. AN-CoA is the substrate used by acyl-CoA transferases to catalyse esterification reactions, adding the angelate moiety onto diverse acceptor molecules. Its biosynthesis in yeast will enable angelyl-acylation of a broad range of compounds. Although further optimization is needed, we anticipate that the strains reported here will pave the way for the bio-based production of esters of angelic acid.
Angelyl-CoA production in yeast starting from propionate
Based on the hypotheses of Pickens et al. [6] we assembled the pathway to AN-CoA biosynthesis in baker's yeast. First, we expressed the bacterial genes ssfE, ssfN, ssfK, and ssfJ as yeast codon-optimized versions for the conversion of Pr-CoA into AN-CoA (Fig. 2a). Pr-CoA is an intermediate metabolite produced in yeast through a variety of pathways including thio-esterification of propionate and catabolism of odd chain fatty acids and selected amino acids [15]. A plasmid, co-expressing the four heterologous genes under control of strong constitutive promoters was constructed together with a control plasmid (no ORFs downstream of the promoters). Transformation into yeast generated strains ANG1 (control) and ANG2 (ssfE/ssfN/ssfK/ssfJ). These strains were tested for production of AN-CoA. In strain ANG2 around 0.37 mg/L of AN-CoA accumulated, whereas no AN-CoA could be detected in strain ANG1 (control strain, data not shown).
To test whether an improved precursor supply may increase AN-CoA synthesis, we cultured the strains in propionic acid-supplemented medium. Propionic acid is metabolized by the yeast via activation to Pr-CoA, which is catalyzed by ACS1, an endogenous isoenzyme of acetyl-CoA synthase [16]. ACS1 can accept propionic acid as substrate, albeit at lower rates than acetic acid [17]. To further improve propionate thio-esterification, we then expressed an acyl-CoA synthase specific for this anion. We chose to express the propionyl-CoA synthase prpE from Salmonella enterica serovar Typhimurium, encoding an enzyme required for the catabolism of propionate in this bacterium [18]. Two strains were generated, one expressing the ssf genes and an empty plasmid (ANG3), the other one expressing the ssf genes together with prpE under control of the PGK1 promoter (ANG4). Feeding with propionic acid led to 5.5-fold elevated intracellular accumulation of Pr-CoA in strain ANG3, compared to the non-fed strain (Fig. 2b). Upon propionate supplementation, prpE expression in strain ANG4 boosted Pr-CoA accumulation to 20-fold higher values than those seen with strain ANG3 not expressing prpE (Fig. 2b). This is in accordance with an earlier report on prpE expression in the presence of exogenous propionate, leading to substantial accumulation of Pr-CoA in yeast [19]. The elevated concentration of Pr-CoA inside the cells resulted in a massive increase in AN-CoA production (from 0.37 mg/L to almost 5 mg/L at 12 h of growth, see Fig. 3a), but it also affected cell growth. The increased energy requirements needed to maintain cytosolic pH homeostasis in the presence of propionic acid in the medium may, in addition, contribute to growth retardation [20]. We cultured the cells in medium buffered to pH 4.5, as this pH value is a compromise between supporting decent yeast growth and remaining below the pKa of propionic acid (pKa = 4.88).
This pH supports passive diffusion of propionic acid into the cells [21]. Cells cultured at pH 4.5 showed better growth and a slightly elevated production of AN-CoA (Fig. 2c).
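The choice of pH 4.5 relative to the pKa can be made quantitative with the Henderson-Hasselbalch relation; the near-neutral comparison value below is a hypothetical illustration, not a condition from the study:

```python
# Fraction of propionic acid in the protonated, membrane-permeant form at a
# given pH, using pKa = 4.88 as stated in the text.
def fraction_protonated(pH, pKa=4.88):
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

f_45 = fraction_protonated(4.5)   # ~0.71 at the buffered culture pH
f_65 = fraction_protonated(6.5)   # ~0.02 at a hypothetical near-neutral pH
```

At pH 4.5 roughly 70% of the acid is uncharged and can diffuse passively, whereas near neutrality almost none is, which is consistent with the buffering rationale given above.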
We then performed time course experiments with strain ANG4 (prpE + ssfE/ssfN/ssfK/ssfJ; Fig. 3a) and a newly constructed strain, ANG5 (prpE + pccB/accA1/ssfN/ssfK/ssfJ; Fig. 3b). The latter strain expresses the propionyl-CoA carboxylase complex from Streptomyces coelicolor [22] instead of ssfE. The complex consists of the transcarboxylase subunit PccB and the biotin carrier protein/biotin carboxylase subunit AccA1 (PccB/AccA1 complex). Both strains were grown in buffered, propionate-supplemented medium. AN-CoA formation peaked at 12 h, but could barely be detected after 48 h of culture. AN-CoA accumulation reached maximally 4.9 ± 0.5 mg/L in strain ANG4 (Fig. 3a), whereas it climbed to 6.4 ± 0.2 mg/L in strain ANG5 after 12 h of growth (Fig. 3b). Similar dynamics were detected for Ac-CoA, a critical metabolite for growth and proliferation. Ac-CoA accumulation also peaked at 12 h and went down to almost zero by 96 h of shake flask culture. In order to correlate AN-CoA production with Pr-CoA production, we also analyzed the relative amount of Pr-CoA accumulation in strains ANG4 and ANG5. Both strains accumulated similar levels of Pr-CoA within the first 12 h. Thereafter, in strains expressing pccB/accA1, Pr-CoA remained stable up to at least 36 h of growth. Even after 96 h a significantly higher level of Pr-CoA could be detected in ANG5 expressing pccB/accA1 than in ANG4 expressing ssfE (Fig. 3c).
Further analyses of the angelyl-CoA pathway in yeast
In order to investigate the individual functions of the ssf genes involved in AN-CoA biosynthesis in vivo, three truncated pathways were assembled on plasmids and transformed into yeast (strains ANG6-ANG8). The three strains expressed prpE together with one (ssfE, ANG6), two (ssfE/ssfN, ANG7), or three (ssfE/ssfN/ssfK, ANG8) genes from the ssf pathway. These strains were analysed together with the control strain expressing solely prpE (ANG9), and the strain expressing the entire pathway (prpE + ssfE/ssfN/ssfK/ssfJ, ANG4). Strains were analysed for production of Pr-CoA, AN-CoA and the putative intermediates MM-CoA, MAA-CoA and HMB-CoA (see Additional file 1: Figure S2 for extracted ion chromatograms of all strains).
The strain expressing solely prpE (ANG9) accumulated only Pr-CoA (Fig. 4). The same was true for ANG6 (expressing prpE and ssfE). Neither MM-CoA nor MAA-CoA could be detected in strains expressing prpE/ssfE/ssfN (ANG7). However, accumulation of Pr-CoA in those strains was much lower compared to control strain ANG9, suggesting that Pr-CoA might have been utilized for further reactions inside the cells. When the pathway was extended to include ssfK (ANG8), the yeasts produced HMB-CoA. In addition, a substantial accumulation of MM-CoA (1.7 mg/L) was detected in this strain, together with higher levels of Pr-CoA compared to strains ANG6 or ANG7 (nearly seven- and ninefold more, respectively). Finally, strain ANG4, containing the entire pathway, accumulated AN-CoA together with lower amounts of HMB-CoA (nearly twofold less than ANG8) and MM-CoA (7.5-fold less than ANG8). Pr-CoA levels were comparable to those found in the control strain expressing only prpE (Fig. 4). Over time, accumulation of HMB-CoA in strains expressing prpE/ssfE/ssfN/ssfK and of AN-CoA in strains expressing prpE/ssfE/ssfN/ssfK/ssfJ was accompanied by a decrease of the earlier intermediates (Additional file 1: Figure S3).

[Fig. 2 legend: b Pr-CoA accumulation in strains expressing ssfENKJ, and ssfENKJ together with prpE, shown as fold change compared to the respective control strains. Engineered strains were incubated in selective SC medium either non-supplemented ("non fed") or supplemented with 0.5 g/L propionic acid ("+ propionate"). c Intracellular accumulation of AN-CoA in strain ANG3 (ssfENKJ) and strain ANG4 (prpE + ssfENKJ) grown for 72 h in SC medium supplemented with 0.5 g/L propionic acid, buffered to pH 4.5 (yellow bars) or unbuffered (green bars). Circles indicate OD600 at 72 h of growth. Averages and standard deviations of three independent cultures.]
Angelyl-CoA production in yeast starting from methyl-malonate
As an alternative route to AN-CoA production, we evaluated the malonyl/methylmalonyl-CoA ligase operating naturally in Streptomyces coelicolor. The biosynthetic route should allow for MM-CoA synthesis in yeast upon methyl-malonate supplementation and heterologous expression of malonyl-CoA synthase matB (Fig. 5a). MatB is an enzyme exhibiting a certain promiscuity, accepting both malonate and methyl-malonate as substrates [23]. Building of this pathway may avoid accumulation of potentially toxic amounts of Pr-CoA as it starts with a different substrate for production of methyl-malonyl-CoA.
Three plasmids were constructed and transformed into yeast, thus generating strain ANG10 (expressing matB), strain ANG11 (expressing ssfN/ssfK/ssfJ) and strain ANG12 (expressing the entire pathway, matB + ssfN/ssfK/ssfJ). Without feeding of methyl-malonic acid, MM-CoA could not be detected in those strains (data not shown). Upon methyl-malonate feeding, expression of matB in strain ANG10 led to around 2.7 mg/L MM-CoA accumulation, representing the majority of the acyl-CoA pool (Fig. 5b). As expected, none of the compounds of interest could be detected in strain ANG11 expressing exclusively the ssfN/ssfK/ssfJ genes. Strain ANG12, expressing the entire pathway (matB + ssfN/ssfK/ssfJ), was able to produce not only MM-CoA but also AN-CoA in titres in the range of 1.3-1.9 mg/L (Fig. 5b, c). Interestingly, Pr-CoA also substantially accumulated in strain ANG12 (matB + ssfN/ssfK/ssfJ) (Fig. 5b). This was not observed in strain ANG10 or ANG11, suggesting that the accumulation was induced only when the full pathway was expressed. As shown with propionic acid feeding of strains ANG4 and ANG5 (Fig. 3a, b), AN-CoA accumulation peaked at 12 h of growth and declined thereafter to levels close to the limit of detection (Fig. 5c). MM-CoA accumulated in this strain to maximal titres of 6.1 ± 0.1 mg/L at 12 h.

[Fig. 3 legend: a Ac-CoA and AN-CoA produced by the strain expressing prpE + ssfENKJ (ANG4). b Ac-CoA and AN-CoA produced by the strain expressing prpE + pccB/accA1 + ssfNKJ (ANG5). c Relative amounts of Pr-CoA in strains expressing prpE + ssfENKJ (ANG4) and prpE + pccB/accA1 + ssfNKJ (ANG5). Strains were grown in SC medium supplemented with 0.5 g/L propionic acid and buffered to pH 4.5. Filled circles in a, b indicate OD600. Averages and standard deviations of three independent cultures.]
Angelic acid feeding and angelyl-CoA production
In addition to setting up the entire pathway for angelyl-CoA production in yeast, we attempted to produce AN-CoA directly from angelic acid. We grew yeast strains expressing heterologous acyl-CoA ligases from plant, bacterial and fungal origin in angelic acid-supplemented medium (schematically shown in Fig. 6a). Acyl-CoA ligases can catalyze acyl-CoA thioester formation through adenylation of the carboxylic acid substrate. Many studies have explored the substrate specificity of these enzymes, revealing in several cases remarkable substrate promiscuities beyond their canonical substrate pools [24][25][26].
In preliminary experiments we found that two of the CoA-ligases tested showed the ability to accept angelic acid as a substrate for thio-esterification: carboxyl CoA ligase 4 from Humulus lupulus (HlCCL4) and predicted acyl-activating enzyme 6 from Solanum tuberosum (StCCL). The sequences of those two CoA ligases were used to search for CoA ligases in the available transcriptome of Euphorbia peplus, the plant producing ingenol-3-angelate. We identified several potential candidates. Three of them, arbitrarily named EpCCL1, EpCCL2 and EpCCL3, were prioritized based on their high sequence identity to HlCCL4 and StCCL (Additional file 1: Figure S4).
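The candidate-prioritization step described above (ranking transcriptome ORFs by sequence identity to HlCCL4 and StCCL) would in practice use BLAST or a proper pairwise aligner; a minimal sketch of the underlying idea, using a simple ungapped identity measure and entirely hypothetical sequences and names:

```python
# Rank candidate ORFs by percent identity to a query CoA ligase fragment.
# All sequences and candidate names below are made up for illustration.

def percent_identity(a: str, b: str) -> float:
    """Ungapped identity between two sequences over their shared length (0-100)."""
    n = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / n

query = "MKSLVAGETT"                     # stand-in for an HlCCL4 fragment
candidates = {
    "EpCCL_candidate_1": "MKSLVAGQTT",   # 9/10 residues identical
    "EpCCL_candidate_2": "MASLVAGQST",   # 7/10 residues identical
    "unrelated_orf":     "GGGGGGGGGG",
}

ranked = sorted(candidates.items(),
                key=lambda kv: percent_identity(query, kv[1]),
                reverse=True)
best_name, best_seq = ranked[0]
```

Real screens would additionally handle gaps and use substitution matrices, but the ranking principle is the same.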
Three individual strains were constructed expressing the codon-optimized variants of EpCCL1 (ANG16), EpCCL2 (ANG17) or EpCCL3 (ANG18). Strains ANG16, ANG17, and ANG18, in parallel with strains expressing HlCCL4 (ANG14) and StCCL (ANG15), were assayed for AN-CoA production using medium supplemented with 0.1 g/L angelic acid. Figure 6b shows the relative activity of the CCL enzymes in yeast, expressed as percentage of AN-CoA accumulating in those cells. All of the putative CoA-ligases showed activity against angelic acid. Yeasts expressing the CoA ligase from S. tuberosum (ANG15) accumulated the highest amount of AN-CoA after 24 h. The amount of AN-CoA accumulating in those cells was set to 100%. ANG14, the strain expressing HlCCL4, accumulated 75% of this amount of AN-CoA, whereas yeasts expressing EpCCL1 (ANG16) or EpCCL2 (ANG17) accumulated 56 or 62% of this amount of AN-CoA, respectively. The strain expressing EpCCL3 accumulated marginal amounts of AN-CoA (7% of the amount of ANG15). Expression of the heterologous enzymes had negligible effect on strain growth (data not shown).
Time course analysis of the StCCL expressing strain revealed a peak in AN-CoA production after 12 h, followed by a net decrease in titre (Fig. 6c). These kinetics were similar to the ones described before for AN-CoA production in yeast expressing the bacterial ssf pathway. The highest production level of AN-CoA at 12 h of batch culture was 39.0 ± 9.9 mg/L.
Discussion
In this study we demonstrated the potential of yeast to produce AN-CoA by (A) heterologous expression of genes from a recently discovered gene cluster derived from actinomycetes, and (B) heterologous expression of plant acyl-CoA synthases able to accept angelic acid as substrate.
Expression of the ssf pathway in S. cerevisiae
First, we showed production of AN-CoA in yeast through expression of part of the ssf pathway adopted from Streptomyces sp. SF2575. Pr-CoA is converted by SsfE to MM-CoA, which serves as substrate for the subsequent condensation reaction with Ac-CoA. Pr-CoA and Ac-CoA are both present in yeast, albeit the former only in small amounts. Upon propionate supplementation, ten times more AN-CoA (almost 5 mg/L) was produced in buffered medium when expressing an additional heterologous CoA-ligase, specific for Pr-CoA (propionyl-CoA synthase from Salmonella, PrpE).
Interestingly, the intracellular Pr-CoA concentration reached high levels, even when a second copy of the ssfE gene was expressed (Additional file 1: Figure S5). This doubling of the gene dosage should optimize coupling of Pr-CoA synthesis and the further downstream steps, pulling most of the Pr-CoA to AN-CoA. We propose that Pot1, the only 3-ketoacyl-CoA thiolase present in S. cerevisiae, may be responsible for Pr-CoA accumulation through the breakdown of the AN-CoA pathway intermediate MAA-CoA. Pot1 is normally involved in β-oxidation and it catalyses the conversion of 3-ketoacyl-CoA into an acyl-CoA shortened by two carbon atoms [27]. Shortening of MAA-CoA by two carbon atoms may inevitably lead to generation of Pr-CoA and Ac-CoA, thus explaining Pr-CoA build-up. The accumulation of Pr-CoA, besides being responsible for loss of carbon flux towards AN-CoA, may also be responsible for the growth inhibition suffered by those strains. Strains expressing ssfE reached OD 600 values of ~ 7 after 96 h of growth, whereas strains expressing the pccB/accA1 complex, instead of ssfE, underwent a more severe growth inhibition (OD 600 = 3 after 96 h), parallel to the elevated accumulation of Pr-CoA (Fig. 3a, b). It has been described for both the filamentous fungus Aspergillus nidulans and the bacterium Rhodobacter sphaeroides that Pr-CoA inhibits enzymes involved in glucose metabolism, in particular CoA-dependent enzymes such as pyruvate dehydrogenase and succinyl-CoA synthase, leading to a significant growth retardation [28,29]. In Escherichia coli, Pr-CoA was found to be a competitive inhibitor of citrate synthase [30]. Similar mechanisms could explain the severe growth retardation observed in yeast upon accumulation of Pr-CoA.
We attempted to overcome Pr-CoA toxicity by employing an alternative biosynthetic route that directly produced MM-CoA upon expression of methyl-malonyl-CoA synthase matB from Streptomyces coelicolor. This enzyme has recently been shown to generate methylmalonyl-CoA in yeast upon methyl-malonate feeding [19]. Combining matB with the ssf pathway genes indeed induced generation of AN-CoA without severely hampering cell growth. Cells grown in those conditions could reach an OD 600 of 10.7 after 96 h of growth, compared to OD 600 of 3 and 7 when supplementing propionic acid (see Figs. 3,5). Nevertheless, the levels of AN-CoA (1.5 mg/L) did not reach the values seen in yeast operating with the Pr-CoA pathway. The fact that expression of this alternative pathway also induces accumulation of Pr-CoA is consistent with the previously formulated hypothesis that Pr-CoA accumulation is induced via Pot1 activity on MAA-CoA.
The individual steps of the pathway were further characterized in vivo by sequential expression of the corresponding pathway genes. Contrary to expectations, neither MM-CoA nor MAA-CoA could be detected in strains expressing either only ssfE, or ssfE together with ssfN. In cells expressing the entire pathway, accumulation of MM-CoA was observed, probably as a result of a "pulling effect" which led to substrate saturation of SsfN. MAA-CoA was never detected in any of the strains, not even in early phases of yeast growth, as confirmed by acyl-CoA analysis at 4 and 8 h (Additional file 1: Figure S3). MAA-CoA was also not detected in yeasts expressing genes from the thg cluster of P. rubra (Additional file 1: Figure S6), albeit the yeasts were able to produce AN-CoA. The activities of the enzymes ThgI, ThgK and ThgH, homologues of SsfN, SsfK and SsfJ, were characterized in vitro by Inahashi et al. [14], confirming that the AN-CoA pathway starts via MM-CoA and Ac-CoA condensation and runs through the intermediates MAA-CoA and HMB-CoA. Rapid conversion of MAA-CoA to either HMB-CoA or Pr-CoA and Ac-CoA (see above) might explain its analytical absence.
We have demonstrated production of AN-CoA up to 6.4 mg/L in yeast by introducing the bacterial ssf pathway. However, avenues remain to be explored for further optimization of S. cerevisiae-based production of AN-CoA. Titres may be improved, e.g., by insertion of higher copy numbers of rate-limiting pathway enzymes, by promoter-based optimization of expression levels of individual enzymes, or by avoiding accumulation of pathway intermediates. Deletion of the non-essential gene pot1 could also increase production levels. Compartmentalization of the heterologous pathway may also be a valuable approach for efficient production, as it may increase the spatial proximity of the enzymes involved. To avoid feeding of propionic acid, insertion of a de novo production route to Pr-CoA is desirable. During submission of this manuscript, Krink-Koutsoubelis and colleagues reported a direct Pr-CoA production route from malonyl-CoA using parts of the 3-hydroxypropionate carbon assimilation cycle found in certain autotrophic archaea and bacteria [31]. Coupling of this Pr-CoA biosynthesis pathway to the ssf pathway could enable AN-CoA production without addition of media supplements.
Saccharomyces cerevisiae has already been used for the biosynthesis of important precursors of esters of angelic acid, such as precursors of the diterpenoid ingenol-3-angelate [32]. Together with the reported AN-CoA synthesis, S. cerevisiae may provide an important and economic route to total biosynthesis of ingenol-3-angelate and other valuable angelates.
Expression of acyl-CoA synthases from plant origin in S. cerevisiae
We also report the identification of plant acyl-CoA synthases able to utilize angelic acid in order to yield AN-CoA. In plants, AN-CoA production most probably follows a different pathway than the one found in actinomycetes. AN-CoA may be derived from degradation of l-isoleucine (via 3-methyl-2-oxopentanoate and 2-methylbutanoyl-CoA) or from tiglyl-CoA through a cis-trans isomerase system, similar to that responsible for crotonyl-CoA isomerization [33,34]. However, none of the enzymes responsible for its biosynthesis have been identified.
Here we attempted to enable ligation of angelic acid and CoA by employing heterologous enzymes known to have CoA ligation activity and substrate promiscuity. The enzyme HlCCL4 from H. lupulus, involved in the bitter acid biosynthesis pathway in hop, was shown to have substrate preference towards several short-chain fatty acids, including isobutyric acid and 2-methylbutyric acid [35]. The ability of HlCCL4 to utilize angelic acid as substrate can presumably be attributed to the structural resemblance of angelic acid to the saturated acids that are the usual preferred substrates of the enzyme. The enzyme StCCL, showing 71% identity to HlCCL4 (Additional file 1: Figure S4), showed the highest activity and led to the highest titres of AN-CoA. Two of the enzymes identified in the transcriptome of Euphorbia peplus (EpCCL1 and EpCCL2) proved to be quite efficient in AN-CoA production, but not as efficient as StCCL. The Euphorbia CoA ligases are probably tightly linked to ingenol biosynthesis in the plant. Therefore, the activity of those enzymes is synchronized with ingenol-3-angelate biosynthesis rather than being optimized for AN-CoA production. It may also be possible that AN-CoA synthesis does not proceed via the intermediate angelic acid in Euphorbia peplus.
Although angelic acid feeding might not be a relevant strategy for biotechnological production of AN-CoA, we envision that the strains expressing the CoA ligases identified in this work can provide a system for screening and functional characterization of acyl-transferases able to use angelyl-CoA as donor acyl-CoA. Such enzymes are needed for the transfer of the angelate moiety onto diverse acceptor molecules. More work will be necessary to identify these enzymes. We are currently screening the transcriptome of Euphorbia peplus for identification of possible candidates involved in ingenol-3-angelate biosynthesis. Candidates for this reaction are enzymes of the BAHD family of acyltransferases, as it has been recently shown for the esterification of hydroxycinnamoyl-and benzoyl-CoA [36]. Rapid acylation by BAHDs would probably also prevent the observed disappearance of angelyl-CoA, which may be due to intracellular hydrolysis of the activated compound, as previously observed for several CoA-activated molecules [37,38].
Conclusions
In this proof of concept study we have successfully achieved AN-CoA production in yeast by the expression of genes from the bacterial ssf cluster. This represents the first report on the activity of these enzymes in vivo. Moreover, we have identified acyl-CoA ligases from different plant species that use angelic acid as substrate and yield considerable titres of AN-CoA. Our results pave the way for future microbial production of different kinds of angelates.
Chemicals and media
All chemicals were bought from Sigma-Aldrich (St. Louis, Missouri, USA) unless stated otherwise. An authentic standard of angelyl-CoA was synthesized by Jubilant LifeSciences, India.
LB medium for growth of Escherichia coli was supplied by Carl Roth GmbH + Co. KG (Karlsruhe, Germany) and was supplemented with 100 μg/mL ampicillin for amplification of plasmids.
Yeast extract peptone dextrose (YPD) medium with 20 g/L glucose was used for growth of wild-type strains prior to transformation. For pre- and main cultures of transformed strains we used synthetic complete (SC) drop-out medium (Formedium Ltd, Hunstanton, England), supplemented with 6.7 g/L yeast nitrogen base, 20 g/L glucose and all amino acids necessary for the corresponding auxotrophy. For propionyl-CoA carboxylase-expressing strains, the medium was supplemented with additional biotin (20 μg/L). Basic amounts of biotin are routinely added to baker's yeast cultures as the co-factor is not produced by the laboratory strain S288C [39].
For preparation of pH 4.5-buffered medium, a 1.0 M stock solution of citrate buffer (sodium citrate and citric acid) was prepared. Buffer stocks were filter-sterilized and used at a final concentration of 100 mM.
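The buffer dilution above and the supplement concentrations quoted in the next paragraph (g/L alongside mM) can be cross-checked with simple arithmetic. A minimal sketch, assuming standard molecular weights (g/mol) that are not stated in the text:

```python
# Cross-check of medium arithmetic: stock dilution and g/L -> mM conversions.
# Molecular weights are standard literature values, not taken from the text.

MW = {
    "propionic acid": 74.08,
    "methylmalonic acid": 118.09,
    "angelic acid": 100.12,
}

def to_mM(grams_per_litre: float, mw: float) -> float:
    """Convert a mass concentration (g/L) to millimolar."""
    return 1000.0 * grams_per_litre / mw

propionate_mM = to_mM(0.5, MW["propionic acid"])         # ~6.7 mM
methylmalonate_mM = to_mM(0.5, MW["methylmalonic acid"]) # ~4.23 mM
angelate_mM = to_mM(0.1, MW["angelic acid"])             # ~1.0 mM

# Citrate buffer: 1.0 M stock used at 100 mM final, e.g. in a 25 mL culture
# (culture volume is an illustrative assumption): C1*V1 = C2*V2.
buffer_stock_mL = 0.100 / 1.0 * 25.0   # 2.5 mL stock per 25 mL medium
```

The computed molarities agree with the values quoted in the Methods.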
Organic acid-supplemented media were prepared as solutions containing 0.5 g/L (6.7 mM) propionic acid, 0.5 g/L (4.23 mM) methylmalonic acid, or 0.1 g/L (1.0 mM) angelic acid, respectively. Stock solutions (1.0 M) of methyl-malonic acid and angelic acid were prepared in deionized water and ethanol, respectively.

Table 1 lists all plasmids constructed in this work. All coding sequences were synthesized by GeneArt® (Thermo Fisher Scientific, Zug, Switzerland) as yeast codon-optimized versions. Standard cloning was done using the restriction enzymes HindIII-HF and SacII, and T4 DNA ligase from New England Biolabs (Ipswich, Massachusetts, USA) according to standard protocols [40]. E. coli XL10 Gold (Agilent, Santa Clara, California, USA) cells were used for subcloning of genes. Coding sequences were cloned into single expression vectors (ARS/CEN), or into entry vectors for assembly of multigene expression plasmids in vivo by homologous recombination (HRTs), as described by Eichenberger et al. [41]. Briefly, genes were cloned into entry vectors carrying different combinations of promoters and terminators ("expression cassettes"), flanked by 60 base pair homology sequences. Helper cassettes containing (a) the autonomously replicating sequence, (b) a centromere region, and (c) the auxotrophy marker are also flanked by 60 base pair homology sequences. Expression cassettes as well as helper cassettes were released by one-pot digestions using AscI (New England Biolabs). The digested mixtures were transformed into yeast. For negative expression controls, the corresponding empty entry vectors were added to the digestion mix.
Plasmids and strains
Saccharomyces cerevisiae strains generated throughout this study are listed in Table 2. All constructed strains were derived from strain NCYC 3608 (NCYC, Norwich, United Kingdom), a derivative of S288C modified in our labs to add auxotrophic markers (HIS, LEU, URA) and to repair the petite phenotype according to Dimitrov et al. [42]. All yeast strains were stored in 25% glycerol at −80 °C.
Yeast transformation and growth
Yeast transformation was performed using the lithium acetate method [43]. Transformants were grown on agar plates prepared with selective SC drop-out medium. Precultures were grown for 24 h at 30 °C on an orbital shaker (160 rpm). Optical density at 600 nm (OD 600 ) of a 1:40 dilution was measured in an Ultrospec 10 table top spectrophotometer (GE Healthcare, Little Chalfont, United Kingdom). Main cultures for production of angelyl-CoA were inoculated in 25 mL of medium at a starting OD 600 of 0.1, and grown at 30 °C (160 rpm) for 12-96 h. Media supplemented with organic acids were used exclusively for the growth of main cultures.
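The inoculation step above (starting a 25 mL main culture at OD600 = 0.1 from a preculture measured as a 1:40 dilution) reduces to C1·V1 = C2·V2. A minimal sketch, where the diluted spectrophotometer reading is a hypothetical example value:

```python
# Inoculation arithmetic: volume of preculture needed to start a main culture
# at a target OD600, given an OD measured on a 1:40 dilution (as in the text).

def inoculum_volume_mL(preculture_od: float, target_od: float,
                       culture_volume_mL: float) -> float:
    """C1*V1 = C2*V2 solved for the preculture volume V1."""
    return target_od * culture_volume_mL / preculture_od

measured_od_diluted = 0.2                  # hypothetical reading of 1:40 dilution
preculture_od = measured_od_diluted * 40   # undo the dilution -> OD600 = 8.0
vol_mL = inoculum_volume_mL(preculture_od, target_od=0.1,
                            culture_volume_mL=25.0)
```

For this example the required inoculum is about 0.31 mL of preculture.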
Sample preparation
We harvested 100 OD-units (~1 × 10⁹ cells) by centrifugation at 4000 rpm for 5 min. Cell pellets were re-suspended in 1 mL of water and re-pelleted in 2 mL screw-cap tubes. Extraction from pellets was performed as described previously [44]. Briefly, cell pellets were re-suspended in 500 μL of 75% ethanol and shaken (1500 rpm) for 3 min at 95 °C in a Thermo-Shaker TS-100 (Axonlab, Reichenbach an der Fils, Germany). Cell debris was removed by centrifugation (4000 rpm, 5 min) and the liquid phase was transferred to a 96-deepwell microplate. The ethanol extracts were evaporated for 5 h at 35 °C using a Genevac HT4 (SP Industries, Warminster, PA, USA). Dried pellets were re-solubilized in 100 μL of 50 mM ammonium acetate. Remaining debris was removed by centrifugation (5 min at 4000 × g), and the supernatants were used for analyses.
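The "100 OD-units" harvest can be translated into a culture volume: one OD-unit corresponds to 1 mL of culture at OD600 = 1, and the ~1 × 10⁹ cell figure follows from the common rule of thumb of ~1 × 10⁷ cells/mL per OD600 unit (an assumption, not stated in the text). A minimal sketch:

```python
# OD-unit bookkeeping for harvesting a fixed amount of biomass.

CELLS_PER_ML_PER_OD = 1e7   # rule-of-thumb conversion, not from the text

def harvest_volume_mL(od_units: float, culture_od: float) -> float:
    """Volume of culture to pellet to obtain the requested OD-units."""
    return od_units / culture_od

vol = harvest_volume_mL(100, culture_od=5.0)   # 20 mL of an OD600 = 5 culture
approx_cells = 100 * CELLS_PER_ML_PER_OD       # ~1e9 cells, as in the text
```

Denser cultures simply require proportionally less volume for the same 100 OD-units.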
Acyl-CoA analysis
Analytical LC-MS was carried out using a Waters Xevo G2 XS TOF mass detector (Milford, Massachusetts, USA). Separation of the compounds was achieved on a Waters Acquity UPLC® HSS T3 C18 column (1.7 μm, 2.1 mm × 50 mm) kept at 50 °C. Mobile phases were composed of (A) 1% acetonitrile, 99% water, 5 mM ammonium acetate, and (B) 10% acetonitrile, 90% isopropanol, 5 mM ammonium acetate. An elution gradient from 99% A to 0% A within 2 min at a flow rate of 0.5 mL/min was used. The mass analyzer was equipped with an electrospray source and operated in negative mode. Capillary voltage was 1.0 kV; the source was kept at 150 °C and the desolvation temperature was 500 °C. Desolvation and cone gas flows were 1000 and 150 L/h, respectively. For each compound of interest we calculated peak areas on the extracted ion chromatograms of the respective [M−H]⁻ ions, using a mass window of 0.02 Da. Angelyl-CoA, acetyl-CoA and methyl-malonyl-CoA were quantified using a linear calibration curve with authentic standards ranging from 0.03125 to 4 mg/L for all compounds. For compounds without standards, area-under-the-curve (AUC) values were calculated as a relative measure of compound quantity. Concentration and AUC values were normalized per 100 OD units.
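The quantification step can be sketched as a linear calibration fit followed by interpolation. The standard concentration range (0.03125-4 mg/L, a two-fold dilution series) is from the text; the detector response and the sample peak area below are made-up illustrative numbers:

```python
# Sketch of LC-MS quantification: fit a linear calibration curve from
# authentic-standard peak areas, then convert a sample's extracted-ion
# peak area into mg/L. Pure-stdlib least squares; no external libraries.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Calibration standards: 0.03125 to 4 mg/L (two-fold series, per the text).
concs = [0.03125 * 2 ** i for i in range(8)]   # 0.03125 ... 4.0 mg/L
areas = [c * 1500.0 for c in concs]            # hypothetical detector response

# Fit area -> concentration so a sample area maps directly to mg/L.
slope, intercept = fit_line(areas, concs)
sample_area = 6200.0                           # hypothetical sample peak area
conc_mg_per_L = slope * sample_area + intercept
# Values are then reported per 100 OD-units; since extracts come from exactly
# 100 OD-units of cells, no further scaling is needed for a standard harvest.
```

With a different harvest size, the final value would be scaled by (100 / harvested OD-units).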
Site-specific phosphorylation of the middle molecular weight human neurofilament protein in transfected non-neuronal cells
We expressed the human midsized neurofilament subunit (NF-M) using genomic DNA in mouse L cells and showed that it is transcribed and translated into a protein capable of assembly into the cytoskeleton and of forming a filamentous network that colocalizes with the endogenous vimentin filaments. Moreover, human NF-M expressed in L cells is phosphorylated at sites within the multiphosphorylation repeat (MPR), i.e., the major sites of phosphorylation of NF-M in vivo. We also expressed a genomic construct lacking the MPR domain in the native molecule and showed that this MPR(-) protein also was expressed and formed a filamentous network despite diminished incorporation of radiolabeled phosphate. Two major conclusions emerged from the work described in this paper: human NF-M is translated, assembled, and phosphorylated at physiological sites without the need of any other specific neuronal proteins; phosphorylation sites other than the MPR are present within NF-M which may play a role in synthesis, assembly, and degradation of NF protein in humans.
Axons are a unique structural feature of neurons; they are cytoplasmic extensions whose volume and surface area dwarf the dimensions of the cell soma proper. As such, the axon contains many structural features unique to neurons, of which perhaps the most striking is the axonal cytoskeleton (for a recent review, see Hollenbeck, 1989). The most abundant proteins within the axons of mammalian neurons are the neurofilament (NF) triplet proteins, the primary intermediate filament (IF) system in neurons. NFs are a macromolecular complex composed of 3 polypeptides designated NF-L, NF-M, and NF-H (low, middle, and high Mr, respectively). The number of NFs and the amount of the triplet proteins correlate with the diameter of the axon, and NFs have been proposed to play a part in the regulation of axonal diameter (Hoffman et al., 1984, 1987). From genomic sequences it is clear that the NF genes belong to a distinct subset, denoted type IV (see Steinert and Roop, 1988, for a review of IF structure and nomenclature), within the IF genes. Type I-III IF genes have 6 introns, which interrupt the coding sequence in exactly the same relative positions, whereas type IV NF genes have only 2 or 3 introns, which are positioned differently from those of the type III genes (except for 1 intron that corresponds to 1 of the type III introns) and those of the other IFs (Lewis and Cowan, 1986; Myers et al., 1987; Lees et al., 1988). While the various IF genes are clearly descendants of a single progenitor gene, the relationship of the NFs to the other IFs is debated (Lewis and Cowan, 1986; Myers et al., 1987).
The coding sequences of the COOH-terminal extensions of the larger Mr NF polypeptides have proven to be of particular interest; the human NF-M gene and the NF-H genes of human, mouse, and rat have been shown to code for multiple repeats (up to 50 for the NF-H genes) based on the sequence KSPV(A) (Myers et al., 1987; Lees et al., 1988; Shneidman et al., 1988). It now appears that the larger Mr NF polypeptides of all species examined (except for rat and mouse NF-M, which have single sequence motifs resembling the one found in human NF-M) have multiple repeats based on the sequence KSPV(A) (Levy et al., 1987; Myers et al., 1987; Napolitano et al., 1987; Pleasure et al., 1989; and unpublished observations). The repeat motif in human NF-M is a sequence of 13 amino acids (aa), i.e., KSPVPKSPVEEKG, repeated serially almost exactly 6 times (Fig. 1). This repeated sequence of 13 aa in human NF-M contains 2 KSPVs, with the serines at positions 2 and 7 (see Fig. 1). Using monoclonal antibodies (mAbs) that recognize various phosphoisoforms of human NF-H and NF-M and chemically phosphorylated peptides based on the KSPV(A) motif, the regions containing these repeats have been shown to be the major site of NF phosphorylation, which may be responsible for generating the extensive heterogeneity of phosphorylation found in NF in vivo (Lee et al., 1988a, b). The repeat motifs have thus been called the multiphosphorylation repeat (MPR) (Lee et al., 1988a, b). Recent work in our laboratory has shown that both serines in the 13 aa repeated sequence of human NF-M are sites of phosphorylation (Tangoren et al., 1989; and I. Tangoren, L. Otvos, and V. M.-Y. Lee, unpublished observations) and that the single repeat motif in rat NF-M is also phosphorylated (Tangoren et al., 1989; Xu et al., 1989).
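The motif arithmetic above (two KSPV phosphoacceptor sites per 13-aa unit, about 6 serial units, serines at positions 2 and 7) can be verified with a short scan. The repeat region below is reconstructed from the motif described in the text, not taken from a sequence database:

```python
# Scan the human NF-M multiphosphorylation repeat (MPR) for KSPV motifs.
# The region is reconstructed as 6 serial copies of the 13-aa unit from
# the text; it is illustrative, not a database-derived sequence.

UNIT = "KSPVPKSPVEEKG"       # 13-aa repeat unit described in the text
mpr_region = UNIT * 6        # ~6 serial copies in human NF-M

def count_motif(seq: str, motif: str = "KSPV") -> int:
    """Count (possibly overlapping) occurrences of a motif in a sequence."""
    return sum(1 for i in range(len(seq) - len(motif) + 1)
               if seq[i:i + len(motif)] == motif)

kspv_sites = count_motif(mpr_region)   # 2 sites per unit x 6 units = 12
serine_positions_in_unit = [i + 1 for i, aa in enumerate(UNIT) if aa == "S"]
```

The scan confirms 12 KSPV sites across the reconstructed region, with the unit's serines at positions 2 and 7.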
The functional significance of the phosphorylation of the MPR regions in NFs is unknown, although there is correlative evidence in the lamprey to show that it may be involved in determining axonal diameter (Pleasure et al., 1989), and in mammals that it may somehow control the rate of slow axonal transport (reviewed by Matus, 1988). Also, little is known about the identity of possible sites of phosphorylation in NF-M outside the MPR that may play a role in regulating assembly, as has been shown for other IFs (Inagaki et al., 1987). Nevertheless, sites of phosphorylation outside the MPR are thought to exist based on the specificity of phosphorylation-dependent mAbs that do not recognize the MPR, and these putative sites have been postulated to play a role in structural functions of lamprey NF (Pleasure et al., 1989).
In this report we demonstrate that genomic clones of human NF-M are expressed in mouse L cells, and that they are translated and assembled into filaments without the presence of any other neuronal proteins, confirming another recent report (Monteiro and Cleveland, 1989). More important, we show that human NF-M can be expressed from genomic DNA and is phosphorylated within the MPR in transfected L cells. This represents the first report that NFs, the most abundant class of neuronal structural proteins and protein kinase substrates of neurons, can be phosphorylated appropriately in non-neuronal cells. Finally, we have also expressed a construct lacking the MPR in L cells to determine whether the MPR is essential for human NF-M expression and assembly, and whether phosphorylation sites other than those comprising the MPR exist in human NF-M.
Materials and Methods

Construction of plasmids and transfection of L cells

pTZNFM. A 6.6 kb Bam HI to HindIII fragment was isolated from a lambda phage clone and ligated into pTZ18 (Fig. 1). This fragment contains the entire human NF-M gene and 2.5 kb of upstream sequence. The resulting plasmid was used directly for transfection experiments.
pTZNFMBam-. This plasmid was derived from pTZNFM by removing a 726 bp Bam HI to Bam HI fragment from the third exon. This creates an in-frame deletion removing amino acids 552-793.
Northern analysis of NF expression in cells
Poly A+ RNA was isolated from cells in log-phase growth using a kit purchased from Invitrogen, exactly according to the manufacturer's specifications. RNAs were analyzed by electrophoresis through 1% agarose gels containing formaldehyde (Thomas, 1980) and transferred to nylon membranes (Zeta-Probe/BioRad).
Hybridization was to a partial cDNA clone of human NF-M (pHNF; Myers et al., 1987) in 50% formamide, 6× SSPE, 1× Denhardt's, 0.1% SDS, and 100 μg/ml salmon sperm DNA. Posthybridization washes were once with 2× SSC/0.1% SDS at room temperature, and once with 0.2× SSC/0.1% SDS at 58°C. Exposure was as stated in the legend of Figure 2.
Preparation of mAbs and antiserum
The mAbs used in this study were prepared and initially characterized previously (Carden et al., 1985; Lee et al., 1987).
The antivimentin antiserum was prepared by immunizing rabbits with a 22 amino acid synthetic peptide representing the sequence of amino acids 438-459 from the carboxyl terminal region of human vimentin (Ferrari et al., 1986). The antiserum derived from these rabbits was used at a dilution of 1:500 for immunofluorescence or immunoblotting.
Indirect immunofluorescence
Cells were grown on glass coverslips for at least 48 hr, after which they were fixed by immersion in -20°C acetone for 7-10 min and allowed to air-dry. The coverslips were then incubated with the primary mAb for 1 hr at room temperature in a humidified chamber. The coverslips were then washed 4 times for 15 min in PBS and incubated with the secondary Ab (goat anti-mouse coupled to FITC and goat anti-rabbit coupled to RITC, Cappel) for 1 hr at room temperature in a humidified chamber, after which they were washed again 4 times for 15 min in PBS and mounted with aquamount.
Preparation of cytoskeletal extracts and total cell extracts
Confluent 60 mm plastic tissue culture dishes were lysed with 200 µl of cold Tris-buffered saline (TBS)/Triton with a cocktail of protease inhibitors added and scraped into a cold glass homogenizer. The lysate was incubated for 30 min on ice with intermittent plunges of the homogenizer. The insoluble cytoskeleton was pelleted at 100,000 × g in a Beckman TL100 centrifuge and the pellet solubilized in Laemmli sample buffer (LSB) without dye for protein determination using a Pierce protein assay kit dependent on Coomassie blue dye binding. Total cell extracts were prepared by homogenizing cells in boiling LSB without dye and boiling the extract for 15 min.
SDS-PAGE and immunoblotting
Cell extracts were separated on 0.75-mm-thick 4-8% gradient or 7.5% polyacrylamide gels exactly as described previously. Proteins separated by SDS-PAGE were electrophoretically transferred to either nitrocellulose (Schleicher and Schuell) or Immobilon-P (Millipore), probed with mAbs, and visualized using the peroxidase-antiperoxidase (PAP) protocol as described.
Quantitative immunoblotting
Pure bovine lens vimentin was purchased from Boehringer Mannheim. Bovine NF-M, judged to be >95% pure, was purified by anion exchange chromatography on a DEAE column from neurofilament triplet proteins isolated according to Carden et al. (1985). Various amounts of cytoskeletal extracts were separated on 7.5% SDS-PAGE gels and blotted to nitrocellulose. Included on these gels were known amounts of the appropriate standards, vimentin or NF-M, which were used to generate standard curves for each antigen. The replicas were incubated overnight with the primary Abs (either RMO 189, an anti-NF-M core mAb, or our rabbit antivimentin antiserum). Following washes, the replicas were incubated with 1 × 10⁶ cpm/ml of either anti-mouse IgG labeled with ¹²⁵I or Protein A labeled with ¹²⁵I. The replicas were washed and exposed to film. The films were scanned with an LKB ultrascan laser densitometer and standard curves computed for each film.
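The standard-curve quantitation described above can be sketched numerically. The densitometry values below are hypothetical placeholders (the paper's actual readings came from scanned films); the sketch illustrates only the fit-and-interpolate step:

```python
# Hypothetical densitometry standard curve: ng of pure antigen vs. film signal.
std_ng = [10.0, 25.0, 50.0, 100.0]
std_signal = [0.8, 2.1, 4.0, 8.2]  # arbitrary densitometer units (illustrative)

# Ordinary least-squares fit: signal ≈ slope * ng + intercept.
n = len(std_ng)
mean_x = sum(std_ng) / n
mean_y = sum(std_signal) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(std_ng, std_signal))
         / sum((x - mean_x) ** 2 for x in std_ng))
intercept = mean_y - slope * mean_x

def signal_to_ng(signal):
    """Interpolate an unknown band's signal back to ng of antigen."""
    return (signal - intercept) / slope

print(round(signal_to_ng(3.0), 1))  # ≈ 36.8 ng with these illustrative numbers
```

Each film gets its own curve, as in the paper, because exposure and development conditions shift the signal-to-mass relationship from blot to blot.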
[32P]PO4 and [35S]-methionine labeling and immunoprecipitations
Confluent 60 mm plastic tissue culture dishes of cells were washed once with RPMI 1640 phosphate-free medium (Gibco) containing 10% fetal bovine serum (FBS; JR Scientific), 2 mM L-glutamine, 10,000 units/ml penicillin, and 10 mg/ml streptomycin, and then phosphate-starved by incubation in the phosphate-free medium for 15 min at 37°C; 2.5 mCi of [32P]-orthophosphate (Amersham) was added to each culture dish and the cells were labeled for 2 hr at 37°C. For pulse chase experiments, confluent dishes of MNA and MNA-B were starved in methionine-free RPMI 1640 medium for 20 min and then labeled for 2 hr with 1.25 mCi of trans-[35S]-label (ICN) per dish and chased with complete medium for 0 or 18 hr, as indicated in Figure 6. Immunoprecipitations were conducted as described in Black and Lee (1988). After separation, the gels were dried and exposed to film for times stated in the figure legends.
The pulse chase experiment in Figure 6 represents half of a gel where duplicate samples were processed from the same plates of cells. Densitometric values represent the average of the duplicate sets which were scanned on an LKB ultrascan densitometer.
Dephosphorylation of L-cell cytoskeletal extracts
L-cell extracts were dephosphorylated with Escherichia coli alkaline phosphatase (Sigma type III) using 4 units/mg of protein exactly as described by Carden et al. (1985). The reaction was allowed to proceed at 37°C for 16 hr. Control extracts were treated in exactly the same way except that no enzyme was added.
Two-dimensional gel analysis
Iso-electric focusing was performed according to the method of O'Farrell exactly as described previously (Pleasure et al., 1989). A pH gradient of 4.5-8.0 was achieved using LKB ampholines (pH 3.5-10.0, 5.0-7.0, and 4.0-6.0 in a ratio of 2:9:9 at a final concentration of 2%); 20 µg of a cytoskeletal extract prepared from confluent 60 mm dishes of both MNA and MNA-B cells were loaded on the first dimension. The second dimension consisted of 4-8% gradient SDS-PAGE gels.
Results
Transfection and expression of human NF-M constructs
L cells generated by transfection with the full-length genomic clone for human NF-M are designated as MNA cells, while those generated by transfection with the Bam HI-digested clone [i.e., MPR(-)] are designated as MNA-B cells (Fig. 1). After transfection and selection, the cells were cloned by limiting dilution, and multiple clones were examined by Northern blotting, indirect immunofluorescence, and/or Western blotting for NF-M mRNA and protein expression. Several clones were isolated that expressed each of the 2 constructs; typical Northern blot results are shown in Figure 2. NF-M message is clearly visible in the lane containing MNA RNA, and it comigrates exactly with the authentic NF-M mRNA isolated from human brain (data not shown; Myers et al., 1987). In addition, we determined that the endogenous murine NF genes encoding NF-L, NF-M, or NF-H were not induced (data not shown). The expression of NF-M in fibroblasts transfected with this construct implies that the tissue-specific expression of NF-M is not controlled exclusively in cis within the 3 kb of upstream elements included in this construct, or that NF-M expression is deregulated by the presence or absence of some other factor(s). In addition, Figure 2 shows that NF-M lacking the Bam HI restriction fragment is capable of expression in L cells and that the stably transfected MNA-B cells make mRNA that is approximately 2500 bp in size. This reflects the removal of 726 bp from the MPR(-) construct. The second, more rapidly migrating band in both lanes has been observed before in human brain RNA (Myers et al., 1987) and in RNA isolated from some human neuronal tumor cell lines (unpublished observations). Its origin is unknown, but it may be due to differential polyadenylation signals or splicing in the 3' end of the mRNA.
The reason for the reduced expression of the MPR(-) NF-M mRNA and protein is not known, but it is most likely a clonally related phenomenon because other clones expressing the NF-M constructs described here have varying levels of protein and mRNA expression. It is unlikely that the difference in NF-M and MPR(-) NF-M mRNA expression is due to differential stability of the 2 messages because the region deleted from MPR(-) NF-M is wholly exonic and would not be predicted to make a great change in either splicing or mRNA conformation.
Whole cell extracts were prepared from MNA and MNA-B cells and probed using a library of anti-NF mAbs (Carden et al., 1985; Lee et al., 1987, 1988a) to determine if NF-M and MPR(-) NF-M were being translated (Table 1). All mAbs that reacted with human NF-M isolated from human spinal cord reacted with the full-length NF-M in MNA cells, suggesting a close similarity between authentic human NF-M and human NF-M expressed in the MNA cells. All of the MPR-specific mAbs which have been shown to react with the MPR in human NF-M did not react with the MPR(-) NF-M in the MNA-B cells. All mAbs known to be specific for core epitopes in NF-M and all mAbs specific for the heptad repeat region at the extreme carboxy terminal region of NF-M (unpublished observations) were positive for extracts from both MNA and MNA-B cells. This indicates that intact NF-M and MPR(-) NF-M are both translated into protein and that the MPR-specific mAbs are incapable of detecting NF-M following the removal of the MPR domain. In contrast, epitopes recognized by mAbs specific for structural determinants coded for by sequences both 5' and 3' of the deletion are found in MPR(-) NF-M.
The Journal of Neuroscience, July 1990, 10(7): 2431
Included among the MPR-specific mAbs that detect full-length NF-M are those dependent on the state of phosphorylation of the MPR (Lee et al., 1988a, b). A number of these mAbs have previously been defined as P-, Pind, or P+ according to their susceptibility to the removal of phosphates (P) using alkaline phosphatase to dephosphorylate NF-M (P- mAbs are those which react with native human NF only after alkaline phosphatase treatment, P+ are those which react with native human NF only before alkaline phosphatase treatment, and Pind are those which react with native human NF both before and after alkaline phosphatase treatment; see Lee et al., 1987, 1988a, for further descriptions of the classification of anti-NF-M mAbs). Members of all of these groups of mAbs are included in Table 1. These groups of mAbs react with the full-length NF-M protein in transfected MNA cells (see Fig. 7 also), indicating that NF-M is likely to be phosphorylated at the MPR to some degree in MNA cells. Considering previous data showing that the migration of NF is highly dependent on phosphorylation (Carden et al., 1985), and that the human NF-M mRNA codes for a protein with an Mr of only about 102 kDa (Myers et al., 1987), this observation may in part reflect the removal of the majority of the sites of phosphorylation from NF-M. The MPR(-) NF-M is still retarded in its SDS-PAGE mobility when its true Mr is considered. This is probably due to the numerous glutamic acid residues in the COOH terminal extension of NF-M. In addition, immunoreactive full-length NF-M is distributed in Mr's ranging from 150 to 165 kDa, suggesting possible heterogeneity due to different states of phosphorylation at the MPR (see below). Since MPR(-) NF-M protein migrates as a single band, this implies that almost all of the heterogeneity due to phosphorylation at the MPR has been lost.
Figure 3B shows that there is no apparent change in the levels or solubility of the endogenous vimentin protein from the L cells. Quantitative immunoblotting of extracts of MNA cells using an antivimentin antiserum or the anticore NF-M mAb (RMO 189) showed that vimentin constitutes 21.9% (SD 3.03) and NF-M 1.6% (SD 0.07) of the total Triton X-100 insoluble protein (data not shown).
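For scale, these fractions imply roughly a 14-fold mass excess of endogenous vimentin over the transfected NF-M in the insoluble cytoskeleton. This is a trivial derived figure, not one stated in the paper:

```python
# Reported fractions of Triton X-100 insoluble protein (quantitative blots).
vimentin_pct = 21.9   # % (SD 3.03)
nfm_pct = 1.6         # % (SD 0.07)

# Mass excess of endogenous vimentin over transfected NF-M.
ratio = vimentin_pct / nfm_pct
print(round(ratio, 1))  # → 13.7-fold excess by mass
```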
Cellular localization of the NF-M and MPR(-) NF-M proteins
Indirect immunofluorescence studies using anti-NF mAbs on the MNA and MNA-B cells revealed that both the full-length and the MPR(-) NF-M stained in a filamentous manner. RMO 308, an MPR-specific mAb, clearly reacted with an abundant network of filaments containing the full-length NF-M, and these filaments colocalized exactly with endogenous vimentin. This suggests that both vimentin and NF-M proteins are incorporated into the IFs of MNA cells (Fig. 4, A, B). RMO 3, an mAb specific for an epitope outside the MPR in the sidearm of NF-M, stained MPR(-) NF-M in MNA-B cells, and this immunoreactivity also colocalized with vimentin (Fig. 4, C, D). However, the intensity and the distribution of MPR(-) NF-M were reduced compared to the full-length NF-M protein or vimentin. The reason for this reduction is not known at present, but it may be due to the lower levels of MPR(-) NF-M mRNA rather than to a decrease in the stability of the MPR(-) NF-M protein (see below). Finally, as expected from the foregoing, RMO 308 did not stain MNA-B cells (Fig. 4E), nor did any of the anti-NF mAbs stain untransfected L cells (data not shown).
The pattern of staining for NF-M in MNA cells was noteworthy because the most abundant NF immunoreactivity typically was clustered around the nucleus and radiated outward toward the cell periphery (Fig. 4A). A minority of the MNA cells adopted a flat morphology with extended cytoplasm. Occasionally multiple nuclei were noted in these cells. This morphology also was seen in untransfected L cells with the same frequency. The transfected flat cells usually stained intensely both with anti-NF mAbs and the antivimentin antiserum. In addition, occasional cells had primarily a perinuclear whorl of anti-NF immunoreactivity which colocalized with vimentin immunoreactivity. This pattern of staining may represent the previously observed site of assembly of soluble IF subunits into the IF network (Vikstrom et al., 1989). (Table 1 and data not shown). These data further indicate the integrity of the product of the deleted construct. (Fig. 5, compare lanes 1 and 2). Labeled MPR(-) NF-M at 0 hr constituted 40% of full-length NF-M at 0 hr. We conclude that the difference in intracellular distribution of the full-length and deleted products was not due to decreased stability of MPR(-) NF-M, but instead may reflect decreased synthesis of the protein, and that the removal of the MPR has no effect on the stability of human NF-M.
Both NF-M and MPR(-) NF-M are phosphorylated
Two-dimensional gel analysis of a mixture of cytoskeletal extracts from MNA and MNA-B cells contributes further evidence that the removal of the MPR greatly simplifies the pattern of phosphorylation of NF-M in these cells and that the Mr heterogeneity of the full-length protein is due to phosphorylation. Silver-stained gels show the relative positions of the 2 NF-M proteins among the large number of proteins in the cytoskeletal extracts (Fig. 6A). Probing nitrocellulose replicas of these 2-dimensional gels with RMO 189, a core-specific anti-NF-M mAb, elicited a complex pattern of immunoreactivity (Fig. 6B).
A streak of immunoreactivity stretching from a more basic isoelectric point (pI) with an Mr of 150 kDa to a more acidic pI with an Mr of 165 kDa was detected together with a single spot at 100 kDa. In contrast, when similar immunoblots were probed with RMO 308, an MPR-specific anti-NF-M mAb, only the streak at 150-165 kDa was seen (Fig. 6C). We conclude that the streak at 150-165 kDa is full-length NF-M whereas the spot at 100 kDa is MPR(-) NF-M. This conclusion is also supported by experiments in which cytoskeletal extracts were treated separately (data not shown). Further, we suggest that the streak may represent the sequential addition of phosphate to the full-length NF-M, resulting in isoelectric variants with more acidic pIs and slower electrophoretic mobilities. That the pI of the MPR(-) NF-M is not very different from that of the full-length NF-M may be due to the large number of glutamic acid residues remaining in the sidearm of the truncated NF-M (Myers et al., 1987).
MNA and MNA-B cells were metabolically labeled in vivo with [32P]PO4 and immunoprecipitated with anti-NF-M mAbs to determine whether phosphates are incorporated into NF-M and MPR(-) NF-M in these cells. Figure 7 shows typical results for these experiments. RMO 189 (an anticore NF-M mAb) and RMdO 20 (an MPR-specific mAb) both immunoprecipitated abundant 32P-labeled full-length NF-M, indicating heavy phosphorylation of NF-M in these cells. In contrast, immunoprecipitation of MNA-B cells with the same mAbs showed that RMO 189, but not RMdO 20, immunoprecipitated a faintly 32P-labeled band migrating at 100 kDa. These results indicate that kinases capable of phosphorylating at least 2 distinct sets of sites in NF-M are present in transfected L cells.
Although full-length NF-M in L cells is heavily phosphorylated at the MPR, the faster mobility and heterogeneous banding pattern of the full-length NF-M protein expressed in MNA cells, when compared to that isolated from human spinal cord, indicates that NF-M in these cells exists in a number of phosphoisoforms. Furthermore, close examination shows that only a small amount of the NF-M in these cells comigrates with fully phosphorylated NF-M as isolated from human spinal cord (compare lanes 2 and 3 with lane 4 in Fig. 3 and see Fig. 7). This was confirmed by immunoblotting with P+ mAbs, i.e., those like HO 45 which recognize the highly phosphorylated isoforms of NF-M. These blots revealed that only the most slowly migrating portion of the broad NF-M immunoreactive band is visualized (Fig. 8). Dephosphorylation of L-cell extracts with alkaline phosphatase abolished the immunoreactivity of NF-M with HO 45, proving that HO 45 immunoreactivity was indeed due to phosphorylation. RMO 189, a phosphorylation-independent or Pind mAb, shows that the full range of NF-M MW isoforms is reduced to a single band following dephosphorylation (Fig. 8), implying that the Mr heterogeneity of NF-M in the L cells is due to phosphorylation. The same single band that is present with the Pind mAb after dephosphorylation is the major species which reacts, under all conditions, with RMdO 20, a dephosphorylation-dependent, or P-, mAb which binds to an epitope within the MPR (Fig. 8). This suggests that a significant proportion of the NF-M expressed by the L cells is hypophosphorylated within the MPR. The fact that all of our phosphorylation-dependent, MPR-specific mAbs reacted with full-length NF-M but not MPR(-) NF-M extracted from transfected L cells (Table 1), together with the evidence presented in Figures 7 and 8, supports the conclusion that the MPR, which contains the major sites of phosphorylation of human NF-M in situ, also makes up the major sites of phosphorylation of NF-M in L cells.
Finally, the presence of a lightly labeled MPR(-) NF-M species in MNA-B cells further indicates that there are one or more sites of phosphorylation in human NF-M other than the MPR.
Discussion
This report demonstrates for the first time the site-specific phosphorylation of an IF gene product and of a neuronal cytoskeletal protein in non-neuronal cells. Human NF-M was shown to be phosphorylated at the MPR, the region of the molecule that is extensively phosphorylated in vivo and may be a determinant of NF function in humans (Lee et al., 1988a, b). In addition, the existence of phosphorylation site(s) outside the MPR was demonstrated (Table 1, Fig. 8). This is in sharp contrast with previous studies which showed that human NF-M isolated from human tissue and probed in situ by immunocytochemistry exists primarily in the most highly phosphorylated form (Schmidt et al., 1987). This discrepancy may be explained by the fact that L cells, unlike normal neurons, undergo mitosis continually when maintained in 10% fetal bovine serum. Indeed, it will be interesting to study changes in NF assembly and phosphorylation in a mitotic system and compare these features to developing systems where NF-M immunoreactivity is found before the last round of cell division (Tapscott et al., 1981). Like L cells, such developing systems lack the extensively phosphorylated isoforms at this stage.
Another possible explanation for the relative paucity of the highly phosphorylated forms of human NF-M in transfected L cells may be that the entire array of kinases and phosphatases responsible for the reversible phosphorylation of NF-M within and/or outside the MPR sites is not present or active in L cells. One potential kinase is the one described by Wible et al. (1989), which is highly active on the MPR-containing NF subunits. This kinase is particularly interesting because it phosphorylates partially dephosphorylated bovine NF-H better than extensively dephosphorylated NF-H (Wible et al., 1989). Thus, it is possible that this putative neuron-specific kinase fully phosphorylates the MPR subsequent to initial phosphorylation events performed by more ubiquitous kinases. Our L cells transfected with genomic NF-M DNA will be useful for studies of the interaction between human NF-M and the NF-specific kinase mentioned above and would be an ideal cell line to express this kinase once it has been cloned. Finally, differences in the phosphorylation state of NF-M in the L cells compared with NF-M isolated from human spinal cord may be due to the lack of other differentiated neuronal characteristics in the L cells (e.g., neurite extension), the regulation of which also might require the expression of similar kinases and phosphatases.
We have also successfully expressed MPR(-) NF-M in L cells by removing a Bam HI restriction fragment containing the MPR encoding sequences from the full-length genomic NF-M construct (Figs. 1, 2). Several lines of evidence presented here demonstrate that the MPR(-) NF-M is translated and has characteristics of the expected deletion product.
Our observation that MPR(-) NF-M is recovered from the detergent-insoluble cytoskeleton and colocalizes with vimentin suggests that it is incorporated into IFs. This finding implies that the MPR is not essential for NF incorporation into the IF network in this system. Previous studies by others have shown that digestion of the highly phosphorylated sidearm from assembled NFs had no effect on the integrity of the IF backbone (Chin et al., 1983). Our study confirms and extends these observations by directly demonstrating that the MPR is not essential for initiating the assembly of NF-M into stable IFs. Recent studies showing that extensive dephosphorylation of NFs in vitro by alkaline phosphatase had no effect on filament morphology (Carden et al., 1985; Hisanaga and Hirokawa, 1989) are further evidence supporting the lack of the MPR's role in assembly. Although the carboxy terminal regions of type III IF proteins have been shown to be unnecessary for assembly into filaments (Fuchs, 1987, 1989; van den Heuvel et al., 1987), many differences exist between the type III and type IV IFs (Steinert and Roop, 1988). One such difference is the presence of extended sidearm domains with extensive phosphorylation sites in NF-M and NF-H from all species but not in any other IFs (Steinert and Roop, 1988).
Figure 8. Dephosphorylation of full-length NF-M. Shown are gel replicas loaded with 30 µg of Triton-insoluble extracts which were either dephosphorylated (lanes marked DP) or not. These replicas were exposed to 3 mAbs of different phosphorylation-dependent specificities. RMO 189 is an anticore NF-M mAb whose reactivity is independent of the phosphorylation state of the molecule. HO 45 is an anti-MPR mAb whose reactivity is dependent on the presence of extensive phosphorylation of serines within the MPR (see Fig. 1). RMdO 20 is an anti-MPR mAb whose reactivity is dependent on the presence of several nonphosphorylated KSPV-based repeats. Mr markers are shown in kDa.
Figure legend (panels RMO 189 and RMdO 20): The position of full-length NF-M from the L cells is indicated by stars and that of MPR(-) NF-M by arrows. The left panel was immunoprecipitated by RMO 189, an anticore NF-M mAb, and the right panel by RMdO 20, a mAb whose epitope is within the MPR. Note that RMdO 20 does not precipitate any MPR(-) NF-M. The electrophoretic mobility of NF-M isolated from human spinal cord along with Mr markers is shown at the right. The gel was exposed to film for 1 week at -70°C.
This justifies the pursuit of further studies of the residual sites of MPR(-) NF-M phosphorylation (not detectable when the MPR is present, owing to the extensive phosphorylation within the MPR), which may play a direct role in NF stability. Although MPR(-) NF-M in transfected L cells lacks the major phosphorylation sites in human NF-M, it is phosphorylated at a low level in L cells. The site(s) phosphorylated in MPR(-) NF-M is presently unknown and may serve functions analogous to those attributed to phosphorylation sites in other IF proteins. For example, vimentin is phosphorylated within the amino terminal region, and repeated cycles of phosphorylation and dephosphorylation at this site(s) have been implicated in the process of repeated assembly-disassembly during the cell cycle (Inagaki et al., 1987; Chou et al., 1989; Evans, 1989). Nuclear lamins likewise are cyclically phosphorylated and dephosphorylated in relation to the assembly and disassembly of the nuclear membrane during mitosis (Gerace and Blobel, 1980; Miake-Lye and Kirschner, 1985). Since the sequence KSPV has been shown to be the predominant phosphate acceptor motif in NF proteins (Lee et al., 1988a, b), the KSPV (aa 510-513) closer to the core region and outside the MPR (Myers et al., 1987) should be considered a potential phosphorylation site in human NF-M. Attempts were made to determine whether this KSPV is indeed the site of phosphorylation in MPR(-) NF-M. mAbs specific for the nonphosphorylated form of the tetrapeptide sequence KSPV (Lee et al., 1988a, b) do not recognize the MPR(-) NF-M. These mAbs still do not recognize MPR(-) NF-M following dephosphorylation, suggesting that if this KSPV is phosphorylated, it is resistant to dephosphorylation. However, data from another laboratory indicate that a KSPV is present in the same position in rat NF-M (Napolitano et al., 1987) and that this KSPV is a site of phosphorylation (Xu et al., 1989).
Other potential phosphorylation sites, including those present within the amino terminal portion of NF-M, may also be likely candidates (Sihag and Nixon, 1989). Many more definitive experiments, including 2-dimensional peptide mapping and sequencing, will be necessary to determine whether this KSPV in human NF-M and/or other serine residues are the authentic phosphate acceptor site(s) in MPR(-) NF-M.
The functional significance of NF phosphorylation in overall NF biology is unknown. Nevertheless, the resolution of phosphorylation site(s) outside of the MPR confirms the existence of at least 2 types of such sites, the presence of which we described in lamprey NF (Pleasure et al., 1989). In the lamprey, 2 distinct types of phosphorylation sites with different anatomical localizations were demonstrated. The first type was detected by MPR-specific mAbs, whereas the second was recognized by phosphorylation-dependent non-MPR mAbs. We have referred to the non-MPR site(s) of phosphorylation as "structural" sites since they were localized to axons of all sizes and may assume roles in NF assembly or filament maintenance that were indispensable to NF integrity. The MPR sites of phosphorylation in lamprey are occupied extensively only in large-diameter axons and may be involved in controlling axonal diameter (Pleasure et al., 1989). Our L-cell transfection system has not only allowed the resolution of both MPR and non-MPR phosphorylation sites, but will also provide an expression system to examine the regulation of phosphorylation of both these sites in human NF-M.
Red Y2O3:Eu-Based Electroluminescent Device Prepared by Atomic Layer Deposition for Transparent Display Applications
Y2O3:Eu is a promising red-emitting phosphor owing to its high luminance efficiency, chemical stability, and non-toxicity. Although Y2O3:Eu thin films can be prepared by various deposition methods, most of them require high processing temperatures in order to obtain a crystalline structure. In this work, we report on the fabrication of red Y2O3:Eu thin film phosphors and multilayer structure Y2O3:Eu-based electroluminescent devices by atomic layer deposition at 300 °C. The structural and optical properties of the phosphor films were investigated using X-ray diffraction and photoluminescence measurements, respectively, whereas the performance of the fabricated device was evaluated using electroluminescence measurements. X-ray diffraction measurements show a polycrystalline structure of the films whereas photoluminescence shows emission above 570 nm. Red electroluminescent devices with a luminance up to 40 cd/m2 at a driving frequency of 1 kHz and an efficiency of 0.28 lm/W were achieved.
Introduction
Inorganic-based electroluminescent (EL) devices have been extensively studied for transparent flat panel display applications due to their distinct characteristics. Such technology allows for the creation of displays capable of withstanding harsh environments thanks to their exclusively solid structure, which leads to a high level of vibration and mechanical shock resistance [1]. Additionally, the electroluminescence phenomenon, which is not affected by temperature, allows EL devices to operate in a wide range of temperatures [2]. Furthermore, the ability to use alternating current to drive EL devices prevents charge accumulation, leading to long operating lifetimes [3].
Because the abovementioned characteristics are difficult to achieve with technologies such as organic-light emitting diodes (OLEDs), inorganic-based electroluminescent displays are very attractive from the commercial point of view. LUMINEQ thin film electroluminescent (TFEL) rugged displays and their transparent version TASEL displays are good examples of such commercial products which have been incorporated in industries such as automotive, industrial vehicles, and optical devices.
While yellow and green TFEL and TASEL displays are commercially available, demand for red EL devices has been increasing. Transparent red electroluminescent displays could, for example, be integrated into heavy vehicles, enabling them to display warning signs more effectively, thereby increasing the safety of operators. In the past, some attempts to develop red electroluminescent devices have been made by integrating phosphors such as CaS:Eu [4][5][6], CaY2S4:Eu [7], β-Ca3(PO4)2:Eu [8], and ZnS:Sm,P [9] into the classic dielectric/semiconductor/dielectric (DSD) EL device structure. Red EL devices, with phosphors such as Eu2O3 [10], Ga2O3:Eu [11,12], and IGZO:Eu [13], were also developed using alternative device structures. However, only the use of a color filter with the yellow ZnS:Mn phosphor resulted in sufficiently high red luminescence to be used in commercial products [14]. This solution is unfortunately not suitable for transparent display applications, as the use of filters reduces the overall transparency of the device.
Among the currently available red inorganic phosphors, Y2O3:Eu and Y2O2S:Eu are the most efficient [15,16]. Y2O3 and Y2O2S are known for their good chemical and photochemical stability. Furthermore, because Y3+ and Eu3+ have similar ionic radii, rare-earth ions such as Eu3+ can easily be incorporated into Y2O3 and Y2O2S matrices [17]. However, Y2O3 exhibits a high electrical resistivity, with reported values in the 10¹¹-10¹² Ωm range [18], which makes it incompatible with the classic DSD electroluminescent device structure. Nevertheless, several papers have demonstrated the successful use of Y2O3 and Y2O2S in red and green electroluminescent devices using multilayer structures where ZnS is used as a carrier accelerating layer [19,20]. Y2O3:Eu thin film phosphors can be grown by various methods such as wet chemistry [21], laser vaporization [22], hydrothermal [23], microwave hydrothermal [24,25], chemical precipitation with calcination [26], co-precipitation [27], Pechini [28], sol-gel [29,30], and pulsed laser deposition [31] methods. Atomic layer deposition (ALD) is a well-known method that allows the growth of uniform and dense films with well-controlled stoichiometry and high chemical stability. Moreover, ALD, which is the method used for the fabrication of commercial electroluminescent displays, offers the advantage of an all-in-one growth step for the dielectric and phosphor layers in a DSD structure, thereby improving device resistance to moisture [1,32]. Years of advances in ALD technology have allowed the use of more elements and chemical precursors for the development of novel processes. As a result, opportunities for the fabrication of high-quality phosphors, and consequently more efficient electroluminescent devices, may arise in the future.
In a previous paper, we reported the growth of blue and red Y2O3-xSx:Eu phosphors by ALD [33]. In this work, we focus on the fabrication and performance evaluation of Y2O3:Eu-based multilayer-structure electroluminescent devices that can potentially be used in red transparent display applications.
Materials and Methods
Atomic layer deposition processes for Y2O3, Eu2O3, Al2O3, and ZnS thin films were first developed on (100)-oriented Si substrates. All the films were grown at 300 °C in a Beneq TFS-200 ALD reactor (Beneq Oy, Espoo, Finland) at a pressure of about 1.3 mbar. (CH3Cp)3Y (98%, Intatrade, Anhalt-Bitterfeld, Germany), Eu(thd)3 (thd = 2,2,6,6-tetramethyl-3,5-heptanedionate) (99.5%, Intatrade, Anhalt-Bitterfeld, Germany), Zn(OAc)2 (99.9%, Alfa Aesar, Thermo Fisher GmbH, Germany), and trimethylaluminum (TMA, Al(CH3)3) (98%, Strem Chemicals UK Ltd., Cambridge, UK) were used as precursors for yttrium, europium, zinc, and aluminum, respectively, while H2O and/or O3 were used as oxygen precursors for the Y2O3, Al2O3, and Eu2O3 processes. H2S was used as sulfur precursor for the ZnS process. In all processes, N2 was used as carrier and purging gas. Details about the pulsing sequences and pulse and purge times are presented in Table 1.

The electroluminescent device was prepared using the structure proposed by T. Suyama et al. [19]. The multilayer structure was grown by ALD on a standard glass substrate coated with an ion-diffusion barrier and an ITO layer provided by LUMINEQ (Beneq Oy, Espoo, Finland). First, a 150 nm thick Al2O3 dielectric layer was grown by ALD. It was then followed by several ZnS (50 nm)/Y2O3:Eu (40 nm) multilayers. Finally, another 150 nm thick Al2O3 layer was deposited on the structure. The 1720 nm thick device was finalized by depositing a top contact. A schematic illustration of the device is presented in Figure 1. While it is possible to use a transparent top contact for a fully transparent device, for convenience, top contact stripes of aluminum were sputtered here using a mechanical mask.
The crossing of the ITO transparent contact and the aluminum stripes, which also comprises the sandwich multilayer Al2O3/ZnS/Y2O3:Eu/Al2O3 structure, creates a passive matrix with a pixel size of 3 × 5 mm². Note that prior to the deposition of the top Al2O3 layer, the multilayer sequence was always completed with a ZnS top layer. In this work, 6 layers of Y2O3:Eu and 7 layers of ZnS were used.
A SE400adv ellipsometer (SENTECH Instruments GmbH, Berlin, Germany), using a 633 nm wavelength at a 70° angle of incidence, was used to determine the growth per cycle (GPC) for each material. GPC values were subsequently used to determine the thickness of the different layers. The crystallinity of Y2O3:Eu and ZnS thin films was investigated by X-ray diffraction (XRD) using the Cu Kα line in a Rigaku SmartLab (Rigaku Europe SE, Neu-Isenburg, Germany) high-resolution X-ray diffractometer equipped with an in-plane arm. The XRD data were analyzed using HighScore Plus 4.6 (PANalytical B.V., Almelo, The Netherlands). Photoluminescence (PL) emission was measured from Y2O3:Eu thin film phosphors with a Hitachi F-7100 fluorescence spectrophotometer (Hitachi High-Tech Analytical Science Ltd., Abingdon, UK) equipped with a 150 W xenon lamp. Measurements were performed at room temperature with an excitation slit of 5 nm, an emission slit of 2.5 nm, and a photomultiplier tube voltage of 400 V. To determine the excitation wavelength, excitation spectra were recorded for maximum emission at 612 nm. Electroluminescent devices were powered by a Hewlett Packard 6811a source using AC mode at a frequency of 1 kHz. Electroluminescence spectra were recorded using a Konica Minolta CS-2000 spectrometer (Konica Minolta Sensing Europe B.V., Nieuwegein, The Netherlands) with a measurement angle of 1°.
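The GPC-to-thickness conversion mentioned above can be sketched as below. The GPC values in this snippet are illustrative placeholders, not the measured values from Table 1.

```python
# Sketch: converting ALD growth-per-cycle (GPC) values into cycle counts
# and film thicknesses. The GPC values below are illustrative placeholders,
# NOT the measured values reported in Table 1 of the paper.

ASSUMED_GPC_NM = {  # nm deposited per ALD cycle (hypothetical)
    "Al2O3": 0.10,
    "ZnS": 0.11,
    "Y2O3": 0.12,
}

def cycles_for_thickness(material: str, target_nm: float) -> int:
    """Number of ALD cycles needed to reach a target film thickness."""
    return round(target_nm / ASSUMED_GPC_NM[material])

def thickness_from_cycles(material: str, n_cycles: int) -> float:
    """Film thickness (nm) grown by a given number of cycles."""
    return n_cycles * ASSUMED_GPC_NM[material]

# Example: the 150 nm Al2O3 dielectric layer of the device
n = cycles_for_thickness("Al2O3", 150.0)
print(n, thickness_from_cycles("Al2O3", n))
```

In practice the GPC is calibrated once by ellipsometry on a test film, as described in the text, and the cycle count is then programmed into the ALD recipe for each layer.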
For the calculation of the EL device efficiency, the Sawyer-Tower circuit was used to determine the charge density versus voltage (Q-V) characteristic. The circuit is composed of a sense capacitor connected in series with the EL device. The total capacitance of the circuit was determined using a Fluke 76 digital multimeter. Data for the Q-V plot were acquired by measuring the voltage at each of the device terminals using a WaveSurfer 3104z oscilloscope (Teledyne LeCroy, Teledyne GmbH, Heidelberg, Germany). The charge (Q) of the device was determined by multiplying the output voltage by the total capacitance of the circuit [32]. Simulations were performed using LTspice XVII.
Results
To optimize the emission of the Eu-doped Y2O3 thin film phosphors, films with three different Y2O3:Eu2O3 cycle ratios were grown. Thus, three Eu doping concentrations (2:2, 3:2, and 4:2) were obtained by changing the number of Y2O3 and Eu2O3 cycle sequences. As an example, a 4:2 doping configuration refers to a Y2O3:Eu thin film layer in which 4 cycles of Y2O3 (Y(MeCp)3/N2/H2O/N2) were followed by 2 cycles of Eu2O3 (Eu(thd)3/N2/O3/N2/H2O/N2) during the ALD process. Taking into consideration the Y2O3 and Eu2O3 densities and growth rates on Si substrate, and assuming that the Y2O3 and Eu2O3 films are stoichiometric, the 2:2, 3:2, and 4:2 doping configurations lead to calculated Eu concentrations of 16, 11, and 9 mol%, respectively.

Figure 3a shows grazing incidence X-ray diffractograms for Y2O3:Eu and ZnS thin films measured between 15 and 65°. The Y2O3:Eu sample was prepared with a 3:2 (Y2O3:Eu2O3) cycle ratio. The Y2O3:Eu XRD diffractogram shows that the main phase of the film is polycrystalline (randomly orientated) cubic (pattern number 00-041-1105; Ia3) with some traces of monoclinic phase (marked with asterisks). The grazing incidence XRD data of the ZnS sample show clearly that the sample is highly orientated, as only the (002) reflection is observed. The wide bump between 45 and 60° most likely originates from the substrate.
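The cycle-ratio-to-concentration estimate described above can be sketched as follows. The GPC values below are assumptions of this sketch (the densities are bulk literature values), so the output only approximately reproduces the 16, 11, and 9 mol% quoted in the text.

```python
# Sketch: estimating the Eu cation concentration (mol%) of a Y2O3:Eu film
# from the Y2O3:Eu2O3 ALD cycle ratio, assuming stoichiometric oxide films.
# GPC values are assumed placeholders; densities are bulk literature values.

M_Y2O3, M_EU2O3 = 225.81, 351.93      # molar masses, g/mol
RHO_Y2O3, RHO_EU2O3 = 5.01, 7.42      # densities, g/cm^3 (bulk values)
GPC_Y2O3, GPC_EU2O3 = 0.12, 0.025     # nm/cycle (assumed, not from Table 1)

def eu_mol_percent(n_y2o3_cycles: int, n_eu2o3_cycles: int) -> float:
    """Eu / (Eu + Y) cation fraction in mol% for one ALD supercycle."""
    # cycles * GPC gives the deposited thickness; thickness * density /
    # molar mass gives moles of oxide; each oxide unit carries 2 cations.
    n_y = n_y2o3_cycles * GPC_Y2O3 * RHO_Y2O3 / M_Y2O3 * 2
    n_eu = n_eu2o3_cycles * GPC_EU2O3 * RHO_EU2O3 / M_EU2O3 * 2
    return 100.0 * n_eu / (n_y + n_eu)

for ratio in [(2, 2), (3, 2), (4, 2)]:
    print(ratio, round(eu_mol_percent(*ratio), 1))
```

With these assumed growth rates the three cycle ratios give roughly 16.5, 11.7, and 9.0 mol%, close to the reported 16, 11, and 9 mol%.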
Further proof of the orientation was obtained by performing an in-plane measurement that probes the crystalline planes perpendicular to the surface normal, as shown in Figure 3b. One can see only the (hk0) family of planes, meaning that the (00l) planes are strongly orientated parallel to the surface. In Figure 3c, which shows the 2θ-ω measurement for the ZnS sample, the hump disappears, supporting the idea that it originated from the substrate. The peak at 59.1° reveals the (004) reflection related to the intense (002) reflection.

Figure 4a shows a photograph of a 3 × 5 mm² red Y2O3:Eu/ZnS-based EL pixel under a sinusoidal excitation of 1 kHz measured at 280 Vrms. The photograph was taken with a digital camera in automatic mode under normal room lighting. For this pixel, a brightness of 40 cd/m² was measured. Figure 4b shows the electroluminescence spectrum, at maximum luminance, of the Y2O3:Eu/ZnS EL device with a 3:2 (Y2O3:Eu2O3) cycle ratio. The EL spectrum, which was measured under an operating voltage of 280 Vrms and a frequency of 1 kHz, clearly shows the typical 5D0 → 7FJ (J = 0, 1, 2, 3, and 4) transitions of the Eu3+ emission centers. The sharp 5D0 → 7F2 line is located at 612 nm. Note the prominent 5D0 → 7F4 emission at 708 nm. The 1931 CIE color coordinates shown in Figure 4c were deduced from the EL spectrum in Figure 4b using the OriginLab Chromaticity Diagram script (OriginPro 2019, Northampton, MA, USA). The obtained red color emission corresponds to (x, y) values of (0.640, 0.348).

Figure 5a shows the luminance versus applied voltage characteristics of the Y2O3:Eu/ZnS electroluminescent device under a sinusoidal excitation of 1 kHz. The device shows a maximum brightness of 40 cd/m² at 280 Vrms. The threshold voltage of the device is not well-defined; it can, however, be considered as the voltage needed for the generation of 1 cd/m² [32]. Here, a luminance of 1 cd/m² is achieved for an excitation voltage of 180 Vrms. Figure 5b shows the Q-V characteristics of a ZnS/Y2O3:Eu EL device, measured at 40 Vrms above the threshold voltage with a 1 kHz sinusoidal wave. The measured sense capacitor and total circuit capacitances were 171 nF and 6.24 nF, respectively. The input power density, which was calculated by multiplying the area of the Q-V loop in Figure 5b by the applied frequency, was determined to be 153 W/m². Based on these values, an efficiency of 0.28 lm/W was calculated. Note that the Y axis of the Q-V curve is not centered at the (0, 0) coordinates of the graph.
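The power-density calculation described above (Q-V loop area times drive frequency) can be sketched as below. The elliptical loop is synthetic, standing in for the measured curve of Figure 5b; the capacitance and pixel area come from the text, while the voltage amplitudes are illustrative.

```python
import math

# Sketch of the Sawyer-Tower evaluation: the device charge is the sensed
# voltage times the total circuit capacitance, and the input power density
# equals the area of the closed Q-V loop times the drive frequency.

C_TOTAL = 6.24e-9       # F, total circuit capacitance (from the text)
PIXEL_AREA = 15e-6      # m^2, 3 x 5 mm pixel
FREQ_HZ = 1000.0        # 1 kHz sinusoidal drive

def charge_density(v_sense: float) -> float:
    """Charge density (C/m^2) from the sense-capacitor voltage."""
    return v_sense * C_TOTAL / PIXEL_AREA

def loop_area(vs, qs) -> float:
    """Area enclosed by a closed Q-V loop (shoelace formula), J/m^2 per cycle."""
    n = len(vs)
    return 0.5 * abs(sum(vs[i] * qs[(i + 1) % n] - vs[(i + 1) % n] * qs[i]
                         for i in range(n)))

# Synthetic elliptical loop: +/-100 V across the device, ~2.4 mV sensed.
ts = [2 * math.pi * i / 2000 for i in range(2000)]
vs = [100.0 * math.cos(t) for t in ts]
qs = [charge_density(2.4e-3 * math.sin(t)) for t in ts]

power_density = FREQ_HZ * loop_area(vs, qs)  # W/m^2
print(round(power_density, 3))  # ~0.31 W/m^2 for this synthetic loop
```

On measured data the same shoelace sum over the digitized Q-V points gives the dissipated energy per cycle and area, which multiplied by the 1 kHz drive frequency yields the 153 W/m² quoted in the text.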
Discussion
Y2O3:Eu, ZnS, and Al2O3 thin films were successfully grown by ALD at 300 °C using commercial precursors. The processing temperature was limited to 300 °C because of the decomposition temperatures of the metalorganic precursors and O3. Y2O3:Eu thin film samples, grown with different Eu concentrations, clearly show red emission with a maximum intensity at 612 nm. This line is related to the 5D0 → 7F2 forced electric dipole transition of Eu3+ [34]. With the process conditions described in this work, the optimum Eu concentration was found to be about 11 mol%. While a lower Eu concentration of 9 mol% led to lower PL intensities, as expected, the well-known concentration quenching that arises from energy transfer between the Eu3+ luminescent centers was observed for an Eu concentration of 16 mol%. These values are close to the ones reported by H. Huang et al. [35], in comparison with the optimum Eu concentration values of 20 and 5 mol% reported by J. Kaszewski et al. [25] and Y. Kumar et al. [27], respectively.
In a classic DSD electroluminescent device structure, an ideal phosphor should have a polycrystalline structure [32]. Therefore, the polycrystalline nature of our ALD Y2O3:Eu and ZnS thin film layers is advantageous for the multilayer Y2O3:Eu/ZnS electroluminescent device. Furthermore, in comparison with other reported Y2O3:Eu electroluminescent devices [36][37][38], our low processing temperature of 300 °C offers the possibility of building devices on some temperature-resistant flexible polymer substrates [39].
An all-in-one growth step for the Al2O3 dielectric, ZnS, and Y2O3:Eu phosphor layers was used for the fabrication of our EL device by ALD. In contrast to the photoluminescence spectrum, the electroluminescence spectrum shows a prominent 5D0 → 7F4 emission at 708 nm. This could be due to the lower sensitivity of the PL equipment in comparison with the EL equipment, since most photomultiplier tubes have lower sensitivity in the 5D0 → 7F4 transition region [34]. At 280 Vrms and under a sinusoidal excitation of 1 kHz, with the growth conditions reported in this paper, we achieved high-purity red color emission with an intensity of up to 40 cd/m². This intensity could be significantly increased by further optimization of the different device layers, i.e., optimization of the Y2O3:Eu and ZnS thicknesses and of the dielectric layer (here a mere Al2O3 layer was used). Using multilayer structures, red and green Y2O3/Y2O2S-based electroluminescent devices with luminance up to 137 cd/m² (at 150 Vrms) and 124 cd/m² (at 300 Vrms), respectively, were reported by T. Suyama et al. [19] and K. Ohmi et al. [20]. While those values are higher than the ones we obtained for our devices, the devices in [19,20] were measured under an excitation frequency of 5 kHz. Frequency has been reported to significantly influence the electroluminescence emission intensity. As an example, luminance values could be increased from 15 to 350 cd/m² by increasing the frequency from 50 Hz to 1 kHz in CaY2S4:Eu electroluminescent devices [7].
While it is difficult to compare the efficiency of our device with other red electroluminescent devices due to different measurement conditions, the calculated efficiency of 0.28 lm/W for our ZnS/Y2O3:Eu multilayer EL device is lower than the 0.8 lm/W value reported for the ZnS:Mn EL device with a red filter, measured at a frequency of 60 Hz [16]. Q-V characteristics usually appear in a trapezoid shape in which physical quantities such as the threshold voltage, the threshold voltage of the phosphor layer, the threshold charge density, and the transferred charge density are well-defined [32]. The elliptic shape of our Q-V characteristics is due to the multilayer structure of the ZnS/Y2O3:Eu EL device and the possible presence of leakage current in the phosphor layer. Our Q-V curve appears negatively biased when the ITO layer of the EL device is connected to the power supply and the top contact is connected to the sense capacitor in the Sawyer-Tower circuit, as shown in Figure 6a. However, when the connections are inverted (the top contact is connected to the power supply and the ITO layer is connected to the sense capacitor), the Q-V curve appears positively biased. Therefore, one possible explanation for this behavior is the asymmetric structure of the device. During the growth process, each phosphor layer starts with the deposition of Y2O3 and finishes with Eu2O3, making ZnS surrounded on one side by Y2O3 and on the other by Eu2O3, as shown in Figure 6a. We believe this asymmetry might favor charge accumulation.
The Q-V characteristics could be reproduced by simulating the equivalent circuit (Figure 6b) of the EL device in the Sawyer-Tower circuit. Figure 6c shows the simulation results for two different scenarios: (i) in red, where the Sawyer-Tower circuit has the EL device with the ITO layer connected to the power supply and the top contact connected to the sense capacitor, as depicted in Figure 6a; and (ii) in blue, where the data were simulated with the top contact connected to the power supply and the ITO layer to the sense capacitor. This simulation requires high voltages and one Zener diode (related to the ZnS/Y2O3 or Eu2O3/ZnS interface) with a higher threshold voltage than its counterpart. The simulation in Figure 6c matches Figure 5b when the Zener diode D_ZnS/Y2O3, which is related to the ZnS/Y2O3 interface, has a larger breakdown voltage than D_Eu2O3/ZnS.

Figure 6. (a) Upside-down representation of the 2D schematic of the Y2O3:Eu/ZnS EL device connected in a Sawyer-Tower circuit, with a magnified scheme of the ZnS layer and its surroundings. (b) Equivalent circuit of the Y2O3:Eu/ZnS EL device. (c) Simulated Q-V characteristics when (red) ITO is connected to the power supply and the top contact to the sense capacitor; and (blue) ITO is connected to the sense capacitor and the top contact to the power supply.
Conclusions
In this work, we demonstrate the feasibility of transparent red Y2O3:Eu-based electroluminescent devices grown by atomic layer deposition at relatively low temperature. Y2O3:Eu, ZnS, and Al2O3 thin films and related multilayer structure devices were prepared at 300 °C. XRD measurements showed high crystallinity of the Y2O3:Eu and ZnS films. Photoluminescence and electroluminescence measurements showed a bright red emission of the phosphors and electroluminescent devices, respectively. A luminance of up to 40 cd/m² and an efficiency of 0.28 lm/W were achieved. Further optimization of the phosphor and EL device is expected to lead to higher emission intensities.
Mitigating Denial of Service Signaling Threats in 5G Mobile Networks
With the advent of 5th generation (5G) technology, the mobile paradigm is witnessing a tremendous evolution involving the development of a plethora of new applications and services. This enormous technological growth is accompanied by a huge signaling overhead among 5G network elements, especially with the emergence of massive device connectivity. This heavy signaling load will certainly be associated with an important security threat landscape, including denial of service (DoS) attacks against the 5G control plane. In this paper, we analyze the performance of a defense mechanism based on a randomization technique designed to mitigate the impact of DoS signaling attacks in the 5G system. Based on a massive machine-type communications (mMTC) traffic pattern, the simulation results show that the proposed randomization mechanism significantly decreases the signaling data volume arising from the new 5G Radio Resource Control (RRC) model under normal and malicious operating conditions, by up to 70%, while avoiding unnecessary resource consumption. Keywords—5G New Radio (NR) network; Radio Resource Control (RRC) state model; Denial of Service (DoS); signaling threats; randomization
I. INTRODUCTION
The emergence of the 5G standard was accompanied by a phenomenal rise in traffic volumes emanating from various new services and applications. To meet these new challenges, 5G technology has introduced three new classes of services, namely, enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC) [1]. While eMBB services will ensure enhanced throughput, mMTC services will handle a massive number of connected devices with stringent energy efficiency and battery autonomy constraints, and the URLLC use case will provide low-latency and high-reliability services [2]. These new, challenging 5G requirements will certainly increase the complexity of the management procedures designed to handle the rising demand of mobile subscribers.
To reduce network signaling complexity and unnecessary control transmissions, ongoing research works are progressing on many fronts with the aim of optimizing the signaling load for robust and ultra-lean 5G designs. Indeed, a novel radio resource control (RRC) inactive state, RRC_INACTIVE, has been introduced for the Next Generation Radio Access Network (NG-RAN) [3] to enhance energy efficiency, reduce latency, and optimize the signaling load by streamlining the idle-to-connected state transition. Even though the new 5G RRC model was developed to handle the huge signaling overhead faced by the cellular paradigm, the short inactivity timers joined to the tremendous number of connected devices will entail a number of security flaws, including the problem of denial of service (DoS) attacks against the NG-RAN signaling control plane, named signaling threats. DoS signaling threats first emerged in the 3G system [4], [5], [6], [7], involving the signaling attack that exploits the Radio Access Bearer (RAB) allocation/release procedures to overload 3G entities, specifically the Radio Network Controller (RNC). By using the well-known network parameter named the inactivity timer T_5Ginac, this attack could also be carried out against the 5G system to overload the signaling control plane, which can disturb network functionality, giving rise to a productivity loss for the network operator.
Several research works have tackled the problem of signaling threats in 3G/4G mobile networks and have proposed detection and defense mechanisms to mitigate the impact of such attacks [4], [8], [10], but little research effort has been dedicated to signaling-based threats in the 5G context. A survey of the 5G security architecture related to the primary protocols of the control plane signaling was presented in [11], [12]. In [13], the authors proposed a defense mechanism to protect the paging protocols against security and privacy attacks [14]. The proposed solution aims at securing 4G/5G devices from unauthorized/fake paging messages by introducing a new identifier, named P-TMSI, randomizing the paging occasions, and conceiving a symmetric-key-based broadcast authentication framework. In [15], the issue of DoS signaling attacks in different mobile network generations was outlined, including post-5G technologies. This work also provided some security solutions to protect the 5G system against these threats, involving securing the information exchange over the radio link and making access more difficult for malicious parties.
Unfortunately, these few research works are still not enough to address the damaging 5G signaling threats, including the DoS signaling attack tackled in this work. Hence, this paper extends our defense mechanism proposed in [10], a preventive solution to defend against DoS signaling attacks in the 3G network, to also address the problem of signaling threats in the 5G system. Based on the mMTC traffic model, the proposed mitigation mechanism based on a randomization technique has also shown promising results in decreasing the signaling load generated by the 5G infrastructure under a signaling DoS attack while preventing the unnecessary use of network resources.
The rest of the paper begins with a background section giving an overview of the new 5G RRC state model and highlighting some security flaws of this novel RRC three-state model. Section III analyzes the 5G DoS signaling attack detection mechanism based on a randomization technique. This section first presents an overview of related works, then outlines the traffic model used for the performance evaluation of the detection framework, which is introduced at a later stage. In the same section, simulation results are carried out to evaluate the effectiveness of the randomization-based detection solution in defending against DoS signaling attacks in the 5G mobile network. Finally, Section IV concludes the paper.
II. BACKGROUND
In cellular systems, wireless communications between the devices and the network are carried out using the RRC protocol, which is responsible for allocating and releasing the necessary radio resources. The signaling load produced by these resource allocation and release procedures will increase tremendously, specifically with the great variety of applications based on burst traffic (e.g., the mMTC use case), which could disturb the proper functioning of mobile network infrastructures. As depicted in Fig. 1, in the 5G system, a new RRC state, named RRC_INACTIVE, is introduced to meet the challenges of signaling overhead, battery life, and latency. This novel RRC_INACTIVE state is designed to reduce latency by minimizing the signaling exchange triggered across the 5G infrastructure by the transition to the connected state RRC_CONNECTED, which is relevant for many smartphone applications that transmit small amounts of data on a frequent basis. This new state also allows devices to conserve battery life by reducing the signaling load generated by idle-connected state transitions. Indeed, in the RRC_INACTIVE state, the device stores the RRC context (Access Stratum (AS) context) and maintains the established core network (CN) connection, and any detected traffic activity will trigger the transition to the RRC_CONNECTED state through a resume procedure using only three signaling messages instead of the seven messages used in the switching process from the idle state (RRC_IDLE) to the connected state in the 4G system [16]. The transitions between the RRC_CONNECTED and RRC_INACTIVE states occur transparently to the CN. Indeed, the CN may carry any downlink traffic to the RAN entity so that the state transition from RRC_INACTIVE to RRC_CONNECTED does not involve any CN signaling exchange. As illustrated in Fig. 1, the new 5G RRC state model involves three states, namely, RRC_IDLE, RRC_CONNECTED, and RRC_INACTIVE.
In this RRC three-state model, the transition from RRC_IDLE to RRC_CONNECTED will primarily occur when the UE first attaches to the network or as a fallback to a new RRC connection. Hence, this transition will rarely arise compared to the transition from RRC_INACTIVE to RRC_CONNECTED, and with the shorter inactivity timeouts managing this latter transition [17], the signaling load remains important even if the number of signaling messages exchanged in the 5G RRC three-state transitions is reduced by introducing the RRC_INACTIVE state, specifically when the 5G NG-RAN network is under a DoS signaling attack. Indeed, malicious exploitation of this inactivity timeout gives rise to two DoS attack scenarios. The first scenario is similar to the signaling attack tackled in [20], which aims at compromising an important number of MTC devices and forcing them to send periodic burst packets just after the expiration of the inactivity timer to trigger frequent resource allocation and release procedures, thus causing a peak of signaling load that cannot be properly sustained by the mobile infrastructure. Conversely, the second attack scenario aims to abusively consume the NG-RAN radio resources by maintaining a set of compromised devices in the RRC_CONNECTED state for a considerable period of time, leading to network resource starvation. There are other security risks threatening the NG-RAN infrastructure, involving the integration with existing vulnerable systems, namely the Internet and the 4G network, the immaturity of the 5G production process and maintenance procedures, and the overgrowth of 5G components. These security flaws could amplify the risk of breaking down the confidentiality, integrity, and availability of network elements, giving rise to more attack vectors against the 5G system.
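The first attack scenario above can be illustrated with a toy model of the three-state RRC machine. The 3-message resume and 7-message idle-to-connected setup costs come from the text; the timer value and traffic timings are assumptions of this sketch.

```python
# Toy model of the 5G RRC three-state machine under bursty traffic.
# Per the text: resuming from RRC_INACTIVE costs 3 signaling messages,
# while a fresh RRC_IDLE -> RRC_CONNECTED setup costs 7. The inactivity
# timer value and traffic patterns below are assumptions of this sketch.

RESUME_COST, SETUP_COST = 3, 7  # signaling messages per transition

def signaling_messages(gaps, inactivity_timer):
    """Signaling messages generated by a device sending traffic bursts
    separated by the given gaps (seconds), starting from RRC_IDLE."""
    state, messages = "IDLE", 0
    for gap in gaps:
        if state == "IDLE":
            messages += SETUP_COST   # first attach: full connection setup
            state = "CONNECTED"
        elif gap > inactivity_timer:
            messages += RESUME_COST  # timer expired: resume from INACTIVE
        # gap <= timer: the device never left RRC_CONNECTED, no signaling
    return messages

timer = 10.0                   # assumed inactivity timer (s)
benign = [30.0] * 10           # occasional traffic
attack = [timer + 0.1] * 10    # bursts timed just past the timer expiry
rate_benign = signaling_messages(benign, timer) / sum(benign)
rate_attack = signaling_messages(attack, timer) / sum(attack)
print(rate_benign, rate_attack)
```

Both devices trigger the same number of resume procedures per burst, but by timing bursts just after the timer expiry the attacker roughly triples the signaling rate per unit time, which is exactly the load amplification the attack exploits.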
Therefore, developing a robust defense system that can protect the 5G system against such security threats will be a serious challenge for mobile service providers.
III. 5G DOS SIGNALING DETECTION MECHANISM BASED ON A RANDOMIZATION TECHNIQUE
In this paper, we evaluate the proposed randomization-based detection mechanism against the DoS signaling attack exploiting the new 5G RRC three-state machine by analyzing the decreased signaling overhead ratio (DSOR) and the network resource occupation time ratio (ROTR) related to the NG-RAN RRC handling process under different statistical distributions, namely the Gaussian, Log-normal, and Exponential distributions. To carry out the performance evaluation of the proposed detection framework within the 5G system, we use the mMTC massive-sensor traffic pattern [18], as 5G networks are expected to handle a significant amount of mMTC communications.
A. Related Works
To consolidate the security perimeter against signaling attacks in mobile networks, several protection mechanisms have been proposed in the literature, specifically for 3G and 4G networks. Among these defense solutions, a randomization method applied to certain configuration parameters, such as the channel inactivity timeout, has been proposed in [8], [9], [10] to increase the difficulty of guessing the value of such extremely vital network settings. According to [8], the randomization technique assigns the same random inactivity timeout to all UEs handled by the same 3G Radio Network Controller (RNC), regardless of the traffic volume handled by these UEs. The randomization approach proposed in [8] presented some drawbacks related to a rise in resource consumption, since the system configuration becomes dynamic and no longer optimal, leading to unbalanced resource consumption among different traffic patterns. Hence, [10] proposed an enhanced randomization-based detection framework to cope with DoS signaling attacks in the 3G system while also optimizing the resulting resource consumption. Indeed, this improved randomization technique introduced the additional concept of classifying devices according to the traffic volume periodically reported to the 3G control plane through the corresponding measurement reports. In the 5G context, the randomization approach has been used to defend against the paging message hijacking attack [13]. Indeed, this solution randomizes the paging occasion, changing it after every paging cycle regardless of whether the 5G device received a paging message in that cycle. Such an approach, however, rapidly depletes the available P-TMSI values and requires that the device and the base station be accurately synchronized.
B. Traffic Modeling: mMTC Use Case
mMTC communications connect a plethora of devices constrained by cost and energy considerations. mMTC can be used for monitoring and area-covering measurements through sensor and actuator deployments. This 5G traffic use case is usually modeled using the 3GPP bursty traffic FTP model 3 [18], which is based on bursty traffic with fixed-size packets following a Poisson arrival process with rate λ, packet inter-arrival time f_D,mMTC(t), and packet size f_Y,mMTC(t). According to [18], the number of mMTC devices is about 25,000 per cell; in this paper, we simulate the traffic pattern related to N_mMTC connected devices. Using the traffic model parameters described in Table I, we first simulate the mMTC signaling load generated by the new 5G RRC state handling under different DoS signaling attack scenarios for various values of T_5Ginac, namely 1 s, 2 s, and 3 s. Then, we evaluate the DSOR and ROTR metrics to demonstrate the effectiveness of the proposed defense solution in mitigating the DoS signaling attack in the 5G system.
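The interaction between the Poisson packet process and the inactivity timer can be sketched with a minimal simulation. The function below is an illustrative assumption rather than the paper's actual simulator: it draws exponential inter-arrival gaps (rate λ, in packets/s) for one device and counts how many times a gap exceeds the inactivity timer T_5Ginac, each such event triggering an RRC_INACTIVE to RRC_CONNECTED resume and its associated signaling.

```python
import random

def simulate_mmtc_transitions(n_packets, lam, t_inac, seed=0):
    """Count RRC_INACTIVE -> RRC_CONNECTED promotions for one mMTC device
    whose packet inter-arrival times are exponential with rate lam
    (Poisson arrivals). A promotion (and its resume signaling) happens
    whenever the gap since the previous packet exceeds the inactivity
    timer t_inac; otherwise the device is still in RRC_CONNECTED.
    Hypothetical helper for illustration only.
    """
    rng = random.Random(seed)
    transitions = 1  # the first packet always promotes the device
    for _ in range(n_packets - 1):
        gap = rng.expovariate(lam)
        if gap > t_inac:
            transitions += 1
    return transitions

# Shorter timers force more promotions, hence more signaling load,
# matching the trend reported for T_5Ginac = 1 s vs. 3 s.
few = simulate_mmtc_transitions(10_000, lam=1.0, t_inac=3.0)
many = simulate_mmtc_transitions(10_000, lam=1.0, t_inac=1.0)
```

With λ ≈ 1 packet/s, a gap exceeds a 1 s timer with probability e⁻¹ ≈ 0.37 but exceeds a 3 s timer with probability e⁻³ ≈ 0.05, which is why the smaller timer generates several times more state transitions.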
C. Detection Framework
For the mMTC traffic model, the behaviour of the devices is well known: they transmit the same amount of data f_Y,mMTC during a defined transmission time period f_D,mMTC, so data traffic classification is meaningless in this case. To this end, we apply the randomization techniques as follows: for the Gaussian distribution, µ is set to T_5Ginac and σ = T_R.
For the exponential case, we use a modified exponential distribution (weighted by a factor w), with λ computed accordingly. For the log-normal distribution, µ and σ are computed analogously from the nominal timer. The weighting parameter a is set so that the available inactivity timers remain in the interval [1 s, 10 s] defined for the 5G standard.
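A minimal sketch of the timer randomization is given below. The Gaussian case follows the µ = T_5Ginac, σ = T_R setting stated above; the exponential and log-normal parameterizations are illustrative assumptions, since the paper's weighted expressions are not reproduced in the text, and the function and parameter names (`randomized_timer`, `t_r`) are hypothetical.

```python
import math
import random

def randomized_timer(dist, t_inac, t_r=0.5, rng=random):
    """Draw a per-allocation inactivity timer around the nominal
    T_5Ginac so an attacker cannot predict the exact release instant.
    All draws are clipped to the [1 s, 10 s] range allowed for 5G.
    The exponential/log-normal parameter choices are assumptions.
    """
    if dist == "gaussian":
        t = rng.gauss(t_inac, t_r)            # mu = T_5Ginac, sigma = T_R
    elif dist == "exponential":
        t = rng.expovariate(1.0 / t_inac)     # mean = T_5Ginac (assumed)
    elif dist == "lognormal":
        t = rng.lognormvariate(math.log(t_inac), t_r)  # median = T_5Ginac (assumed)
    else:
        raise ValueError(f"unknown distribution: {dist}")
    return min(max(t, 1.0), 10.0)             # clip to the 5G-standard interval
```

Because each allocation gets a fresh draw, compromised devices that time their bursts to the nominal timeout no longer hit the release instant exactly, which is the mechanism behind the DSOR and ROTR gains evaluated below.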
D. Analysis and Results
To evaluate the performance of the proposed detection mechanism, we analyze two metrics: the DSOR, related to the promotion state transition to RRC_CONNECTED, and the ROTR, which refers to the ratio of the time period that a device remains inactive in the RRC_CONNECTED state in the normal case (static T_5Ginac) to the resource occupation time obtained with the randomized T_5Ginac.
By periodically launching a DoS signaling attack using different numbers of compromised mMTC devices (10%, 25%, and 50% of the total number of simulated devices N_mMTC) every T_5Ginac (attack period), we first evaluate the signaling load generated when no defense mechanism is implemented, for different inactivity timeouts, namely 1 s, 2 s, and 3 s. From the simulation results depicted in Fig. 2, Fig. 3, and Fig. 4, we can infer that the mMTC traffic pattern gives rise to a larger signaling load for smaller inactivity timers, even when no DoS signaling attack is initiated. The high amount of signaling traffic for the small inactivity timer value (T_5Ginac = 1 s) can be justified by the fact that the mMTC traffic pattern is a Poisson process with a mean arrival rate λ_mMTC of about one packet per second; thus, a higher T_5Ginac (above 1 s) means fewer state transitions between the RRC_INACTIVE and RRC_CONNECTED states and therefore less signaling load. Regarding the two simulated metrics, DSOR and ROTR, the performance evaluation of the randomization-based detection mechanism has shown promising results in mitigating the impact of the DoS signaling attack against the novel 5G RRC three-state model. As illustrated in Fig. 5 and Fig. 6, the three simulated distributions, namely the Gaussian, Log-normal, and Exponential functions, considerably reduce the signaling overhead and the unnecessary resource consumption, reaching 70% and 65%, respectively, for the exponential distribution with T_5Ginac = 1 s and 50% of mMTC devices compromised. We chose to evaluate our detection mechanism at T_5Ginac = 1 s because of the large volume of signaling load generated by this smaller inactivity timer, which constitutes the most devastating attack scenario, specifically when 50% of all mMTC devices are compromised.
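Assuming DSOR and ROTR are simple relative-reduction ratios (the paper's exact expressions are not reproduced in the text), they can be sketched as follows; the function names are hypothetical:

```python
def dsor(sig_static, sig_random):
    """Decreased Signaling Overhead Ratio: fraction of promotion
    signaling removed by randomizing the timer (assumed definition).
    sig_static / sig_random are signaling message counts with the
    static and the randomized T_5Ginac, respectively."""
    return (sig_static - sig_random) / sig_static

def rotr(idle_static, idle_random):
    """Resource Occupation Time Ratio: relative reduction of the time
    devices sit idle in RRC_CONNECTED before release (assumed
    definition), comparing static vs. randomized T_5Ginac."""
    return (idle_static - idle_random) / idle_static

# e.g., cutting promotion signaling from 100k to 30k messages would
# correspond to the 70% decrease reported for the exponential case.
reduction = dsor(100_000, 30_000)  # 0.7
```

Under these assumed definitions, the 70% and 65% figures quoted above for the exponential distribution correspond to DSOR = 0.70 and ROTR = 0.65.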
As outlined in Table II, the randomization approach performs well in the 5G context, specifically for the Exponential and Log-normal distributions. Hence, it remains a very promising solution to be considered for mitigating signaling threats in new mobile network generations. First, this technique offers a preventive framework that can avoid the occurrence of such attacks, or at least mitigate their impact. Second, from a hardware perspective, the proposed randomized approach requires only some low-complexity software updates in a few network entities.
IV. CONCLUSION
In this paper, we have extended our randomization-based detection mechanism to defend against the DoS signaling attack that emerges in the new 5G RRC three-state model. The proposed solution had already shown promising results in mitigating the impact of these signaling threats in the 3G system, and we have demonstrated through simulations based on the mMTC traffic pattern the effectiveness of our detection framework for the 5G system as well. Indeed, for an inactivity timeout equal to 1 s and 50% of mMTC devices compromised, the three simulated randomization methods significantly decrease the signaling load while avoiding unnecessary network resource use. For the exponential distribution, the decrease in signaling load is up to 70%, and the resource consumption ratio is around 65%, which constitutes a significant enhancement of network performance concerning the signaling overhead and resource starvation arising from the new 5G designs, specifically when the network is under a DoS signaling attack. Our future work revolves around deeper analysis of new emerging signaling threats in the next generation (NG) of mobile systems and new proposals to build robust detection mechanisms to defend against signaling attacks.
EFFECT OF HYBRIDIZATION ON THE MECHANICAL PROPERTIES OF POLYPROPYLENE (PP) FIBER-REINFORCED CONCRETE (FRC)
Tiago Tadeu Amaral de Oliveira, Vladimir José Ferrari
Abstract: This study assessed the mechanical properties of polypropylene (PP) hybrid fiber-reinforced concrete (FRC). To this end, 10 FRC groups were investigated with respect to both macro- and micro-PP fibers. The hypothesis of this study is that the two types of PP fiber act together, contributing at different stages of the post-peak loading history of concrete in bending: due to greater dispersion in the cementitious matrix, microfibers would bridge the microcracks, whereas macrofibers would arrest the propagating macrocracks, substantially improving concrete toughness. To test this hypothesis, four-point bending tests were performed on prism specimens (150 x 150 x 500 mm) according to the methodology described in the JSCE-SF4 Japanese standard (1984); cylindrical specimens (100 x 200 mm) were also molded and subjected to compression tests to obtain axial compressive strength, Young's modulus, and splitting tensile strength. Hybridization enabled the production of FRC with toughness results 55 times greater than those of plain concrete for the best-performing group. It was also observed that the FRCs presented residual stresses at displacements of L/600 and L/150, which did not occur in plain concrete.
Introduction
It is known that short fibers can be incorporated into concrete to improve its tensile strength, ductility, and resistance to first-crack and crack growth (Lee, 2017).
According to Taerwe and Gysel (1996), the high Young's modulus and stiffness of steel fibers contribute to increased compressive strength and toughness of concrete; however, Hsie et al. (2008) noted the high content of steel fibers needed, which brings disadvantages in terms of weight gain of the structural member and reduced workability of the mixture. Kosa and Naaman (1990) pointed out the problem of corrosion associated with the use of steel fibers in chemically aggressive or alkaline environments. For this reason, and others of an economic nature, Maida et al. (2018) reported increased interest in the application of synthetic fibers, including polypropylene (PP). According to Bayasi and Mcintyre (2002), the good ductility, reduced diameter, and good dispersion of PP fibers in the cementitious matrix contribute to restraining crack growth.
Initially, this study was developed to contribute to the existing information on the behavior of hybrid PP fiber-reinforced concrete (FRC). More specifically, the purpose of this research is to assess the effect of hybridization, that is, the addition of PP micro- and macrofibers to a concrete matrix consisting of fine and coarse aggregates (sand and 9.5 mm particle size gravel), aiming to obtain an FRC mixture with greater compressive strength and flexural toughness. To this end, a concrete mixture with 30 MPa strength was prepared; cylindrical specimens (100 x 200 mm) were molded and subjected to compression tests to obtain axial compressive strength, Young's modulus, and splitting tensile strength; four-point bending tests were performed on prism specimens (150 x 150 x 500 mm) according to the methodology described in the JSCE-SF4 Japanese standard (1984).
The purpose of this study was to improve the toughness of the concrete matrix through the addition of PP fibers only, and to verify whether hybridization, combining microfibers (12 mm in length and 18 µm in diameter) and macrofibers (40 mm in length and 0.69 mm in diameter), yields better mechanical behavior than FRCs with only one type of fiber. The hypothesis of this study is that the two types of PP fiber act together, contributing at different stages of the post-peak loading history of concrete in bending.
Materials and Methods
This study aimed to assess the effect of hybridization with polypropylene (PP) micro- and macrofibers on the mechanical properties of fiber-reinforced concrete (FRC).
Characterization of materials and FRC dosage
High early strength CPV-ARI Portland cement was used to prepare the mixtures. The fineness modulus, specific gravity, and 28-day compressive strength of the cement were determined in the Materials Laboratory of the Federal Technological University of Paraná (UTFPR/Campo Mourão). The following values were obtained: specific gravity = 3.10 g/cm³, fineness modulus = 0.45%, and 28-day compressive strength = 35.72 MPa.
Revista Tecnológica - Universidade Estadual de Maringá - ISSN 1517-8048 - DOI: 10.4025/revtecnol.v29i2.51706
Natural quartz sand obtained from the Paraná River bed and commonly used in concrete production in the municipality of Campo Mourão, Paraná state, was used as fine aggregate. This material was characterized for particle size (ABNT NBR NM 7211), specific gravity (ABNT NBR NM 52), and bulk density (ABNT NBR NM 45). Basalt gravel was used as coarse aggregate; this material was characterized according to particle size (ABNT NBR NM 7211) and specific gravity (ABNT NBR NM 53). Table 1 presents the attributes of these aggregates. MC-PowerFlow 3100 high-performance superplasticizer was used as an admixture in order to provide each concrete mixture with adequate workability (in terms of slump). Technical information on this admixture, according to the manufacturer, is shown in Table 2. The cementitious matrix of the FRC mixtures analyzed in this study was composed of regular-strength concrete dosed according to the method of the Brazilian Association of Portland Cement to achieve a compressive strength of 30 MPa after 28 days at a slump of 80-100 mm. The composition of this concrete mixture was defined as 1:1.43:1.92 (cement:sand:gravel) by weight with a water:cement ratio of 0.47.
In this study, two types of PP fiber were added to the FRC mixtures: fiber A (Figure 1a) is a macrofiber 40 mm in length and 0.69 mm in diameter, provided by the Viapol enterprise; fiber B (Figure 1b) is a microfiber 12 mm in length and 18 µm in diameter, supplied by the Maccaferri enterprise. It is worth noting that both fibers are easily found in the region where this study was developed, facilitating reproduction of the assessed material. Table 3 shows other properties of these fibers according to their manufacturers. Type A macrofibers were added to the cementitious matrix at content rates of 0.3%, 0.6%, and 0.9% to evaluate their effect on the mechanical properties of FRC, resulting in groups 2, 3, and 4, identified as FRCA3B0, FRCA6B0, and FRCA9B0, respectively. Groups 5, 6, and 7 refer to FRC mixtures identified as FRCA3B3, FRCA3B6, and FRCA3B9, respectively, which were prepared using a type A macrofiber content rate of 0.3% with subsequent addition of type B microfiber at content rates of 0.3%, 0.6%, and 0.9%. Groups 8 and 9 refer to FRC mixtures identified as FRCA6B3 and FRCA6B6, respectively, which were prepared using a type A macrofiber content rate of 0.6% with subsequent addition of type B microfiber at content rates of 0.3% and 0.6%.
Finally, groups 10 and 11 refer to FRC cementitious matrices identified as FRCA9B3 and FRCA9B6, respectively, which were prepared using a type A macrofiber content rate of 0.9% with subsequent addition of type B microfiber at content rates of 0.3% and 0.6%. Table 4 presents the 11 FRC groups analyzed in this study. For each group, three prismatic specimens (150 x 150 x 500 mm) and 12 cylindrical specimens (100 x 200 mm) were molded, resulting in 33 prismatic specimens and 132 cylindrical specimens. Mixtures were prepared using an electric concrete mixer with a capacity of 500 L, starting by mixing the aggregates and part of the mixing water. After that, cement and the remainder of the mixing water were added. Finally, the fibers were added slowly and progressively with the concrete mixer in motion, and the slump test was performed (Figure 2). MC-PowerFlow 3100 high-performance superplasticizer was used as an admixture in each group, in sufficient amount and within the limits indicated by the manufacturer, to maintain concrete workability similar to that of the control group. Thus, the slump test was performed during specimen molding.
Demolding occurred 24 h later, and all specimens were then submerged in a water tank, where they remained until 48 h before testing commenced.
Test methods
Cylindrical specimens (100 x 200 mm) were tested at the UTFPR-CM Materials Laboratory using a universal testing machine manufactured by EMIC, with readings from a double-clamping electronic extensometer, as shown in Figure 3. Prismatic specimens (150 x 150 x 500 mm) representative of each FRC were subjected to four-point bending tests using a servo-electric universal testing machine with a capacity of 600 kN. The machine, which has an interface for computer connection and electronic instrumentation, enabled acquisition of strength and strain data from two transducers through two channels.
Compressive strength, elastic modulus, and splitting tensile strength were tested following the methodology of the current Brazilian standards ABNT NBR 5739, ABNT NBR 8522, and ABNT NBR 7222, respectively. The four-point bending test based on the Japanese standard is the most commonly used in Brazil for FRC control due to its simpler design. In this study, the methodology described in the aforementioned standard was applied because this test has been performed without difficulties at the Laboratory of Civil Construction Materials of the State University of Maringá.
Vertical displacement of the prismatic specimens was recorded at the center of the span through two Linear Variable Differential Transformers (LVDTs), one centered on each side face, mounted on a yoke-type clamp. Figure 4 illustrates a prismatic specimen positioned in the universal testing machine prior to testing. The side faces of the prismatic specimens subjected to the bending tests were the upper and lower surfaces during molding, as prescribed in the JSCE-SF4 standard (1984).
Loading was applied to the specimen, continuously and without impact, by imposing a displacement rate of the machine platen of 0.15 mm/min until the central span section deflection reached the value of 3 mm.
This test procedure uses an open-loop control system. The main difference between the open-loop and closed-loop control systems is that the closed-loop system regulates the loading speed depending on the displacement of the LVDT, that is, the real displacement of the specimen, whereas in the open-loop system the displacement speed of the press platen is constant. This may result in post-peak instability and increase the spacing between the points on the load vs. vertical displacement graph. Rupture of the specimens always occurred in the central third of the span, as shown in Figure 5. Thus, none of the bending tests were discarded.
Slump
Table 5 shows the slump values and the contents of the admixture (superplasticizer) used in each FRC group. It can be clearly observed that the slump values change according to the content and type of fiber. A slump value of 170 mm was found for the control group (concrete without fibers). There was no need to add superplasticizer to maintain the same slump value as the control concrete for type A macrofiber content rates of 0.3% and 0.6%. Only for the type A macrofiber content rate of 0.9% was the addition of 0.2% (the recommended minimum amount) of admixture needed, resulting in a slump of 210 mm, a value higher than that of the control concrete.
Comparison between groups 2 and 3 and between groups 5 and 8, respectively, showed that the addition of type B microfiber compromised mixture workability, requiring the addition of superplasticizer. Group 7, with a type B microfiber content rate of 0.9%, was the FRC mixture that required the highest rate of admixture addition (1.4%). At this type B microfiber content rate, it also proved impossible to proceed with molding when the mixtures were combined with type A macrofiber content rates of 0.6% and 0.9%, because of segregation due to lack of material flow. Regarding axial compressive strength (fc), Figure 6 illustrates the influence of macro- and microfibers on this property. Figure 6a clearly shows that an increase in the type A macrofiber content rate leads to decreased fc, with this decrease significantly more pronounced for the content rates of 0.6% and 0.9%. It can also be observed that hybridization at a type A macrofiber content rate of 0.3% practically did not change fc compared with that of the control concrete.
In Figure 6b, it is evident that hybridization at a type B microfiber content rate of 0.3% did not significantly alter the fc of the concrete regardless of the type A macrofiber content rate, whereas for hybridization at a type B microfiber content rate of 0.6%, the decrease in fc is more significant. With respect to the values obtained for Young's modulus (E), all FRC groups showed lower results for this property compared with the control concrete. The influence of the macro- and microfibers on E is best visualized in Figure 7. Figure 7a shows that increased type A macrofiber content rates resulted in a practically linear decrease in E values. Hybridization with type B microfiber content rates of 0.3% and 0.6% produced no change in the E value only for the FRC group containing the type A macrofiber content rate of 0.3%, whereas for the type A macrofiber content rate of 0.6%, hybridization through increased type B microfiber content caused a reduction in E values. Hybridization at a type A macrofiber content rate of 0.3% with the addition of type B microfiber at content rates of 0%, 0.3%, and 0.6% yielded splitting tensile strength (fspl) values of 4.44 MPa, 4.60 MPa, and 4.31 MPa, respectively. These fspl values for the hybrid FRC groups were higher than that of the control concrete. Concerning hybridization at a type A macrofiber content rate of 0.6% with type B microfiber content rates of 0%, 0.3%, and 0.6%, fspl values of 4.97 MPa, 4.73 MPa, and 4.32 MPa were found, respectively, all higher than that of the control concrete.
Hybridization at a type A macrofiber content rate of 0.9% with the addition of type B microfiber at content rates of 0%, 0.3%, and 0.6% resulted in fspl values of 4.11 MPa, 4.09 MPa, and 4.24 MPa, respectively, and a tendency toward a slight increase in this property was observed with increasing type B microfiber content. Figure 8 illustrates the influence of the type B microfiber content rate on splitting tensile strength. It can be verified that the fspl value increases when type B microfiber at a content rate of 0.3% is added to the hybrid FRC mixtures with type A macrofiber contents of 0.3% and 0.9%, whereas for the type A macrofiber content of 0.6%, hybridization resulted in decreased fspl values. The mean P-δ curves (with "P" as the load value and "δ" as the vertical displacement at the center of the span) representing the behavior of each hybrid FRC group are illustrated in Figures 9 to 11. One representative specimen curve was selected for each FRC group. Figure 9 shows the P-δ curves of the FRCA0B0 group together with those of the groups to which only type A macrofiber was added. Post-peak loading instability was observed in the FRC mixtures of groups 1 to 4, and it was not possible to record curve data after cementitious matrix rupture for groups 1 and 2, which refer, respectively, to the FRCA0B0 and FRCA3B0 concrete mixtures. Therefore, no post-peak loading information is available for these groups.
Instability was also observed in groups 5 to 7, producing spacing between the points in the graph soon after matrix rupture. Increased fiber content reduced this effect in groups 8 to 11. The flexural tensile strength (σb) values shown in Table 7 were calculated according to the JSCE-SF4 standard (1984) as in equation (1); this property is taken as the highest σb value recorded for the FRC.
σb = P · L / (b · h²)    (Eq. 1)
Where:
• P: maximum load (N), corresponding to the highest load value recorded throughout the loading history;
• L: specimen span (equal to 450 mm);
• b and h: width and height of the specimen cross section, respectively, taken as the mean of two readings recorded in the central third of the specimen, where rupture occurs.
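Equation (1) can be checked numerically. The helper below is a straightforward transcription (the function name is illustrative), with loads in N and dimensions in mm, so the result comes out directly in MPa:

```python
def flexural_tensile_strength(p_max, span, b, h):
    """JSCE-SF4 flexural tensile strength, Eq. (1):
    sigma_b = P * L / (b * h^2).
    p_max in N, span/b/h in mm -> sigma_b in MPa (N/mm^2)."""
    return p_max * span / (b * h ** 2)

# Example: a 30 kN peak load on a 150 x 150 x 500 mm prism
# tested over a 450 mm span gives sigma_b = 4.0 MPa.
sigma_b = flexural_tensile_strength(30_000, 450, 150, 150)
```

For the specimen geometry used here (b = h = 150 mm, L = 450 mm), Eq. (1) reduces to σb = P / 7500 in MPa for P in N, which makes the fspl-range values reported above (roughly 4-5 MPa) correspond to peak loads of about 30-38 kN.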
Tensile cracking stress (fcr), also shown in Table 7, was defined as the resistance to crack growth of the FRC mixtures according to the concept described in the ASTM C1609 standard (2012). The fcr values were obtained with reference to the load corresponding to the end of the elastic straight section and the onset of behavior change in each curve. In FRC mixtures containing only PP type A macrofiber, no gradual evolution of resistance to crack growth was observed with increasing fiber content. For content rates of 0.3% and 0.9%, increases in tensile cracking stress of 5% and 16% were observed compared to the control concrete, whereas the value decreased for the content rate of 0.6%. Therefore, the effect of type A macrofibers (at contents of 0.3% and 0.6%) on resistance to crack growth is quite modest, whereas for the content of 0.9%, a significant influence was observed, with an increase of up to 16%.
Type A macrofiber content rate of 0.3% with addition of type B microfiber content rates of 0.3%, 0.6%, and 0.9% did not improve resistance to crack growth. This finding suggests that addition of microfiber to macrofiber did not contribute to increased resistance to crack growth in the FRC mixtures assessed. The only exception was observed for the FRCA3B6 group, in which an increase of 19% in the σb value was found.
The effect of macro- and microfibers on resistance to crack growth is best visualized in the graphs of Figure 12. In Figure 12a, it can be verified that a better response was obtained with the type A macrofiber content rate of 0.6% compared with the content rate of 0.3% for hybridization with all type B microfiber content rates. Figure 12b shows that, except for the hybrid FRCA6B3 mixture, the addition of type B microfiber did not improve resistance to crack growth when combined with type A macrofiber in the FRC mixtures analyzed.
Figure 12. Effect of macro- and microfiber contents on resistance to crack growth: a) influence of PP macrofibers; b) influence of PP microfibers.
The type A macrofiber content rate of 0.3% with addition of type B microfiber at a rate of 0.3% did not improve the σb value. Similarly, addition of type B microfiber at the rates of 0.6% and 0.9% to the type A macrofiber content rates of 0.6% and 0.9%, respectively, did not result in increased σb values in the FRC mixtures evaluated.
The effect of the macro- and microfiber contents on flexural tensile strength is best visualized in Figure 13. In Figure 13a, it can be verified that better results were obtained for the type B microfiber content rate of 0.6% compared with the content rate of 0.3%, and that the σb values for the concrete mixtures containing only type A macrofibers were higher than those for the hybrid FRC mixtures. Figure 13b shows that hybridization provided similar σb results for the FRC groups with type A macrofiber content rates of 0.6% and 0.9%, with hybridization at the type A macrofiber content rate of 0.3% yielding higher σb values. This shows that the contents of 0.3% macrofiber and 0.6% microfiber stand out in terms of flexural tensile strength.
Figure 13. Influence of macro- and microfiber contents on flexural tensile strength: a) influence of macrofiber content; b) influence of microfiber content.
Table 9 also shows that σb values were higher than fcr values in all FRC groups assessed. The most significant increases in flexural tensile strength relative to tensile cracking stress were observed for the FRCA3B0, FRCA6B0, and FRCA3B3 groups: 21%, 33%, and 26%, respectively. Hybridization did not significantly improve the concrete strength after cracking, as the levels reached in the hybrid mixtures FRCA3B3, FRCA6B3, and FRCA9B3 were of the same order as those in mixtures containing only type A macrofibers.
Flexural toughness
With rupture of the concrete matrix, the macro- and microfibers would act to restrain crack growth: microfibers would bridge the microcracks, whereas macrofibers would arrest the propagating macrocracks. This post-rupture performance is measured by the energy absorption capacity, an important parameter for assessing the effect of fibers on the flexural behavior of FRC at higher strain levels of the concrete element.
In this study, the energy absorption capacity was evaluated according to the methodology described in the JSCE-SF4 standard (1984), in which flexural toughness is expressed by the toughness index (σ̄b), measured from the area under the load-deflection curve in the bending of prismatic specimens (150 x 150 x 500 mm) with a deflection limit of span/150, as illustrated in Figure 14, that is, up to a deflection of 3.00 mm. The flexural toughness index is calculated according to equation (2), and the σ̄b values for each of the FRC groups studied are presented in Table 8.
σ̄b = Tb · L / (δtb · b · h²)    (Eq. 2)
Where:
• Tb: flexural toughness, equivalent to the area under the P-δ curve in the interval from 0 to δtb = 3.00 mm (in J).
Based on the JSCE-SF4 standard (1984) and aiming to evaluate post-crack performance, the concept of equivalent flexural strength ratio (Re,3) was adopted, calculated from the energy absorption capacity up to a deflection of 3 mm and the first peak load (P); its value is obtained from equation (3). The results are shown in Table 8. Figure 15 shows the effect of macro- and microfibers on the flexural toughness index. In Figure 15a, it can be observed that an increase in the content of type A macrofibers corresponded to a gradual increase in the σ̄b value, with the content rate of 0.9% providing the most significant increase (green curve). Hybridization significantly improved the toughness index: the addition of type B microfibers at content rates of 0.3% and 0.6% resulted in an increase in this property compared with FRC mixtures containing only type A macrofibers.
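A sketch of the JSCE-SF4 toughness computation is given below, assuming a simple trapezoidal integration of the recorded P-δ points (function names are illustrative; the toughness-factor formula follows the reconstructed Eq. 2):

```python
def flexural_toughness(deltas, loads, delta_tb=3.0):
    """Tb: area under the P-delta curve up to delta_tb, by the
    trapezoidal rule. deltas in mm, loads in N -> Tb in N*mm (mJ)."""
    tb = 0.0
    for i in range(1, len(deltas)):
        if deltas[i] > delta_tb:
            break  # stop at the span/150 deflection limit
        tb += 0.5 * (loads[i] + loads[i - 1]) * (deltas[i] - deltas[i - 1])
    return tb

def toughness_factor(tb, span=450.0, b=150.0, h=150.0, delta_tb=3.0):
    """JSCE-SF4 toughness factor (Eq. 2):
    sigma_bar_b = Tb * L / (delta_tb * b * h^2), in MPa for the
    N/mm unit system used above."""
    return tb * span / (delta_tb * b * h ** 2)

# Sanity check: a constant 10 kN residual load held to 3 mm gives
# Tb = 30,000 N*mm and a toughness factor equal to the flexural
# stress of that same constant load.
tb = flexural_toughness([0.0, 1.0, 2.0, 3.0], [10_000.0] * 4)
sigma_bar = toughness_factor(tb)
```

The sanity check illustrates a useful property of Eq. (2): for a perfectly plastic response, the toughness factor coincides with the flexural stress computed from Eq. (1) at the sustained load, which is what makes σ̄b comparable to σb in Tables 7 and 8.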
Also with regard to the toughness index, a remarkable response was obtained for the type A macrofiber content rate of 0.9%, as depicted in Figure 15a. For this content, the addition of type B microfibers at a content rate of 0.6% enabled a further 4% increase in the σ̄b value. In Figure 15b, a positive effect of hybridization on flexural toughness is also verified. For the type A macrofiber content rate of 0.3%, the addition of type B microfibers resulted in a gradual increase in flexural toughness (green curve). The same was observed for the type A macrofiber content rate of 0.6%; the only exception was the combination of the type A macrofiber content rate of 0.9% with the type B microfiber content rate of 0.3%. The addition of all types of fibers increased the equivalent flexural strength ratio, demonstrating that the addition of fibers, regardless of type, resulted in a material with reduced loss of load capacity after rupture of the concrete matrix. However, type A macrofibers provided an even more significant increase in this ratio, as can be observed in Figure 16. Table 9 presents the residual flexural strength values for the FRC groups analyzed. Figure 17 shows the effect of type A macrofibers on the residual flexural strength values fd,L/600 and fd,L/150. Figure 18 presents the effect of type B microfibers on these stresses. In Figure 17, it can be observed that the residual stresses increased with increasing contents of type A macrofibers. Higher residual flexural strength values were obtained with the type B microfiber content rate of 0.6% than with 0.3%, for all type A macrofiber content rates investigated. It is also possible to verify that the highest residual stresses were obtained with the type A macrofiber content rate of 0.9%, highlighting the behavior of the FRCA9B6 group.
Figure 18 shows that hybridization resulted in a gradual increase of the residual flexural strength values for the type A macrofiber content rate of 0.3%, as well as for the 0.6% content rate of this fiber type. For the type A macrofiber content rate of 0.9%, hybridization provided no increase in residual stresses. Another aspect to be observed is that the residual flexural strength at vertical displacement L/150 (3 mm) showed only a small reduction (for FRC groups 3, 4, 5, 10, and 11) compared with that at vertical displacement L/600 (0.75 mm). For FRC groups 3 and 4, which contained only type A macrofibers, a reduction of 14% was observed in the residual flexural strength values, and for the other FRC groups it did not exceed 22%. Therefore, the FRC groups containing only type A macrofiber at content rates of 0.6% and 0.9% showed the highest capacity to maintain resistance to crack growth, without neglecting the response obtained with hybridization, especially for the FRCA9B3 and FRCA9B6 groups, which also presented post-peak tensile cracking stress levels higher than those of concrete containing only type A macrofibers. | 5,875.8 | 2020-03-25T00:00:00.000 | [
"Materials Science"
] |
The Effect of the International Accounting Standards on the Related Party Transactions Disclosure
Problem statement: Several recent North American corporate scandals have brought attention to the potential for accounting manipulations associated with Related Party Transactions (RPTs), which have led to a decline in perceived earnings quality. We examine the value relevance of disclosed RPTs in Greek corporations. Approach: We focus on two types of RPTs: sales of goods and sales of assets, using a value relevance approach. Results: From 2002-2007, we find that the reported earnings of firms selling goods or assets to related parties exhibit a lower valuation coefficient than those of firms in Greece without such transactions. This result is not observed during 2005-2007, after a new fair value measurement rule for RPTs came into effect. Conclusion: Our evidence suggests that the new RPT regulation in Greece is perceived to be effective at reducing the potential misuse of RPTs for earnings management purposes. Since RPTs have been the subject of numerous scandals in North America, our evidence from the Greek stock market suggests that new RPT accounting standards could prove an efficient solution to this issue.
INTRODUCTION
In this study we examine the value relevance of the disclosures of related party transactions made by firms listed on the Athens Stock Exchange, before and after the adoption of International Financial Reporting Standards (IFRS).
Many Greek listed companies are members of state-owned groups; others are members of business collaborations. Most existing related party transactions are an outcome of capital investment processes or of mergers and acquisitions. The usefulness of related party transactions within these corporations lies in the allocation of internal resources, the minimization of transaction costs, and the improvement of Return-On-Assets (ROA). In contrast, these dealings, when used opportunistically by managers and stakeholders, can have deceptive effects and harm shareholders' wealth. Worries have been expressed concerning controlling shareholders and their use of listed firms as financing vehicles in order to reallocate those firms' capital to other ventures. Furthermore, managers might overstate earnings to gain rights issue permission through wash sales with related parties, and may also profit from purchasing and selling at excessive prices or by exchanging assets of varying quality (Ge et al., 2010).
The unification of international financial markets created the necessity for accounting standards and regulations to be globally comparable (Zarzeski, 1996). The mandatory adoption of International Financial Reporting Standards (IFRS) by listed companies of the European Union, as of January 1, 2005, should help investors to make investment decisions (based on common methods) and increase stock market profitability (Botosan and Plumlee, 2002; Healy and Palepu, 2001; Leuz, 2003). However, the global acceptance of IFRS necessitates their high quality (Tendeloo and Vanstraelen, 2005).
Greek firms listed on the Athens Stock Exchange adopt IFRS, since it is obligatory for them. By contrast, non-listed firms use Greek GAAP. Firms' financial results have been affected by the change from Greek GAAP to IFRS (Mandilas et al., 2004; Vazakidis and …). The transition resulted in the development of an adjustment mechanism so that firms could avoid any trouble caused by IFRS implementation (Tarca, 2004), and also in the improvement of particular accounting variables, such as efficiency and compensation, aiming at strengthening the firms' financial position (Weil et al., 2006).
Furthermore, this study investigates whether the adoption of IFRS is effective: if earnings management cannot be exercised opportunistically through related party transactions, there is no need for investors to discount firms' involvement in such transactions. Moreover, it examines whether earnings management has been reduced due to IFRS implementation and whether the value relevance of accounting numbers based on IFRS has increased. Our study examines periods both before and after the IFRS were officially adopted.
In 2005, the year of first adoption, firms reported lower key accounting figures, such as liquidity, profitability and growth, owing to the fair value measurement of IFRS and the associated variable costs. In the years that followed, the reported financial measures improved and their value relevance was higher (Athianos et al., 2005; Iatridis and Rouvolis, 2010).
In our analysis we first investigate whether the disclosures of related party transactions are value relevant for investors before the adoption of IFRS, when fair value measurement did not exist. Second, we examine whether investors' assessment of the reliability of related party transaction information has changed due to the adoption of IFRS and the introduction of fair value measurement.
Literature review: An accounting amount is deemed value relevant if it is associated with equity market values. Accounting measures are supposed to be value relevant if they have a predicted significant relation with share prices, insofar as the amounts convey value relevant information to investors concerning firm valuation. Accounting numbers are relevant to financial statement users only if they are able to change users' decisions. The information need not be new in order to be useful to the groups interested in financial statements. There is a difference between the principles of value relevance and decision relevance (Barth et al., 2001).
Value relevance research is of great interest to a wide range of parties: not only academics, but also standard setters such as the FASB and the International Accounting Standards Board (IASB), policy makers and regulators, firm managers, and users of financial statements (Barth et al., 2001).
According to prior research, the adoption of IFRS promotes comparable and high-quality accounting numbers, leading to accounting harmonization, growth in investments, and a decrease in the cost of capital (Barth et al., 2005). A reduction in earnings management is a consequence of firms' IFRS implementation (Render and Gaeremynck, 2007).
Firm performance with regard to related party transactions remains questionable. It is therefore of great interest to investigate whether there is an association between related party transactions and the properties of financial reports or the presence of earnings management incentives (Bushman and Smith, 2001; Gordon and Henry, 2003; Sherman and Young, 2001). Kohlbeck and Mayhew (2010) show that, according to their market analysis, Related Party (RP) firms obtain significantly lower valuations and report marginally lower subsequent returns than non-RP firms. Furthermore, they state that related party transactions give insiders the ability to extract firm wealth at stakeholders' expense. Conversely, related party transactions can achieve creative strategic partnerships, promote risk sharing, and facilitate contracting. Kohlbeck and Mayhew (2010) document an equilibrium in which related party transaction disclosure is associated with lower firm valuation. Moreover, they find that related party firms and their valuations are negatively associated, which suggests a statistically and economically significant differential valuation of firms disclosing related party transactions. Their findings show that the market values residual income more highly for non-related party firms than for related party firms. The residual income findings verify that investors place less reliance on reported income and/or discount the return to shareholders from future income.
Related party transactions:
In reference to international evidence, expropriation of assets (i.e., tunnelling) by controlling parties impairs minority shareholders, which reduces the stock market values and returns of firms engaging in such transactions (Johnson, 2000; Jiang et al., 2005; Jiang and Wong, 2010). In addition, stock market research indicates that laws demanding disclosure of related party transactions are associated with better-developed stock markets (Djankov et al., 2008; La Porta et al., 2006). Gordon et al. (2007) stated that related party transactions are considered a natural part of business, and a high volume of such transactions occurs in firms without any accounting or financial fraud.
The manipulation of accounting accruals merely shifts profits between fiscal years and reverses in subsequent periods, whereas manipulating the transfer prices of related party transactions constitutes a permanent earnings modification. According to Jian and Wong (2008), Chinese listed firms use related sales to the controlling owner to sustain earnings. The levels of related sales, and of the operating profits arising from them, are unusually high when firms have incentives to manage earnings. Moreover, discretionary related party accounts receivable is not significantly positive when firms have incentives to meet earnings targets. The high abnormal related sales reported in their study are thus not solely the result of abnormal accrued sales, which would produce significantly positive discretionary related accounts receivable; rather, the abnormal related sales may also be cash sales from the listed firms to their controlling owners. In general, prior academic research has focused much more on tunnelling than on propping.
Tunnelling and propping are of particular significance in companies with concentrated ownership. Concentrated ownership structures are very common in many countries around the world and particularly in East Asia (La Porta et al., 1999;Claessens et al., 2000). Controlling shareholders in such firms have the power to expropriate minority shareholders but can also use their private wealth to prop up firms in distress.
There are two relevant streams in prior literature. The first stream has attempted to measure the expropriation of minority shareholders indirectly, using different proxies for the degree of expropriation.
These studies do not examine whether the value of minority shareholdings has declined following specific corporate actions. Some studies use the legal system (in particular investor protection) as a proxy for the likelihood of expropriation (La Porta et al., 1998, 2000b; Johnson, 2000; Djankov et al., 2008). The legal system has been shown to affect dividend policy (La Porta et al., 2000a; 2000b), firm valuation (La Porta et al., 2002) and stock liquidity (Brockman and Chung, 2003). Other studies use the deviation of cash flow from control rights as a proxy for the likelihood of expropriation. This measure has been shown to affect dividend policy (Faccio et al., 2001), firm valuation (Claessens et al., 2002; Lemmon et al., 2003; Baek et al., 2004), firm profitability (Joh, 2003) and the propagation of earnings shocks within the firm (Bertrand et al., 2002). A second stream of literature examines actions of controlling shareholders that may directly impact the firms they control, typically through related party transactions between publicly listed firms and their controlling shareholders. The literature recognizes three motivations behind related party transactions: tunnelling, propping and earnings management. The tunnelling literature provides evidence that the value of minority shareholdings has declined as a result of specific related party transactions. Cheung et al. (2006) examine a large set of related party transactions between Hong Kong listed companies and their controlling shareholders. They find that, on average, firms earn significant negative excess returns both at the initial announcement and during the 12-month period following the announcement of connected transactions that are a priori likely to result in expropriation of minority shareholders. In a similar spirit, Baek et al. (2006) examine private securities offerings by Korean industrial groups. La Porta et al.
(2003) examine lending by Mexican banks to firms controlled by the bank's owners. They show that related loans carry lower interest rates compared to arm's length loans; they are more likely to default and have lower recovery rates following default.
A few recent studies examine the Chinese market using different proxies for tunnelling than our study. Berkman et al. (2008) examine loan guarantees issued by Chinese firms to their controlling shareholders and show that these transactions are less likely in state-controlled firms. Gao and Kling (2008) use the difference between accounts receivable and accounts payable to related parties as a proxy for tunnelling and show that this measure is related to corporate governance characteristics.
Evidence on propping is more limited. Friedman et al. (2003) recognize that propping is the flip side of tunnelling but do not provide direct evidence. In their framework, controlling shareholders can choose to tunnel or to prop up their firm (in the latter case hoping that saving a distressed firm may allow them to tunnel more in the future). Bae et al. (2002) find that the value of Korean firms affiliated with industrial groups declines when they are asked to bail out under-performing firms in the group through rescue mergers. Cheung et al. (2006) find some limited examples of propping in the Hong Kong market. Finally, Jiang and Wong (2003) show that Chinese firms belonging to business groups use related party transactions with their parents (in particular trading goods and services) as a way of manipulating earnings.
Sample selection:
The basis of this study is annual reports and financial statements obtained from an internet database. The sample consisted of companies included in the FTSE/ASE 20 index of the Athens Stock Exchange (ASE). Our observations span 2002-2007. The examined period is divided into the following sub-periods: 2002-2004 and 2005-2007. The partition point is 2005, the year in which the adoption of the IAS became mandatory. A regression analysis was performed on the sub-samples for these two test periods.
Most of the companies reported their financial statements for the years 2002-2004 under the Greek national accounting system, whereas a few reported under both Greek GAAP and IAS.
In order to determine whether the conversion from Greek GAAP to IAS has increased the harmonisation level, it was important to collect data both before and after the adoption period. The statistical tests require pre- and post-adoption years. Using the data from 2002-2004 (pre-change), we can compare with the practices from 2005-2007 (after the adoption of IAS). According to Athianos et al. (2007), translating financial statements from Greek GAAP to IAS produces extensive and significant differences in fixed tangible assets, depreciation of fixed tangible assets, valuation of inventories, deferred taxation, foreign currency translation, brands and trademarks, and goodwill. Greek GAAP emphasizes the prudence principle and income smoothing, while IAS emphasize fair value and balance sheet valuation. Many listed companies use related party sales in order to manage their earnings and obtain rights issue approval.
Research and hypotheses development:
In reference to Jiang and Wong (2006), companies use related sales in order to meet securities regulators' earnings targets for share issuance and to maintain listing status. Moreover, examination of related party transactions in the US context indicates an association between related party transactions and earnings management (Gordon and Henry, 2005). Additionally, earnings can also be managed through sales of assets (Herrmann et al., 2003).
As a result, the hypotheses of this study are as follows:
H1A: In the pre-adoption period of IAS, which allowed the manipulation of earnings through related party transactions, the earnings valuation parameter is lower for firms selling goods to related parties than for firms without such transactions.
H10: In the post-adoption period of IAS, which prohibit the manipulation of earnings through related party transactions, there is no difference in the earnings valuation of firms selling goods to related parties compared with firms without such transactions.
H2A: In the pre-adoption period of IAS, which allowed the manipulation of earnings through related party transactions, the earnings valuation parameter is lower for firms selling assets to related parties than for firms without such transactions.
H20: In the post-adoption period of IAS, which prohibit the manipulation of earnings through related party transactions, there is no difference in the earnings valuation of firms selling assets to related parties compared with firms without such transactions.
The price levels model is frequently used in the accounting literature to test the value relevance of accounting information. The price levels design is appropriate when the research question is the determination of which accounting numbers are reflected in firm value (Barth et al., 2001; Beaver, 2002; Athianos et al., 2005). It also provides the added benefits of not needing the precise release date of the annual report and of not requiring assumptions about the market expectation model. Therefore, following Ge et al. (2010), we apply the following regression model to test the above hypotheses:

PRICE = α0 + α1 EPS + α2 BV + α3 S goods + α4 S assets + α5 EPS*S goods + α6 EPS*S assets + ε

Where:
• PRICE = Stock price per share four months after the year-end
• EPS = Annual earnings per share
• BV = Book value of equity per share
• S goods = Dummy variable, coded 1 for firms selling goods to related parties and 0 otherwise
• S assets = Dummy variable, coded 1 for firms selling assets to related parties and 0 otherwise
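As a rough illustration of this design, the price levels model with RPT interaction dummies can be fitted by ordinary least squares. The data below are synthetic and the coefficient values are invented for the simulation; they are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
eps = rng.normal(1.0, 0.3, n)        # EPS per share
bv = rng.normal(4.0, 1.0, n)         # book value of equity per share
s_goods = rng.integers(0, 2, n)      # 1 if firm sells goods to related parties
s_assets = rng.integers(0, 2, n)     # 1 if firm sells assets to related parties

# Simulated prices: RPT firms are given a lower earnings valuation coefficient
price = (2.0 + 8.0 * eps + 0.6 * bv
         - 2.0 * eps * s_goods - 2.3 * eps * s_assets
         + rng.normal(0, 0.5, n))

# Design matrix: intercept, EPS, BV, dummies, and EPS x dummy interactions
X = np.column_stack([np.ones(n), eps, bv, s_goods, s_assets,
                     eps * s_goods, eps * s_assets])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(np.round(beta, 2))  # negative interaction terms flag earnings discounting
```

A negative estimate on the EPS*S goods or EPS*S assets interaction is what the hypotheses above interpret as investors discounting the reported earnings of RPT firms.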
RESULTS AND DISCUSSION
In Table 1 we provide descriptive statistics for the sample observations. There are three testing periods: 2002-2004, 2004-2005, and 2005-2007. However, EPS, EPS*S goods, and EPS*S assets are correlated at the 1% significance level. These results indicate a strong relationship among those variables, confirming their impact on firms' profitability.
Finally, Table 3 presents the regression results for the three sub-periods. The BV coefficient is 0.549, 0.804 and 0.773 (significant at 5% for the first two testing periods and at 1% for the third). These results confirm previous research on the Greek market, which reports a significant association between BV and price levels (Athianos et al., 2005). The BV results indicate a downward valuation for the period before the mandatory adoption of IFRS.
In addition, EPS is positively associated with stock prices (significant at 1%) for all testing periods. The EPS coefficient magnitude also rose (8.209, 8.914 and 9.764, respectively, across the testing periods).
This positive growth of EPS as a valuation variable suggests increased investor confidence following the adoption of IFRS.
Moreover, estimates for the EPS*S goods coefficient are consistent with H1, since the results are -2.089, 3.105 and 3.334, significant at 10% for the first two periods and at 5% for the last period. The negative coefficient for the 2002-2004 period indicates lower earnings valuation for firms engaging in inter-company transactions than for firms not participating in such transactions. The significance levels also suggest that the market discounts the reported earnings both before and after the adoption, but at different levels of significance.
The results for the EPS*S assets coefficient are close to those for EPS*S goods. More specifically, the coefficient results are consistent with H2, being -2.345, 2.873 and 3.037, significant at 10%. Comparing these two coefficients, we can conclude that investors pay more attention to inter-company transactions, discounting earnings more heavily, in the case of selling assets rather than goods.
CONCLUSION
In this study we examine whether the disclosure of information about inter-company transactions contains value relevant information for investors in Greece. Our testing period spans 2002-2007, separated into three sub-periods: the period prior to the adoption of IFRS (2002-2004), the transition period for accounting principles and rules (2004-2005), and the period after the adoption (2005-2007). Our results indicate that investors discount earnings when valuing firms engaged in inter-company (related party) sales transactions, both sales of goods and sales of other assets.
During the period prior to adoption, we observe that investors take into account the effect of inter-company transactions in goods (EPS*S goods), discounting the level of earnings and the value of the firm, but at a lower level of significance than in the post-adoption testing period (10% versus 5%). The magnitude of the effect on earnings is also significant.
In addition, investors also discount the level of earnings due to inter-company transactions in assets (EPS*S assets) at a stable level of significance, both before and after the adoption of the new accounting standards.
In conclusion, our empirical results suggest that investors weigh inter-company transactions related to assets more heavily than those related to goods, discounting the earnings parameter to a higher degree.
These results add value to the new regulations, increasing investors' effectiveness in reducing earnings manipulation by managers. | 4,596.2 | 2011-02-28T00:00:00.000 | [
"Business",
"Economics"
] |
Zooplankton in northern lakes show taxon‐specific responses in fatty acids across climate‐productivity and ecosystem size gradients
Northern lakes are facing rapid environmental alterations—including warming, browning, and/or changes in nutrient concentrations—driven by climate change. These environmental changes can have profound impacts on the synthesis and trophic transfer of polyunsaturated fatty acids (PUFA), which are important biochemical molecules for consumer growth and reproduction. Zooplankton are a key trophic link between phytoplankton and fish, but their biochemical responses to environmental change are not well understood. In this study, we assess the trends in fatty acid (FA) composition of zooplankton taxa among 32 subarctic and temperate lakes across broad climate‐productivity and ecosystem size gradients. We found that genus‐level taxonomy explained most FA variability in zooplankton (54%), suggesting that environmental changes that alter the taxonomic composition also affect the FA composition of zooplankton communities. Furthermore, the FA responses and their underlying environmental drivers differed between cladocerans and copepods. Cladocerans, including widespread Bosmina spp. and Daphnia spp., showed pronounced responses across the climate‐productivity gradient, with abrupt declines in PUFA, particularly eicosapentaenoic acid and arachidonic acid in warmer, browner, and more eutrophic lakes. Conversely, calanoid copepods had high and relatively stable PUFA levels across the gradient. In addition, all zooplankton taxa increased in stearidonic acid levels in larger lakes where PUFA‐rich cryptophytes were more abundant. Overall, our results suggest that climate‐driven environmental alterations pose heterogeneous impacts on PUFA levels among zooplankton taxa, and that the negative impacts of climate warming are stronger for cladocerans, especially so in small lakes.
nutrient-poor clear-water lakes (Seekell et al. 2015; Vonk et al. 2015; Creed et al. 2018), but also increases light attenuation that offsets the effect of nutrients and reduces whole-lake productivity (Karlsson et al. 2009; Creed et al. 2018). Overall, understanding the combined effects of warming, browning, and changes in nutrient concentrations is thus important for predicting food web responses in high-latitude lakes as a result of climate change.
Zooplankton are a key trophic link in lake food webs as they transfer energy, nutrients, and long-chain polyunsaturated fatty acids (LC-PUFA, i.e., PUFA with ≥ 20 carbons) from phytoplankton to fish (Lindeman 1942; Strandberg et al. 2015). Omega-3 LC-PUFA, such as eicosapentaenoic acid (20:5ω3, EPA), docosahexaenoic acid (22:6ω3, DHA), and the omega-6 arachidonic acid (20:4ω6, ARA), are essential for animal growth, reproduction, and a suite of physiological functions (Müller-Navarra et al. 2000; Ahlgren et al. 2009; Ilić et al. 2019). Saturated fatty acids (SAFA) and mono-unsaturated fatty acids (MUFA), in contrast, are mainly used for energy storage and membrane structure, and are less important for consumer growth and fitness (e.g., Goedkoop et al. 2007; Brett et al. 2009). EPA and DHA are synthesized de novo by certain micro-algae such as diatoms and cryptophytes, but not by chlorophytes and cyanobacteria (Ahlgren et al. 1990; Napolitano 1999). Warming, eutrophication, and browning are expected to impair the production and transfer of LC-PUFA in aquatic food webs (Hixson and Arts 2016; Keva et al. 2021; Lau et al. 2021). This is partly explained by a reduced physiological demand for LC-PUFA (and increased demand for SAFA) by phytoplankton to adjust membrane fluidity to increasing temperatures according to homeoviscous adaptation (Hixson and Arts 2016; Holm et al. 2022). Furthermore, warming, eutrophication, and browning also induce shifts from PUFA-rich to PUFA-poor species in phytoplankton assemblages, for example the shift from a predominance of diatoms and cryptophytes in nutrient-poor colder waters to a predominance of chlorophytes and cyanobacteria in nutrient-rich and warmer conditions (Weyhenmeyer et al. 2013; Senar et al. 2019; Keva et al. 2021). Such a shift in autotrophic groups then generally lowers the bottom-up supply of LC-PUFA (Müller-Navarra et al. 2000; Keva et al. 2021), although there are exceptions (Hiltunen et al. 2015; Senar et al.
2019). Browning also promotes the importance of microbial heterotrophs (Ask et al. 2009; Berggren et al. 2014), which also tend to be devoid of LC-PUFA (e.g., Brett et al. 2009; Taipale et al. 2018). These changes in phytoplankton assemblages and PUFA production can negatively affect PUFA accumulation by zooplankton grazers and their nutritional quality (Gladyshev et al. 2011; Taipale et al. 2018; Lau et al. 2021), while also impeding zooplankton development (Sundbom and Vrede 1997; Müller-Navarra et al. 2000; Brett et al. 2009).
Cladocerans and copepods are common zooplankton taxa in freshwaters, including northern lakes. Cladoceran grazers are mostly generalist filter-feeders that do not actively select food particles (DeMott 1989; Sterner 1989) and that have parthenogenetic reproduction (Sommer et al. 1986). Copepods instead select which particles they ingest by their taste and shape (DeMott 1989; Sterner 1989), and only reproduce sexually, generally having lower growth rates and longer life cycles than cladocerans (Allan 1976). Copepods also tend to survive longer fasting periods than cladocerans (DeMott 1989), due to their ability to store lipids (fatty acids [FA]) (Hiltunen et al. 2015; Grosbois et al. 2017). Nevertheless, both copepods and cladocerans may avoid periods with adverse conditions via resting eggs (Gyllström and Hansson 2004), and via dormancy in copepod adults and copepodites (Dahms 1995; Gyllström and Hansson 2004). Cladocerans and copepods have different demands for LC-PUFA; that is, cladocerans generally require high EPA levels and slightly higher ARA levels than copepods, while copepods contain more DHA (e.g., Persson and Vrede 2006; Lau et al. 2012). These marked differences in feeding strategy, life histories, and FA requirements between cladocerans and copepods likely mediate their respective FA responses to environmental changes. For instance, the DHA concentrations in copepods and the EPA concentrations in cladocerans (i.e., mg g DW⁻¹) decrease with increasing water temperature, which can be related to changes in the taxonomic composition of these groups and to their lower demand for LC-PUFA to maintain cell membrane fluidity in warmer conditions (Lau et al. 2021). Compared to cladocerans, the DHA and EPA concentrations in copepods are also more sensitive to declines in the nitrogen-to-phosphorus ratio in lake water (Lau et al.
2021), as copepods have a higher demand for nitrogen than do cladocerans. Yet, the taxon-specific FA responses among zooplankton to environmental change are still largely unknown.
In this study, we synthesize published and unpublished data to (1) quantify the FA changes of multiple zooplankton taxa in northern lakes across climate-productivity (i.e., temperature, nutrients, and water color) and lake-size gradients and (2) identify the key drivers of zooplankton FA variation. We predict that (1) genus-level taxonomy explains more of the zooplankton FA variation than do the environmental gradients, as zooplankton taxa differ in their requirements for long-chain PUFA independently of differences in habitat characteristics and food availability (Persson and Vrede 2006). We further predict (2) general increases in the proportion of SAFA with concurrent PUFA declines in all zooplankton taxa across the climate-productivity gradient, that is, toward warmer, browner, and more eutrophic conditions. This is because phytoplankton assemblages are expected to shift from the dominance of PUFA-rich cryptophytes and diatoms in cold oligotrophic lakes toward PUFA-deficient green algae and cyanobacteria in warmer nutrient-rich lakes (e.g., Keva et al. 2021), while browning promotes the relative importance of PUFA-deficient terrestrial organic matter and bacteria for zooplankton production (Berggren et al. 2014). We also predict that (3) lake area counteracts the effects of warming on zooplankton FA, as larger lakes support more diverse phytoplankton assemblages and favor the predominance of cryptophytes and diatoms compared to smaller lakes (Lau et al. 2017); accordingly, we predict that (4) levels of FA biomarkers for diatoms (EPA and 16:1ω7) and/or cryptophytes (stearidonic acid [18:4ω3, SDA]) in zooplankton are higher in larger lakes.
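Prediction (1) concerns how much FA variance a categorical factor (taxonomy) accounts for. A simple one-way variance decomposition (eta-squared) illustrates the idea; the data, taxon labels, and means below are synthetic and only loosely inspired by the study's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
taxa = np.repeat(["Daphnia", "Bosmina", "Calanoid"], 30)
# Hypothetical taxon-specific EPA means plus residual (e.g., environmental) noise
means = {"Daphnia": 12.0, "Bosmina": 8.0, "Calanoid": 15.0}
epa = np.array([means[t] for t in taxa]) + rng.normal(0, 2.0, taxa.size)

# Eta-squared: between-group sum of squares over total sum of squares
grand = epa.mean()
ss_total = ((epa - grand) ** 2).sum()
ss_between = sum(((epa[taxa == t].mean() - grand) ** 2) * (taxa == t).sum()
                 for t in np.unique(taxa))
eta_sq = ss_between / ss_total  # proportion of variance explained by taxon
print(f"variance explained by taxon: {eta_sq:.2f}")
```

The study's actual analysis is multivariate (many FA at once), but the same question, how much variance the taxon factor explains, underlies the 54% figure reported in the abstract.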
Study sites and data compilation
We synthesized published and unpublished data of zooplankton FA in 32 Swedish lakes from 2002 to 2010 and 2020 to 2021 (Supporting Information Table S1). The combined dataset encompasses 100 samples of zooplankton FA collected between June and September across broad gradients in latitude (56.2-68.4°N), lake size (surface area: 0.11-68.4 km²), and elevation (1-951 m a.s.l.) (Fig. 1; Table 1). We calculated mean FA values of zooplankton per taxon and per lake to avoid overrepresentation of data from lakes with multiple observations of the same taxon. This yielded 78 samples for statistical analysis. A majority of the FA data (71/100 samples) have been published in Johansson et al. (2016), Lau et al. (2012), and Persson and Vrede (2006), while 29/100 samples represent unpublished data (Supporting Information Table S1).
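The per-taxon, per-lake averaging step described above can be sketched as follows; the column names and values are hypothetical, not taken from the study's dataset.

```python
import pandas as pd

# Toy replicate observations: lake A has two Daphnia samples
raw = pd.DataFrame({
    "lake":  ["A", "A", "A", "B", "B"],
    "taxon": ["Daphnia", "Daphnia", "Bosmina", "Daphnia", "Bosmina"],
    "EPA":   [12.0, 14.0, 8.0, 10.0, 7.0],   # e.g., mg g DW^-1
})

# One mean value per taxon-lake combination, so lakes sampled repeatedly
# for the same taxon are not over-weighted in later analyses
per_taxon_lake = raw.groupby(["lake", "taxon"], as_index=False).mean(numeric_only=True)
print(per_taxon_lake)
```

Collapsing replicates this way is what reduces the 100 raw samples to the 78 taxon-lake means used for statistics.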
While the compilation of large datasets has great scientific potential, the analysis of data from different sources also comes with inherent limitations. For this study, however, we have taken several steps to ensure good comparability of data. First, we have been co-authors of all the studies from which data were collected (see citations above), meaning we had access to the raw data and full control over the sampling and analytical procedures throughout. Second, we selected common and easily identifiable FA to minimize the potential effects of methodological differences among datasets. Third, we checked that the environmental gradients were temporally consistent (see details below). Finally, we strived to minimize possible effects of season by selecting a vast majority of samples (89%) from the middle of the summer (July-August). All these measures served to maximize harmonization among the datasets while retaining a number of samples high enough to test our different predictions.
For spatial analysis of the lakes' environmental conditions, we calculated inter-annual means of climate and water chemistry data for the periods that include the years of zooplankton sampling (Supporting Information Table S1). Surface water samples (0.5 m) were analyzed for a suite of water chemistry variables using standardized methods and extracted from previous publications (Persson and Vrede 2006; Persson et al. 2008), as well as from specific monitoring programs (Erken Laboratory, https://www.ieg.uu.se/erken-laboratory/lake-monitoring-programme/; or Swedish national and regional monitoring, https://miljodata.slu.se/MVM/Search; see also Fölster et al. 2014).
Absorbance at 420 nm (Abs420), measured in a 5-cm cuvette, was used as a proxy for water color (SS-EN ISO 7887:2012). Water color measured as mg Pt L⁻¹ was back-calculated to Abs420 by dividing by 500 (Naturvårdsverket 1999), while Abs430 was recalculated to Abs420 by multiplying by 1.80, using established relationships for Swedish inland waters in our lab. Total phosphorus (TP) concentrations were analyzed using the SS-EN ISO 6878:2005 method (modified). Mean summer air temperature for the lakes (June-September) was extracted from the Climate Research Unit gridded Time Series dataset (version 4.06; Harris et al. 2020; https://crudata.uea.ac.uk/cru/data/hrg/), using the 2001-2011 and 2012-2021 periods for lakes sampled for zooplankton FA in 2002-2010 and 2020-2021, respectively (Supporting Information Table S1). Temperature data were obtained from grids of 0.5° latitude by 0.5° longitude, which are sufficiently large to cover not only the lakes but also their catchments. Lake elevation was extracted from a GIS elevation layer, while lake area was obtained from the Swedish lake register (Swedish Meteorological Institute, https://vattenwebb.smhi.se/svarwebb/) and, for the unregistered lakes, via manual areal measurements in the online map of the Swedish Land Survey (https://minkarta.lantmateriet.se/). Our selected inter-annual means of the environmental data (i.e., climate and water chemistry) were strongly correlated with the summer means from the specific years of zooplankton sampling (r = 0.85-0.97), indicating that the among-lake climate-productivity gradient was consistent over the study period.
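The water-color unit conversions above are simple scalings; a minimal sketch (the function names are ours, and the factors 500 and 1.80 are those stated in the text):

```python
# Illustrative helpers for the water-color conversions described above.
# These are not the authors' code; factors come from the text.

def pt_to_abs420(color_mg_pt_per_l):
    """Back-calculate absorbance at 420 nm (5-cm cuvette) from Pt color."""
    return color_mg_pt_per_l / 500.0

def abs430_to_abs420(abs430):
    """Recalculate Abs430 to Abs420 using the empirical factor 1.80."""
    return abs430 * 1.80

print(pt_to_abs420(50))        # -> 0.1
print(abs430_to_abs420(0.20))  # -> 0.36
```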
Methods for zooplankton FA sampling and analysis were similar among the datasets (Supporting Information Table S1). In brief, samples of zooplankton were collected in the summer by net hauls using 180- or 200-μm nets and freeze-preserved in the field using liquid nitrogen or dry ice. In the lab, samples were freeze-dried, sorted (generally to species or genus level), and stored at −20°C or −70°C under N₂. FA were extracted using established methods such as the chloroform/methanol method (Tadesse et al. 2003), the hexane/isopropanol method (Eriksson and Pickova 2007), or the methanol/toluene/acetyl chloride method (Grosbois et al. 2022), and immediately converted into FA methyl esters (FAME) using alkaline or acid transesterification (Supporting Information Table S1). FAME were quantified using GC-MS and/or GC-FID based on internal and external reference standards (Supporting Information Table S1). Among the FA identified in the various datasets (Supporting Information Table S1), we selected 16 common FA for our synthesis. We then recalculated percentages of each FA relative to the sum of these 16 FA, aiming to standardize the FA values among datasets (Supporting Information Table S2). These 16 FA are commonly and clearly identifiable by the different analytical methods (i.e., GC-MS and/or GC-FID), and constituted 79-99% of all the FA in the original datasets.
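The renormalization step above rescales each sample's FA percentages so that the 16 selected FA sum to 100%. A hedged sketch (our own helper, not the authors' code; FA names and values are invented):

```python
# Rescale a sample's FA percentages relative to the sum of the selected FA,
# as described in the text. Input FA names/values below are toy examples.

def renormalize(fa_percent):
    """fa_percent: dict mapping FA name -> percent of total FA in a sample.
    Returns the same FA rescaled so they sum to 100%."""
    total = sum(fa_percent.values())
    return {fa: 100.0 * v / total for fa, v in fa_percent.items()}

sample = {"16:0": 20.0, "18:1w9": 10.0, "EPA": 15.0, "DHA": 5.0}
rescaled = renormalize(sample)
print(round(sum(rescaled.values()), 6))  # -> 100.0
```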
For the data analysis, we used individual FA and FA groups that provided information relevant to the research questions we addressed: groups of SAFA, MUFA, and PUFA were used to indicate overall changes in zooplankton FA composition across the climate-productivity and lake-size gradients. Individual LC-PUFA (i.e., EPA, ARA, DHA) were used because of their importance for the physiology and fitness of organisms. We also included individual FA biomarkers for different algal groups, that is, 16:1ω7c (cis-palmitoleic acid) for diatoms and 18:4ω3 (SDA) for cryptophytes (Taipale et al. 2013), to indicate changes in algal resource use by zooplankton across the gradients. The ratio between omega-3 and omega-6 FA (ω3/ω6) was additionally used to reflect the overall trophic support by algae vs. terrestrial organic matter for zooplankton (Hixson et al. 2015; Taipale et al. 2015). Zooplankton FA composition may vary with their total lipid content (e.g., Hiltunen et al. 2015; Grosbois et al. 2017). However, zooplankton lipid content data are not available from the individual studies, so we were unable to test for the zooplankton response in lipid content across the climate-productivity and lake-size gradients, or for its associated effects on zooplankton FA.
Data analyses
FA percentage data were logit-transformed prior to multivariate analysis for normal-distribution approximation. The ω3/ω6 ratio, along with all environmental variables, including TP, water color, and mean annual temperature, was log10-transformed prior to analysis. The study lakes differed in elevation, but elevation was not included as an explanatory variable of zooplankton FA composition due to its high collinearity with TP (r = −0.70; Supporting Information Fig. S1) and temperature (r = −0.71; Supporting Information Fig. S1), which both are mechanistic drivers of FA availability in seston (zooplankton food) (e.g., Müller-Navarra et al. 2000; Hixson and Arts 2016).
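The authors logit-transformed FA percentages in R; a minimal Python equivalent is sketched below. The epsilon guard against 0% and 100% values is a common convention, not a detail stated in the paper:

```python
import math

# Logit transform of a percentage, as used for normal-distribution
# approximation of proportion data. eps clamps 0/100% (our assumption).

def logit(p_percent, eps=1e-3):
    p = min(max(p_percent / 100.0, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

print(logit(50.0))  # -> 0.0 (50% maps to the logit midpoint)
```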
For testing zooplankton FA responses across the environmental gradients for cladoceran and copepod taxa, we first grouped the cladoceran grazers Bosmina spp. Baird, 1845 and Daphnia spp. O.F. Müller, 1785, based on their similar feeding ecology and FA compositions (Fig. 2A) and their presence in all study lake types (Fig. 2; Supporting Information Fig. S2B). Similarly, we pooled calanoid copepods from the family Diaptomidae (i.e., Eudiaptomus spp. Kiefer, 1932, Arctodiaptomus laticeps (G.O. Sars, 1863), and Mixodiaptomus laciniatus (Lilljeborg in Guerne & Richard, 1889)) into copepod grazers. The filter-feeding cladocerans Ceriodaphnia spp. Dana, 1853 and Holopedium gibberum Zaddach, 1855, as well as the predatory cladocerans Bythotrephes longimanus Leydig, 1860 and Polyphemus pediculus (Linnaeus, 1758) and the predatory copepods Heterocope spp. G.O. Sars, 1863, were present in relatively few lakes at either end of the gradient (Supporting Information Fig. S2B). Principal component analysis (PCA) was used to assess the environmental gradients based on the lakes' geographic and physicochemical data, including TP, water color, mean air temperature, lake elevation, and lake area. The first PCA axis (PC1) was strongly and positively correlated with temperature, water color, and TP (r > 0.7; Supporting Information Fig. S1), and explained 55.4% of the total variance in lake abiotic variables (Fig. 1B; Supporting Information Fig. S3). Thus, we used PC1 scores as a climate-productivity index (CPI), that is, a surrogate for the simultaneous effects of eutrophication, browning, and warming that are typical in northern lakes (e.g., Hayden et al. 2017; Keva et al. 2021). The CPI was then used in univariate analyses to test for zooplankton FA responses.
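The CPI construction above (PC1 scores from standardized lake variables) can be sketched with plain NumPy; the authors worked in R, and the lake values below are invented for illustration:

```python
import numpy as np

# Sketch of deriving a climate-productivity index (CPI) as PC1 scores from
# standardized lake variables. Rows = lakes; columns = temperature, water
# color, TP, elevation, lake area (toy values, not the study data).
X = np.array([
    [7.0, 0.01, 2.0, 900.0, 1.0],
    [10.0, 0.20, 10.0, 400.0, 0.5],
    [15.0, 0.10, 40.0, 20.0, 5.0],
    [14.0, 0.45, 15.0, 100.0, 0.3],
])

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize variables
U, s, Vt = np.linalg.svd(Z, full_matrices=False)   # PCA via SVD
pc1_scores = Z @ Vt[0]                             # CPI = PC1 scores
explained = s[0] ** 2 / np.sum(s ** 2)             # PC1 share of variance

print(pc1_scores.round(2))
print(round(float(explained), 2))
```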
We classified lake types based on the physicochemical variables using k-means clustering and the Calinski-Harabasz criterion to determine the optimal number of partitions. K-means clustering yielded an optimum of three lake groups (k = 3, calinski = 39.9) that we identified as Alpine Oligotrophic, Brown, and Lowland Eutrophic lakes (Fig. 1). The Alpine Oligotrophic lakes are cold, low in nutrients and color, and encompass a broad range in size (Fig. 1; Table 1; Supporting Information Fig. S4). The Brown lakes are generally warmer, small (< 2 km²), and with intermediate nutrient concentrations, while the Lowland Eutrophic lakes have the highest TP concentrations and low to intermediate water color, with temperatures similar to those of the Brown lakes (Fig. 1; Table 1; Supporting Information Fig. S4).
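The lake-type classification step (k-means plus the Calinski-Harabasz criterion to pick k) can be illustrated with a compact pure-NumPy sketch. This is not the authors' R workflow, and the toy data below merely stand in for the standardized lake variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, n_iter=50):
    # Deterministic init: spread initial centers along the first coordinate.
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = [X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
               for j in range(k)]
        centers = np.array(new)
    return labels, centers

def calinski_harabasz(X, labels, centers):
    n, k = len(X), len(centers)
    overall = X.mean(axis=0)
    between = sum((labels == j).sum() * ((c - overall) ** 2).sum()
                  for j, c in enumerate(centers))
    within = sum(((X[labels == j] - c) ** 2).sum()
                 for j, c in enumerate(centers))
    return (between / (k - 1)) / (within / (n - k))

# Three well-separated toy clusters; the criterion should recover k = 3.
X = np.vstack([rng.normal(m, 0.1, size=(10, 2)) for m in (0.0, 3.0, 6.0)])
scores = {k: calinski_harabasz(X, *kmeans(X, k)) for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)
print(best_k)  # -> 3
```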
The effects of environmental variables and genus on FA variation of zooplankton were tested using redundancy analysis (RDA) with forward selection. Zooplankton genus, temperature, water color, TP, and lake area were used as explanatory variables. We selected the best model based on the lowest Akaike information criterion (AIC). RDA was then applied separately for cladoceran (Bosmina and Daphnia) and copepod grazers (diaptomids: Arctodiaptomus, Eudiaptomus, and Mixodiaptomus), excluding genus as an explanatory variable. To avoid bias, the other genera were excluded from these analyses of genus-specific FA responses across the environmental gradients.
Highly covarying variables were identified as those with variance inflation factors > 2.5 (O'Brien 2007), and only one of each set of collinear variables was retained for subsequent analysis. The statistical significance of the selected independent variables in the RDA was determined using permutation tests (n.perm = 999) on their marginal effects at α = 0.05. Differences in the FA composition among zooplankton groups were tested using PERMANOVA, and the multivariate homogeneity of group dispersions in FA variation was tested using the "betadisper" function in R. Linear regressions were used to investigate the changes in FA of copepod and cladoceran groups (i.e., including predators and rare taxa along the gradients) across the CPI and lake-size gradients. To simplify visualization, we present regressions of physiologically important FA (i.e., DHA, EPA, ARA) and other FA that correlated with the CPI and lake-area gradients in either copepods or cladocerans (r > 0.5, p < 0.05). Multivariate analyses were performed using the vegan package version 2.6-4 (Oksanen et al. 2022), and all analyses were performed in R version 4.2.0 (R Core Team 2022).
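The collinearity screen above uses variance inflation factors, where VIF_j = 1 / (1 − R²_j) and R²_j comes from regressing predictor j on the remaining predictors. A self-contained sketch (our own code; toy data with one deliberately collinear pair):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)   # nearly a copy of x1 (collinear)
x3 = rng.normal(size=n)                   # independent predictor
X = np.column_stack([x1, x2, x3])

def vif(X):
    """Variance inflation factor for each column of X."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

print(vif(X).round(1))  # x1 and x2 far above the 2.5 cutoff; x3 near 1
```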
Results
The study lakes encompassed a broad range of environmental conditions, from ultra-oligotrophic to eutrophic (mean TP < 1.0-42.3 μg P L⁻¹), clear-water to brown-colored (< 0.005-0.51 Abs420), and with summer (June-September) mean temperatures ranging from 6.8°C to 15.9°C (Table 1).
Genus explained most of the variation in FA composition among zooplankton (54.1%) according to the RDA (Table 2, model A), with a distinct separation between cladoceran and copepod genera that was characterized by high MUFA and high DHA, respectively (Fig. 2). The RDA also identified temperature, lake area, and water color as significant explanatory variables of the zooplankton FA composition (Fig. 2B; Table 2, model A), but these contributed only 8.0%, 2.5%, and 1.0% of the variation, respectively. High temperature and water color were associated with high SAFA levels in zooplankton, while larger lake areas were associated with higher levels of PUFA, such as SDA, EPA, and ARA (Fig. 2B).
Bosmina and Daphnia did not differ in FA composition (PERMANOVA, F1,28 = 0.599, p = 0.543) and showed highly similar responses across the environmental gradients (Fig. 2; Supporting Information Fig. S2). A separate RDA conducted for cladoceran grazers (i.e., Bosmina and Daphnia) selected temperature, lake area, and water color as the main explanatory variables (Table 2, model B), which together explained 58% of the FA variation. Temperature alone explained the largest portion of the cladoceran grazers' FA variation (28.0%) (Table 2, model B). Lake area and water color explained 4.1% and 3.5% of the FA variation of cladoceran grazers, respectively, although permutation tests showed that their effects on cladoceran FA were not significant (Table 2, model B). Increasing temperature and water color were associated with higher SAFA in the cladocerans, at the expense of lower PUFA, particularly EPA, while larger lake area was associated with higher SDA and higher ω3/ω6 (Fig. 3A,B).
The separate RDA for copepod grazers (i.e., Eudiaptomus, Mixodiaptomus, and Arctodiaptomus) selected water color and lake area in the best model, which explained 57% of the copepod FA variation (Table 2, model C). For copepod grazers, lake area explained slightly more of the FA variation (22.5%) than did water color (18.9%). Increasing lake area was associated with higher SDA, but lower EPA and DHA, in copepod grazers (Fig. 3C,D). Responses of copepod grazers across the climate-productivity gradient differed from those of cladoceran grazers, as increasing water color was associated with an increase in EPA and a decrease in MUFA in copepod grazers (Fig. 3B,D).
Univariate regressions of selected FA also showed that the cladoceran and copepod groups (including all taxa) differed in their FA responses with increasing CPI (Fig. 4). Among the 16 selected FA, cladocerans showed strong declines in PUFA (from 60% to 7%), mainly due to declines in EPA (from 23% to 0%), ARA (from 10% to 1%), and DHA (from 3% to 0%), with concurrent increases in MUFA (from 15% to 32%) and SAFA (from 25% to 61%) across the increasing CPI gradient. Copepods also increased in SAFA (from 33% to 38%) in response to increasing CPI. Conversely, copepods showed increases in EPA (from 9% to 17%) and ARA (from 2% to 6%) and marginally significant decreases in MUFA (from 13% to 7%) as CPI increased (Fig. 4), while there were no trends in DHA and PUFA.
Discussion
Our analysis of a large dataset of different lake types across environmental gradients shows that the effects of environmental change on zooplankton FA composition are highly dependent on their taxonomy. In line with prediction (1), genus-level taxonomy explained a major share of FA variation in zooplankton (54%) despite the wide range of environmental conditions addressed (Table 1). This implies that the effects of environmental change on zooplankton FA composition are strongly mediated by zooplankton community assembly (Bergström et al. 2022). However, we also found contrasting FA responses between the major zooplankton groups, that is, cladocerans and copepods, across the environmental gradients (Figs. 3-5). This is best exemplified by the dramatic decrease in cladoceran PUFA with increasing CPI (from 60% to 20%), which supports prediction (2), compared to the stable share of PUFA in copepods (ca. 60%) (Fig. 4B). This suggests that decreases in the nutritional quality of cladocerans with climate change have potentially negative consequences for lake food webs. We found that lake size contributed to changes in zooplankton FA, which could be related to higher trophic support of PUFA-rich algae (i.e., cryptophytes) for zooplankton in larger lakes, as depicted by the increase in the cryptophyte FA biomarker (SDA) in both cladocerans and copepods (Fig. 5A, supporting predictions (3) and (4)). This finding was further corroborated by the positive relationship between cryptophyte relative biovolume in phytoplankton and lake size for a large set of lakes across the whole of Sweden (Supporting Information Fig. S6). Therefore, we infer that the negative effects of climate change on zooplankton nutritional quality for fish are stronger in smaller lakes.
Table 2.
Best redundancy analysis (RDA) models of the effects of taxonomic group and/or environmental variables on the zooplankton FA composition. Independent variables used for RDA forward selection were taxonomy (at genus level; model A) and temperature, TP, water color, and lake area (models A-C). Marginal effects of the selected independent variables were tested using 999 permutations.
Underlying drivers of FA composition in copepods and cladocerans differed
Temperature explained most of the FA variation of cladoceran grazers, while lake area and water color together explained most of the FA composition in copepod grazers (Figs. 3B,D, 4). These findings agree with a spatial study of Swedish subarctic and boreal lakes, where Lau et al. (2021) found that EPA concentrations in cladocerans were most responsive to temperature gradients, whereas EPA and ARA concentrations in calanoid copepods also responded to the specific ultraviolet absorbance of water (i.e., a proxy for water color and DOC aromaticity; Weishaar et al. 2003) (Fig. 3B,D). Warming and browning may thus not only affect zooplankton FA composition in northern lakes (Hiltunen et al. 2015; Keva et al. 2021), but also have different impacts on cladocerans and copepods. Lau et al. (2021) also found that the nitrogen (N) to phosphorus (P) ratio was an important predictor of copepod FA, as copepods have a higher demand for nitrogen than cladocerans. Atmospheric N deposition has historically been higher in southern than in northern and central Sweden (Ferm et al. 2019), resulting in a latitudinal gradient in lake N:P ratios (Elser et al. 2009) (Fig. 1). Therefore, we do not exclude the possibility that differences in lake water N:P ratios may explain some of the FA variation in copepods in our dataset.
Effects of the climate-productivity gradient
In line with prediction (2), cladocerans decreased in PUFA, but increased in MUFA and SAFA, in warmer, browner, and/or more eutrophic lakes (Fig. 4). The observed decrease in PUFA was mostly attributed to EPA, ARA, and DHA, which together led to drastic decreases from ca. 40% to 2% LC-PUFA in response to increasing CPI (Fig. 4). Lau et al. (2021) found a similar, yet more moderate, decrease in EPA + DHA concentrations of cladocerans with increasing temperature, which may be due to their narrower latitude and elevation gradient (60-68°N, 227-590 m a.s.l.) than in our study (56-68°N, 1-952 m a.s.l.). The steep declines in EPA and ARA in cladocerans in our study likely resulted from differences in the dietary seston composition (DeMott 1989; Sterner 1989). Warming- and browning-induced increases in nutrient concentrations can promote blooms of green algae and cyanobacteria (Taipale et al. 2019; Keva et al. 2021), which are devoid of LC-PUFA such as EPA and ARA (Ahlgren et al. 1990; Napolitano 1999). Browning additionally increases the trophic support of bacterial production and terrestrially derived detritus for zooplankton (Berggren et al. 2014), both of which lack LC-PUFA (Brett et al. 2009; Taipale et al. 2018). We hypothesize that cladoceran grazers incorporate such dietary FA changes due to their largely non-selective feeding (Brett et al. 2009; Taipale et al. 2015, 2018), and thereby become highly sensitive to ongoing warming, browning, and eutrophication in northern lakes. In contrast to cladocerans, PUFA and DHA in copepods were not affected across the climate-productivity gradient, which does not support prediction (2) and is similar to the findings of Gladyshev et al. (2015), who showed smaller PUFA differences in copepods than in cladocerans between warm and cold lakes. Furthermore, the magnitude of increase in SAFA across the CPI gradient was seven times lower in copepods than in cladocerans (Fig. 4). These findings suggest that copepods have a strong ability to regulate FA composition irrespective of changes in environmental conditions and seston composition, either by selective feeding on available PUFA-rich food sources (DeMott 1989; Sterner 1989) and/or by internal FA metabolism (Ravet et al. 2010). Copepod grazers are also able to feed on microfauna in the pelagic zone (e.g., ciliates and protozoans) (Karlsson et al. 2007; Kunzmann et al. 2019) and on meiofauna in benthic habitats in winter (Muschiol et al. 2008), when they spend their copepodite stages in surficial sediments (Goedkoop and Johnson 1996). Trophic upgrading by microfauna and meiofauna, that is, the conversion of dietary precursor FA to LC-PUFA, may therefore contribute to PUFA enrichment of food resources for copepods (Martin-Creuzburg et al. 2005). Alternatively, predatory copepods (e.g., Heterocope) may benefit from the selective accumulation of LC-PUFA in food webs (Persson and Vrede 2006; Strandberg et al. 2015).
Fig. 3.
Panels (A) and (C) show the ordination of zooplankton samples colored by lake type (Alpine Oligotrophic in blue, Brown in brown, and Lowland Eutrophic in green). Ellipses indicate 95% confidence limits of group centroids for individual lake types with n > 2. Panels (B) and (D) show the eigenvectors of the explanatory environmental variables and the zooplankton FA composition (abbreviations are the same as in Fig. 2). Dashed arrows show variables selected by forward selection that are non-significant. Variance explained (%) by the RDA axes is indicated in parentheses.
Our finding that both EPA and ARA in copepods moderately increased across the CPI gradient, particularly with increasing water color, contrasted with the observed EPA and ARA decreases in cladocerans (Fig. 4). Johansson et al. (2016) showed for seven of our lakes (lakes 1, 3-8; Supporting Information Fig. S3) that the increase in EPA and ARA accumulation in the copepod Eudiaptomus gracilis was related to the intensity and duration of blooms of the flagellate Gonyostomum semen. With two additional lakes (total N = 9), we similarly found positive correlations of copepod EPA and ARA with the relative biovolume of G. semen in phytoplankton (Supporting Information Fig. S7A,D). G. semen is rich in EPA (Gutseit et al. 2007; Taipale et al. 2013), but most cladocerans are unable to feed on G. semen due to its large cell size and trichocyst defense mechanisms (Johansson et al. 2013). Copepod grazers such as E. gracilis can feed on and obtain abundant EPA from G. semen at high rates (Johansson et al. 2013). Therefore, more flexible feeding strategies and the capability to feed more selectively and on larger particles likely make copepods less susceptible than cladocerans to climate-induced changes in seston composition and food quality.
The role of lake size
Our results revealed that lake size had a strong effect on zooplankton FA composition. The cryptophyte biomarker SDA increased by ca. 10-15% in both copepod and cladoceran grazers across our lake-size gradient from 0.11 to 65 km², supporting prediction (4). These findings were reinforced by positive correlations between SDA and the relative biovolume of cryptophytes in phytoplankton in a subset of our study lakes (N = 17) (Supporting Information Fig. S7H) and by the increases in relative biovolume of cryptophytes in phytoplankton with increasing lake area for 102 monitoring lakes across Sweden (Supporting Information Fig. S6). Cryptophytes are rich in LC-PUFA, including DHA and EPA (Ahlgren et al. 1990; Napolitano 1999). However, the increased trophic support from cryptophytes did not necessarily result in higher LC-PUFA in zooplankton. For instance, in copepods, LC-PUFA (i.e., EPA, DHA, and ARA) moderately decreased with lake size (by 10%, 7%, and 3%, respectively) (Figs. 3B, 4), which does not align with the expected increases in LC-PUFA in phytoplankton (Supporting Information Fig. S7C,F). These results may be partly influenced by the increase in EPA and ARA with increasing CPI and water color, as the browner lakes were generally smaller than the clear-water lakes in our study (Table 1; Supporting Information Figs. S1, S4). However, the decrease in DHA in copepods was independent of the CPI and water color, and may thus be a direct effect of lake area on FA regulation. Relative decreases of LC-PUFA have been linked to increases in 18C-PUFA as lipid stores in copepods prior to overwintering (Hiltunen et al. 2015; Grosbois et al. 2017). Thus, the observed decreases of LC-PUFA and increases of SDA in copepods with increasing lake area could be related to higher investment in energy storage in larger and relatively cooler lakes. Based on the current correlative results, however, we are unable to unravel the mechanisms (e.g., dietary, metabolic, or life-history processes) that underlie the observed effects of lake size on copepod LC-PUFA. Yet, our findings of higher phytoplankton food quality in larger lakes, through the predominance of cryptophytes, suggest that lake size moderates the negative impacts of climate change on zooplankton food quality for fish.
Implications for lake food webs
Our results show that warming, eutrophication, and browning drastically decrease LC-PUFA, that is, the nutritional quality, of cladocerans in northern lake food webs. This decrease of LC-PUFA in cladocerans, combined with increases in cladoceran predominance over copepods (Hayden et al. 2017; Bergström et al. 2022), potentially underlies the observed decreases in LC-PUFA content of zooplankton communities in northern lakes that are warmer, browner, and more eutrophic (Keva et al. 2021). Because LC-PUFA are key for trophic transfer efficiency in aquatic food webs (Müller-Navarra et al. 2000; Ahlgren et al. 2009) and fish productivity (Taipale et al. 2018), our results imply that climate change likely impairs pelagic trophic transfer efficiency in northern lakes. Our results also show that PUFA in cladocerans and copepods combined were highest in Alpine Oligotrophic lakes. This highlights the importance of Alpine Oligotrophic lakes for providing high-quality food for higher trophic levels, while also pinpointing their susceptibility to environmental change, particularly if such lakes are small. Rapid climate change in the Arctic/alpine landscape (Schindler and Smol 2006; Allan et al. 2021) thus likely compromises the biochemical food quality of zooplankton, with strong repercussions for Arctic/alpine lake food webs.
Conclusion
Our study shows that both ecological and phylogenetic differences among zooplankton taxa underlie their contrasting, taxon-specific FA responses to environmental change. These responses are modulated by lake size, whereby smaller lakes are likely more susceptible to simultaneous warming, browning, and eutrophication due to lower abundances of PUFA-rich phytoplankton taxa such as cryptophytes. These taxon-specific FA responses, in conjunction with expected zooplankton-community shifts towards more sensitive cladoceran taxa, will reduce zooplankton food quality and, ultimately, the trophic transfer of high-quality PUFA to planktivorous fish in northern lakes, which are expected to become warmer, browner, and/or more eutrophic in the face of global change.
Fig. 1 .
Fig. 1. (A) Locations of the 32 study lakes. Lake area is represented by the size of the circle, and panels (A1) and (A2) show a higher resolution of the annotated regions. (B) Principal component analysis (PCA) of the lakes' physicochemical variables (Color, water color; Area, lake area; Elev, elevation; T, temperature; TP, total phosphorus). Ellipses indicate 95% confidence limits of group centroids for individual lake types classified by k-means clustering. Lakes are numbered in increasing order of latitude. Individual lake names and numbers are shown in Supporting Information Fig. S3.
Table 1 .
Geographic and physicochemical characteristics of the different lake types. Area, lake area; Color, absorbance at 420 nm; TP, total phosphorus concentration; T, mean annual air temperature; N = number of lakes.
"Environmental Science",
"Biology"
] |
Enhanced optical path and electron diffusion length enable high-efficiency perovskite tandems
Tandem solar cells involving metal-halide perovskite subcells offer routes to power conversion efficiencies (PCEs) that exceed the single-junction limit; however, reported PCE values for tandems have so far lain below their potential due to inefficient photon harvesting. Here we increase the optical path length in perovskite films by preserving smooth morphology while increasing thickness, using a method we term boosted solvent extraction. Carrier collection in these films, as made, is limited by an insufficient electron diffusion length; however, we further find that adding a Lewis base reduces the trap density and enhances the electron diffusion length to 2.3 µm, enabling a 19% PCE for 1.63 eV semi-transparent perovskite cells having an average near-infrared transmittance of 85%. The perovskite top cell combined with a solution-processed colloidal quantum dot:organic hybrid bottom cell leads to a PCE of 24%, while coupling the perovskite cell with a silicon bottom cell yields a PCE of 28.2%.
The authors report 28% power conversion efficiency for a four-terminal perovskite-Si tandem and 24% efficiency for a perovskite-quantum dot solar cell tandem. At first these numbers seem extraordinary. However, they are less impressive when one realizes how small the solar cells are. There is a graph in this paper showing record efficiencies. For many of those data points, the solar cell was a full square centimeter in size. The supplemental section says that the size of the quantum dot solar cell was 0.049 cm^2. The perovskite cell had an area of 0.053 cm^2. The "4T perovskite-Si" tandem is not really a tandem at all. The authors put a much larger perovskite solar cell stack on top of the Si cell and used the stack as a filter while measuring the Si cell efficiency. They then used the small perovskite cell to get an efficiency for the perovskite cell. Now that prototypes of 4T tandems have been demonstrated repeatedly by several research teams, I think the time has come for the community to stop obtaining numbers this way. It's time to start building real 4T tandems with matched areas. I recently reviewed a manuscript that recommends a proper protocol for measuring efficiencies of 4T tandems. I have no problem with the authors reporting the numbers for small cells, but I would also like to see them report the efficiency for a 1 cm² device. They should let people know how much the efficiency drops because of the additional series resistance in the electrode. Practical solar panels cannot be made with solar cells that are only 2 mm wide.
It is not true that "The 4T tandem arrangement offers a higher theoretical PCE." The theoretical limits for 2T and 4T tandems are the same. In the theoretical limit, the bandgaps for the 2T tandem would be the ideal ones.
On page 3 the authors state "When we increased perovskite precursor solution concentration, crystallization is less controllable before anti-solvent dripping because of the high supersaturation of precursor solution and fast perovskite reaction rate during subsequent thermal annealing33,34. As a result, perovskite films form with a rough and wrinkled surface (Figure 1a-c), in line with previous reports23,24." They have not accurately summarized the explanation provided in the papers they cite. The wrinkling occurs because compressive stresses arise during the film formation process. I do not agree that roughness arises because it is hard to control the crystallization. More precise wording is needed.
While the reported PCE is very impressive, the major conclusions of the work have been reported previously. Although the demonstrated boosted-solvent-extraction technique creates smooth and thick perovskite films, addition of the Lewis base is key to the electronic performance of the layer. However, the Lewis acid-base adduct approach to improving the properties of the perovskite has already been demonstrated (ref 34). The authors report reduced trap densities in the perovskite, thus enabling longer diffusion lengths (2.3 µm). However, very little data is presented to support these claims. I am afraid this manuscript does not provide the sort of significant conceptual advance, nor represent a sufficiently striking advance, to justify publication in Nature Communications. For these reasons, I do not believe that the manuscript meets the requirements for publication in Nature Communications.
Reviewer #3 (Remarks to the Author): In this manuscript, the authors reported a four-terminal tandem perovskite-silicon solar cell with a record power conversion efficiency exceeding 28%. The result was enabled by the combination of a thicker perovskite layer, which the authors achieved by modifying the standard spinning and antisolvent procedures, and a defect passivation additive that results in a larger electron diffusion length. I found the work interesting but not sufficiently novel for publication. Besides a well-written story, this work combines two optimisations of the perovskite film deposition and then applies them in a four-terminal tandem concept. Therefore, I see it more as an incremental than a step improvement in the field. In conclusion, I cannot recommend publication owing to lack of novelty.
The primary novel concept in this manuscript is the process by which a thick metal halide perovskite film is formed by first spinning the precursor solution at a low spinning rate and then increasing the spinning rate just before the anti-solvent treatment. In this way it is possible both to obtain a thick film and to spin off the anti-solvent rapidly. There are also exciting advances in making a low bandgap quantum dot solar cell more efficient. I think these discoveries merit publication in Nature Communications, but would like to see the authors address a number of concerns expressed below.
We thank the reviewer for the much-valued feedback below on how to increase the impact of the work.
The authors report 28% power conversion efficiency for a four-terminal perovskite-Si tandem and 24 % efficiency for a perovskite-quantum dot solar cell tandem. At first these numbers seem to be extraordinary. However, the numbers are less impressive when one realizes how small the solar cells are.
There is a graph in this paper showing record efficiencies. For many of those data points, the solar cell was a full square centimeter in size. The supplemental section says that the size of the quantum dot solar cell was 0.049 cm^2. The perovskite cell had an area of 0.053 cm^2.
We have added the device active area into the efficiency summary in Table S1. I have no problem with the authors reporting the numbers for small cells, but I would also like to see them report the efficiency for a 1 cm² device. They should let people know how much the efficiency drops because of the additional series resistance in the electrode. Practical solar panels cannot be made with solar cells that are only 2 mm wide.
We have fabricated semi-transparent perovskite front cells with active areas of 1.95 cm².
We measured the device with masks of 0.49 cm² (17.3% PCE), 1 cm² (16.5% PCE), and 1.68 cm² (15.9% PCE), respectively. As seen in the table below, the FF decreases as the device area increases, which leads to drops in PCEs. The increased series resistance arises due to the enlarged area of the transparent conductive oxide (TCO) electrode. We now include a comment in the revised paper that, for devices exceeding 1 cm², it will in future become important to incorporate metal fingers/busbars.
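The qualitative trend described here (FF and PCE falling as the masked area grows) can be illustrated with a simple lumped model: for a square cell whose current is collected along one edge of the TCO, the fraction of power dissipated in the sheet scales with the cell width squared. This is only a sketch; the sheet resistance and maximum-power-point values below are assumed for illustration and are not numbers from the manuscript.

```python
# Back-of-envelope model of resistive loss in the TCO electrode of a square
# cell collected along one edge. All parameter values are illustrative
# assumptions, not values from the manuscript.

def tco_loss_fraction(area_cm2, r_sheet_ohm_sq=10.0, j_mp_a_cm2=0.020, v_mp_v=0.9):
    """Fractional power dissipated in the TCO sheet (standard w^2/3 result)."""
    w = area_cm2 ** 0.5  # side length of a square cell, cm
    return j_mp_a_cm2 * r_sheet_ohm_sq * w ** 2 / (3.0 * v_mp_v)

for area in (0.49, 1.0, 1.68):
    print(f"{area:.2f} cm^2 -> ~{100 * tco_loss_fraction(area):.1f}% relative power loss")
```

With these assumed parameters, the relative loss grows several-fold between the smallest and largest mask, consistent with the direction of the measured PCE drop; metal fingers/busbars shorten the collection path and suppress this term.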
On page 10, we have added: "We also fabricated a 1 cm² tandem, which yields a PCE of 25.7%. This is limited by fill factor due to increased series resistance from TCEs in the semi-transparent PSC (Figure S10), and it will in future become important to incorporate metal fingers/busbars." In the supplementary information, we added Figure S10. It is not true that "The 4T tandem arrangement offers a higher theoretical PCE." The theoretical limits for 2T and 4T tandems are the same. In the theoretical limit, the bandgaps for the 2T tandem would be the ideal ones.
On page 1, we have revised the statement to read: "The 4T tandem arrangement offers a broader bandgap selection window for its constituent cells." The authors determined grain boundary size by simply looking at scanning electron microscopy images.
Many people have asserted that there are actually multiple grains between the lines that are visible in the micrographs. The authors should make a comment on this subject. They might use words such as "apparent grain size." A nice paper that was presented recently on this subject is 10.1016/j.joule.2019.09.001.
We agree with the reviewer that "apparent grain size" is a more appropriate term. We have now included the relevant paper and write apparent grain size in the revised manuscript.
On page 7, we now write: " Figure 3e shows the statistical distribution of the apparent lateral grain size from SEM of perovskite films fabricated under different conditions. Urea-treated 700 nm thick perovskite films exhibit a larger apparent grain size (700 nm-Large), averaging 1.3 µm compared to 0.6 µm without any additive (700 nm-Small)." The authors should explain why zirconium-doped In2O3 is superior to ITO.
On page 8, we now write: "One of the key enablers of high NIR transmittance is replacing commercial ITO with previously-developed highly-conductive Zr-doped In2O3 (IZrO) TCOs, whose parasitic free carrier absorption is suppressed, for a given free carrier density, by virtue of its enhanced carrier mobility (44)."
Figure S6 | Optical transmittance of the semi-transparent perovskite devices with commercial ITO substrate vs. Zr-doped In2O3 (IZrO) substrate.
Reviewer #2 (Remarks to the Author): The authors fabricate 1.63 eV perovskite solar cells with thick absorber layers that also have smooth morphology. This results in 19% power conversion efficiency (PCE) for a semi-transparent perovskite solar cell and in turn enables perovskite-silicon tandem solar cells with a PCE of 28.2%.
While the reported PCE is very impressive, the major conclusions of the work have been reported previously.
Although the demonstrated boosted-solvent-extraction technique creates smooth and thick perovskite films, addition of the Lewis base is key to the electronic performance of the layer. However, the Lewis acid-base adduct approach to improve the properties of the perovskite has already been demonstrated before (ref 34).
Semi-transparent perovskite top cells in tandem suffer from inadequate absorption of above-bandgap photons, which is due primarily to the lack of effective routes to increase the perovskite thickness while retaining long carrier diffusion length.
We show herein that increasing precursor solution concentration compromises device performance because of rough surface morphology. We sought therefore to develop a novel boosted-solvent-extraction (BSE) technique to enable high Jsc and PCE with negligible hysteresis.
This report is the first to demonstrate that the combination of optically thick perovskite film and the Lewis acid-base adduct approach can benefit perovskite tandems.
The authors report reduced trap densities in the perovskite, thus enabling longer diffusion lengths (2.3 µm). However, there is very little data presented in this regard to support these claims. Tandem photovoltaics that involve perovskite cells offer a pathway to increased power conversion efficiencies (PCEs). The perovskite top cells in tandems suffer from inadequate absorption of above-bandgap photons, and this has so far kept published tandem performance below 28% PCE.
In this manuscript, we sought to advance perovskite-based tandem solar cells by a general design strategy.
Its implementation enables efficient, transparent perovskite top cells and spectrum-tailored bottom cells for tandem applications. Specifically, we devised a novel fabrication routine to fabricate optically thick perovskite film with long electron diffusion length, and a new organic/colloidal quantum dot (CQD) hybrid for enhanced NIR spectral response.
For these reasons, I do not believe that the manuscript meets the requirements for publication in Nature Communications.
Reviewer #3 (Remarks to the Author): In this manuscript, the authors reported a four-terminal tandem perovskite-silicon solar cell with a record power conversion efficiency exceeding 28%. The result was enabled by the combination of a thicker perovskite layer, which the authors achieved by modifying the standard spinning and antisolvent procedures, and a defect passivation additive that results in a larger electron diffusion length.
I found the work interesting but not sufficiently novel for publication. Besides a well-written story, this work combines two optimisations of the perovskite film deposition and then applies them in a four-terminal tandem concept. Therefore, I see it more as an incremental than a step improvement in the field.
In conclusion, I cannot recommend publication owing to lack of novelty.
Tandem photovoltaics that involve perovskite cells offer a pathway to increased power conversion efficiencies (PCEs). The perovskite top cells in tandems suffer from inadequate absorption of above-bandgap photons, and this has so far kept published tandem performance below 28% PCE.
We show herein that increasing precursor solution concentration compromises device performance because of rough surface morphology. We sought therefore to develop a novel boosted-solvent-extraction (BSE) technique to enable high Jsc and PCE with negligible hysteresis.
This report is the first to demonstrate that the combination of optically thick perovskite film and the Lewis acid-base adduct approach can benefit perovskite tandems.
In this manuscript, we sought to advance perovskite-based tandem solar cells by a general design strategy.
Its implementation enables efficient, transparent perovskite top cells and spectrum-tailored bottom cells for tandem applications. Specifically, we devised a novel fabrication routine to fabricate optically thick perovskite film with long electron diffusion length, and a new organic/colloidal quantum dot (CQD) hybrid for enhanced NIR spectral response.
Metabolic Profiling of Glycerophospholipid Synthesis in Fibroblasts Loaded with Free Cholesterol and Modified Low Density Lipoproteins*
Currently, the detailed regulation of the major pathways of glycerophospholipid synthesis upon cholesterol loading is largely unknown. Therefore, detailed lipid metabolic profiling using stable isotope-labeled choline, ethanolamine, and serine was performed by quantitative electrospray ionization tandem mass spectrometry (ESI-MS/MS) in free cholesterol (FC)-, oxidized LDL (Ox-LDL)-, and enzymatically modified LDL (E-LDL)-loaded primary human skin fibroblasts. As previously described, an adaptive induction of phosphatidylcholine (PC) synthesis via CDP-choline was found upon FC loading. In contrast to PC, CDP-ethanolamine-mediated phosphatidylethanolamine (PE) synthesis was inhibited by FC incubation. Furthermore, FC induced a shift toward polyunsaturated PE and PC species, which was mediated primarily by PE biosynthesis but not PE remodeling, whereas PC species were shifted mainly by fatty acid (FA) remodeling of existing PC. Modified lipoprotein incubation revealed rather different effects on glycerophospholipid synthesis. E-LDL greatly enhanced PC synthesis, whereas Ox-LDL did not change PC synthesis. Addition of different free FAs (FFA), as major components of E-LDL, with and without FC coincubation clearly indicated incorporation of FFA into newly synthesized PC and PE species, as well as a role for FFA as an important driving force for PC synthesis. Because FC and FFA are known to affect lipid membrane properties including membrane curvature, these data support the view that CTP:phosphocholine cytidylyltransferase activity, and consequently PC synthesis, is regulated by modulation of membrane characteristics at the cellular level. In conclusion, the application of high throughput metabolic profiling of the major glycerophospholipid pathways by ESI-MS/MS is a powerful tool to unravel mechanisms underlying the regulation of cellular lipid metabolism.
Cell Culture-Fibroblasts were cultured in Dulbecco's modified Eagle's medium supplemented with L-glutamine, nonessential amino acids, and 10% fetal calf serum in a humidified 5% CO2 atmosphere at 37 °C. The experiments described were performed with cells at passages 7-14. Mycoplasma contamination of fibroblasts was routinely tested using the MycoAlert Mycoplasma Detection Assay (Cambrex, USA), and only negative-tested cells were used for experiments. For lipid analysis, cells were seeded into 6-well plates at a density of 80,000 cells per well. They were grown to confluence and then incubated in serine- and choline-depleted Dulbecco's modified Eagle's medium containing 2 mg/ml fatty acid-free bovine serum albumin supplemented with 50 µg/ml of [13C3]serine, [D4]ethanolamine, and [D9]choline chloride. In parallel, lipid loading was performed using 15 µg/ml FC, 40 µg/ml E-LDL, or Ox-LDL, respectively. At the indicated time points fibroblasts were rinsed twice with phosphate-buffered saline and lysed with 0.2% SDS.
Lipoprotein Preparation-LDL (d = 1.019-1.063 g/ml) from sera of normolipidemic volunteers was isolated and enzymatically modified as described previously (19) with slight modifications in the preparation of E-LDL. Briefly, for enzymatic modification, LDL was diluted to 2 mg/ml protein in phosphate-buffered saline. Enzyme treatment was performed with trypsin (6.6 µg/ml) and cholesteryl esterase (40 µg/ml) for 48 h at 37 °C. Oxidation of LDL was performed according to published protocols (20). Briefly, LDL was diluted to 1 mg/ml protein in phosphate-buffered saline and dialyzed against 5 µM Cu2+ (42 h, 4 °C). The modified lipoproteins were stored at 4 °C and used within a week.
Protein Determination-Protein concentrations were measured using bicinchoninic acid as described previously (21). Prior to lipid extraction, an aliquot of SDS-lysed fibroblasts was taken for protein determination.
Lipid Extraction-Lipids were extracted according to the procedure described by Bligh and Dyer (22) in the presence of not naturally occurring lipid species as internal standards. The chloroform phase was dried in a vacuum centrifuge and dissolved as described below for quantitative lipid analysis.
Mass Spectrometry-Lipids were quantified by ESI-MS/MS in positive ion mode as described previously (23-25). Samples were quantified by direct flow injection analysis using the analytical setup described by Liebisch et al. (24, 25). A precursor ion scan of m/z 184, specific for phosphocholine-containing lipids, was used for PC, SM (24), and lysophosphatidylcholine (LPC) (26). [D9]Choline-labeled lipids were analyzed by a precursor ion scan of m/z 193. Neutral loss scans of m/z 141 and m/z 185 were used for PE and phosphatidylserine (PS), respectively (23). Analogously, neutral loss scans were used for stable isotope-labeled [D4]PE (m/z 145) and [13C3]PS (m/z 188). FC and CE were quantified using a fragment ion of m/z 369 after selective derivatization of FC using acetyl chloride (25). Additionally, lipids present at low concentration were analyzed in a second run by selected reaction monitoring (SRM) to increase precision (especially for stable isotope-labeled species at early time points). Correction of isotopic overlap of lipid species as well as data analysis by self-programmed Excel macros was performed for all lipid classes according to the principles described previously (24).
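The isotopic overlap mentioned here arises because the two-13C (M+2) isotopologue of a lipid species falls on the monoisotopic peak of the species with one fewer double bond, 2 Da higher. The sketch below is a minimal illustration of such a correction, assuming a carbon-only binomial isotope model and a hypothetical 42-carbon PC pair; the authors' actual Excel-macro procedure follows Ref. 24 and is not reproduced here.

```python
from math import comb

P_13C = 0.0107  # natural abundance of carbon-13

def m_plus_2_fraction(n_carbons):
    """Intensity of the two-13C (M+2) isotopologue relative to the
    monoisotopic peak, from a carbon-only binomial isotope model."""
    return comb(n_carbons, 2) * P_13C ** 2 / (1.0 - P_13C) ** 2

def correct_type_ii_overlap(i_more_unsaturated, i_more_saturated, n_carbons):
    """Subtract the M+2 peak of the species with one more double bond from
    the monoisotopic peak of the species one double bond lower (2 Da apart)."""
    return i_more_saturated - i_more_unsaturated * m_plus_2_fraction(n_carbons)

# hypothetical counts: PC 34:2 at 1000, apparent PC 34:1 at 500, ~42 carbons
print(correct_type_ii_overlap(1000.0, 500.0, 42))
```

For a ~42-carbon PC, the M+2 isotopologue amounts to roughly 10% of the monoisotopic peak, so uncorrected spectra would noticeably inflate the more saturated species.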
To quantify all lipid classes analyzed, non-naturally occurring lipid species were used as internal standards. Assuming a similar analytical response for stable isotope-labeled and unlabeled species, labeled species were quantified using the internal standards and calibration lines described above. The quantitative values were related to the protein amount of the sample.
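Per species, the quantification step just described reduces to a one-point internal-standard calculation normalized to cell protein; a minimal sketch, with hypothetical signal and spike values:

```python
def nmol_per_mg_protein(analyte_counts, istd_counts, istd_nmol, protein_mg):
    """One-point internal-standard quantification normalized to cell protein,
    assuming equal analytical response for analyte and internal standard."""
    return analyte_counts / istd_counts * istd_nmol / protein_mg

# hypothetical values: analyte signal twice the standard's, 0.5 nmol
# standard spiked into the extract, 0.25 mg protein in the well
print(nmol_per_mg_protein(2.0e5, 1.0e5, 0.5, 0.25))
```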
Lipid Composition of E-LDL and Ox-LDL-To characterize and compare the lipid composition of LDL, E-LDL, and Ox-LDL, ESI-MS/MS was carried out. FC and CE represented the predominant lipid fraction in all three LDL preparations, with ~75 mol% of all analyzed lipids (Fig. 1). In contrast to native LDL, which contains about two-thirds of total cholesterol as CE, enzymatic modification of LDL with trypsin and cholesteryl esterase decreased the proportion of CE to about one-third. Mild oxidation of LDL did not significantly influence the FC fraction, whereas the proportion of CE decreased compared with native LDL. Analysis of Ox-LDL using a precursor ion scan of m/z 369 specific for CE (25) revealed peaks absent in native LDL or E-LDL (Fig. 1). These peaks likely arise from oxidative modification of the fatty acid moiety of CE (a detailed lipid analysis of LDL modifications will be the subject of a separate article).
Oxidation of LDL also affected polyunsaturated PC species, reducing the PC fraction from 15 mol% in native LDL to 11 mol% in Ox-LDL. Concomitantly, a strong rise of the LPC fraction was found in Ox-LDL compared with LDL (5-fold to 5 mol%; Fig. 1).
Cellular Lipid Level upon Lipid Loading-Primary human skin fibroblasts were loaded with FC, E-LDL, and Ox-LDL, and the lipid loading kinetics were determined by ESI-MS/MS. Cholesterol uptake occurred mainly in the first 24 h, and no major changes in cellular cholesterol levels were observed up to 72 h (data not shown). FC loading increased FC levels to 275%, whereas E-LDL and Ox-LDL increased cellular FC only to 165 and 143% (Table 1), respectively. In contrast to FC loading, where only a marginal increase to 130% was observed, modified lipoproteins induced cellular CE levels of almost 300% compared with unloaded controls (Table 1). Interestingly, FC loading did not increase total PC concentration and led to a remarkable decrease of 40% of the total PE level. Modified LDL slightly increased cellular PC levels (Table 1), which may be caused by the high content of PC in E-LDL and Ox-LDL (Fig. 1). PE and PS are only minor components of modified LDL (Fig. 1), fitting to the minor changes observed in cellular PE and PS levels upon E-LDL and Ox-LDL loading (Table 1).
Effects of Lipid Loading on [D9]PC-One major goal of the present study was to investigate the effects of FC and lipoprotein loading on cellular glycerophospholipid metabolism. Therefore, in parallel to lipid loading, stable isotope-labeled precursors were used to monitor the main pathways of glycerophospholipid metabolism (Fig. 2A). [D9]Choline, [D4]ethanolamine, and [13C3]serine labels were substituted in medium deficient for the natural compounds. Both unlabeled and stable isotope-labeled phospholipids were analyzed by ESI-MS/MS using specific scan types (see "Experimental Procedures"). As expected, newly synthesized PC was solely derived from the Kennedy pathway via CK, CT, and CPT, resulting in [D9]PC, but no [D4]PC derived from PE N-methylation was detected (data not shown). Although total PC levels were only marginally influenced (Table 1), pronounced changes were observed in the de novo synthesis of PC upon lipid loading (Fig. 3). Whereas Ox-LDL did not significantly change PC synthesis, FC incubation led to a 40% increase in PC synthesis compared with the unloaded control. E-LDL revealed an almost 2-fold increase of PC synthesis compared with control (Fig. 3).
To analyze whether the lipid species profile is also influenced, as suggested by Blom et al. (17), detailed species patterns of both undeuterated and deuterated [D9]PC species were investigated in loaded fibroblasts. FC loading induced only marginal changes in the species profile of undeuterated PC compared with control (Fig. 4A), but the [D9]PC species profile shifted to longer and more unsaturated species, mainly at the expense of PC 32:1 (increase of PC 36:4, PC 36:3, PC 36:2, PC 36:1, PC 38:5, PC 38:4; Fig. 4B). Calculation of the total species shift from control to FC-loaded fibroblasts revealed a significantly increased proportion of 13.5% for [D9]PC compared with 5.0% for undeuterated PC. However, it has to be taken into account that only 10% of total PC was labeled at 24 h (Table 2). Consequently, related to total PC, it appeared that the de novo synthesized PC contributed the minor proportion, 1.3%, compared with 4.5% for unlabeled PC, of the total species shift.
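The paper does not spell out how the total species shift is computed; one plausible definition, used here purely for illustration, is half the sum of absolute mol% differences between two profiles, so that moving 10 mol% from one species to another counts as a 10% shift. The profiles below are hypothetical.

```python
def total_species_shift(profile_a, profile_b):
    """Half the L1 distance between two species profiles given in mol%
    (an assumed definition; the paper does not state its formula)."""
    species = set(profile_a) | set(profile_b)
    return 0.5 * sum(abs(profile_a.get(s, 0.0) - profile_b.get(s, 0.0))
                     for s in species)

# hypothetical mol% profiles: 10 mol% moves from PC 32:1 to PC 36:4
control = {"PC 32:1": 20.0, "PC 36:4": 10.0, "PC 34:1": 70.0}
fc_loaded = {"PC 32:1": 10.0, "PC 36:4": 20.0, "PC 34:1": 70.0}
print(total_species_shift(control, fc_loaded))
```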
Similar to FC loading, the [D9]PC species profile showed more pronounced changes upon E-LDL incubation than the undeuterated PC species profile. Thus, a substantial shift was observed (Fig. 5).
The species pattern of undeuterated PE did not reveal major changes upon lipid loading (Fig. 6A). However, significant species shifts were observed upon FC and E-LDL incubation for [D4]PE (Fig. 6B, Fig. 7). Because the species pattern of [13C3]PS may also depend on a species shift in PC and PE as substrates for PS synthases (Fig. 2A), the species profile of [13C3]PS was investigated after 72 h incubation. However, even after 72 h of lipid loading no substantial shift was observed in the species pattern of unlabeled PS compared with control (Fig. 8A). The [13C3]PS species profile revealed only minor changes, with a slight decrease of [13C3]PS 36:1 upon lipid loading (Fig. 8B).
Loading of E-LDL Lipid Components and Their Effect on PL Synthesis-We were also interested in which E-LDL components were responsible for the observed effects on PL synthesis. Thus, E-LDL incubation was compared with lipids extracted from E-LDL and a mixture of FFA resembling the composition found in E-LDL (FFA1; 16:1/16:0/18:2/18:1/20:4 = 1/1/6/1/1 molar ratio, reflecting the esterase-digested CE fraction of E-LDL). Additionally, fibroblasts were incubated with palmitic acid (FFA2) to evaluate the difference between a saturated FFA and an FFA mixture including polyunsaturated FFA. Both FFA incubations were combined with FC loading to investigate the effect of FC on FFA incorporation into glycerophospholipids (Table 3).
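Splitting the total FFA concentration of the FFA1 mixture across its components follows directly from the 1/1/6/1/1 molar ratio; a small sketch using the 70 µM total given in the Table 3 legend:

```python
def component_concentrations(total_uM, molar_ratios):
    """Partition a total concentration across components by molar ratio."""
    ratio_sum = sum(molar_ratios.values())
    return {ffa: total_uM * r / ratio_sum for ffa, r in molar_ratios.items()}

# FFA1 mixture: 16:1/16:0/18:2/18:1/20:4 = 1/1/6/1/1 at 70 uM total
ffa1 = component_concentrations(70.0, {"16:1": 1, "16:0": 1, "18:2": 6,
                                       "18:1": 1, "20:4": 1})
print(ffa1)  # linoleic acid (18:2) dominates; the other four are equal
```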
Similar to the previous experiments, FC and E-LDL increased [D9]PC levels after 24 h of incubation by about 20% and more than 2-fold, respectively (Table 3). Because the main effects of lipid loading were observed on PC and PE synthesis, the species profiles of [D9]PC and [D4]PE were analyzed in detail (Fig. 9). The changes in species pattern induced by E-LDL were in accordance with the effects described previously, and incubation with lipids extracted from E-LDL closely resembled the changes in species pattern found upon E-LDL incubation (data not shown). Moreover, the shifts in species pattern observed after E-LDL loading were similar to those after FFA1 loading (Figs. 4, 6, and 9). Altogether, this clearly indicates an incorporation of FFA either delivered by E-LDL or directly. As expected, FFA1 and FFA2 reflect the FFA provided to the media, e.g. the strong increase of newly synthesized 32:0, 32:1, and 34:1 species, which points to cellular desaturation.
In summary, both PC and PE synthesis via the Kennedy pathway were stimulated by FFA supplementation. Whereas PC synthesis revealed a pronounced dependence on the type of FFA, PE synthesis was induced to a similar extent by saturated palmitic acid and a mixture containing (poly)unsaturated FFAs. Interestingly, addition of FC induced PC synthesis, whereas PE synthesis was strongly inhibited. Moreover, newly synthesized PC and PE displayed a shift in their species pattern toward polyunsaturated species upon FC incubation, which was independent of the exogenous FFA supply.
DISCUSSION
Previously established assays based on ESI-MS/MS for high throughput lipid quantification (24-26) were used to study glycerophospholipid metabolism in fibroblasts loaded with FC, E-LDL, and Ox-LDL, respectively. In general, the human skin fibroblasts used displayed PL de novo synthesis via the so-called Kennedy pathway, with direct ethanolamine and choline incorporation into [D4]PE and [D9]PC, as well as [13C3]PS synthesis via PS synthase 1/2 converting PC and PE (27), respectively. As expected, PE methylation was not observed, because a relevant contribution to PC synthesis has so far only been described for liver, retina, and brain (27-29). Additionally, no substantial PS decarboxylation forming PE was found, which may be caused by the cell type as well as culture conditions, especially the ethanolamine supplementation of the culture media.
Although the applied stable isotope labels allow accurate monitoring of the major glycerophospholipid synthesis pathways for PC, PE, and PS, it is not possible to directly assess fatty acid remodeling of PLs (Fig. 2B). Therefore, the species patterns of both unlabeled and labeled PLs were analyzed. Assuming that all de novo synthesized species are isotope-labeled, a shift in the unlabeled species pattern can be interpreted as fatty acid remodeling (Fig. 2B). The stable isotope-labeled species pattern results from biosynthesis (Fig. 2A) and potentially fatty acid remodeling of de novo synthesized species (Fig. 2B). This is especially important because it has been shown previously that FC loading induces a shift toward polyunsaturated species of PC and PE in the plasma membrane fraction of fibroblasts (17). Moreover, it is known that fatty acid remodeling by phospholipase and reacylation as well as transacylation pathways represents a major pathway for polyunsaturated PL synthesis (30, 31).
Thus, in accordance with previous studies, we could demonstrate an increase of PC synthesis upon FC loading in human skin fibroblasts (14). A novel finding of the present study was a pronounced down-regulation of PE synthesis upon FC loading. Similar to Blom et al. (17), a species shift toward polyunsaturated species was observed for both PC and PE. This represents a mechanism by which cells prevent decreased membrane fluidity due to the stiffening effect of FC (11), which may even lead to cytotoxicity (9, 32). Interestingly, the observed PC species shift was mainly caused by fatty acid remodeling of existing PC species, whereas de novo synthesized PE contributed the majority of the PE species shift upon FC loading.
In contrast to FC loading, Ox-LDL uptake, which is described to be mediated by clathrin-coated pits (4, 5), did not exhibit a substantial effect on PC and PE synthesis or the respective species. Although cellular FC and CE levels were increased, reflecting cellular uptake, it seemed as if Ox-LDL-derived lipids did not reach the cellular compartments involved in the regulation of glycerophospholipid metabolism. A potential explanation may be that oxidized lipids, especially oxidized CE, are resistant to lysosomal degradation, which leads to a trapping of Ox-LDL within lysosomes (33). Additionally, cholesterol derived from Ox-LDL accumulates in lysosomes (34) and consequently may not reach cellular sites involved in the regulation of PC and PE synthesis. The induction of PS synthesis by Ox-LDL is potentially related to Ox-LDL-induced cytotoxicity (35), since PS exposure on the cell surface is a common feature of apoptosis, also observed upon Ox-LDL incubation (36). In addition, it has been shown that newly synthesized PS is preferentially externalized in apoptotic U937 cells (37).
Compared with Ox-LDL, E-LDL led to a massive induction of de novo PC synthesis (Fig. 3). A potential reason could be a different cellular uptake mechanism for E-LDL, which enters the cell via clathrin-independent phagocytosis (7). However, another important factor may be the particular lipid composition of E-LDL, which, due to CE digestion, contains high FC (Fig. 1) and FFA levels (38). Accordingly, unphysiologic application of lipids extracted from E-LDL induced PC synthesis to a similar extent as E-LDL (Table 3). Moreover, an FFA mixture resembling that found in E-LDL (FFA1) led to a comparable induction of PC synthesis (Table 3). Together with the observed changes in the species pattern of newly synthesized PC, this clearly indicates an incorporation of FFA from E-LDL as well as a major role of FFA in the induction of PC synthesis by E-LDL. According to previous studies in HeLa cells (39), the induction of PC synthesis depends on the FFA type, because an equimolar concentration of saturated palmitic acid (FFA2) was not able to increase PC synthesis to the level observed for the mixture FFA1 (Table 3). This may be related to a conversion of palmitic acid (FFA2) by elongation and desaturation before incorporation into glycerophospholipids. On the other hand, a pronounced increase of PC synthesis was observed when both FFA (mixtures) were coincubated with FC (Table 3). These data fit a recently discussed model describing CT activation by Cornell and Northwood (40). CT is an amphitropic protein, i.e. it interconverts between a soluble inactive form and a membrane-bound active form. Thus, both changes in membrane lipid composition and the phosphorylation state of CT may regulate membrane binding and activation of CT. The changes in membrane lipid composition are described by two classes of lipids: class I lipids increase negative electrostatic surface potential, facilitating CT binding.
Class II lipids induce negative curvature strain, which is relieved by CT insertion into the membrane (41, 42). In this model, FFAs share features of both classes through their negative charge as well as their tendency to increase negative curvature strain. The latter effect should be more pronounced for unsaturated than for saturated FA, fitting the higher PC synthesis observed for FFA1 compared with saturated FFA2. Additionally, increased PC synthesis upon FC incubation, either alone or in combination with FFAs, would be expected from a class II lipid like FC. This model may also provide an explanation for why Ox-LDL loading, despite significant elevation of cellular FC levels (Table 1), did not induce PC synthesis (Fig. 3). Because Ox-LDL contains large amounts of LPC (Fig. 1), which is known to decrease CT activity by releasing negative curvature strain (41, 42), the stimulating FC effect on CT activity may be balanced by LPC.
In contrast to PC synthesis, PE synthesis seems to be regulated by a different mechanism, because FC loading strongly decreased PE synthesis (Fig. 5, Table 3). However, up to now not much is known about the molecular regulation of CTP:phosphoethanolamine cytidylyltransferase (ET), which is considered to be the rate-limiting enzyme for PE synthesis via the Kennedy pathway (27) (Fig. 2A). The difference between the effects of E-LDL (no significant regulation) and of the lipids extracted from E-LDL on PE synthesis (40% increase compared with control, Table 3) may be explained by a different FC loading efficiency. Thus, E-LDL loading induced a more than 2-fold higher cellular FC level compared with control, whereas E-LDL-derived lipid extracts only led to a 50% increase. This argues that PE synthesis is inhibited only above a certain threshold level of cellular FC. Moreover, PE synthesis did not show a strong dependence on the FFA type supplemented (Table 3). One possible explanation for the strong down-regulation of PE synthesis upon FC loading could be that both FC and PE act as class II lipids increasing negative curvature strain in lipid bilayers. Therefore, a decreased PE synthesis does not further increase negative curvature strain and consequently keeps lipid membrane physical properties in a certain range, preserving cell function (43). This model is in good agreement with a recent study in yeast, which presents evidence that intrinsic membrane curvature is maintained in a physiological range by adaptation of lipid physical properties (44).
TABLE 3 Effects of the E-LDL lipid components on PL synthesis
Cells were loaded, harvested, and analyzed as described in the legend to Fig. 3. Fibroblasts were loaded with FC (15 µg/ml), E-LDL (40 µg/ml), lipid extract derived from E-LDL (LIP; lipids were extracted according to the procedure described by Bligh and Dyer (22); the dried extracts were dissolved in ethanol and added to the media at a concentration equivalent to 40 µg/ml of E-LDL), a free fatty acid mixture (70 µM) (FFA1: 16:1, 16:0, 18:2, 18:1, 20:4 in a molar ratio of 1:1:6:1:1, similar to that of the CE present in E-LDL), palmitic acid (70 µM) (FFA2: 16:0), and combinations of FFA1/2 with FC. Shown are the concentrations of [D9]PC, [D4]PE and [13C3]PS as percent of the unloaded control, calculated from nmol/mg cell protein. Values are mean ± S.D. of one representative experiment out of four, each performed in triplicate.
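The normalization described in the table legend (each measurement expressed as percent of the unloaded control, reported as mean ± S.D. of triplicates) can be sketched as follows; the numeric values are hypothetical placeholders, not data from the study:

```python
from statistics import mean, stdev

def percent_of_control(treated, control):
    """Express triplicate values (nmol/mg cell protein) as percent of the
    mean unloaded control, returning (mean, sample S.D.) of the percentages."""
    ctrl_mean = mean(control)
    pct = [100.0 * t / ctrl_mean for t in treated]
    return mean(pct), stdev(pct)

# Hypothetical triplicates (nmol/mg), NOT values from the study
m, sd = percent_of_control([4.2, 4.5, 4.1], [2.6, 2.5, 2.7])
```

With these placeholder numbers the treated condition comes out near 164% of control.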
In Drosophila, ET is regulated by the sterol regulatory element-binding protein (SREBP) pathway (45). Because SREBP translocation in Drosophila is regulated by PE levels (45) (PE accounts for 55% of total PL; Ref. 46), analogously to FC levels in mammalian cells, an evolutionarily conserved sterol response element may exist in the ET promoter. Up to now, a contribution of an FC-induced down-regulation of ET via SREBP cannot be ruled out, even though a recent study investigating the ET promoter in the human breast cancer cell line MCF-7 did not identify a sterol response element (47). Although less pronounced, PS synthesis seems to be regulated in a similar way as PC synthesis, including induction by FFA as well as a dependence on the type of FFA and on FC (Table 3).
In summary, the present study supports the idea that changes in lipid membrane composition affecting membrane curvature regulate CT activity and consequently cellular PC synthesis. In addition, an opposite regulation of PC and PE synthesis was found upon FC loading. The species shift toward polyunsaturated PE and PC observed upon FC loading could be attributed primarily to PE biosynthesis, whereas PC species were shifted to a greater extent by FA remodeling of existing PC. Finally, by applying high-throughput metabolic profiling of the major glycerophospholipid pathways by ESI-MS/MS, we demonstrate that this technique provides a powerful tool to unravel mechanisms underlying the regulation of cellular lipid metabolism.
The IgA in milk induced by SARS-CoV-2 infection is comprised of mainly secretory antibody that is neutralizing and highly durable over time
Approximately 10% of infants infected with SARS-CoV-2 will experience COVID-19 illness requiring advanced care. A potential mechanism to protect this population is passive immunization via the milk of a previously infected person. We and others have reported on the presence of SARS-CoV-2-specific antibodies in human milk. We now report the prevalence of SARS-CoV-2 IgA in the milk of 74 COVID-19-recovered participants, and find that 89% of samples are positive for Spike-specific IgA. In a subset of these samples, 95% exhibited robust IgA activity as determined by endpoint binding titer, with 50% considered high-titer. These IgA-positive samples were also positive for Spike-specific secretory antibody. Levels of IgA antibodies and secretory antibodies were shown to be strongly positively correlated. The secretory IgA response was dominant among the milk samples tested compared to the IgG response, which was present in 75% of samples and found to be of high-titer in only 13% of cases. Our IgA durability analysis, using 28 paired samples obtained 4–6 weeks and 4–10 months after infection, found that all samples exhibited persistently significant Spike-specific IgA, with 43% of donors exhibiting increasing IgA titers over time. Finally, COVID-19 and pre-pandemic control milk samples were tested for the presence of neutralizing antibodies; 6 of 8 COVID-19 samples exhibited neutralization of Spike-pseudotyped VSV (IC50 range, 2.39–89.4 µg/mL) compared to 1 of 8 controls. IgA binding and neutralization capacities were found to be strongly positively correlated. These data are highly relevant to public health, not only in terms of the protective capacity of these antibodies for breastfed infants, but also for the potential use of such antibodies as a COVID-19 therapeutic, given that secretory IgA is highly stable in all mucosal compartments.
Background Though COVID-19 pathology among children is typically more mild compared to adults, approximately 10% of infants under the age of one year experience severe COVID-19 illness requiring advanced care, and an ever-growing number of children appear to exhibit signs of "Multisystem Inflammatory Syndrome in Children (MIS-C) associated with COVID-19" weeks or months after exposure [1,2]. Furthermore, infants and young children can also transmit SARS-CoV-2 to others, and the efficacy of vaccines available for adults has not yet been evaluated for young children or infants [3]. Certainly, protecting this population from infection is essential [4].
One potential mechanism of protection is passive immunity provided through breastfeeding by a previously infected mother. Mature human milk contains ~0.6 mg/mL of total immunoglobulin [5]. Approximately 90% of human milk antibody (Ab) is IgA, nearly all in secretory (s) form (sIgA, which consists of polymeric Abs complexed to J-chain and secretory component (SC) proteins) [6]. Nearly all sIgA derives from the gut-associated lymphoid tissue (GALT), via the entero-mammary link, though there is also homing of B cells from other mucosa (e.g., from the respiratory system), and possibly drainage from local lymphatics of systemic IgA to the mammary gland [6]. Unlike the Abs found in serum, sIgA in milk is highly stable and resistant to enzymatic degradation not only in milk and the infant mouth and gut, but in all mucosae including the gastrointestinal tract, upper airway, and lungs [7]. Notably, it has been shown that after 2 hours in the infant stomach, the total IgA concentration decreases by <50%, while IgG concentration decreases by >75% [8].
Previously we reported on 15 milk samples obtained early in the pandemic from donors recently-recovered from a confirmed or suspected case of COVID-19 [9]. In that preliminary study, it was found that all samples exhibited significant IgA binding activity against the SARS-CoV-2 Spike. Eighty percent of samples further tested for Ab binding reactivity to the receptor binding domain (RBD) of the Spike exhibited significant IgA binding, and all of these samples were also positive for RBD-specific secretory Ab reactivity with only small subsets of samples exhibiting specific IgG and/or IgM activity, strongly suggesting the RBD-specific IgA was sIgA. In the present study, we report on the prevalence and isotypes of Spike-specific milk Ab from a larger cohort of donors obtained 4-6 weeks post-confirmed SARS-CoV-2 infection, on the durability of these Abs up to 10 months post-infection, and on SARS-CoV-2-directed neutralization by Abs in a subset of these samples.
Study participants
This study was approved by the Institutional Review Board (IRB) at Mount Sinai Hospital (IRB 19-01243). Individuals were eligible to have their milk samples included in this analysis if they were lactating and had a confirmed SARS-CoV-2 infection (by an FDA-approved COVID-19 PCR test) 4-6 weeks prior to the initial milk sample used for analysis. This post-infection window was selected so as to minimize any contact with participants or their samples when they might have been contagious to the research team, while still capturing the reported peak period for SARS-CoV-2 Ab responses [10]. Participants were excluded if they had any acute or chronic health conditions affecting the immune system. Participants were recruited nationally via social media in April-June of 2020 and subject to an informed consent process. Certain participants contributed milk they had previously frozen for personal reasons, while most pumped samples specifically for this research project. All participants were either asymptomatic or experienced mild-moderate symptoms of COVID-19 that were managed at home. Participants were asked to collect approximately 30mL of milk per sample into a clean container using electronic or manual pumps, and if able and willing, to continue to pump and save monthly milk samples after the initial sample as part of our longitudinal analysis. If any of the participants submitted longitudinal samples at least 4 months after their initial sample, those samples were also included in the present analysis. As little longitudinal mucosal Ab data in COVID-19-recovered individuals past 3 months has been reported to date, the ≥4 month time point was selected, and as many samples as were available were used.
To estimate the proportion (p) of all COVID-19-recovered milk donors exhibiting positive IgA titers against SARS-CoV-2 in their milk after infection, we assumed the reported IgG seroconversion rate of 90% after mild SARS-CoV-2 infection [11] and calculated the precision (d) of the 95% confidence interval (CI) for p (CI = [p ± d]) as a function of cohort size. The cohort size N of 74 would allow us to estimate p with 6.79% error. For the cohort size N of 20 used for the milk IgG and secretory Ab analyses, p could be estimated with 13.15% error.
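These precision figures follow from the standard normal-approximation (Wald) half-width of a 95% CI for a proportion; a minimal sketch, assuming z = 1.96 and the 90% rate cited above (the N = 20 case reproduces 13.15%; the N = 74 case comes out near 6.8%, close to the reported 6.79%):

```python
from math import sqrt

Z95 = 1.96  # two-sided 95% normal quantile

def ci_half_width(p, n):
    """Wald half-width d of the 95% CI for a proportion p at sample size n."""
    return Z95 * sqrt(p * (1.0 - p) / n)

# Assumed seroconversion proportion of 0.90 after mild infection [11]
d_iga = ci_half_width(0.90, 74)  # ~0.068 for the N = 74 IgA cohort
d_igg = ci_half_width(0.90, 20)  # ~0.1315 for the N = 20 IgG/secretory subset
```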
Milk was frozen in participants' home freezers until samples were picked up and stored at -80˚C until Ab testing. Pre-pandemic negative control milk samples were obtained in accordance with IRB-approved protocol 17-01089 prior to December 2019 from healthy lactating women in New York City, and had been stored in laboratory freezers at -80˚C before processing following the same protocol described for COVID-19 milk samples. All demographic information on participant milk samples is shown in Table 1. Given the diversity of participant ages and stages of lactation, this study sample can be considered representative of a larger population. Notably, 67% of COVID-19-recovered participants reported their race/ethnicity as white or Caucasian, and therefore this sample set is not diverse enough to be considered representative of the USA as a whole. More work needs to be done to obtain sufficient samples from non-white participants. Ten COVID-19-recovered (COV101-COV117) and 10 pre-pandemic control (NEG046-NEG059) participants included in the present study also had their Spike IgA ELISA data reported in our pilot study publication [9].
ELISA
Levels of SARS-CoV-2 Abs in human milk were measured as previously described [9]. Briefly, before Ab testing, milk samples were thawed, centrifuged at 800g for 15 min at room temperature, fat was removed, and the de-fatted milk transferred to a new tube. Centrifugation was repeated 2x to ensure removal of all cells and fat. Skimmed acellular milk was aliquoted and frozen at -80˚C until testing. Both COVID-19-recovered and control milk samples were then tested in separate assays measuring IgA, IgG, and secretory-type Abs, in which the secondary Ab used for the latter measurement was specific for free and bound SC. Half-area 96-well plates (Fisher cat# 14-245-153) were coated with the full trimeric recombinant Spike protein produced as described previously [12]. Plates were incubated at 4˚C overnight, washed in 0.1% Tween 20/PBS (PBS-T), and blocked in PBS-T/3% goat serum (Fisher cat# PCN5000)/0.5% milk powder (Fisher cat# 50-751-7665) for 1 h at room temperature. Milk was used undiluted or titrated 4-fold in 1% bovine serum albumin (BSA; Fisher cat# 50-105-8877)/PBS and added to the plate. After 2 h incubation at room temperature, plates were washed and incubated for 1 h at room temperature with horseradish peroxidase-conjugated goat anti-human-IgA, goat anti-human-IgG (Fisher cat# 40-113-5 and #OB201405), or goat anti-human-secretory component (MuBio cat# GAHu/SC/PO) diluted in 1% BSA/PBS. Plates were developed with 3,3',5,5'-Tetramethylbenzidine (TMB; Fisher cat# PI34028) reagent followed by 2N sulfuric acid (Fisher cat# MSX12446) and read at 450 nm on a BioTek Powerwave HT plate reader. Assays were performed in duplicate and repeated 2x.
IgA extraction from milk
Total IgA was extracted from 25-100mL of milk using peptide M agarose beads (Fisher cat# NC0127215) following manufacturer's protocol, concentrated using Amicon Ultra centrifugal filters (10 kDa cutoff; Fisher cat# UFC901008) and quantified by Nanodrop.
Pseudovirus neutralization assay
Neutralization assays were performed using a standardized SARS-CoV-2 Spike-pseudotyped Vesicular Stomatitis Virus (VSV)-based assay with ACE2- and TMPRSS2-expressing 293T cells (clone F8-2; ATCC CRL-3216-derived) as previously described [13]. This cell line was routinely verified for consistent ACE2 and TMPRSS2 expression by flow cytometry as well as by inclusion of an assay-to-assay control virus to monitor consistent infection levels. Pseudovirus was produced by transfection of 293T cells with SARS-CoV-2 Spike plasmid, followed 8 h later by infection with a VSVΔG-rLuc reporter virus. Two days post-infection, supernatants were collected and clarified by centrifugation [13]. Cells and viruses were prepared by and obtained from the Benhur Lee lab. A consistent, pre-titrated amount of pseudovirus was incubated with serial dilutions of extracted IgA for 30 min at room temperature prior to infection of cells seeded the previous day. Twenty hours post-infection, cells were processed and assessed for luciferase activity as described [13].
Analytical methods
Control milk samples obtained prior to December 2019 were used to establish positive cutoff values for each assay. Milk was defined as positive for SARS-CoV-2 Abs if optical density (OD) values measured using undiluted milk from COVID-19-recovered donors were two standard deviations (SD) above the mean OD obtained from control samples. Endpoint dilution titers were determined from log-transformed titration curves using 4-parameter non-linear regression and an OD cutoff value of 1.0. Endpoint dilution positive cutoff values were determined as above. Percent neutralization was calculated as (1 - (average luciferase Relative Light Units (RLU) of triplicate test wells / average luciferase RLU of the 6 'virus only' control wells)) × 100. Mann-Whitney U tests were used to assess significant differences between unpaired grouped data. A paired Student's t-test was used to assess significant differences between longitudinal time points. The concentration of milk IgA required to achieve 50% neutralization (IC50) was determined as described above for endpoint determination. Correlation analyses were performed using Spearman correlations. All statistical tests were performed in GraphPad Prism, were 2-tailed, and the significance level was set at p-values < 0.05.
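Two of the calculations described above, the mean + 2 SD positivity cutoff and the percent-neutralization formula, can be sketched as follows; the OD and RLU values are hypothetical placeholders, not study data:

```python
from statistics import mean, stdev

def positive_cutoff(control_ods):
    """Positivity cutoff: mean OD of pre-pandemic control milk + 2 SD."""
    return mean(control_ods) + 2 * stdev(control_ods)

def percent_neutralization(test_rlu, virus_only_rlu):
    """(1 - mean test-well RLU / mean 'virus only' RLU) x 100."""
    return (1.0 - mean(test_rlu) / mean(virus_only_rlu)) * 100.0

# Hypothetical readings, not study data
cut = positive_cutoff([0.10, 0.12, 0.14, 0.11, 0.13])
neut = percent_neutralization([2500, 2600, 2400], [10000, 9800, 10200])  # 75.0
```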
Ab profile in milk from COVID-19-recovered donors 4-6 weeks after infection
Sixty-six of 74 samples (89%) were positive for Spike-specific IgA, with the COVID-19 samples exhibiting significantly higher Spike-specific IgA binding compared to controls (Fig 1a; p<0.0001). Following this initial screening, 40 of the Spike-positive samples were further titrated to determine binding endpoint titers as an assessment of Ab affinity and/or quantity (Fig 1b). Thirty-eight of 40 (95%) Spike-reactive samples exhibited positive IgA endpoint titers, and 19 of these samples (50%) were ≥5 times higher than the endpoint titer of the positive cutoff value and were therefore designated as 'high-titer' (Fig 1c). Additionally, 20 samples assayed for Spike-specific IgA were also assessed for Spike-specific secretory Ab (by detecting SC) and IgG. Nineteen of these undiluted milk specimens (95%) from convalescent COVID-19 donors were positive for Spike-specific secretory Abs compared to pre-pandemic control milk (Fig 2a). One sample (COV125b) was negative for specific IgA but positive for specific secretory Ab, while another sample (COV123b) was positive for specific IgA but negative for specific secretory Ab. Eighteen undiluted milk samples (95%) exhibiting Spike-specific secretory Ab activity also exhibited positive endpoint titers (Fig 2c). Of the samples found to be high-titer for Spike-specific IgA, 7 were also high-titer for specific secretory Ab (70%). Mean OD values for undiluted milk and endpoint titers were used in separate Spearman correlation tests to compare IgA and secretory Ab reactivity (Fig 2e). It was found that IgA and secretory Ab levels were positively correlated (using ODs: r = 0.77, p<0.0001; using endpoint titers: r = 0.86, p<0.0001).
Additionally, 15/20 undiluted milk samples from COVID-19-recovered donors were positive for Spike-specific IgG compared to prepandemic controls (75%; Fig 2b), with 13/15 of these samples exhibiting a positive endpoint titer (87%; Fig 2d), and 2/15 designated as high-titer with values ≥5 times the cutoff (13%). No correlation was found between IgG and IgA titers or between IgG and SC titers (S1 Fig).
Durability of the SARS-CoV-2 Spike-specific milk IgA response
To assess the durability of this sIgA-dominant response, 28 pairs of milk samples obtained from COVID-19-recovered donors 4-6 weeks and 4-10 months after infection were assessed for Spike-specific IgA. All donors exhibited persistently significant Spike-specific IgA titers at the follow-up time point. Mean endpoint titers from the early to the late milk samples grouped were not significantly different (Fig 3a). Fourteen donors (50%) exhibited >10% decrease in IgA titer, 12 donors (43%) exhibited >10% increase in IgA titer, and 2 donors (7%) exhibited no change in titer (Fig 3a). Notably, only 2 donors (7%) exhibited >50% decrease in titer over time. Furthermore, examining a subset of 14 of these samples with the longest follow-up, obtained 7-10 months after infection, mean endpoint titers measured from the early to the late milk samples were also not significantly different (19.8 and 17.8, respectively; Fig 3b). These longest follow-up samples included 4 donors (29%) with >10% decrease in IgA titer, 7 donors (50%) with >10% increase in IgA titer, and 3 donors (21%) with no change in titer (Fig 3b).
Fig 3. The Spike-specific IgA response in milk after SARS-CoV-2 infection is highly durable over time. (A) IgA endpoint titers determined from Spike ELISA for 28 pairs of milk samples obtained from COVID-19-recovered donors 4-6 weeks and 4-10 months after infection are shown. Mean endpoint values for each group are shown. Blue lines: >10% increase; red lines: >10% decrease; grey lines: <10% change. NS: not significant. A paired t-test (2-tailed) was used to assess significance. (B) IgA endpoint titers for a subset of 14 paired samples obtained 4-6 weeks and 7-10 months after infection. Mean with SEM is shown. Mean endpoint values for the 4-6 week and 7-10 month groups are indicated on the y-axis as green and pink ticks, respectively. Blue bars: >10% increase; red bars: >10% decrease; grey bars: <10% change.
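The >10% increase/decrease bucketing applied to the paired titers above can be sketched as follows (the titer pairs are hypothetical, not study data):

```python
def classify_change(early, late, threshold=0.10):
    """Bucket a paired endpoint titer by relative change, mirroring the
    >10% increase / >10% decrease / no-change criterion."""
    change = (late - early) / early
    if change > threshold:
        return "increase"
    if change < -threshold:
        return "decrease"
    return "no change"

# Hypothetical (early, late) endpoint titer pairs, not study data
pairs = [(320, 400), (640, 320), (160, 165)]
labels = [classify_change(e, l) for e, l in pairs]
```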
Discussion
There has been no evidence that SARS-CoV-2 transmits via human milk, with sporadic cases of viral RNA (not infectious particles) detected on breast skin [14]; however, there have been reports of viral RNA in the milk (reviewed in [15]), though collection methods in these reports did not necessarily include masking, cleaning of the breast, or even handwashing to avoid contamination from the donor's environment. As such, the WHO and CDC recommend that infants not be separated from SARS-CoV-2-infected mothers, and that breastfeeding should be established and not disrupted (depending on the mothers' desire to do so), in combination with masking and other hygiene efforts [16,17].
We and others have reported SARS-CoV-2-specific Abs in milk obtained from donors with previously confirmed or suspected infection [9,14,18,19]. Here, we have significantly expanded our earlier work, reporting on SARS-CoV-2 Ab prevalence among 75 COVID-19-recovered participants whose milk samples were obtained 4-6 weeks after confirmed SARS-CoV-2 infection. Indeed, we have confirmed in this much larger sample set that a SARS-CoV-2 IgA Ab response in milk after infection is very common. Our analysis of a subset of 20 milk samples from COVID-19-recovered participants suggests that this IgA response dominates compared to the measurable but relatively lower-titer IgG response. Importantly, a very strong positive correlation was found between Spike-specific milk IgA and secretory Abs, using both ELISA OD values of Ab binding in undiluted milk as well as Ab binding endpoint titers, indicating that a very high proportion of the SARS-CoV-2 Spike-specific IgA measured in milk after SARS-CoV-2 infection is sIgA, confirming our earlier reports. This is relevant for the effective protection of a breastfeeding infant, given the high durability of secretory Abs in the relatively harsh mucosal environments of the infant mouth and gut [7,8]. These data are also relevant to the possibility of using extracted milk IgA as a COVID-19 therapy. Extracted milk sIgA used therapeutically would likely survive well upon targeted respiratory administration, with a much lower dose of Ab likely needed for efficacy compared to systemically-administered convalescent plasma or purified plasma immunoglobulin.
All COVID-19 IgA samples analyzed that had been designated as 'high titer' for Spike-specific IgA exhibited significant Spike-directed neutralization capacity, wherein IgA binding endpoint titers and neutralization IC50 values were found to be significantly correlated. Of the 3 samples examined for neutralization capacity that exhibited positive but not high-titer Spike-specific IgA, 2 were non-neutralizing. It should be noted that these were all samples obtained 4-6 weeks after infection, and future samples may exhibit neutralization as the Ab response matures. These data extend the recent analyses of SARS-CoV-2 neutralization using diluted whole milk [14,19].
Critically, our IgA durability analysis using 28 paired samples obtained 4-6 weeks and 4-10 months after infection revealed that for all donors, Spike-specific IgA titers persisted for as long as 10 months, a finding that is highly relevant for protection of the breastfeeding infant over the course of lactation, and also pertinent to the size of a potential donor pool for collection of milk from COVID-19-recovered donors for therapeutic use of extracted milk IgA. Notably, even after 7-10 months, only 5 of 14 samples exhibited >10% decrease in specific IgA endpoint titers, while 8 of 14 samples actually exhibited an increase in specific IgA titer. These highly durable or even increased titers may be reflective of long-lived plasma cells in the GALT and/or mammary gland, as well as continued antigen stimulation in these compartments, possibly by other human coronaviruses, or repeated exposures to SARS-CoV-2.
Given the present lack of knowledge concerning the potency, function, durability, and variation of the human milk immune response, not only to SARS-CoV-2 infection but across this understudied field in general, the present data contribute greatly to filling immense knowledge gaps and further our work towards in vivo efficacy testing of extracted milk Ab in the COVID-19 pandemic context and beyond.
Limitations of study
One limitation to this study is that all samples were obtained from participants living in the USA, and it should be noted that those in unique geographic areas may exhibit differential immune responses. Notably, 67% of COVID-19-recovered participants reported their race/ethnicity as white or Caucasian, and therefore this sample set is not diverse enough to be considered representative of the USA as a whole. More work needs to be done to obtain sufficient samples from non-white participants. Additionally, the longitudinal and functional components of these data were conducted on a small number of samples, and further study will produce a more complete and accurate analysis. Neutralization and other functional analyses for all Ab classes also must be studied in follow-up samples. As well, this study does not demonstrate that the measured milk Ab response is protective for breastfed babies.
Light Trapping with Silicon Light Funnel Arrays
Silicon light funnels are three-dimensional subwavelength structures in the shape of inverted cones with respect to the incoming illumination. Light funnel (LF) arrays can serve as efficient absorbing layers on account of their light trapping capabilities, which are associated with the presence of high-density complex Mie modes. Specifically, light funnel arrays exhibit broadband absorption enhancement of the solar spectrum. In the current study, we numerically explore the optical coupling between surface light funnel arrays and the underlying substrates. We show that the absorption in the LF array-substrate complex is higher than the absorption in LF arrays of the same height (~10% increase). This, we suggest, implies that a LF array serves as an efficient surface element that imparts additional momentum components to the impinging illumination, and hence optically excites the substrate by near-field light concentration, excitation of traveling guided modes in the substrate, and mode hybridization.
Introduction
The interaction of light and matter, and specifically the coupling of light into matter, is of both scientific and technological interest. Light trapping is about capturing photons from an incident electromagnetic wave, normally in the range from the infrared to the ultraviolet. Surface texturing with ordered or disordered arrays with subwavelength (or wavelength-scale) features has been demonstrated to increase light trapping in thin films (TF) beyond the Yablonovitch limit [1-5]. Furthermore, surface arrays with subwavelength features are an additional strategy for the development of ultra-thin photovoltaic cells. Ultra-thin solar cells with absorption comparable to bulk solar cells directly lead to lower recombination currents and higher open circuit voltages, and therefore to higher photovoltaic efficiencies [6], as well as allowing the commercialization of photovoltaic cells based on scarce materials. Surface texturing with ordered or disordered tiling of subwavelength features has been shown to enhance the broadband absorption of the solar radiation due to light trapping (e.g., vertically-aligned nanopillars (NPs), nanoholes (NHs), rods, nanocones (NCs), nanospheres, etc.) [2,3,5,7-20]. Note that in the current context, NPs refer to diameters of several hundred nanometers. In a planar semiconducting film, for example, both radiation and trapped traveling modes (guided modes and Bloch modes) are present [21]. However, the wavenumbers of the guided modes (i.e., photonic states) are not accessible to the radiation impinging on the top surface unless some extent of scattering or diffraction takes place. The Yablonovitch limit assumes 'mixing of the light' inside the absorber medium by randomizing the texture of the top and bottom interfaces, in this manner generating wavenumbers that can occupy both radiation and guided modes and hence maximizing light trapping.
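For context, the Yablonovitch limit discussed above corresponds to the well-known 4n² path-length enhancement of a Lambertian (fully randomizing) surface; a minimal sketch, where the silicon refractive index is an assumed representative value:

```python
def yablonovitch_enhancement(n: float) -> float:
    """Lambertian light-trapping path-length enhancement factor, 4 * n^2."""
    return 4.0 * n ** 2

# Assumed representative refractive index of silicon in the visible/near-IR
factor = yablonovitch_enhancement(3.5)  # 49.0
```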
Materials and Methods
We employed a 3D FDTD optical simulation using Advanced TCAD by Synopsys (Mountain View, CA, USA). The simulation box size was set to the size of the unit cell, with a periodic boundary condition along the lateral dimensions. The bottom boundary condition was defined by the gold back reflector. The periodic boundary condition was applied to the normally incident plane wave excitation using the total-field scattered-field (TFSF) formulation. Both the magnetic and electric fields were copied directly from the periodic facet to the opposing one during field update. For each run (each wavelength), absorption and reflection were calculated using sensors that were located above the device (no transmission was recorded on account of the gold bottom reflector). In addition, for each wavelength the power flux density and the absorbed photon density at each mesh point were calculated. The absorbed photon density was calculated simply by dividing the absorbed power density ((1/2) × σ × |E|², in which σ is the nonzero conductivity of silicon and E is the local electric field) by the energy of the impinging photon. TM polarization was used, and the various LF cross-sections showing the absorbed photon density or the power flux density were normal to the plane of incidence. The calculations were performed in the spectrum range of 400-1000 nm in 20 nm steps. For efficient and accurate FDTD simulations, the maximum mesh cell size was kept smaller than 1/10th of the wavelength in silicon; namely, more than 10 nodes per wavelength. The ultimate absorption efficiency (η_ult) is the relative absorption averaged and weighted with the solar spectrum, where it is assumed that each photon above the bandgap generates an electron-hole pair that is collected at the electrodes.
The η_ult was calculated in the following manner:

η_ult = ∫_{E_g}^{∞} A(E) I(E) (E_g/E) dE / ∫_0^{∞} I(E) dE

where E_g = 1.1 eV is the bandgap of silicon, E is the photon energy, A(E) is the absorption spectrum and I(E) is the solar irradiance taken under Air Mass 1.5 Global (AM 1.5G) conditions. The optical constants of silicon were taken from the literature [54]. Figure 1a presents an illustration of a 3D silicon LF array on top of a substrate. The color coding reflects the normalized absorbed photon density (a certain arbitrary wavelength was selected for the illustration); still, note the formation of higher-order complex modes at the top of the LFs (quadrupole) and the lower-order modes apparent at the bottom interface between the LFs and the substrate (dipole), which reflect near-field light concentration (or forward scattering) by the LF array into the substrate. Figure 1b shows individual LFs on top of various substrates in which the full height of the LF array-substrate complex is maintained (3 µm) but the ratio between the LF height (H_LF) and the substrate thickness (T_s) varies; hence, the considered H_LF values are 0, 0.5, 1, 1.5, 2, 2.5 and 3 µm (T_s is adjusted such that the total height of the LF array-substrate complex is 3 µm). Note that the 3 µm thickness of the LF array-substrate complex was arbitrarily selected as a study case. In the current examination, we assume a fixed LF top diameter (D_t) of 400 nm and a fixed LF bottom diameter (D_b) of 100 nm. The array period (P) is set to 500 nm, as it was demonstrated for NP arrays that 500 nm periodicity couples best to the solar spectrum, as the solar spectrum peaks at around this wavelength [44]. In the present work, the LF array geometry was not optimized to maximize the absorption of the solar radiation; rather, we consider the deformation of an optimized NP array into a LF array. Therefore, it is most probable that the absorption of the LF array could be further enhanced.
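The solar-spectrum-weighted average defining η_ult can be evaluated numerically from sampled spectra; a minimal sketch using a simple trapezoidal rule (the flat placeholder spectra stand in for the simulated A(E) and the tabulated AM 1.5G irradiance, which are not reproduced here):

```python
def trapz(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

def ultimate_efficiency(E, A, I, Eg=1.1):
    """eta_ult = int_{Eg} A(E) I(E) (Eg/E) dE / int I(E) dE, energies in eV.
    E: photon energies, A: absorption spectrum, I: solar irradiance samples."""
    Ek = [e for e in E if e >= Eg]
    w = [a * s * (Eg / e) for e, a, s in zip(E, A, I) if e >= Eg]
    return trapz(w, Ek) / trapz(I, E)

# Placeholder flat spectra (A = I = 1) on a 0.5-4.0 eV grid, NOT real data
E = [0.5 + 3.5 * k / 199 for k in range(200)]
eta = ultimate_efficiency(E, [1.0] * 200, [1.0] * 200)
```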
The images in Figure 1b are 3D FDTD results for different geometries at certain wavelengths, and the color coding describes the normalized absorbed photon density, which reflects the various mode excitations in the LF array-substrate complex. It is evident from Figure 1b that the presence of LF arrays on top of a substrate results in various excitations of optical modes both in the LF array and in the substrate. Furthermore, in order to enhance the optical coupling between the LF arrays and the substrate, we consider in the following a conformal 50 nm SiO2 anti-reflective coating (ARC) decorating the top of the LF array-substrate complex (in practice, a conformal ARC could be a challenge for the inverted LF arrays; however, a conformal ARC could be realized using atomic layer deposition (ALD), producing conformal thin layers regardless of the surface topography) and a gold reflector at the bottom of the substrate (neither the ARC nor the gold reflector are shown in Figure 1). Finally, note that the thickness of the ARC was not optimized. Figure 2a presents a color map of the simulated relative absorption spectra of the 3 µm LF array-substrate complex for various LF heights (and the respective substrate thicknesses that complete the 3 µm complex). The respective η_ult of each spectrum (i.e., for each geometry) is plotted on the right.
The bottom of the color map reflects the relative absorption in a 3 µm TF (i.e., no LF array at all), whereas the top-most spectrum in the color map presents the absorption in a 3 µm LF array (i.e., no substrate at all). Evidently, the η_ult of the 3 µm LF array is ~14% higher than the η_ult of the 3 µm TF. Note that in reference [52], the η_ult of the LF arrays is significantly higher than the η_ult of the thin film. This is because in the current study, we consider a gold bottom reflector, and, as expected, the gold bottom reflector substantially increases the absorption in TFs. Moreover, the gold bottom reflector of the 3 µm LF array is restricted to the LF bottom diameter (i.e., a bottom reflector with a diameter of 100 nm), whereas for the thin film, the gold reflector extends throughout the bottom of the simulated unit cell (i.e., throughout the bottom of the film). Still, in the current study, we consider the presence of the gold bottom reflector, as our current aim is to explore the optical coupling between the LF arrays and the substrates, and particularly the optical excitation of the substrates by the LF arrays. To this end, the presence of the gold bottom reflector is assumed, as it inevitably amplifies the optical interaction between the arrays and the substrates. For the 3 µm LF array, the broadband light absorption is attributed to efficient light trapping associated with mode hybridization of localized trapped optical modes (Mie modes) and Fabry-Perot (FP) modes, which are generated due to the bottom gold reflector. For the 3 µm thin film, the absorption is due to light trapping associated with FP radiation modes. Interestingly, note that the η_ult of the 3 µm LF array-substrate complex is always higher (by ~10%) than the η_ult of the 3 µm LF array. This suggests an efficient optical coupling between the LF arrays and the substrates and, moreover, an efficient optical excitation of the substrate by the LF array.
The presence of the substrate introduces additional photonic states in the form of traveling guided modes, such as waveguide modes and Bloch modes, and hybridizations of these (and with FP modes). Overall, it is evident that although the LF arrays host a high density of complex Mie modes that provide efficient light trapping and light absorption, the LF array-substrate complex still offers a superior system for light trapping, as the LF array excites various modes (and hybridizations) in the substrate in addition to the conventional radiation modes.
Results
It is evident in Figure 2a that the η_ult of the LF array-substrate complex depends only weakly on the ratio between H_LF and T_s. Figure 2b,c show the decoupling of the relative absorption spectrum of the LF array-substrate complexes into the relative absorptions of the substrate and the LF array, respectively. As expected, the higher the LF arrays are, the higher the absorption in the arrays is (Figure 2c); similarly, the absorption in the substrate increases for smaller LFs and thicker substrates. Decoupling the contributions of the substrates and the LF arrays to the overall absorption of the complex reveals the origin of the strong absorption peaks evident in Figure 2a. For example, note the absorption peaks in Figure 2a marked S0-S3 and A1-A3. The formation of the S0-S3 absorption peaks is attributed to strong excitations in the substrate (note the marked absorption peaks in Figure 2b), and the formation of the A1-A3 absorption peaks is traced to strong excitations in the LF arrays (note these same peaks in Figure 2c). Importantly, note that the A1-A3 absorption peaks occur at wavelengths smaller than 900 nm, whereas absorption peaks S1 and S2 occur at wavelengths exceeding 900 nm. This suggests that the proposed geometries, when engineered properly, can induce strong optical excitation of the substrate in the near-infrared (NIR), which is of great interest for thin-film photovoltaics, for example.
Figure 2d presents the normalized power flux density at a wavelength of 740 nm (marked with a dashed white line in Figure 2b,c) for the selected geometries. The excitation of various optical modes and mode hybridization in the LF arrays and in the substrates is apparent, as well as forward scattering (or near-field light concentration) of the LF arrays into the substrates, which is present in every geometry. Note that at a wavelength of 740 nm, the LF arrays are strongly excited; still, the overall contribution of the arrays to the absorption is considerably smaller for short arrays, as is evident, for example, for the LF array of 0.5 µm and the substrate of 2.5 µm, in which the LF array is highly excited but the overall absorption in the array is small compared with the absorption in the substrate.
Figure 3 presents the normalized absorbed photon density for different geometries and wavelengths pertaining to the A1-A3 and S0-S3 absorption peaks marked in Figure 2a-c with white circles. Firstly, note the different excitation mechanisms that are responsible for the strong absorption peaks. The strong absorption of the TF at S0 is due to FP modes. The strong absorption at S1 is due to the hybridization of FP modes and guided modes in the substrate. In S2, the strong absorption is also due to strong excitation of the substrate, but in this case FP modes govern the excitation. In S3, the absorption is due to the strong hybridization of FP and traveling guided modes, whereas the excitation of Mie modes in the array is minor despite the considerable height of the array. Finally, the A1-A3 absorption peaks are governed by strong hybridization of FP and Mie modes in the arrays.
Conclusions
In the current work we study light trapping in an LF array-substrate complex. We show that the broadband light absorption is higher in LF array-substrate complexes than in an LF array (without a substrate) of the same height. The absorption enhancement is attributed to the additional momentum components imparted to the impinging illumination, which result in near-field light concentration by the LF array and in mode excitation and mode hybridization in the substrate. Finally, we show that the ratio between the height of the LF array and the thickness of the substrate has little effect on the broadband absorption of the solar spectrum by the complex.
Survival asymptotics for branching random walks in IID environments
We first study a model, introduced recently in \cite{ES}, of a critical branching random walk in an IID random environment on the $d$-dimensional integer lattice. The walker performs critical (0-2) branching at a lattice point if and only if there is no `obstacle' placed there. The obstacles appear at each site with probability $p\in [0,1)$ independently of each other. We also consider a similar model, where the offspring distribution is subcritical. Let $S_n$ be the event of survival up to time $n$. We show that on a set of full $\mathbb P_p$-measure, as $n\to\infty$, (i) Critical case: P^{\omega}(S_n)\sim\frac{2}{qn}; (ii) Subcritical case: P^{\omega}(S_n)= \exp\left[\left( -C_{d,q}\cdot \frac{n}{(\log n)^{2/d}} \right)(1+o(1))\right], where $C_{d,q}>0$ does not depend on the branching law. Hence, the model exhibits `self-averaging' in the critical case but not in the subcritical one. I.e., in (i) the asymptotic tail behavior is the same as in a "toy model" where space is removed, while in (ii) the spatial survival probability is larger than in the corresponding toy model, suggesting spatial strategies. We utilize a spine decomposition of the branching process as well as some known results on random walks.
Key words and phrases. Branching random walk, catalytic branching, obstacles, critical branching, subcritical branching, random environment, spine, leftmost particle, change of measure, optimal survival strategy.
Date: March 30, 2017.

1. Introduction

1.1. Model. We first consider a model, introduced recently in [4], of a critical branching random walk Z = {Z_n}_{n≥0} in an IID random environment on the d-dimensional integer lattice as follows. The environment is determined by placing obstacles on each site, with probability 0 ≤ p < 1, independently of each other. Given an environment, the initial single particle, located at the origin at n = 0, first moves according to a nearest neighbor simple random walk, and immediately afterwards, the following happens to it (see Fig. 1.1):
(1) If there is no obstacle at the new location (we call it then a vacant site), the particle either vanishes or splits into two offspring particles, with equal probabilities.
(2) If there is an obstacle at the new location, nothing happens to the particle.
The new generation then follows the same rule in the next unit time interval and produces the third generation, etc. We will also consider the same model when critical branching is replaced by a subcritical one, with mean µ < 1. In this latter case we will make the following standard assumption.
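The branching rule described above is straightforward to simulate. The sketch below is a hypothetical Monte Carlo implementation in d = 1 for brevity (the model lives on the d-dimensional lattice); a fresh environment is drawn per run, so the estimate is annealed, and all parameters are illustrative.

```python
import random

# Minimal sketch of the critical (0-2) branching random walk among
# obstacles, d = 1. On vacant sites a particle vanishes or splits into
# two with probability 1/2 each; on obstacle sites nothing happens.
def step(particles, obstacle, rng):
    nxt = []
    for x in particles:
        x += rng.choice((-1, 1))      # nearest-neighbour move first
        if obstacle(x):               # obstacle: nothing happens
            nxt.append(x)
        elif rng.random() < 0.5:      # vacant: split into two...
            nxt.extend((x, x))
        # ...or vanish with probability 1/2 (append nothing)
    return nxt

def survives(n, p, rng):
    env = {}
    def obstacle(x):                  # IID obstacles, sampled lazily
        if x not in env:
            env[x] = rng.random() < p
        return env[x]
    particles = [0]
    for _ in range(n):
        if not particles:
            return False
        particles = step(particles, obstacle, rng)
    return bool(particles)

rng = random.Random(1)
trials = 2000
surv = sum(survives(20, 0.3, rng) for _ in range(trials)) / trials
print(surv)  # crude estimate of the survival probability up to n = 20
```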
Assumption 1 (L log L condition). In the subcritical case let L denote the random number of offspring, with law L. We assume that $\sum_{k=1}^{\infty} \mathrm{Prob}(L = k)\, k \log k < \infty$.
Let p ∈ [0, 1). In the sequel, K = K(ω) will denote the set of lattice points with obstacles, P_p will denote the law of the obstacles and P^ω will denote the law of the BRW given the environment ω ∈ Ω. (Here Ω may be identified with {0, 1}^{Z^d}.) Define also P_p := E_p ⊗ P^ω. We will say that a statement holds 'on a set of full P_p-measure' when it holds under P^ω for ω ∈ Ω′ ⊂ Ω with P_p(Ω′) = 1.
Finally, I_A will denote the indicator of the set A, and for f, g : (0, ∞) → (0, ∞), the notation f ∼ g will mean that $\lim_{t\to\infty} f(t)/g(t) = 1$.
1.2. Quenched survival; main result. We are interested in the asymptotic behavior, as time tends to infinity, of the probability that there are surviving particles, and in its possible dependence on the parameters. (Note that in the extreme case, when p = 0, the asymptotic behavior is well known.) We consider the quenched case, and so, we can only talk about the almost sure asymptotics, as the probability P^ω itself depends on the realization of the environment.
Let S n denote the event of survival up to n ≥ 0. That is, S n = {|Z n | ≥ 1}, where |Z n | is the population size at time n. Our main result will concern the a.s. asymptotic behavior of P ω (S n ).
Theorem 2 (Quenched survival probability). Let d ≥ 1 and p ∈ (0, 1), and recall that q := 1 − p. Then the following holds on a set of full P_p-measure, as n → ∞.

(i) Critical case: $P^{\omega}(S_n) \sim \frac{2}{qn}$;

(ii) Subcritical case: $P^{\omega}(S_n) = \exp\left[\left(-C_{d,q}\cdot \frac{n}{(\log n)^{2/d}}\right)(1+o(1))\right]$,

where $C_{d,q}$ is a positive constant that does not depend on the branching law.
1.3. Motivation; heuristic interpretation. Consider first the case of critical branching. Recall the classic result due to Kolmogorov [8, Formula 10.8], that for critical unit time branching with generating function ϕ, as n → ∞,

$$\mathrm{Prob}(S_n) \sim \frac{2}{\varphi''(1)\, n}.$$

As a particular case, let us consider now a non-spatial toy model as follows. Suppose that branching occurs with probability q ∈ (0, 1), and then it is critical binary, that is, consider the generating function

$$\varphi_q(z) = (1-q)z + q\,\frac{1+z^2}{2}.$$

Since $\varphi_q''(1) = q$, it then follows that, as n → ∞,

(1.4) $$\mathrm{Prob}(S_n) \sim \frac{2}{qn}.$$

Turning back to our spatial model (with critical branching), simulations suggested (see [4]) the self-averaging property of the model: the asymptotics for the annealed and the quenched case are the same. In fact, this asymptotics is the same as the one in (1.4), where p = 1 − q is the probability that a site has an obstacle. In other words, despite our model being spatial, in an asymptotic sense, the parameter q simply plays the role of the branching probability of the above non-spatial toy model. To put it yet another way, q only introduces a 'time-change.' In the present paper we would like to establish rigorous results concerning survival.
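The toy-model asymptotics (1.4) can be checked numerically by iterating the generating function of the critical binary toy model with branching probability q: the probability of extinction by time n satisfies e_n = ϕ(e_{n−1}) with e_0 = 0, so P(S_n) = 1 − e_n. A minimal sketch:

```python
# Numerical check of the toy-model asymptotics: with the generating
# function phi(z) = (1-q) z + q (1 + z^2)/2 of the toy model, the
# extinction probability by time n is e_n = phi(e_{n-1}), e_0 = 0,
# and P(S_n) = 1 - e_n should satisfy P(S_n) ~ 2/(q n).
def survival(n, q):
    phi = lambda z: (1 - q) * z + q * (1 + z * z) / 2
    e = 0.0
    for _ in range(n):
        e = phi(e)
    return 1.0 - e

q = 0.7
for n in (10**3, 10**4, 10**5):
    print(n, survival(n, q) * q * n / 2)  # ratio approaches 1
```

The ratio P(S_n) · qn/2 tends to 1, with a slowly vanishing correction of order (log n)/n.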
Our main result will demonstrate that while for critical branching, self-averaging indeed holds, this is not the case for subcritical branching.
For further motivation in mathematics and in mathematical biology, see [4]. For topics related to the quenched and annealed survival of a single particle among obstacles in a continuous setting, see the fundamental monograph [11]. Finally, we mention the excellent current monograph [10] on branching random walks, which also includes the spine method relevant to this paper.
Next, we give a heuristic interpretation of Theorem 2.
(i) Critical case: The intuitive picture behind the asymptotics is that there is nothing the BRW could do to increase the chance of survival, at least as far as the leading order term is concerned (as opposed to well known models, for example when a single Brownian motion is placed into random medium [11]). Hence, given any environment, the particles move freely and experience branching at q proportion of the time elapsed, and the asymptotics agrees with the one obtained in the non-spatial setting as in (1.4).
Note that whenever the total population size reduces to one, the probability of that particle staying in the region of obstacles is known to be much less than O(1/n). So the optimal strategy for this particle to survive is obviously not to try to stay completely in that region and thus avoid branching. Rather, survival will mostly be possible because of the potentially large family tree stemming from that particle.
Since |Z| is a P^ω-martingale for any ω ∈ Ω, E^ω(|Z_n|) = 1 for n ≥ 1, and so

$$E^{\omega}(|Z_n| \mid S_n) = \frac{1}{P^{\omega}(S_n)}.$$

In fact, we suspect that on a set of full P_p-measure, under P^ω(· | S_n), the law of |Z_n|/n converges to the exponential distribution with mean q/2. (Cf. Theorem C(ii) in [9].)

(ii) Subcritical case: Now the situation is very different. In this case a spatial strategy, that is, the avoidance of vacant sites, does make sense, since those sites are now 'more lethal.' Unlike in the critical case, the result now differs from what the non-spatial toy model would give us, namely, an exponentially decaying survival probability, by the previously mentioned result of Heathcote, Seneta and Vere-Jones (Theorem B in [9]), under the L log L condition. In our spatial setting, the survival probability has thus improved! Finally, we note that in [4], in the annealed case, the second-order survival asymptotics has also been observed through simulations. Those simulation results suggest that spatial survival strategies do exist, which are not detectable at the logarithmic scale but are visible at the second-order level.
Some preliminary results
In this section we present two simple statements concerning our branching random walk model which were proven in [4], and also some a priori bounds.
Although [4] only handles the critical case, the same proof carries through for the subcritical case as well. The proof only uses the fact that if ϕ is the generating function of the offspring distribution, then ϕ(z) ≥ z on [0, 1]. This remains the case for subcritical branching too, since ϕ(1) = 1, ϕ′(1) < 1 and ϕ is convex on the interval.
Lemma B (Extinction (Theorem 2.2 in [4])). Let 0 ≤ p < 1 and let A denote the event that the population survives forever. Then, for P_p-almost every environment, P^ω(A) = 0.
Again, [4] only handles the critical case, but the same proof carries through for the subcritical case as well. (One then uses that the population size is a supermartingale, instead of a martingale.) Lemma A yields the following a priori bounds.
(i) Critical case: On a set of full P_p-measure, as n → ∞.

(ii) Subcritical case: On a set of full P_p-measure,

Proof. (i) In the critical case, the lower bound follows by comparing with the p = 0 (no obstacles) case, when survival is less likely, and for which the non-spatial result of Kolmogorov applies. Finally, use the Markov inequality to get (2.2).
(ii) In the subcritical case (µ < 1), the proof is very similar, taking into account the well known result of Heathcote, Seneta and Vere-Jones (see Theorem B in [9]) that under the L log L condition, (2.3) holds with limit instead of lim inf for p = 0.
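For comparison with the subcritical toy model, iterating a subcritical generating function exhibits the exponential decay P(S_n) ≈ c · m^n guaranteed by the Heathcote-Seneta-Vere-Jones result, where m < 1 is the per-step offspring mean. This is a sketch; the offspring law (0 or 2 children at a branching event) and all parameters are illustrative.

```python
# Subcritical toy model: branching happens with probability q and is
# then binary (0 or 2 offspring), tuned so that the offspring mean at
# a branching event is mu < 1. Iterating e_n = phi(e_{n-1}) shows
# P(S_n)/m^n stabilising at a positive constant (exponential decay),
# in contrast with the stretched-exponential spatial rate.
q, mu = 0.7, 0.8
p2 = mu / 2                       # P(2 offspring) at a branching event
phi = lambda z: (1 - q) * z + q * ((1 - p2) + p2 * z * z)
m = (1 - q) + q * mu              # per-step offspring mean, phi'(1)

e, ratios = 0.0, []
for n in range(1, 121):
    e = phi(e)
    ratios.append((1.0 - e) / m ** n)

print(ratios[-1])  # the stabilised ratio, i.e. the constant c
```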
Further preparation: Size-biasing and spine in the critical case
Consider the critical case in this section. Given (1.5), the asymptotic relation under (1.1) is tantamount to

(3.1) $E^{\omega}(|Z_n| \mid S_n) = \frac{qn}{2}\,(1 + o(1)),$

as n → ∞. We will actually prove that (3.1) holds on a set of full P_p-measure.
In the particular case when q = 1 (branching always takes place) and in a non-spatial setting, this has been shown in [9] (see formula (4.1) and its proof on p. 1132). We will show how to modify the proof in [9] for our case.
(In the subcritical case, we will also reduce the question to the study of the behavior of E ω (|Z n | | S n ) as n → ∞.)
3.1. Left-right labeling. At every time of fission, randomly (and independently from everything else in the model) assign 'left' or 'right' labels to the two offspring. So, from now on, every time we write P^ω(· | S_n), we will actually mean, with a slight abuse of notation, P^ω(· | S_n) augmented with the choice of the labels; we will handle P^ω(· ∩ S_n) similarly. Ignoring space, and looking only at the genealogical tree, we say that at time n a particle is 'to the left' of another one if, tracing them back to their most recent common ancestor, the first particle is the descendant of the left particle right after the fission. Transitivity is easy to check, and thus a total ordering of the particles at time n is induced.
3.2. The size-biased critical branching random walk. Recall that if {p_k}_{k≥0} is a probability distribution on the nonnegative integers with expectation m ∈ (0, ∞), then the corresponding size-biased distribution is defined by p̂_k := k p_k/m for k ≥ 1. We will denote the size-biased law obtained from L by L̂.
Given the environment ω, the size-biased critical branching random walk, with corresponding law $\widehat P^{\omega}$, is as follows.
• The initial particle does not branch until the first time it steps on a vacant site, at which moment it splits into a random number offspring according to L. • One of the offspring is picked uniformly (independently from everything else) to be designated as the 'spine offspring.' The other offspring launch copies of the original branching random walk (with ω being translated according to the position of the site). • Whenever the 'spine offspring' is situated next time at a vacant site, it splits into a random number offspring according to L, and the above procedure is repeated, etc.
Definition 4 (Spine). The distinguished line of decent formed by the successive spine offspring will be called the spine.
Note the following.
(i) (Survival) Because of the size biasing, the new process is immortal $\widehat P^{\omega}$-a.s.

(ii) (Martingale change of measure) For any given ω, the law of the size-biased critical branching random walk satisfies

$$\left.\frac{d\widehat P^{\omega}}{dP^{\omega}}\right|_{\mathcal F_n} = |Z_n|,$$

where {F_n}_{n≥0} is the natural filtration of the branching random walk, and the left-hand side is a Radon-Nikodym derivative on F_n. This is a change of measure by the nonnegative, unit mean martingale |Z|. The proof is essentially the same as in [9]. Even though in that paper the setting is non-spatial, it is easy to check that the proof carries through in our case, because the mean offspring number is always one, irrespective of the site. (See p. 1128 in [9].) In particular, when the critical law L is binary (either 0 or 2 offspring, with equal probabilities), the law L̂ is deterministic, namely it is dyadic (that is, 2 offspring with probability one). In this case, the spine particle always splits into two at vacant sites.
In addition to $\widehat P^{\omega}$, we also define the law $\widehat P^{\omega}_*$, which is the distribution of the size-biased branching random walk, augmented with the designation of the spine within it. The corresponding augmented filtration {G_n}_{n≥0} is richer than {F_n}_{n≥0}, as it now keeps track of the position of the spine as well.
The significance of the new law $\widehat P^{\omega}_*$ is as follows. Let us denote the spine's path up to n by {X_i}_{0≤i≤n}. Let A_n := {the spine particle is the leftmost particle of Z_n}.

Then size biasing and conditioning on A_n has the combined effect of simply conditioning the process on survival up to n. That is, the distribution of Z restricted to F_n is the same under P^ω(· | S_n) and under $\widehat P^{\omega}_*(\cdot \mid A_n)$. To see why this is true, let C_{n,k} := {|Z_n| = k}. One has for F ∈ F_n that

$$\widehat P^{\omega}_*(F \cap C_{n,k} \cap A_n) = k\,P^{\omega}(F \cap C_{n,k})\cdot\frac{1}{k} = P^{\omega}(F \cap C_{n,k}),$$

since, given F_n, the spine is uniformly distributed among the k particles. Summing over k ≥ 1,

(3.2) $$\widehat P^{\omega}_*(F \cap A_n) = P^{\omega}(F \cap S_n);$$

in particular, $\widehat P^{\omega}_*(A_n) = P^{\omega}(S_n)$.
3.3. Frequency of vacant sites along the spine in the critical case. Let the branching be critical, and let

$$L_n := \#\{1 \le i \le n : X_i \in K^c\}$$

denote the (random) amount of time spent by X (the spine) on vacant sites between times 1 and n.
Lemma 5 (Frequency of visiting vacant sites). On a set of full P_p-measure, for every ε > 0,

$$\lim_{n\to\infty} \widehat P^{\omega}_*\Big(\Big|\frac{L_n}{n} - q\Big| > \varepsilon \,\Big|\, A_n\Big) = 0.$$

Proof. Fix ε > 0 and let

$$F_{n,\varepsilon} := \bigcup_{1\le i\le |Z_n|} \Big\{\Big|\frac{L^i_n}{n} - q\Big| > \varepsilon\Big\} \quad \text{on } S_n,$$

where L^i_n is defined similarly to L_n for Z_{i,n}, the i-th particle in Z_n on S_n; we define F_{n,ε} := ∅ on S^c_n. Since A_n concerns only labeling and is independent of the event {|L_n/n − q| > ε}, one has

$$\widehat P^{\omega}_*\Big(\Big|\frac{L_n}{n} - q\Big| > \varepsilon \,\Big|\, A_n\Big) = \widehat P^{\omega}_*\Big(\Big|\frac{L_n}{n} - q\Big| > \varepsilon\Big) \le \widehat P^{\omega}_*(F_{n,\varepsilon}),$$

and switching back to the original measure now, the right-hand side equals $E^{\omega}\big(|Z_n|\, I_{F_{n,\varepsilon}}\big)$. In view of Corollary 3(i), it is sufficient to show that on a set of full P_p-measure, P^ω(F_{n,ε}) = o(1/n), as n → ∞.
To this end, let f be a positive function on the positive integers, to be specified later. Using the union bound, where T_n denotes the time spent on vacant sites between times 1 and n by a simple random walk in the environment ω, starting at the origin, with corresponding probability Q^ω. We will use Q_p for the law of the environment. By Corollary 3(i), on a set of full P_p-measure, the second term on the right-hand side is O(n/f(n)) as n → ∞. Therefore it is enough to find a function f such that both requirements hold on a set of full Q_p-measure. Observe that it is sufficient to verify the upper tail large deviations. Indeed, the lower tail large deviations for the time spent in K^c can be handled similarly, since they are exactly the upper tail large deviations for the time spent in K.
The statement reduces to one about a d-dimensional random walk in random scenery. We now have to consider a scenery that assigns the value 1 to each lattice point with probability q, and the value 0 otherwise (that is, the scenery is the indicator of vacancy). With a slight abuse of notation, we will still use Q^ω and Q_p for the corresponding laws.
Since it was easier to locate the corresponding annealed result in the literature (see also the remark at the end of the proof), we will use that one, and then show how one easily gets the quenched statement from the annealed one.
To this end, define the random variable Y* := Y − q, where Y is the 'scenery variable,' that is, Y = 1 with probability q and Y = 0 otherwise. Then Y* is centered, and one can apply Theorem 1.3 in [5], yielding that for ε > 0,

$$(\mathbb E_{Q_p} \otimes Q^{\omega})\Big(\frac{T_n}{n} > q + \varepsilon\Big) = \exp\Big[-C\, n^{\frac{d}{d+2}}\,(1+o(1))\Big].$$

We now easily obtain the quenched result too, since for any positive sequence {a_n}_{n≥0}, the Markov inequality yields

$$Q_p\Big(Q^{\omega}\Big(\frac{T_n}{n} > q + \varepsilon\Big) > a_n\Big) \le \frac{1}{a_n}\,(\mathbb E_{Q_p} \otimes Q^{\omega})\Big(\frac{T_n}{n} > q + \varepsilon\Big).$$

Given that $(\mathbb E_{Q_p} \otimes Q^{\omega})\big(\frac{T_n}{n} > q + \varepsilon\big) = \exp\big[-C n^{\frac{d}{d+2}}(1+o(1))\big]$, to finish the proof, we can pick any sequence {a_n} satisfying

$$\sum_n a_n^{-1} \exp\Big[-C\, n^{\frac{d}{d+2}}\Big] < \infty.$$

Then, by the Borel-Cantelli Lemma, $Q_p\big(Q^{\omega}\big(\frac{T_n}{n} > q + \varepsilon\big) > a_n \text{ occurs finitely often}\big) = 1$.
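The concentration of T_n/n around q that drives this large-deviation estimate can be illustrated with a quick Monte Carlo experiment (annealed, d = 1 for brevity; all parameters are illustrative).

```python
import random

# Random walk in random scenery: the fraction of time T_n/n a simple
# random walk spends on vacant sites concentrates around the vacancy
# density q. A fresh IID Bernoulli(q) scenery is drawn for every walk,
# so the average below estimates the annealed mean, which equals q.
rng = random.Random(0)
q, n, walks = 0.6, 2000, 300

def one_walk():
    scenery = {}  # 'vacancy' scenery, sampled lazily along the path
    x = t = 0
    for _ in range(n):
        x += rng.choice((-1, 1))
        if x not in scenery:
            scenery[x] = rng.random() < q
        t += scenery[x]
    return t / n

mean = sum(one_walk() for _ in range(walks)) / walks
print(mean)  # close to q = 0.6
```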
Remark 6. Regarding RWRS, we note that in Theorem 2.3 in [3] the quenched large deviations have been studied in a more continuous version, namely for a Brownian motion in a random scenery, where the scenery is constant on blocks in R d .
Proof of Theorem 2: critical case
Our goal is to verify (3.1). To this end, note that the spine has a (nonnegative) number of 'left bushes' and a (nonnegative) number of 'right bushes' attached to it; each such bush is a branching tree itself. A 'left bush' ('right bush') is formed by particles which are to the left (right) of the spine particle at time n. It is clear that, under conditioning on A n , each left bush dies out completely by time n (see Fig. 5.1).
Because of (3.2), we are left with the task of showing that The proof of this statement is similar to the proof of (4.1) in [9] (with σ 2 = 1), except that, as we will see, now we also have to show that (4.2) E ω * (number of all bushes along the spine) = qn(1 + o (1)). The reason is that in [9], the spine particle branched at every unit time, which is not the case now. In our case, the spine {X i } 1≤n splits into two at each vacant site and thus bushes are attached each time (larger than zero and smaller than n) when X is at a vacant site.
For n ≥ 1 given, define the set of indices J_n := {0 < j < n : X_j ∈ K^c} (the times at which bushes may be attached to the spine), and let
• (LB)_j be the event that there is a left bush launched from the space-time point (X_j, j);
• (RB)_j be the event that there is a right bush launched from (X_j, j);
• (LBE)_j be the event that there is a left bush launched from (X_j, j) which becomes extinct by n;
• A_{n,j} := (RB)_j ∪ (LBE)_j.
Then

(4.3) $$A_n = \bigcap_{j \in J_n} A_{n,j},$$

where the events in the intersection are independent under $\widehat P^{\omega}_*$. (See again Fig. 5.1.) Conditioning on A_n can be obtained by conditioning successively on A_{n,j}, j ∈ J_n.
For j ∈ J n , let the random variable R n,j be the 'right-contribution' of the j th bush to |Z n |. That is, R n,j = 0 on (LB) j , and on (RB) j it is the contribution of the right bush, stemming from (X j , j), to |Z n |. The 'left contribution' S n,j is defined similarly, and Z n,j := R n,j + S n,j is the total contribution. Note that A n,j = {S n,j = 0}.
Let {R̄_{n,j}}_{j∈J_n} be independent random variables under a law $\bar Q^{\omega}$ such that $\bar Q^{\omega}(\bar R_{n,j} \in \cdot) = \widehat P^{\omega}_*(R_{n,j} \in \cdot \mid A_{n,j})$, and let $Q^{\omega} := \widehat P^{\omega}_* \times \bar Q^{\omega}$, with expectation $E^{\omega}_Q$. Furthermore, let $R^*_{n,j} := I_{A_{n,j}} R_{n,j} + I_{A^c_{n,j}} \bar R_{n,j}$, and $R^*_n := \sum_{j\in J_n} R^*_{n,j}$. Then, for j ∈ J_n,

(4.4) $\widehat P^{\omega}_*(Z_{n,j} \in \cdot \mid A_{n,j}) = \widehat P^{\omega}_*(R_{n,j} + S_{n,j} \in \cdot \mid A_{n,j}) = Q^{\omega}(R^*_{n,j} \in \cdot).$

(The S_{n,j} term in the second probability has zero contribution.) Using (4.3) along with (4.4), it follows that the desired assertion (4.1) will follow once we show that, on a set of full P_p-measure,

$$\lim_{n\to\infty} \frac{E^{\omega}_Q(R^*_n)}{n} = \frac{q}{2}.$$

Denoting $R_n := \sum_{j\in J_n} R_{n,j}$, the same proof as in [9] reveals that

(4.5) $E^{\omega}_Q(R^*_n) - \widehat E^{\omega}_*(R_n) = o(n).$

(The intuitive reason is that A^c_{n,j} = {S_{n,j} > 0}, while the probability of the survival of a bush tends to zero as the height of the bush tends to infinity; thus A^c_{n,j} only occurs rarely. The fact that now we do not have a bush launched at every position of the spine makes the estimated term even smaller.) In view of (4.5), it is sufficient to show that $\lim_{n\to\infty} \widehat E^{\omega}_*(R_n/n) = q/2$. Since the {R_{n,j}}_{j∈J_n} are independent of |J_n| and $\widehat E^{\omega}_* R_{n,j} = \frac12$ for each j ∈ J_n (as each bush is equally likely to be left or right under $\widehat P^{\omega}_*$), one has $\widehat E^{\omega}_*(R_n) = \frac12\, \widehat E^{\omega}_*(|J_n|)$, where we are using the notation of Lemma 5 (|J_{n+1}| = L_n). Hence, our goal is to show that $\lim_{n\to\infty} \widehat E^{\omega}_*(L_n)/n = q$. Now use Lemma 5. Since the first probability on the right-hand side is 1 − o(1) and the second is o(1), and since 0 ≤ L_n ≤ n, we have that

$$q - \varepsilon + o(1) \le \frac{\widehat E^{\omega}_*(L_n)}{n} \le q + \varepsilon + o(1).$$

Since ε is arbitrary, we are done.
Proof of Theorem 2 - subcritical case
Recall the definition of Q^ω = Q^ω_q from the proof of Lemma 5, and that K is the 'total obstacle configuration.' Just like in Subsection 3.3, let T_n be the time spent in K^c (vacant sites).
Let Y be a simple random walk on Z^d with 'soft killing/obstacles' under Q^ω_q, and let E^ω_q denote the corresponding expectation. By 'soft killing' we mean that at each vacant site, independently, the particle is killed with probability 1 − µ. Let

DV_q(n) = DV(ω, q, n, µ) := E^ω_q(µ^{∑_{i=1}^n 1_{K^c}(Y_i)}) = E^ω_q(µ^{T_n})

be the quenched probability that Y survives up to time n. Hence q plays the role of the 'intensity' and µ plays the role of the 'shape function' in this discrete setting.
It is known that, for almost every ω, as n → ∞,

(5.1) DV_q(n) = exp(−C_{d,q}(1 + o(1)) n (log n)^{−2/d}),

where C_{d,q} > 0 does not depend on µ. See formula (0.1) on p. 58 of [1] for hard obstacles. In fact, the proof for hard obstacles extends to soft obstacles. Indeed, it becomes easier, since in the case of soft obstacles one does not have to worry about 'percolation effects,' that is, that the starting point of the process is perhaps not in an infinite trap-free region. Clearly, the lower estimate for survival among hard obstacles is still valid for soft obstacles; the method of proving the upper estimate is a discretized version of Sznitman's 'enlargement of obstacles' in both cases. (See also [2] for similar results and for the enlargement technique in the discrete setting.) Returning to our branching process and the event S_n, we first show that, on a set of full P_p-measure,

(5.2) P^ω(S_n) = DV_q(n) / E^ω(|Z_n| | S_n).
The expectation E^ω|Z_n| can in fact be expressed as a functional of a single particle (this is the 'Many-To-One' Lemma for branching random walk): E^ω|Z_n| = E^ω_q(µ^{T_n}) = DV_q(n). This follows from the fact that, for u_n(x) = u^ω_n(x) := E^ω_x|Z_n|, one has the recursion

u_n(x) = ∑_{y∼x} u_{n−1}(y) (I_{y∈K} + µ I_{y∈K^c}) p(x, y),

where y ∼ x means that y is a neighbor of x, and p(·,·) is the one-step kernel for the walk. (See also [10].) Thus

(5.3) P^ω(S_n) E^ω(|Z_n| | S_n) = E^ω|Z_n| = E^ω_q(µ^{T_n}),

proving (5.2). Since the denominator on the right-hand side of (5.2) is at least one, it follows that

(5.4) P^ω(S_n) ≤ DV^µ_q(n),

where we emphasize the dependence on µ.
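Iterating the recursion (with u_0 ≡ 1) makes the identity transparent. A condensed restatement, writing E^ω_{q,x} for the expectation of the walk started at x (a notational assumption):

```latex
u_n(x) \;=\; E^\omega_{q,x}\!\left[\prod_{i=1}^{n}\left(\mathbf{1}_{\{Y_i\in K\}}+\mu\,\mathbf{1}_{\{Y_i\in K^c\}}\right)\right]
      \;=\; E^\omega_{q,x}\!\left[\mu^{\sum_{i=1}^{n}\mathbf{1}_{K^c}(Y_i)}\right]
      \;=\; E^\omega_{q,x}\!\left(\mu^{T_n}\right),
```

since each factor equals µ exactly when the walk sits on a vacant site and equals 1 otherwise.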
On the other hand, we claim that

(5.5) P^ω(S_n) ≥ DV^{µ*}_q(n),

where µ* := 1 − p_0, and p_0 > 0 is the probability of having zero offspring (i.e. death) for the law L.
The reason (5.5) is true is the following coupling argument: every time the particle is at a vacant site, with probability 1 − p_0 we can pick, uniformly at random (independently of everything else), a 'successor,' thus effectively embedding a random walk with killing into the branching random walk. Clearly, the probability that we are able to complete this procedure until time n is exactly the probability that a single random walk with soft killing survives, where soft killing means that the walk is killed with probability p_0, independently at each vacant site.
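In one display, the embedding gives (a condensed restatement; T_n is the time spent on vacant sites as above):

```latex
P^\omega(S_n)\;\ge\;E^\omega_q\!\left((1-p_0)^{T_n}\right)\;=\;E^\omega_q\!\left((\mu^*)^{T_n}\right)\;=\;DV^{\mu^*}_q(n),
```

since at each vacant site the embedded walk survives with probability 1 − p_0 = µ*, independently.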
It is also evident that if we are able to complete this procedure until time n then the branching process has survived up to time n.
Having (5.4) and (5.5) at our disposal, we can now conclude the assertion of Theorem 2(ii), because DV^µ_q(n) and DV^{µ*}_q(n) both have the asymptotic behavior given in (5.1), despite the fact that µ > µ* in general.
Implementation of Mobile Phone Data Collection in the Conduct of the EPI Comprehensive Review in East and Southern African Countries
Mobile phone data collection tools are increasingly used for collecting, collating and analysing data in the health sector. In this paper, we document the experience with mobile phone data collection, collation and analysis during the EPI comprehensive review in 5 countries of the East and Southern African sub-region, using Open Data Kit (ODK): questionnaires were designed and coded as XML forms, uploaded to Android-based mobile phones for data collection, with a web-based system to monitor data in real time. The ODK interface supports real-time monitoring of the flow of data, detection of missing or incomplete data, capture of the coordinates of all locations visited, and embedded charts for basic analysis. It also minimized data quality errors at entry level through validation rules and constraints built into the checklist. These benefits, combined with the improvements that mobile phones offer over paper-based methods in terms of timeliness, data loss, collation, and real-time data collection, analysis and uploading, make mobile phone data collection a feasible method that should be further explored in the conduct of all surveys in the organization.
Introduction
A National Immunization Programme Review, also referred to as an Expanded Programme on Immunization (EPI) Review, is a comprehensive assessment of the strengths and weaknesses of an immunization programme at national, subnational and service-delivery levels, with the aim of providing evidence to guide the programme's strategic directions and prioritize activities 7 . The review requires collection of data through administering questionnaires; observations; and review of data, documents and reports, such as tally sheets and immunization registers, monthly reports and electronic databases of health management information system (HMIS) data (e.g. DHIS2). During previous EPI reviews, standard paper-based questionnaires were used to collect information, which required further collation and analysis, a laborious and time-consuming process 7 . A paper-based system requires data to be entered manually, leading to errors in the data and delays in data cleaning and analysis, which ultimately prevents same-day, real-time availability of information 8,12 . A study in Pemba (Zanzibar, Tanzania) raised concerns about the time interval between data collection and checking, leading to delays in resolving data issues. It also noted omission errors and illogical data, as well as inconsistency in enrolment criteria resulting in eligible clients being missed, which made tracking the proportion of missed clients difficult 12 . Use of a paper-based system has been reported to be time-consuming and difficult when dealing with large data sets 2 . Another challenge of the paper-based approach in the conduct of EPI reviews was the delay between data collection and the collation and analysis of data used for preparing the report, which usually took days, with missing and incomplete data sets affecting the quality of the feedback used for the report 9 .
To overcome the above-mentioned challenges with the paper-based approach, there was an attempt to use data collection applications on smartphones and tablets in the conduct of the EPI Review in Tanzania in 2015. This attempt met with challenges, including the time needed to translate the tools into a mobile application, difficulty in transmitting data due to poor internet services, and the timing of training 6 . The use of such technology is not limited to surveys but has also been applied in various clinical settings. For instance, studies conducted across different country settings have investigated the use of cell phones on the patient end to generate feedback for improved chronic illness care and monitoring 5,12 and for monitoring pregnant women in remote areas of Liberia 12 . Furthermore, innovative technologies such as personal digital assistants (PDAs) have been used to document vital health information and health-seeking behaviour among 21,000 rural dwellers in the southern provinces of Tanzania 11 . Additionally, other studies have investigated linking smartphones to web applications for epidemiology, ecology and community data collection 1 or their use in clinical microscopy for global health applications 3 . In general, in poor-resource settings, the Open Data Kit (ODK) is commonly used because it is free, easy to implement in terms of coding, works in real time both online and offline (during data collection) without the need for an internet connection, and is user-friendly for the interviewer.
In order to improve real-time information on surveillance and other health service-related issues, the Regional Office of the World Health Organization has established a Geographic Information Systems (GIS) centre and has directed that the Polio and IVD programmes initiate the use of online surveillance tools (active case search and integrated supportive supervisory tools) from 2017. As of week 31, 2020, a total of 83,981 cumulative supportive supervision visits had been conducted in 44 African countries, compared to 62,456 cumulative supportive supervision visits in 2019 (ref). Despite the increase in the availability of real-time AFP active surveillance data (same day of visit) that improved programme decisions, there are no published studies on the use of mobile phones as a data collection tool in the conduct of EPI reviews in the African region.
This paper documents the implementation of mobile phone data collection in the conduct of EPI comprehensive reviews, addressing the gaps and limitations. This work also explores opportunities towards ensuring timely, accurate and real-time data to support decision-making by programme managers and improve programme performance.
Study design
This is a primary data analysis of the EPI Comprehensive Review checklists, conducted in the 5 countries of the sub-region that implemented the mobile phone data collection method between September 2017 and September 2018.
Study area
The study was conducted in five countries, namely South Sudan, South Africa, Ethiopia, Mauritius, and Kenya. The countries have estimated <1-year and <15-year populations of 6,013,192 and 78,100,994 respectively. Currently, a total of 5 countries in the sub-region have implemented the use of ODK during the conduct of the review, with over 70 different questionnaires containing over 3,500 records already available on the WHO AFRO server.
Study population
A total of 5 countries in the sub-region were selected and included in the study, based on whether a country had used paper-based or electronic ODK-based questionnaires during its most recent EPI review conducted between September 2017 and September 2018.
Open Data Kit (ODK): The Mobile Phone Data Collection System
Open Data Kit (ODK) is a set of tools that supports data collection using mobile and handheld devices, with the capability of submitting data to an online server even without an Internet connection or mobile carrier service at the point of collection. Data collected in the field with ODK Collect can be uploaded and managed using ODK Aggregate, the intermediary platform used as server storage, which accepts data and relays it to external applications. ODK Aggregate enables the user to download a specified dataset in different file formats, such as XLS, HTML, JSON and CSV. It can be hosted on Google App Engine, allowing the data collected to be managed remotely in real time. Created by developers at the University of Washington's Computer Science and Engineering department and members of Change, Open Data Kit is an open-source project available to all. The user has the option to save the data as Complete or Incomplete. If the questionnaire is completed and saved as COMPLETE, the data moves to the SEND FINALISED FORM screen, from which it is automatically uploaded to the server. If there is no mobile network coverage, completed questionnaires are stored securely on the phone until a signal is found and the data is automatically uploaded. ODK can incorporate multiple choice, free text, numeric, date, time and other question types (see Figure 1).
In addition, ODK can accommodate data entry constraints, skip logic and enforced validation in the field to reduce errors at the entry level. The data are uploaded using low-cost general packet radio service (GPRS). The ODK platform has a web-based interface that facilitates the review of data and the export of results in standard file formats such as comma-separated values (CSV), Microsoft Excel and Microsoft Access. The data teams communicated with all the teams in the field directly, either through a call to the mobile phone or through SMS messaging. A WhatsApp group was also created to ease communication and provide support to the teams in the field. A dashboard was created on the ODK web-based interface with basic indicators from the survey, automated in real time, which gave the programme officers the ability to monitor the status of data submission while the survey was ongoing (including locations visited based on coordinates taken, the number of records available on the server, etc.).
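The entry-level gatekeeping that an ODK form encodes declaratively (constraints, skip logic, enforced validation) can be sketched procedurally. The field names, question flow and 0-200 age bound below are illustrative assumptions, not taken from the actual review checklist:

```python
# Minimal sketch of the validation logic an ODK form declares:
# a constraint rejects impossible values at entry time, and skip
# logic routes the interview based on earlier answers.
# Field names and the age bound are hypothetical.

def validate_age(value: int) -> bool:
    """Constraint: age must fall in a plausible range."""
    return 0 <= value <= 200

def next_question(answers: dict) -> str:
    """Skip logic: only ask dose details if a card was seen."""
    if answers.get("card_available") == "yes":
        return "doses_recorded"
    return "reason_no_card"

assert validate_age(11)
assert not validate_age(-3)
assert next_question({"card_available": "no"}) == "reason_no_card"
```

In a real form these rules live in the form definition itself (constraint and relevance expressions), so every device enforces them identically, even offline.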
All the data were sent to the WHO AFRO server, which was secured by firewalls to prevent unauthorized access and denial-of-service attacks. Access to the ODK web interface is protected by passwords. For the purpose of these reviews, access to the data was granted to all programme officers with read-only rights, while the data managers had administrative rights that allow for editing (where necessary).
Data collection and Training
Different teams (between 6 and 8) were formed, comprising people with different specialties (epidemiologists, communications specialists, data managers, logisticians, etc.), and were assigned to different provinces.
The data were collected by the teams at all the agreed levels (national, regional, district and health facility). The focal person for each of the thematic areas (surveillance, EPI, cold chain, data management, communications) at each level was interviewed. All interviews were conducted face-to-face, in the offices and locations of the focal persons, using standard questionnaires.
Android-based mobile phones/tablets were provided for the data collection process, and most of the team members had previous experience using Android-based smartphones. Training on the use and configuration of the Open Data Kit (ODK) was conducted over a two-day period before exposing team members to the data collection protocol. The training consisted of a general orientation on using the phone and its data collection application (ODK), downloading and accessing the questionnaires on the phone, the data dictionary for each question on the questionnaire, and standard care of the device. The training package also covered troubleshooting mobile devices, as well as steps for configuring new or existing devices, to ensure that all data collectors were able to deal with basic technical issues that might arise with the use of the mobile phones in the field. Survey questions were pilot tested during the training prior to implementation to ensure that all the skip logic, constraints and validation applied were working, and that the questions were understandable to respondents. The data tools (questionnaires) were reviewed based on comments from data collection team members and programme officers from government and partner agencies. The method of data collection was an interviewer-administered questionnaire. Standard protocols for administering the questionnaires were developed, which provided uniform guidelines on how to go through the questionnaires.
The data collection phase was conducted within five working days in all the countries except South Sudan (where it varied due to insecurity and the availability of logistics). Data were shared directly from the field in real time (where there was network coverage) and data flow was monitored by the data managers, who identified missing information (if any) and provided daily feedback to the teams and the programme managers on the status of data on the server; inconsistencies and missing information were communicated, rectified and cleaned in a timely manner. No device failure or application problem was recorded throughout the course of the survey, except for a few delays in getting geo-coordinates in some locations, which were addressed immediately.
Results
The use of mobile-based Android devices in the conduct of the EPI reviews showed a significant improvement in the timeliness and completeness of the data, with real-time submission of the data from the field, as seen in the table below. The timeliness and completeness of the data were greater than 95% in all the implementing countries, the proportion of errors found in the data after collation improved from an average of 0.01 to 0.003, and the number of days taken to collate the data for analysis was 1 day (data submission was real-time). Standard charts and maps were automatically created on the web-based platform and saved to the dashboard, which allowed the review lead team to visualize real-time outputs of the location of visits, the number of records per location, and the geo-codes of areas where the questionnaires were administered on a daily basis. Outputs can also be generated for analysis with external tools (e.g. Excel) as often as the user prefers.
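Summary figures like completeness are straightforward to recompute from an ODK CSV export. A minimal sketch, assuming a hypothetical export with the columns shown (blank cells treated as missing):

```python
import csv
from io import StringIO

# Hypothetical export: one row per submitted checklist record.
export = StringIO(
    "record_id,facility,doses_given\n"
    "1,HF-A,30\n"
    "2,HF-B,\n"        # blank cell: a missing value
    "3,HF-C,25\n"
)

rows = list(csv.DictReader(export))
fields = list(rows[0].keys())
total_cells = len(rows) * len(fields)
missing = sum(1 for r in rows for f in fields if r[f] == "")
completeness = 1 - missing / total_cells
print(f"completeness: {completeness:.2%}")  # completeness: 88.89%
```

The same pass over the export can count validation failures to derive an error proportion per country.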
Alternatively, built-in graphs and reports on the ODK web-based interface permit real-time visualization of the data, including maps and charts (Figure 3), which can be saved to the dashboard for real-time data visualization.
The use of the mobile phone and the ODK web interface gave the external team leads the opportunity to monitor work rate and the geographical distribution of the sites visited in each location (Figure 3). The automatic send option was enabled on all the mobile phones, which allowed completed questionnaires to be shared automatically whenever there was a network. Daily meetings between the data manager and the national team, chaired by the external team lead, were held to review the status of data available on the server and share any experiences/difficulties reported from the field for correction. A daily summary of data available on the server was shared with all teams for follow-up and feedback.
One of the major advantages of the mobile phone data collection method was the real-time detection of suspicious data entry and data falsification. The exported data report provides, for each record, the time a specific questionnaire was administered, the device_id, latitude, longitude, duration (the timestamp analysis), etc., which can be used to calculate the time taken to finish administering the questionnaire (Fig. 4).
There was a scenario in which it was discovered that, at a health facility conducting a live routine immunization session, it took the surveyor less than 5 minutes to administer the checklist (a short time given expectations about how long it would reasonably take to observe a session, conduct client exit interviews, etc.). The team lead was immediately alerted to validate the visit and called the attention of the surveyor to understand the situation; it was later found that the surveyor had finished the activity first and then filled in the checklist all at once. This information was used to further sensitize the other members on the best approach to handling the mobile devices.
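The timestamp check described in this scenario can be automated over the exported records. A minimal sketch, assuming hypothetical `start`/`end` metadata columns in ISO format:

```python
from datetime import datetime, timedelta

# Hypothetical submission metadata: (device_id, start, end).
submissions = [
    ("dev-01", "2018-03-05T09:00:00", "2018-03-05T09:42:10"),
    ("dev-02", "2018-03-05T10:15:00", "2018-03-05T10:18:30"),
]

MIN_DURATION = timedelta(minutes=5)
FMT = "%Y-%m-%dT%H:%M:%S"

def too_fast(start: str, end: str) -> bool:
    """Flag checklists finished faster than plausibly possible."""
    return datetime.strptime(end, FMT) - datetime.strptime(start, FMT) < MIN_DURATION

flagged = [dev for dev, s, e in submissions if too_fast(s, e)]
print(flagged)  # ['dev-02']
```

The 5-minute threshold mirrors the scenario above; in practice it would be set per checklist based on expected administration time.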
Some of the challenges identified during the survey included power: some of the mobile phones could not last the whole day and went off during data collection. Additionally, the lack of network and internet connection in some locations meant that some surveyors could not share the data from the field and had to wait until they reached an area with good network coverage, which resulted in delays in submitting completed questionnaires.
Discussion
In general, our findings show that mobile phone-based data collection (using ODK) greatly improved real-time data collection during the review. This is a great improvement when compared to previous reviews: data collation took 3 days in the 2016 EPI review for Mozambique 15 , and some of the data from health facilities were lost during collation, affecting the completeness of the data 13 . The reason for the remarkable improvement compared to the paper-based approach is that the ODK application platform (both the mobile and web-based interfaces) enhanced real-time supervision and monitoring of the flow of data, which markedly improved our ability to detect data fabrication and suspicious data entries 4 . Furthermore, the number of errors during data entry was low because of the inherent data entry constraints, skip logic and enforced validation in the ODK application, unlike the paper-based approach, which has been reported to be prone to errors of omission 4 .
Nevertheless, it is pertinent to note that the application does not eliminate data fabrication, which could still go undetected. For example, a data collector could key in wrong figures for population, and as long as they are numbers (integers) this will not be detected as an error. This disadvantage might be reduced through verification, because the application's GPS capabilities allowed supervisors to know the locations visited by the surveyor, as geo-codes of each location were required before and after the completion of the data entry. Hence, any suspicious entry can be revalidated, since the coordinates for each location were recorded and therefore traceable for verification. The web-based interface has other advantages: automated graphs and charts deployed on the dashboard and real-time information allowed the team lead and the programme managers to focus their time on other aspects of the review and on solving logistical difficulties in the field.
In the conduct of these reviews, in all the countries except Mauritius, we used existing mobile phones that had been purchased for other programmes to enter and upload data at the point of collection. This allowed us to implement the process with little or no direct cost for phones. While it may be difficult to compare the costs incurred with those of a paper-based approach in this study, it is likely that the cost of using mobile phones compares favourably. The cost-effectiveness of the system needs to be explored in future studies that compare the paper-based and phone-based data collection approaches.
The automatic sending of data from the mobile phone to the server significantly increased the timeliness of data submission. In our study, there was no data loss, and data that were not sent automatically (due to lack of internet services in the field) were later sent from areas with internet connectivity. This further underscores the benefits of using innovative technology to collect data. This finding was, however, inconsistent with other studies, in which loss of data due to technical problems, damage, theft or loss of phones was reported 10 . In our study, the loss of data was minimized partly because of the short time lapse between data collection and data upload.
The challenge of power to charge the phones in the field could be mitigated by carrying power banks, while the sending of data could be delayed until the team reached a location with good internet access. One limitation of this study is that the design did not include a comparison of the time spent and costs incurred using the paper-based versus the mobile phone data collection approach, which calls for further studies using a control group to compare the two systems. Nonetheless, automatic sending of data wherever network/internet access exists, together with real-time access, significantly improves the timeliness of data, while the skip patterns and constraints built into the checklist provide the ability to validate the data during collection and ensure that quality data reach the server.
Conclusion
The experience gathered during the EPI reviews in these countries clearly shows that mobile data collection, enabled with real-time monitoring of data flow, improves the timeliness and quality of the data, making it an attractive approach compared to the previous paper-based approach. It can support processes that require collecting data from the field with robust validation and can be adopted for relevant programme reviews of different types and sizes. The gains associated with mobile technology, plus the advantages of coordinate capture, ease of use and entry-level constraints that mobile phones offer, make it a preferred data collection solution that should be explored further.
Go.Data as a digital tool for case investigation and contact tracing in the context of COVID-19: a mixed-methods study
Background A manual approach to case investigation and contact tracing can introduce delays in response and challenges for field teams. Go.Data, an outbreak response tool developed by the World Health Organization (WHO) in collaboration with the Global Outbreak Alert and Response Network, streamlines data collection and analysis during outbreaks. This study aimed to characterize Go.Data use during COVID-19, elicit shared benefits and challenges, and highlight key opportunities for enhancement. Methods This study utilized mixed methods through qualitative interviews and a quantitative survey with Go.Data implementors on their experiences during COVID-19. Survey data were analyzed for basic univariate statistics. Interview data were coded using deductive and inductive reasoning and thematic analysis of categories. Overarching themes were triangulated with survey data to clarify key findings. Results From April to June 2022, the research team conducted 33 interviews and collected 41 survey responses. Participants were distributed across all six WHO regions and 28 countries. While most implementations represented government actors at national or subnational levels, additional inputs were collected from United Nations agencies and universities. Results highlighted WHO endorsement, accessibility, adaptability, and flexible support modalities as main enabling factors. Formalization and standardization of data systems and people processes to prepare for future outbreaks were a welcomed byproduct of implementation, as 76% of implementations used paper-based reporting previously and benefited from increased coordination around a shared platform. Several challenges surfaced, including a shortage of the appropriate personnel and skill-mix within teams to ensure smooth implementation. Among the opportunities for enhancement were improved product documentation and features to improve usability with large data volumes.
Conclusions This study was the first to provide a comprehensive picture of Go.Data implementations during COVID-19 and what joint lessons could be learned. It ultimately demonstrated that Go.Data was a useful complement to responses across diverse contexts and helped set a reproducible foundation for future outbreaks. Concerted preparedness efforts across the domains of workforce composition, data architecture and political sensitization should be prioritized as key ingredients for future Go.Data implementations. While major developments in Go.Data functionality have addressed some key gaps highlighted during the pandemic, continued dialogue between WHO and implementors, including cross-country experience sharing, is needed to ensure the tool remains responsive to evolving user needs.
Background
Responding to infectious disease outbreaks requires careful management of large amounts of data on cases, their contacts, and exposure events. In many settings, a manual and largely paper-based approach to case investigation and contact tracing has introduced strains on field teams and delays in response [1][2][3][4]. To respond to Member States' requests for a free, flexible, and open-access platform designed specifically for the outbreak context, the World Health Organization (WHO) and Global Outbreak Alert and Response Network (GOARN) partners hosted a workshop in 2016 to collaboratively outline key functionalities for field data collection tools [5,6]. Outputs from this workshop, together with years of collective experience across GOARN deployments, formed the basis for key requirements and principles underpinning the subsequent Go.Data development (Fig. 1).
The first version of Go.Data was released in 2019 and deployed later that year in response to an outbreak of diphtheria in Cox's Bazar, Bangladesh followed by responses in Kasese, Uganda and North Kivu, Democratic Republic of the Congo for the 2018-2020 Ebola outbreak [8,9].
In the case of Go.Data, the tool saw a rapid surge in demand during the COVID-19 pandemic, from previous use within smaller-scale outbreaks to implementations in over 65 countries or territories and 115 institutions by the end of 2021 [24,25]. Despite its widespread use, there was limited documentation of Go.Data implementations across contexts, and key learnings remained largely anecdotal in nature, shared through informal exchanges with implementors via WHO consultations or technical meetings [26][27][28].
The goal of this mixed methods study was to characterize the landscape of Go.Data implementations during COVID-19, identify enabling factors and impacts of Go.Data on response efforts, and highlight common challenges and areas where future enhancements are needed.
Study design and setting
A mixed methods study was conducted between April and June 2022 with global recruitment and participation across all WHO regions. The study utilized both quantitative and qualitative methods through a pre-interview quantitative survey and qualitative in-depth interviews with Go.Data users. A mixed methods design was chosen to obtain a full range of stakeholder views and distill key themes across Go.Data implementations. Interviews elicited transparent and nuanced feedback on successes, challenges, and lessons learned, while pre-interview surveys captured complementary quantitative descriptors such as institutional and participant profiles, scope of implementation, and logistics of Go.Data roll-out.
Fig. 1 Brief overview of the Go.Data tool
Study participants and recruitment
Study participants were primary focal persons, defined as the main project lead during the time of initial rollout, across all eligible implementations where Go.Data was used during the COVID-19 response. The research team screened the WHO Go.Data implementation database to identify eligible Go.Data implementations and their corresponding focal persons. An implementation was deemed eligible if it was successfully installed and used by response personnel between January 2020 and April 2022. Where corresponding personal contact information was missing, WHO regional and country office teams followed up with in-country counterparts for additional verification. The resulting list of individuals was invited to participate, or to otherwise identify a more suitable representative from their institution. All participants signed informed consent and due diligence forms. Sampling for interviews continued past saturation to ensure appropriate regional representation where possible. This entailed additional contact with WHO regions without existing representation to request interview participation. Recruitment stopped once there was representation from all six WHO regions. Prior to each interview, participants were asked to fill out an online pre-interview survey (Qualtrics, Provo, UT).
Data collection and analysis methods
The research team comprised an external GOARN evaluation team (LM, JS, MR) and WHO personnel supporting the Go.Data project (SH, SM), all of whom were collectively trained on the study protocol, qualitative methods, and interviewing techniques. Before initiating data collection activities, the research team jointly reviewed and piloted the interview guide and survey to establish validity and reliability.
The interview guide contained open-ended questions probing interviewees to discuss topics such as the appeal of Go.Data, their experiences installing and implementing the tool, and their perceived successes and challenges working with the software during the COVID-19 response. Interviews were conducted in English, Spanish, or French, adapted at the request of the interviewee. Interviews (ranging from 20 to 70 min) were conducted virtually via Zoom and were recorded and automatically transcribed with consent from the interviewees. All interview transcripts were reviewed by the research team to verify content. Non-English transcripts were sent to a professional translation and transcription service prior to analysis, and all finalized transcripts were organized, stored, and analyzed in NVivo 12 software to allow for memoing, coding, and categorizing of interview responses. The qualitative codebook went through several rounds of iterative review by the research team until consensus on the final coding frame was reached. All interviews were dual coded and reviewed as per the 10% minimum set forth for interrater reliability [29,30]. The qualitative line-by-line coding included inductive as well as deductive analysis. Deductive reasoning was used to compare qualitative input to survey responses. Inductive reasoning was used to look beyond the pre-determined questions to see what alternative patterns emerged.
In contrast to the open-ended structure of the qualitative interview, the quantitative survey aimed to collect close-ended data pertaining to the logistics of using Go.Data, for example, the size of implementation teams, the type of institutions using Go.Data, prior tools used for contact tracing, the length of time it took users to install and implement the software, and in what settings the Go.Data tool was used. Participants were asked to complete this survey prior to the scheduled time of the interview and to pass along the survey to other co-implementers from their institution who may be able to best answer the survey questions.
In alignment with mixed methods best practices, the qualitative interview guide and quantitative survey instrument echoed core questions for the purpose of triangulation [31]. Qualitative findings were compared to the quantitative results and iteratively discussed within the research team until a consensus was reached. Finally, two rounds of member-checking, the process of garnering feedback from study participants prior to dissemination, took place to ensure key findings resonated and were representative of participant experiences [32]. The first round of member-checking took place with the Go.Data team and the second round occurred with all study participants, via email correspondence requesting any clarifying comments and reflections. The results presented in this document reflect the findings and feedback from all rounds of analysis and member-checking.
Study sample characteristics
In April 2022, the research team reached out to 95 focal persons associated with an eligible Go.Data implementation. After initial correspondence and triangulation of implementation details, 15 among the 95 were excluded where Go.Data was eventually not implemented or the focal point could not be identified. After two follow-up reminders, a total of 33 primary focal persons agreed to participate and the rest were classified as no response (n = 47). The study team conducted interviews across English (n = 27), French (n = 4), and Spanish (n = 2) languages. In total, 41 corresponding pre-interview surveys were received. Slightly more surveys were collected than interviews as some implementation teams had more than one focal person or key Go.Data team members that opted to fill out a survey to provide feedback. After coding of qualitative interview data, 28 codes were identified and triangulated with quantitative data across key categories aligned with study aims.
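The recruitment flow above can be verified with a quick arithmetic check. This sketch uses only the counts stated in the text; the variable names are my own:

```python
# Recruitment flow as reported in the study text.
contacted = 95      # focal persons initially contacted
excluded = 15       # Go.Data never implemented or focal point unidentifiable
eligible = contacted - excluded
participated = 33   # primary focal persons who agreed to participate
no_response = 47

# The flow is internally consistent: 95 - 15 = 80 = 33 + 47
assert eligible == 80
assert participated + no_response == eligible

# Interview languages sum to the number of interviews conducted:
assert 27 + 4 + 2 == participated  # English + French + Spanish

print(f"interview response rate: {participated / eligible:.0%}")  # → 41%
```

The 41% response rate among eligible focal persons is implied but not stated in the text; the remaining figures are reported directly.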
Interviewees and survey respondents reflected representation across all six WHO regions and 28 countries (Fig. 2) and acted in a variety of roles at their institution, both in general and for the COVID-19 response (Table 1). Of note, there was a significant discrepancy between functions supported within the response and "fixed" roles at the institution during peacetime: for example, an overwhelming majority of respondents supported the response in information technology (IT) (80%) or supervision (78%) capacities, while roles typically requiring these skills, such as IT specialists or contact tracing coordinators, represented only 7% and 10% of fixed roles, respectively.
Aim 1: Landscape of Go.Data implementations
The first objective of this study aimed to document and characterize the landscape of Go.Data implementations during COVID-19. Notably, the distribution of study participants across all six WHO regions roughly mirrored the regional distribution of eligible Go.Data implementations during COVID-19 (Table 1), and information on the scope and type of implementations is shown in Table 2. The majority of participating implementations (58%) were conducted by relevant governmental bodies such as the Ministry of Health or national public health institute and entailed national scope (45%) or focus on a particular subnational area (42%). While most implementations introduced Go.Data as a practical tool to support case listing, contact listing and contact follow-up (71%), several others positioned Go.Data for research purposes that entailed detailed case-level data collection on specific sub-groups (10%) or only storage of cases and contacts (10%). Interviews further clarified that global research initiatives such as the WHO Unity Studies facilitated Go.Data use as a data collection instrument for the First Few X (FFX) cases and contact investigation protocol and assessment of risk factors for COVID-19 in health workers [33]. Among interviewees not using Go.Data for contact follow-up, some mentioned this was due to the immense data volumes and data entry burden as contact tracing scaled.

Aim 2: enabling factors and positive impact of Go.Data

The second objective of this study aimed to identify the enabling factors and perceived positive impacts of Go.Data on COVID-19 response efforts. Among survey responses, the elements appearing most frequently were WHO endorsement (85%), specific features of interest (61%), free cost (61%), and ease of implementation (59%). Features regularly cited in interviews related to data visualization and chains of transmission, as evidenced in the below quote: "Focusing on those very nice features which attracted us to it in the first place, like the graphical description, depiction of the chains of transmission and the mapping and so on, because those are the things which really attracts you to it." (Interviewee 29) Given that several factors independent from tool functionality arose in survey responses, these concepts were explored further in the in-depth interviews and are outlined below.
WHO endorsement
WHO endorsement was a recurring theme across diverse institutional contexts. Interviewees emphasized that Go.Data being a WHO tool not only increased their trust in the software but facilitated buy-in from key stakeholders, such as Ministries of Health, financing bodies and other public health agencies, often necessary for obtaining initial approvals to proceed. As seen in the next quote, interviewees expressed the risk involved in the uptake of the software and how WHO endorsement helped to alleviate this: "…it was a risky time to try something now…but the WHO logo felt reliable and helped us convince our stakeholders that...downloading the software was the right choice to make during the unknown time of the pandemic." (Interviewee 14) Aside from trust and reliability, interviewees alluded to benefits of accessing the wider WHO network. Many interviewees cited the opportunities for cross-country exchange and bi-directional dialogue with the WHO team, on Go.Data and contact tracing more broadly, as a key element in their continued use of the tool. In the eyes of interviewees, these opportunities were facilitated by existing WHO Member State forums or by connections made through WHO country office presence.
Accessibility and adaptability
More than half of survey respondents (59%) and all interviewees mentioned Go.Data's free cost as an important factor in selecting the tool. Notably, interviewees described that this ensured constrained budgets could be directed towards the necessary human resources and IT infrastructure for adapting the Go.Data tool to their surveillance context: "Go.Data was available, it was off the shelf, and it was free...it's an important consideration, even though we're a large organization. But free means one thing… there was a lot of work making it work within our context, which was an investment." (Interviewee 23) Many interviewees expanded on the theme of adaptability, noting that flexible and ready-to-use components enabled rapid tailoring to local needs, for example, language tokens to quickly translate the user interface, default disease outbreak templates aligned to WHO standards for minimum variables, and custom questionnaire builders to align with national investigation forms. These were seen as greatly expediting the set-up process, ensuring quality was upheld, and creating a structure for future outbreaks, as evidenced in the following quotes: "So, I think the easiest [thing] about Go.Data is that it's easy to install. You know just one click and it will be set up in an easy [way] to understand all the variables...With Go.Data, you can easily add or remove whatever you want." (Interviewee 3) "...from our side, you can plan to use the system for other outbreaks that are not COVID. Even when COVID ends, for other outbreaks we can continue to use the system." (Interviewee 8)
Flexible support modalities
While a few institutions simply downloaded Go.Data and proceeded with implementation self-sufficiently, most survey respondents (85%) had sought out at least one mechanism of Go.Data support prior to or during roll-out. This was largely through online modalities, such as virtual training sessions with the Go.Data team (71%), online user manuals (68%) and the self-paced OpenWHO training course (62%). Although only 29% of survey respondents made regular use of the Go.Data Community of Practice, those who did cited its value during interviews for both troubleshooting urgent IT issues and connecting informally with users from different contexts. When asked about their team's experiences engaging with support functions, one interviewee stated: "I'd like to thank the Go.Data team for their support. For their understanding about [our] challenges, and [how they] adapt[ed] our comments so far. I hope you can continue this journey and improve the system…now that we have developed a digital health strategy where everything will be digitalized and then interconnected." (Interviewee 4)
Go.Data impact on COVID-19 response efforts
Key benefits of Go.Data implementation were summarized by survey respondents (Table 3) and further explored in interviews. Participants echoed the perceived benefit that Go.Data brought to daily response activities in terms of structure and standardization across data and people processes.
Surveys and interviews noted that not all institutions had a contact tracing data system in place, nor a dedicated multidisciplinary team to perform activities, prior to COVID-19 and Go.Data introduction. Of those institutions that had some infrastructure in place, most managed all related data on paper forms, Google Forms, or shared Excel files prior to Go.Data implementation (Table 4). Many reiterated that even if basic, prior infrastructure provided a starting place for Go.Data configuration and task organization to begin.
In this way, introducing Go.Data was cited by many interviewees as a means to consolidate and formalize workflows, if and where they existed, and to establish these as a foundation for future outbreaks. This meant complementing existing teams and ensuring staff across pillars could input data simultaneously into one system, as evidenced by the two quotes below:
"…our [previous system] created a lot of confusion, because we had different data on various excel sheets. So for us, we were looking for one true tool-a one stop shop where everybody can enter data…[someone] recommended Go.Data, and realized that that is just what we needed. " (Interviewee 2) "…[Go.Data] was a way to centralize all the data coming in so quickly. We didn't have a system in place before Go.Data to handle the load coming in…we also didn't know how massive COVID [would be]…" (Interviewee 3)
Aside from data management alone, improvements in data analysis tasks were cited across surveys and interviews alike. Several interviewees noted increased efficiency in producing and communicating epidemiological information for decision makers through out-of-the-box analytics features:
"...[Go.Data] made it easy...to index and source and build out visualizations and graphs...heat maps and other things... [and to] have a system that's functional and not overly complicated…" (Interviewee 22)
Importantly, all participants emphasized the value-add of being better prepared for future outbreaks after Go.Data implementation and the wisdom of such advice as "…don't try to implement something new in times of crisis" (Interviewee 10). Across survey respondents, 12% already noted having expanded the Go.Data platform for other outbreaks beyond COVID-19, and interviewees echoed similar sentiments:
Aim 3: challenges and opportunities for enhancement
The third objective of this study aimed to identify users' perceived challenges during Go.Data roll-out and which enhancements and support should be prioritized. There were several specific challenges noted by survey respondents, many of which were IT-related, but some of which implied financial and workforce constraints, as shown in Table 5. Some challenges were reported regardless of institution type and scope, such as problems with installation, platform updates and server load across national, subnational and institutional implementations alike. However, certain resource challenges such as turnover of staff and budget constraints were only cited by national and subnational implementations. Interviews elucidated a more nuanced understanding of common challenges and how they could potentially be addressed, outlined in the following sections.
Personnel and skill-mix within implementation teams
Many interviewees noted that while some IT tasks during installation or configuration could be tedious or introduce unforeseen errors if small details were overlooked, many technical issues that persisted, or were ultimately addressed, depended on the capacity and skill-mix of the team. This included, for example, having at least one IT specialist who could set up the server, monitor platform performance over time, and report any issues rapidly to the Go.Data IT support team. Survey data showed that 37% of teams eventually needed to hire contractors to assist with Go.Data setup and use, reflecting the varying degrees of IT capacity and existing infrastructure across institutional settings. One interviewee reflected this sentiment, describing how diverse baseline levels of IT capacity can introduce challenges for supporting a tool's roll-out globally:
"…Go.Data tries to be all things to all men in the sense of trying to meet the needs of a diverse range of countries….[who] may not have the same level of networking or compatible [IT] facilities. " (Interviewee 31)
Some interviewees noted that during major upgrades, teams under pressure had little time for extensive application programming interface (API) testing or for scanning dense documentation. Several recommended more predictable communications from the Go.Data team around IT bugs and fixes, and clearer software release notes.
Beyond limitations in IT capacity, participants also reported a lack of managerial staff to oversee contact tracers, in line with findings in Table 1. Per interview data, most teams had fewer than five people in managerial roles and up to hundreds of contact tracers, data collectors or laboratory staff. Training teams on Go.Data at such scale was challenging, and 12.2% of survey respondents reported no structured and cohesive training accompanying Go.Data implementation. Furthermore, interview data revealed the difficulty in continuously training personnel amid COVID-19 risk reduction policies that prevented large gatherings, as well as frequent staff turnover.
Data volume
Almost all interviewees described the immense challenge of increasing data volumes that the COVID-19 pandemic yielded. This became overwhelming for users, especially as early 2020 versions of Go.Data were reportedly less performant in visualizing large amounts of data: "The problem is there are some limitations in Go.Data which we cannot configure to be more user friendly…especially when it comes to big data, because I know at the beginning [it was created] for smaller outbreaks, but when it comes to COVID, especially right now we are using [Go.Data] for lab results management as well, with two million records. So, we face some challenges in terms of speed, in terms of data visualizations, especially with analyzing…there are some limitations in terms of that, so we needed to export to a third-party program." (Interviewee 11) Interviewees recommended potential ways to improve usability with large datasets, such as improved pagination and advanced filter functionalities when locating cases and contacts of interest:
"… [in a search] I can only see the maximum is 50 on a page when you have 800 or thousand. Exactly so you're looking at one, and when you want to go back you want to go back to the same search it brings you back to zero, so you have to start the search again over and over and over so it's impossible." (Interviewee 6)
Noted as less urgent, but frequently mentioned, were aspects of the platform deemed too rigid, such as the inability to add or modify core variables in the case and contact module. Additionally, the lack of localization (not adjusting to a user's specific time-zone of interest) when records were created or modified, combined with few recognizable features that most medical professions were familiar with in other digital systems (e.g., comment boxes and electronic signatures), were highlighted across some interviews. These issues in particular posed significant roadblocks for implementations where Go.Data was used by clinical staff, as highlighted in the following quote: "The lack of a timestamp is a major legal issue."
Data entry burden
Not mentioned in the survey but seen throughout the interview data was the challenge of data entry burden on staff with limited bandwidth. Several interviewees mentioned that it was more common for medical professionals tasked as contact tracers during COVID-19 to push back on Go.Data use, as it was seen as an additional burden during an already chaotic and strenuous time for the medical community.
"There are some people who still reason that data is for techies not for the doctors...so we had a bit of culture gap to bridge...this idea that 'this is a crisis I need to be a doctor providing care' ...that data inputting was almost bad practice because it was a crisis and it was inappropriate to ask them to do that." (Interviewee 29)
Institutional approvals
Across surveys and interviews, bottlenecks to timely implementation were discussed, including key intervals such as time to approval and time to installation. Timelines varied across institutions due to factors such as the level of previous training and sensitization on Go.Data, planned implementation scope, size of the Go.Data implementation team and political context. Although a portion of survey respondents (20%) reported receiving institutional or governmental approval, if needed, within 72 hours, nearly half (47%) waited longer than one month to obtain approvals. The main reason for approval delay was related to stakeholder skepticism to try a new software on national servers during a crisis such as the COVID-19 pandemic, as illustrated by Interviewee 24: "The Ministry of Health was unsure at first…and took a long time to talk it over…it was a big and new software that needed access to our national [secured] servers during a time where people were already untrusting of the government…so you can see…it took some time to deliberate." (Interviewee 24) Once proper approvals were obtained, participants noted that installation and configuration itself was relatively rapid, with nearly half (42%) completing installation within 72 hours and 21% within 24 hours.
Discussion
This study highlighted the overall potential for Go.Data to enhance case investigation and contact tracing activities across diverse contexts, both during the COVID-19 pandemic and for future outbreaks. It also elicited common enabling factors and issues encountered during implementation and scale-up. Importantly, it crystallized the challenges inherent in implementing a new information system during a large-scale emergency, where considerable constraints on workforce and system capacities can minimize effectiveness of surveillance activities, regardless of the tool used [34,35].
Factors such as WHO endorsement, accessibility, adaptability, and flexible support were important considerations for implementors during the tool selection phase. Maintaining accessibility should remain central to the Go.Data project ethos to minimize bespoke outsourcing for specific personnel in future responses. Go.Data's use during COVID-19 demonstrated the tool's applicability in high-income and low-income settings alike [24,25,36], but some required significantly more implementation support than others. Given this reality, decentralization of project support at the WHO regional and country levels can help further ensure that quality and coverage of support is maintained, while ensuring online support materials are frequently updated and available in multiple languages. The ability of the Go.Data project to initiate cross-country exchanges through WHO and GOARN forums is a valuable aspect of implementation that was echoed across participants overall. These forums, in addition to the Go.Data Community of Practice, should be further leveraged as the tool evolves to ensure it remains fit-for-purpose and is increasingly country owned, particularly as efforts towards a fully open-source software license through the WHO's Open Source Programme Office are realized [37][38][39].
Study findings suggested Go.Data's inherent compliance with WHO surveillance standards across diseases was seen as credible by public health responders worldwide, thereby creating efficiencies in COVID-19 response teams. Open-access and standards-based toolkits for public health practitioners are becoming increasingly important, both to ensure alignment with data management and analysis best practices and to de-duplicate efforts [4,40,41]. Given the challenges posed by introducing new standards (among other technical or logistical hurdles) during an emergency, study findings reiterated that a platform like Go.Data is most optimally introduced during the preparedness phase. This allows for ample sensitization across high-level stakeholders and end users alike, and contributes to ensuring the data streams we are building today can answer tomorrow's questions [42]. Embedding tools of interest into existing curriculums with global reach, such as Field Epidemiology Training Programs (FETP), holds great potential for future Go.Data implementation activities, given the critical role of FETPs in the global workforce to rapidly detect and respond to outbreaks [43]. Such actions could ensure that systems are in place at national, local, and institutional levels and can be scaled as needed, while modernizing the toolbox of field epidemiology cohorts over time.
Although surveillance staff and data managers are the most obvious users of the Go.Data tool, other key members played crucial roles throughout interviewees' experiences, namely IT specialists and supervisory staff. Multidisciplinary teams working on a common system proved useful in streamlining operations and formalizing processes. With the increasing digitization and availability of digital tools, there is a growing need for IT and public health personnel to speak the same language. Individuals at this nexus of public health and digital health literacy should be recognized as valuable assets for the public health workforce. Increased advocacy is needed to ensure minimum workforce requirements are met, including balanced teams to achieve collective competence across the epidemiology workforce [44].
This study emphasized persistent challenges with the large data volumes witnessed during the COVID-19 pandemic. Fortunately, specific feature requests on loading time, pagination, filters and time localization were addressed in subsequent Go.Data versions [45]. The COVID-19 pandemic was an important opportunity to expand the tool beyond its original development use for smaller focal outbreaks, and reiterated the importance of adapting based on evolving needs of the outbreak response landscape. Given the high value placed on dialogue with the WHO support team and across users, collecting and addressing feedback in a timely fashion should be among key priorities for WHO and GOARN partners.
Strengths and limitations
A major strength of this study was its coverage and representation across all WHO regions. The regional distribution of participants roughly mirrored that of overall Go.Data implementations during COVID-19, and included participant viewpoints across multiple languages and diverse institutions. However, this study also had several limitations. Given that Go.Data installation files are freely shared by WHO and GOARN partners across headquarters, regional and country offices, the team did not have full visibility of every historical and existing implementation, and only knows of institutions that reach out directly for support. Due to this, the research team likely missed implementations when undertaking participant recruitment. The research team sought to remedy this by engaging WHO regional and country offices to follow up on leads where contact information was missing or unclear, prior to screening the implementation database. As is often the case with qualitative methods, there is a risk that both the interviewers and interviewees were exposed to bias. The research team sought to control for this bias via trainings on interviewing techniques and extensive piloting of the study instruments. In addition, the research team cannot fully guarantee the quality of the survey data, as it is not certain that the right focal person completed each survey. For the scope of this study, the research team assumed that all survey responses were filled out appropriately and used the responses accordingly.
Conclusion
This study contributes to improved transparency on Go.Data's global use during COVID-19 and provides steer for where WHO and GOARN partners should target future support. Study findings overall emphasized that Go.Data is not a "silver bullet" solution and relies on a capacitated and well-supervised team, with minimum IT infrastructure in place, in order to work as intended.
Although the tool has limitations, Go.Data's track record in accessibility and adaptability can be a foundation to build on as WHO continues its development and endorsement of standards-based tools during outbreaks. Concerted preparedness efforts across the domains of workforce composition, data architecture and political sensitization should be prioritized as key ingredients for any successful Go.Data implementation, including increased digital literacy across the public health workforce. Continued dialogue between WHO and implementors, including via forums for countries to share experiences, will ensure the tool and support are reactive to evolving user needs.
Fig. 2 Geographic representation of study participants
Table 1 Participant characteristics
Table 2 Go.Data implementation characteristics. a Collected from survey responses and de-duplicated by institution; two in-depth interviews did not have a corresponding survey response. b Roll-out within individual institutions, such as hospitals, universities or research centers. c Including local public health authorities/governmental bodies
Table 3 Go.Data's impact on COVID-19 response efforts. a From survey respondents; multiple responses allowed; 3 provided no response
Table 4 Data management strategies prior to Go.Data introduction. a From survey respondents; 3 provided no response
Table 5 Challenges experienced during Go.Data implementation. a From survey respondents; multiple responses allowed; 5 provided no response
"Medicine",
"Environmental Science",
"Computer Science"
] |
On CFT and quantum chaos
We make three observations that help clarify the relation between CFT and quantum chaos. We show that any 1+1-D system in which conformal symmetry is non-linearly realized exhibits two main characteristics of chaos: maximal Lyapunov behavior and a spectrum of Ruelle resonances. We use this insight to identify a lattice model for quantum chaos, built from parafermionic spin variables with an equation of motion given by a Y-system. Finally we point to a relation between the spectrum of Ruelle resonances of a CFT and the analytic properties of OPE coefficients between light and heavy operators. In our model, this spectrum agrees with the quasi-normal modes of the BTZ black hole.
Many body quantum chaos is interesting in its own right, but usually hard to quantify. Identifying simple models or general mechanisms that exhibit aspects of quantum chaos is therefore a worthwhile goal. In this note we make three interrelated observations that may help: 1) identify a new class of toy models in the form of a simple lattice model built out of parafermionic spin variables; 2) clarify the relationship between maximal quantum chaos and the non-linear realization of conformal symmetry at finite temperature; 3) relate the spectrum of Ruelle resonances to analytic properties of OPE coefficients in the CFT. We now briefly describe each of the three components of our story.
1) A discrete model of many body quantum chaos. Useful many body systems that may exhibit chaos are quantum spin chains and matrix models. Another interesting example is the SYK model, which is solvable at strong coupling, maximally chaotic, and exhibits emergent conformal symmetry at low energies [5][6][7][8]. Our model of interest combines ingredients and properties of both examples, with the added feature that its Lyapunov behavior can be exhibited via weakly coupled effective field theory.

The discrete model is defined on a rhombic lattice. We indicate the center (σ, τ) of the diamond (σ ± 1, τ ± 1). The equation of motion (1.3) expresses the variable at the top of the diamond in terms of the other three.

The model
described below is a minor specialization of the class of integrable lattice models introduced by Faddeev, Kashaev and Volkov [16][17][18][19][20]. The model is assembled from a collection of Z_N parafermionic operators f_n, labeled by an integer 1 ≤ n ≤ L with L some large odd integer. We identify f_{L+1} ≡ f_1, so the integers n label points on a 1D periodic lattice. The f_n satisfy the exchange algebra (1.1), while [f_n, f_m] = 0 for |m − n| ≥ 2. This parafermion algebra can be realized on a finite dimensional Hilbert space H = V_1 ⊗ V_2 ⊗ … ⊗ V_L with V_n an N-dimensional vector space attached to the link between site n and n + 1, on which f_n and f_{n+1} act via appropriate clock and shift matrices. In the end, we imagine taking the continuum limit L → ∞. The integer N is assumed to be large but finite.¹ The time-evolution is discrete and specified as follows [16][17][18]. We relabel the variables f_n by means of two integers, f_{σ,τ} with σ + τ even, via f_{2r,0} = f_{2r} and f_{2r+1,1} = f_{2r+1}. The relabeled variables specify the initial condition of the model. The time evolution will generate a discrete, cylindrical 1+1-D space-time formed by a rhombic lattice. The time evolution proceeds via a local propagation rule [16][17][18]. We can focus on a single diamond-shaped lattice cell. The evolution equation (1.3) of the model reads as a discretized version of 2D hyperbolic geometry [16][17][18], and the exchange relation (1.1) amounts to a quantization of this hyperbolic geometry.² The lattice model is a well defined quantum system, albeit one with a discrete time evolution. The model has been constructed [16][17][18] so that in the large L and IR limit, it describes a 2D continuum CFT with a non-linearly realized conformal symmetry with central charge c = 1 + 6(b + b^{-1})^2 with b^2 = 1/N. As we will explain, this CFT exhibits maximal Lyapunov behavior, and an infinite set of Ruelle resonances match the quasinormal frequencies of the BTZ black hole [21].
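The clock-and-shift realization mentioned above can be illustrated numerically. The sketch below is my own illustration, not taken from the paper: the model's precise operators f_n and phase conventions may differ. It builds the standard Z_N clock matrix X and shift matrix Z, checks the Weyl exchange relation X Z = ω Z X with ω = e^{2πi/N}, and verifies that operators acting on distinct tensor factors of H = V_1 ⊗ … ⊗ V_L commute exactly, mirroring [f_n, f_m] = 0 for |m − n| ≥ 2.

```python
import numpy as np

def clock_shift(N):
    """Return the Z_N clock matrix X, shift matrix Z, and the phase omega."""
    omega = np.exp(2j * np.pi / N)
    X = np.diag(omega ** np.arange(N))   # clock: X|k> = omega^k |k>
    Z = np.roll(np.eye(N), 1, axis=0)    # shift: Z|k> = |k+1 mod N>
    return X, Z, omega

N = 5
X, Z, omega = clock_shift(N)

# Both generators have order N: X^N = Z^N = 1
assert np.allclose(np.linalg.matrix_power(X, N), np.eye(N))
assert np.allclose(np.linalg.matrix_power(Z, N), np.eye(N))

# Weyl exchange relation: the two generators commute up to a Z_N phase
assert np.allclose(X @ Z, omega * Z @ X)

# Operators built on distinct tensor factors commute exactly,
# mirroring [f_n, f_m] = 0 for |m - n| >= 2
A = np.kron(X, np.eye(N))   # acts only on the first N-dimensional factor
B = np.kron(np.eye(N), Z)   # acts only on the second factor
assert np.allclose(A @ B, B @ A)
```

On each link space V_n this gives a finite-dimensional realization of the up-to-a-phase commutation structure; in the Faddeev-Kashaev-Volkov construction the actual parafermionic f_n are assembled from such clock and shift matrices on neighboring factors.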
It may seem surprising that an integrable model can display properties characteristic of many body quantum chaos. To address this potential worry, one could choose to perturb the system away from integrability, e.g. by introducing frustration or by adding disorder. Since the features of quantum chaos will already become apparent in the unperturbed model, we will not select among the list of such possible modifications,3 and instead focus on this idealized case, while ignoring the role of exact integrability. Indeed, we note that there are other systems, such as N = 4 SYM theory at large N, that are believed to be both integrable and chaotic. We will return to this point in the concluding section.
2) Lyapunov from Goldstone. A central part of our reasoning consists of a new physical derivation of the Lyapunov behavior of an irrational CFT at finite temperature. The idea is as follows. 1+1-D CFTs are characterized by an infinite conformal symmetry group, given by reparametrizations of the lightcone coordinates u and v, as in (1.4). This conformal symmetry is broken by the conformal anomaly and by the presence of a finite energy density at finite temperature (and by the UV cut-off). For a CFT with a dense asymptotic energy spectrum, it is then natural to expect that the conformal symmetry is non-linearly realized in terms of a light Goldstone mode. This motivates us to consider the effective field theory of the relevant Goldstone excitation, described by the chiral field ξ(u) in (1.4) that parameterizes the conformal group. The effective Lagrangian is uniquely fixed by symmetries, and given by the geometric action of the Virasoro group [22]. In section 2, we will use this insight to derive the commutation relations of the Goldstone fields ξ(u) and η(v). We will find that the thermal expectation value of the commutators squared initially grows exponentially with the time separation, with a temperature dependent Lyapunov exponent λ = 2π/β. In fact, we will derive the somewhat more precise result that,
2 In some way, one may view the model as a many body analogue of a hyperbolic billiard.
3 One could add disorder e.g. by using the freedom of normalization of the f_n to set f_n† f_n = κ_n 1_{N×N}, with κ_n random real numbers picked from a narrow probability distribution centered around κ_n = κ. Alternatively, one could add frustration e.g. by including a next-to-neighbor interaction in the time step rule (1.2) and (1.
JHEP12(2016)110
inside a thermal expectation value, the commutator between two generic local operators takes the form4 with ǫ some constant proportional to 1/c. This result, which holds for time-like separations in the intermediate range c ≫ λt₁₂ ≫ 1, matches the bulk interpretation of the commutator as resulting from a near horizon gravitational shockwave interaction [1,2,23,24].
3) Ruelle resonances as poles in OPE coefficients. A main characteristic of a chaotic system is that it thermalizes: out of time ordered correlation functions decay to zero at late times. The approach toward equilibrium is governed by Ruelle resonances [25]. They appear as poles in the Fourier transform of the thermal two-point function, or, in systems that obey the ETH [26][27][28][29][30], of the matrix element between two excited states with total energy M. The Ruelle resonances of holographic 2D CFTs are well studied [21,31]. As argued in [32], the matrix element reduces (for small t) to the thermal 2-point function. Its Fourier transform G(ω) has poles at resonant frequencies that coincide with the quasi-normal modes of the BTZ black hole [21]. By factorizing the matrix element (1.7) in the intermediate channel, we can write it as a sum over intermediate states, where we use that in the Cardy regime, we can replace the spectral density ρ(E) = Σ_i δ(E − E_i) by a continuous distribution, and label the CFT states by their energy. We learn that the Ruelle resonances dictate the analytic structure of the matrix element of a light operator O between two highly excited states. This indicates that the resonances must show up as poles in the OPE coefficient of a light operator and two heavy operators. Or, in AdS-dual terms, the quasi-normal modes should show up as poles in the absorption and emission amplitudes of wave perturbations by a BTZ black hole.
In section 4 we will show that the analytic continuation of the OPE coefficients of the continuum limit of our model indeed has poles located at the expected frequencies (1.8). This supports the statement that the continuum limit of the model is ergodic.
2 Lyapunov from Goldstone
Consider an irrational 2D CFT with central charge c ≫ 1 with an asymptotic density of states given by the Cardy formula, and with a sparse low energy spectrum. We place the CFT on a circle, parameterized by a periodic coordinate x with period 2π. We introduce light-cone coordinates (u, v) = (t − x, t + x).
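The Cardy density of states fixes the relation between energy and inverse temperature used throughout this section. A small numerical sketch (function names ours; one chiral sector, standard conventions S(E) = 2π√(cE/6)) verifies that β = dS/dE = π√(c/(6E)):

```python
import math

def cardy_entropy(E: float, c: float) -> float:
    # Cardy entropy of one chiral sector: S(E) = 2*pi*sqrt(c*E/6)
    return 2.0 * math.pi * math.sqrt(c * E / 6.0)

def inverse_temperature(E: float, c: float) -> float:
    # beta = dS/dE = pi*sqrt(c/(6*E))
    return math.pi * math.sqrt(c / (6.0 * E))

# Thermodynamic consistency check: beta equals the derivative of S(E)
c, E, h = 100.0, 50.0, 1e-6
dS_dE = (cardy_entropy(E + h, c) - cardy_entropy(E - h, c)) / (2 * h)
assert abs(dS_dE - inverse_temperature(E, c)) < 1e-6
```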
Consider a finite energy state with a constant expectation value for, say, the left-moving energy momentum tensor, as in (2.1). In this regime, we can associate to the state a finite inverse temperature, β/2π = √(c/(24 L₀)). Let us perform a general conformal transformation (1.4). We require that the condition (2.2) holds. The expectation value of the energy momentum tensor then transforms non-trivially, as in (2.3). The spontaneous breaking of conformal symmetry is displayed via the ξ-dependence of this expectation value. Indeed, we can compare the relation (2.3) with the expression for the energy-momentum tensor of a fluid. The first term is analogous to the usual kinetic energy ½ρv², whereas the second term in (2.3) is the familiar vacuum contribution due to the conformal anomaly. It has a well-known physical explanation in terms of the Hawking-Unruh effect: the coordinate change from u to ξ(u) reshuffles the positive frequency (annihilation) and negative frequency (creation) modes, and thus alters the notion of the vacuum state. Our physical assumption is that, for irrational CFTs at large c and in the Cardy regime, it becomes accurate to treat the coordinate transformation ξ(u) as a Goldstone field, in terms of which the conformal symmetry is non-linearly realized. Adopting this logic, we thus promote ξ(u) to an operator that acts within the Hilbert subspace spanned by all states with energy density close to L₀, and their descendants. Within this subspace, we can remove the expectation value in (2.3) and elevate the equality in (2.3) to an operator identity (2.5). As we will see shortly, the expression (2.5) for the energy-momentum tensor in terms of ξ(u) is familiar from the geometric quantization of Diff(S¹), the group of (chiral) conformal transformations in 2D. A cautious reader may view equation (2.5) simply as an (in)convenient parameterization of the energy momentum tensor T(u).
Our assumption, however, is that the symmetry parameter ξ(u) acts as a genuine local quantum field that creates and annihilates local physical excitations. Given that ξ(u) is a scalar and T(u) is the generator of conformal transformations, we know the commutation relations (2.6) and (2.7).5 The emergence of a light Goldstone mode at finite temperature can be explained as a physical consequence of the fact that an irrational CFT in the Cardy regime has an extremely dense energy spectrum. Equations (2.6) and (2.7) become semi-classical in the large c limit. From equation (2.1) we see that the field ξ(u) has expectation value ⟨ξ(u)⟩ = u. So semi-classically, we can think of the Goldstone field as ξ(u) = u + small fluctuations.
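In standard CFT conventions, the anomalous (second) term in the transformation law (2.3) is proportional to the Schwarzian derivative {ξ, u} = ξ'''/ξ' − (3/2)(ξ''/ξ')². The following symbolic sketch (our check, not from the paper; signs are convention dependent) verifies that the exponential thermal map ξ ~ e^{λu} has constant Schwarzian −λ²/2, which reproduces a constant thermal energy density ∝ cλ²/24 at λ = 2π/β, while Möbius maps, which preserve the vacuum, have vanishing Schwarzian:

```python
import sympy as sp

u = sp.symbols('u', real=True)
lam = sp.symbols('lambda', positive=True)

def schwarzian(f, x):
    # {f, x} = f'''/f' - (3/2) * (f''/f')**2
    fp = sp.diff(f, x)
    return sp.diff(f, x, 3) / fp - sp.Rational(3, 2) * (sp.diff(f, x, 2) / fp)**2

# Exponential (thermal) map: constant Schwarzian -lambda**2/2
assert sp.simplify(schwarzian(sp.exp(lam * u), u) + lam**2 / 2) == 0

# Moebius maps have vanishing Schwarzian (they leave the vacuum invariant)
a, b, c_, d = sp.symbols('a b c d')
assert sp.simplify(schwarzian((a * u + b) / (c_ * u + d), u)) == 0
```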
We are now ready to state the main technical result of this section: The three relations (2.5), (2.6) and (2.7) uniquely determine the commutation relation of the Goldstone field ξ(u), and are sufficient to derive the Lyapunov growth of commutators.
Working to leading order in 1/c, one finds the commutator (2.9) [33][34][35], with ǫ(x) the stair-step function, defined via ǫ′(x) = 2δ(x) with δ(x) the periodic delta-function: ǫ(x) = 2n + 1 for x ∈ (2πn, 2π(n + 1)). The same argument and derivation goes through for the right-movers. So we also have a right-moving Goldstone mode η(v) = v + small fluctuations, that satisfies the analogous commutation relation (2.9).6 The left- and right-moving Goldstone fields commute: [ξ(u), η(v)] = 0. A detailed derivation of equations (2.9) and (2.10) can be found in [33][34][35]. Here we give a short summary. The relation (2.5) between the energy-momentum tensor and the field ξ(u) can be decomposed in terms of a free field ϕ(u). The commutation relations (2.6) and (2.7) then follow from the free field commutator (2.13),
5 Here we absorb a factor of ℏ ≡ 6/c in the definition of T(u). This is a customary step, that exhibits the fact that the commutation relations (2.6) and (2.7) become semi-classical at large c.
6 For simplicity we will assume that the left and right movers have the same temperature.
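The stair-step function ǫ(x) defined above can be written down directly; a small sketch (function name ours) with illustrative checks of its values and its quasi-periodicity:

```python
import math

def stair_step(x: float) -> int:
    # epsilon(x) = 2n + 1 for x in (2*pi*n, 2*pi*(n + 1))
    n = math.floor(x / (2.0 * math.pi))
    return 2 * n + 1

# Sample values on successive intervals
assert stair_step(math.pi) == 1       # x in (0, 2*pi)
assert stair_step(3 * math.pi) == 3   # x in (2*pi, 4*pi)
assert stair_step(-math.pi) == -1     # x in (-2*pi, 0)

# Quasi-periodicity: epsilon(x + 2*pi) = epsilon(x) + 2,
# consistent with epsilon'(x) = 2 * (periodic delta function)
x = 1.234
assert stair_step(x + 2 * math.pi) == stair_step(x) + 2
```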
with ℏ = 6/c. So our task has been simplified: all we need to do is use relation (2.12) to solve for ξ(u) in terms of ϕ(u), and use the chain rule to deduce the commutator of ξ(u₁) and ξ(u₂) from the free field commutator (2.13) of ϕ. The free field ϕ(u) is periodic up to a shift, ϕ(u + 2π) = ϕ(u) + πλ. (2.14) Using this fact, equation (2.12) integrates to an explicit expression for ξ(u) in terms of ϕ(u) [34,35]. With this relation and equation (2.13) in hand, it is a relatively straightforward calculation to derive the results (2.9) and (2.10).
Let us turn to the physical consequences of equations (2.9) and (2.10). We observe that λ is equal to the maximal Lyapunov exponent λ = 2π/β. We will assume that λ ≫ 1, i.e. the thermal wave length is very short compared to the size of the spatial circle. The second term in the commutator (2.9), and its right-mover counterpart, thus grows exponentially with the coordinate differences u₁₂ and v₁₂ over the range (2.16). We will restrict our attention to this coordinate range. In this regime, equation (2.9) implies that the commutator between two local functions f̂(u₂) ≡ f(ξ(u₂)) and ĝ(u₁) ≡ g(ξ(u₁)) of the Goldstone fields satisfies (2.17). Hence local operators are indeed non-trivial functions of the dynamical Goldstone fields. It is logical to take this observation one step further and, similarly as we did for the energy-momentum tensor, assume that local operators O(u, v) can be represented as c-number valued functions of the operator valued fields ξ(u) and η(v) and their derivatives. The collection of these functions is determined by the spectrum and operator algebra of the CFT. Their form is constrained by the locality requirement that space-like separated operators commute. This condition is very restrictive: it prescribes that primary local operators are all of the form (2.20) [34][35][36][37][38].
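The exponential growth described above can be illustrated numerically. In the sketch below (all names and parameter values are illustrative, not from the paper), the 1/c-suppressed commutator grows as ǫ e^{λt} with λ = 2π/β, becoming of order one at the scrambling time t* ≈ λ⁻¹ log c:

```python
import math

beta = 1.0                      # inverse temperature (illustrative)
lam = 2.0 * math.pi / beta      # maximal Lyapunov exponent, lambda = 2*pi/beta
c = 1.0e4                       # large central charge
eps = 1.0 / c                   # strength of the 1/c-suppressed commutator

def commutator_size(t12: float) -> float:
    # Leading-order exponential growth, valid for 1 << lam*t12 << log(c)
    return eps * math.exp(lam * t12)

t_star = math.log(c) / lam      # scrambling time: growth saturates at O(1)
assert abs(commutator_size(t_star) - 1.0) < 1e-9
```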
Equations (2.9) and (2.10) can then be used to compute the commutation relations between time-like separated operators, as follows. The accepted test for Lyapunov growth of the commutator between two local operators W and V is to compute the expectation value (2.21), where the subscript ǫ indicates a small displacement. This expectation value is equal to the difference between a time ordered and an out-of-time-ordered (OTO) correlation function. The OTO correlation function is obtained via analytic continuation of the time ordered correlation function, where one circles, say, the coordinate u around the origin. This operation amounts to analytic continuation of the left-moving conformal blocks to the second Riemann sheet. Of course, we could also choose to do the analytic continuation using the coordinate v; this would have given the same final result. The full-circle monodromy M of a conformal block is the square M = R² of the half-circle monodromy known as the R-operation. The R-operator, acting on the left conformal blocks, re-orders the left-moving parts of the operators W and V. In the linearized regime, i.e. to leading order in 1/c, we can write R ≃ 1 − r, with r the perturbative operation that takes the commutator between the left-moving parts of W and V. The full-circle monodromy is M ≃ R² = 1 − 2r, and thus the full commutator inside (2.21) is equal to acting with (1 − M) = 2r on the two operators W and V. From equation (2.17) we then deduce the result (2.22). This result, which holds for time-like separation in the regime (2.16), displays the maximal Lyapunov behavior and the linearized gravitational effect of an early incoming perturbation (created by V) on the arrival time of the outgoing signal (detected by W).7 We end with a brief comment on the extension to higher orders. As indicated by the description of the monodromy moves, one expects that the commutator (2.22) exponentiates to a non-perturbative exchange relation.
Fourier transforming the left-moving coordinate via W_α(v) = ∫ du e^{iαu} W(u, v), this exchange algebra is expected to take the form of a braided relation governed by a monodromy matrix. If we assume that the bulk interaction is dominated by gravity, then AdS/CFT makes a precise prediction for the monodromy matrix M_α^β [3]. This prediction precisely matches the monodromy matrix of Liouville CFT [3].
A chaotic lattice model
In this section, we will connect the FKV lattice model, defined by equations (1.1), (1.2) and (1.3), with the above effective CFT derivation of Lyapunov behavior.
The motivation for studying the lattice model is two-fold. First, the geometric theory of the Goldstone fields ξ(u) and η(v) is an effective theory, that only becomes accurate at finite temperature and long distance scales. Like all effective field theories, it does not define a fully consistent CFT by itself, nor does it have a unique UV completion. There are two ways in which one can try to embed an effective field theory into a self-consistent quantum system: a) look for an explicit UV completion, or b) introduce an explicit UV regulator. Approach b) is more practical.
A second motivation is that one can hope that the lattice model, by virtue of being more well defined, may allow for a more explicit dynamical understanding of the underlying mechanism for chaos. Indeed, it turns out that the lattice Liouville model can be formulated in a way that preserves the geometric appeal of the continuum theory [16][17][18]. The Y-system (1.3) and the expression (2.19) of local operators in terms of the function (2.20) both have a direct connection with hyperbolic geometry. To see this, we first note that the 1+1-D metric defined in (3.1) describes a hyperbolic space-time with constant negative curvature. The authors of [16][17][18] gave a beautiful discretized description of this 2D hyperbolic metric as follows.
This confirms that the equation of motion of the FKV lattice model is a discretization of the hyperbolic metric (3.1). The parafermionic algebra defines a quantization of the space of discretized hyperbolic metrics.
Our new observation is that this lattice model can serve as a useful prototype of quantum chaos. The most direct way to substantiate this claim would be to compute an out-of-time ordered four-point function of local operators at finite temperature, as a function of the time difference t. While this would in principle be doable, we will leave this task to future work. Instead we will cut the computation short, by banking on the results of [16][17][18][20] that show that the above lattice model in the large L limit approaches continuum Liouville CFT. Together with the result of the previous section, this is sufficient to demonstrate that the continuum limit of the lattice model displays maximal Lyapunov behavior. For completeness, let us display a few more elements of the dictionary. Working to leading order at large N,
e^{ϕ⁺_n} e^{ϕ⁺_m} = e^{ϕ⁺_m} e^{ϕ⁺_n} q^{2ǫ_nm}   ↔   e^{ϕ(u₂)} e^{ϕ(u₁)} = e^{ϕ(u₁)} e^{ϕ(u₂)} e^{ǫ(u₁₂)}
e^{ϕ⁺_{n+L}} = e^{2πλ} e^{ϕ⁺_n}   ↔   e^{ϕ(u+4π)} = e^{2πλ} e^{ϕ(u)}   (3.7)
(L/2π) e^{ϕ⁺_n} = e^{…}
The right column lists the formulas (2.13), (2.14) and (2.12) that were used to derive the commutation relation (2.9) of the left-moving Goldstone variable ξ(u). The left column is the lattice version of the same set of relations, with ǫ_nm the discretized stair-step function. We can write a parallel set of formulas that represent the right-moving modes ϕ(v) and η(v) in terms of lattice variables ϕ⁻_n and η_n. Lattice variables ϕ±_n that satisfy the exchange relation in (3.7) are obtained from the local operators f_n in two steps [16][17][18][20]. First we define two mutually commuting sets of chiral operators w±_n. These satisfy the algebra w±_n w±_m = q^{±2ω_mn} w±_m w±_n, with ω_mn = sgn(m − n) δ_{|m−n|,1}.
The chiral variables ϕ±_n are then defined in terms of the w±_n. At the initial time τ = 0, we can recover the single valued local parafermionic operators f_{2n} from the non-local chiral variables via f_{2n} = e^{ϕ⁻_n} e^{ϕ⁺_n}. (3.10) This is the lattice version of the relation e^{2φ(u,v)} = e^{ϕ(u)+ϕ(v)} that expresses a non-chiral free field vertex operator as the product of two chiral vertex operators. We note, however, that the time evolution (3.4) does not amount to free field propagation.
Among many other non-trivial results, [16][17][18] and [20] give an explicit construction of a unitary time evolution operator U that implements the time step (3.4): f_{σ,τ+1} = U† f_{σ,τ−1} U. (3.11) This time evolution does not preserve the chiral factorization (3.10). However, it is shown that there exists a Bäcklund operator B that solves the time evolution, as in (3.12). This Bäcklund operation is causal but highly non-local, and no explicit representation of B is known at present. Indeed, as exemplified by this equation, all non-trivial dynamics of the Liouville lattice model is encoded in the way in which the two chiral sectors get mixed and become entangled under the time evolution step (3.4). Our results are evidence that this mixing and entangling happens in a maximally efficient way. Our argument that the lattice model exhibits maximal Lyapunov growth is a copy of the effective CFT derivation presented in section 2. The three relations in the left column of equation (3.7) specify the commutation relations of the ξ_n variables, in the same way as the right column fixes the commutator algebra of ξ(u). The commutator algebra is expected to approach the continuum result (2.9) in the large L limit. Our working assumption is that the exact solution (3.12) of the lattice model leads to an expression of the local operators f_{σ,τ} in terms of the chiral modes ξ_n and η_n that mirrors formula (3.2). Via the same reasoning as in section 2, this expression can then be used to verify that the lattice model is local and to establish that the OTO four point function (3.6) grows exponentially with time.
Ruelle resonances
In this section we will expand on the topic of Ruelle resonances, which provide another signature of chaos and ergodicity. We will briefly review these concepts and then use the intuition for large c irrational conformal field theories to translate the knowledge about these resonances into concrete CFT data. We will introduce a notion of OPE coefficients (of light operators between heavy states) as analytic functions of energy. We will see that the presence of Ruelle resonances, in combination with the conformal bootstrap and AdS/CFT, impose stringent constraints on the form of these analytic OPE functions. We will then verify that the known OPE coefficients of the effective CFT of section 2 and the continuum limit of the lattice model of section 3 satisfy all these physical requirements.
Ruelle resonances in CFT
Ruelle resonances are poles in the Fourier transform of linear response functions that govern thermalization, the decay process towards thermal equilibrium after a quench. Consider a small perturbation of the Hamiltonian, produced by a local operator O_b(x), as in (4.1).
Here J(x) is an external source. One can then study how this perturbation influences the time evolution of the expectation value of some other operator ⟨O_a(0)⟩, which for convenience we place at x = 0. Expanding the evolution operator to linear order, the response is governed by the retarded Green's function G^ret_ab(x), which may be expressed in terms of two point functions as in (4.3). For 2D CFTs at large c, it has been argued in [32] that the matrix elements (4.4) are dominated by the identity conformal block (which for G⁺(u, v) is given by the term with h = 0 on the left in figure 2). For large c, this identity block is well approximated by the thermal 2-point function on an infinite 1D space with β = π√(c/(6M)). This is a useful result, that supports both the ETH and the dual identification of the two point function as the boundary-to-boundary propagator of a bulk field in a BTZ black hole background. The validity of equation (4.5) is somewhat limited, however. It only holds for spatial separations that are small compared to the size of the spatial circle, and for the OTO two-point function, the time difference must be short compared to the scrambling time, since otherwise one enters the Lyapunov regime. On the gravity side, the perturbation O_b creates an incoming wave that may collide with the outgoing wave detected by O_a, and thereby substantially affect its future trajectory. This gravitational effect will show up as a modification of the OTO two-point function G⁻(u, v), and was studied in section 2. Here we will focus on the late time behavior of the time ordered 2-point function G⁺(u, v).
The incoming wave deforms the black hole horizon state. The subsequent ring down of the black hole towards equilibrium is the dual of the thermalization process of the CFT. Both processes are governed by an infinite set of resonances. On the gravity side, these resonances are the quasi-normal modes. These can be analyzed perturbatively, by considering small fluctuations of fields propagating in the neighborhood of the black hole horizon. The resonant quasi-normal frequencies form an infinite series of complex numbers, labeled by a non-negative integer n via (4.6) [21], with k the momentum of the infalling mode and h the conformal dimension of the fluctuating field. This result was derived using the Poincaré patch, corresponding to a CFT on an infinite line, and with vanishing Dirichlet boundary conditions at infinity [21].8 It is reasonable to assume that the result generalizes to black holes in global AdS, with a periodic spatial boundary, by replacing the momentum k by an integer angular momentum ℓ.
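The tower structure of (4.6) can be sketched numerically. The form used below, ω_n = ±k − (4πi/β)(n + h), is one common presentation of the BTZ quasi-normal spectrum (conventions and factors vary between references; the function name is ours):

```python
import math

def btz_qnm(n: int, k: float, h: float, beta: float, sign: int = +1) -> complex:
    # omega_n = +/- k - (4*pi*i/beta) * (n + h)  -- one common convention
    return sign * k - (4j * math.pi / beta) * (n + h)

beta, k, h = 2.0, 1.5, 0.75
w0 = btz_qnm(0, k, h, beta)
w1 = btz_qnm(1, k, h, beta)

# The tower is evenly spaced in the imaginary direction, with step 4*pi/beta
assert abs((w0 - w1) - 4j * math.pi / beta) < 1e-12
# The real part is fixed by the momentum k
assert abs(w0.real - k) < 1e-12
```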
In the CFT, the quasi-normal modes manifest themselves as Ruelle resonances, which appear as poles in the Fourier transform of the retarded thermal Green's function (4.3). Via equation (4.5), this yields a spectrum that matches the gravity prediction (4.6). Our goal in this section is to use the presence of these Ruelle poles to extract useful information about the OPE coefficients of the CFT. Earlier papers with results that overlap with this section are [32,39].
Resonances and OPE coefficients
As a preparation, let us look at the different conformal block expansions of the matrix elements (4.4), as shown schematically in figure 2. The first equal sign of these identities represents the crossing symmetry relation (4.8),9 where F^h_{MM,ab}(z) represents the Virasoro block shown on the left in figure 2. We see that crossing symmetry relates the 't-channel block' with a heavy intermediate channel (labeled by M + ω) to the 's-channel block' with a light intermediate channel (labeled by h).
The second relation in figure 2 is the exchange algebra relation, which imposes locality in the Euclidean region. In Lorentzian language, it implies that the R-matrix R_{ω,ω′} that relates the chiral time-ordered conformal block (labeled by M + ω) to the out-of-time-ordered conformal block (labeled by M + ω′) is an appropriate unitary transformation, so that in the Euclidean region it cancels out between the left- and right-movers of the complete CFT four-point function. After rotating to Lorentz signature, the R-matrix does show up in a non-trivial way, in the relation between the time-ordered Green's function G⁺_ab(u, v) and the OTO Green's function G⁻_ab(u, v) [3]. We wish to extract information regarding the Fourier transform of G_ab from its expansion (4.8) in conformal blocks, in the channel shown in the middle of figure 2. This is not directly possible, since no explicit expression for the Virasoro conformal blocks is known. So let us take a step back and write the crossing symmetry formula as a sum over primary operators and descendants. Let γ^ω_n denote the n-th coefficient of the Laurent expansion of the Virasoro conformal block F^{M+ω}_{MM,ab}(z). From now on we focus on the diagonal part of the two point function, G_ab(u, v) = G(u, v)δ_ab. It admits an expansion over intermediate states, and a similar formula holds for G^ω_R(v). Here |i⟩ runs over all conformal primary states of the CFT in the neighborhood of the high energy state |M⟩. In the sum we allowed all states with different left- and right-conformal dimensions. We want to take the Fourier transform (4.7) with respect to both light-cone coordinates. It is useful to introduce the spectral density of CFT primary states (4.12). We then have (4.14), with ω_L = ½(ω + ℓ) and ω_R = ½(ω − ℓ). For a given CFT, G^ret(ω, ℓ) contains exact information about the spectrum of primary fields, in the form of a dense set of poles along the real axis, with residues equal to the corresponding OPE coefficients.
The Ruelle resonances appear as a series of poles in G^ret(ω, ℓ) located off the real axis. Based on equation (4.5) and the results of [32] and [21], we expect that their location should match the quasi-normal frequencies (4.6).
The spectrum of an irrational CFT at large c becomes very dense in the Cardy regime. In this type of situation, it is customary to treat the spectrum as a continuum with spectral density given by the Cardy formula, and elevate the OPE coefficients to continuous functions of the conformal weights. The Ruelle resonances are then expected to arise as poles in the analytic continuation of the OPE coefficients. 10 Let us summarize. The OPE coefficients between light and heavy operators satisfy several non-trivial compatibility conditions: they solve the CFT bootstrap equations (4.8) and (4.9), and must be compatible with the known location (4.6) of the Ruelle resonances. The question is: do these conditions uniquely fix the form of the OPE coefficients, in the universal high energy regime in which the CFT spectrum is governed by the Cardy formula? Do we know of any solutions to these conditions?
Ruelle from Liouville
The answer to the last question is affirmative: Liouville theory solves both conditions. The bootstrap program of Liouville CFT is by now on firm footing [40]. Our new observation is that the OPE coefficients of Liouville CFT, given by the famous DOZZ formula [41,42], indeed exhibit a series of poles that precisely match with the quasi-normal frequencies (4.6) of the BTZ black hole. This observation gives extra support to the proposal that Liouville theory should be viewed as the effective CFT that captures universal high energy behavior of holographic CFTs. As we will discuss in the concluding section, this result also sheds light on whether the lattice model of section 3 has ergodic dynamics or not.
Liouville CFT has a continuous spectrum labeled by the momentum variable α via ∆_α = α(Q − α), with c = 1 + 6Q² and Q = b + b⁻¹. In appendix A we review the expression for the three point function C(α₁, α₂, α₃) for a light operator, labeled by α₁, and two heavy operators, labeled by α₂ and α₃. Denoting the conformal dimensions as ∆₁ = h, ∆₂ = M and ∆₃ = M + ω, the corresponding Liouville momenta follow, where β = 2π/(b√M) is the inverse temperature associated with the state M. The DOZZ three-point function C(α₁, α₂, α₃) has a rich pole structure. As explained in appendix A, the series of poles relevant to our physical situation can be identified explicitly. Plugging these into (4.14), and doing the integral (4.13), we learn that the retarded Green's function G^ret(ω, ℓ) has poles at the frequencies (4.18). These are the Ruelle resonances that govern the thermalization dynamics of Liouville CFT. Notice that, relative to the list (4.6) of BTZ quasi-normal modes, the series (4.18) reveals additional poles shifted by the excitation numbers n_L and n_R of the left- and right-moving Virasoro descendants. These additional poles arise because in our CFT calculation we did not exclude the possibility that the incoming wave created by O_a also excites boundary gravitons. If we ignore the energy stored in the boundary gravitons, we recover the expected BTZ result (4.6).
Conclusions
In this note we have made three observations that clarify the geometric origin of chaotic behavior in irrational 2D CFTs. We argued that in holographic CFTs at finite temperature, conformal symmetry is non-linearly realized by means of universal Goldstone-like fields ξ(u) and η(v) that describe the near-horizon gravitational dynamics of the dual theory. The effective field theory is weakly coupled and its maximal Lyapunov behavior can be demonstrated at the semi-classical level. We expect this mode to dominate in irrational CFTs with large central charge and a dense spectrum. Understanding this behavior from first principles is still an open question. We used this insight to propose a new toy model for quantum chaos in the form of the FKV lattice model, with an integrable equation of motion given by a Y-system. Integrability may seem unhelpful for generating ergodic behavior. Indeed, integrable systems are seen as prototypical counter-examples to the ETH: their single state microcanonical ensemble is understood to be described by the generalized Gibbs ensemble (GGE), which has many chemical potentials, one for each conserved quantity [47]. However, this reasoning assumes that the state defining the microcanonical ensemble is an (approximate) eigenstate of many or all conserved quantities. Instead, if we choose an energy eigenstate that otherwise is a random linear superposition of eigenstates of all other conserved quantities, then the usual ETH can still apply. The conserved quantities in the FKV lattice model are highly non-local, and with respect to local observables the dynamics still looks random and thermalizing. As discussed in the introduction, this random dynamics can be reinforced by introducing some degree of disorder.
Indeed, the discrete model seems particularly useful for studying the propagation of entanglement, and even though the continuum limit is expected to be described by a CFT, the entanglement propagation generated by the Y-system rule (1.3) is non-ballistic and mixes left- and right-moving signals. Our conjecture that the lattice dynamics is ergodic is further supported by the fact that the continuum limit of the model is expected to be described by Liouville theory, which via the observation of section 4 has Ruelle resonances that prescribe the approach towards thermal equilibrium.
Of course, underlying all three observations in this note is the idea that the bulk gravitational dynamics of holographic 2D CFTs is accurately captured by 2D Liouville CFT [3]. This emergent Liouville field can be viewed as encoding the dynamical interplay between geometric entanglement and energy flow. This interpretation combines the idea of kinematic space [43,44], in which the entanglement entropy S(u, v) of an interval [u, v] between two space-like separated points x = u and x = v describes a metric on a 2D hyperbolic space, with the observation that the first law of entanglement can be integrated [45] into an expression for the energy-momentum tensor T_αβ in terms of the entanglement entropy S(u, v). The latter looks exactly like the Liouville energy-momentum tensor, via the identification of the Liouville field with the entanglement entropy. Note that both quantities define locally constant curvature metrics, and both transform inhomogeneously under coordinate transformations. Hence the dynamics of kinematic space seems intimately connected with the emergence of an effective Liouville field in holographic 2D CFT.
The dimension of the state in terms of its label is ∆_α = ∆̄_α = α(Q − α). As usual, the central charge is c = 1 + 6Q² with Q = b + 1/b, where b is a positive real parameter. The semiclassical limit c ≫ 1 corresponds to b ≪ 1. That is the limit we are interested in, although the result for Liouville theory is supposed to be valid more generally. Now we can state the DOZZ formula, which for generic α_{1,2,3} and b is given by (A.2), where µ is the cosmological constant, γ(x) ≡ Γ(x)/Γ(1 − x), and Υ₀ = dΥ_b(x)/dx|_{x=0}. Υ_b(x) is an entire function. It is usually defined by analytic continuation of an integral representation valid for 0 < Re(x) < Q, which can be found in [41,42]. Here we will not need more information about this function other than its zeros, since these give the positions of the poles in the DOZZ formula in terms of the α's. Looking at the formula (A.2), we see that all the poles are located, in terms of the labels α, at a discrete set of values, including those listed in (A.5) and (A.6). These are all the poles of the OPE coefficients. Now we will use the semiclassical limit to identify the poles that are physically relevant for the discussion in the main text, i.e. the ones that survive the b → 0 limit. The external operators that we are interested in are such that one is light, α₁, two are heavy, α₂ and α₃, and the difference between the two heavy operators is small. This means we fix the scaling with b in the b → 0 limit such that α₁ ∼ b, α₂ ∼ α₃ ∼ b⁻¹ and α₃ − α₂ ∼ b. Then it is clear that the relevant poles to retain are the ones in equations (A.5) and (A.6) with only n non-zero. These two sets of poles for n > 0 can be combined into a single formula in which n runs over all the integers. We see this has the right scaling, since both the left- and right-hand sides scale as b in the semiclassical limit. All the rest of the poles disappear in the heavy-heavy-light limit we are interested in here.
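The special function γ(x) ≡ Γ(x)/Γ(1 − x) appearing in the DOZZ formula obeys the reflection identity γ(x)γ(1 − x) = 1 directly from its definition. A quick numeric sketch (function name ours):

```python
import math

def gamma_dozz(x: float) -> float:
    # gamma(x) = Gamma(x) / Gamma(1 - x), as used in the DOZZ formula
    return math.gamma(x) / math.gamma(1.0 - x)

# Reflection identity gamma(x) * gamma(1 - x) = 1
for x in (0.1, 0.3, 0.7):
    assert abs(gamma_dozz(x) * gamma_dozz(1.0 - x) - 1.0) < 1e-12

# Symmetric point: gamma(1/2) = Gamma(1/2)/Gamma(1/2) = 1
assert abs(gamma_dozz(0.5) - 1.0) < 1e-12
```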
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Comparison of the Usability of Apple M2 and M1 Processors for Various Machine Learning Tasks
This paper compares the usability of various Apple MacBook Pro laptops for basic machine learning research applications, including text-based, vision-based, and tabular data tasks. Four tests/benchmarks were conducted using four different MacBook Pro models—M1, M1 Pro, M2, and M2 Pro. A script written in Swift was used to train and evaluate four machine learning models using the Create ML framework, and the process was repeated three times. The script also measured performance metrics, including time results. The results were presented in tables, allowing for a comparison of the performance of each device and the impact of their hardware architectures.
Introduction
In the current age, artificial intelligence algorithms are becoming more and more omnipresent, not only in robotic applications, but also in a wide range of application areas [1][2][3]. From ads and video recommendations [4][5][6], through text auto completion [7,8], to algorithms capable of producing award-winning art [9,10], an increasing number of people are using deep learning (DL) models in their work [11][12][13][14]. The production cycle of a deep learning model is time consuming and preferably requires understanding of complex concepts such as deep neural networks, neural network topology, training, and validation [15][16][17], as well as the application field, e.g., computer vision or natural language processing. Having that knowledge, training a production-capable model requires a lot of data [15,18,19] (or, alternatively, the use of transfer learning [18][19][20], which requires the knowledge of where to find such a model, and which one to use). Moreover, it is essential to have proficiency in using DL frameworks such as TensorFlow [21] or PyTorch [22]. Learning to create a good model [23] requires an investment of a considerable amount of time (preferably introduced at an early stage of education [24]) and knowledge of the basics of model preparation.
The authors have noticed that the process of learning and experimenting with machine learning for many researchers, students, or professionals is often preceded or accompanied by a difficult question-which hardware platform to choose [25][26][27][28]. This multi-factor optimization always includes an economical aspect [12,29], but the computational capabilities are not inessential [30,31]. Proper evaluation of the 'money-to-value' assessment is actively hindered by a 'marketing fog' [32,33], which tries to make the choice emotion-based instead of being based on any measurable factors.
Choosing a purposeful notebook for both everyday work and DL-oriented research is difficult. The general rule-of-thumb (better CPU, more RAM memory, modest graphics card) might still be a valid intuition-based choice; however, the evolution of CPUs brought a new player to the game: ARM-based (ARM-Advanced RISC Machine) 'Apple M1' chip (and its newer versions) [34][35][36][37], equipped with specialized GPU cores and NPU (Neural Processing Unit) cores. The presence of NPU cores sounds especially promising; however, not much evidence of the actual computational benefits is available. For this reason, the authors have decided to design and conduct a series of typical DL-related models/tasks, based upon readily available datasets, to evaluate and compare the new processors.
In this article, the authors verify the validity of Apple's Deep Learning framework for some of the common DL challenges-image classification and regression using the Animals dataset (available on Kaggle's webpage [38], consisting of over 29,000 images), image classification using a custom-made mini-dataset of 24 photos, a tabular dataset (the Kaggle's Payment Fraud Detection Dataset [39]), and text-based use case-the Kaggle's Steam Reviews [40] dataset.
Motivation
Deep learning researchers and enthusiasts worldwide are keen to obtain knowledge concerning new hardware that is affordable and could potentially speed up their experimentation with models. The marketing language is often not specific enough to explain the performance of the product. While, for most people, Neural Processing Unit performance is not the most important aspect of a laptop, for those who intend to work on deep learning, it could be a deciding factor. Moreover, knowledge of the performance of particular hardware models at specific price points could be beneficial in terms of deciding whether to buy the more expensive chip or not. Having clear information about current hardware capabilities, especially the newest ones, may be of great interest for researchers who have to decide on their next project, its budget, and its scope. Although it is feasible to carry out basic ML experiments on contemporary computers, the authors aim to examine and juxtapose the "ML usability" of the aforementioned hardware platforms. In this context, "usability" is understood as a quantitative assessment derived from employing systematic research methodologies for time-based evaluation of specific hardware platforms in preliminary machine learning experiments. The primary criterion under scrutiny is computation time; nonetheless, readers are advised to weigh the reported time results against the current prices of the respective models.
This study is intended to deliver reliable information and arguments to scientists and enthusiasts who are interested in purchasing a new notebook equipped with hardware capable of accelerating neural network computations.
Scope and Limitations
The intended readership of this study comprises scientists and practitioners of deep learning who are considering purchasing an Apple laptop for their research, rather than investing in specialized High-Performance Computing (HPC) or Workstation equipment. Apple's processors with a Neural Processing Unit (NPU) are marketed as ones that are indeed capable of accelerating model computations. Therefore, it is reasonable to compare only laptops that are equipped with Apple's NPU.
Authors of this study compared Apple's M-chip CPU family, including M1, M1 Pro, M2, and M2 Pro (details regarding exact hardware specifications are included in Section 2.4). Research was conducted using the same operating system (latest available to date, which was macOS Ventura 13.2) to introduce as little potential interference as possible. Additionally, Section 3.2 presents an alternative comparison-three different macOS versions whilst using the same hardware. All tests were designed to be used with the same code, environment, framework, and libraries (see Section 2.4) for all processors tested.
The comparison was performed using the Create ML framework [41] developed by Apple, designed and implemented with full compatibility and maximum efficiency of the CPU/hardware. Comparison of other DL frameworks may also be interesting, but, since it is strongly dependent on the availability of hardware support for the Apple M1/M2 chip as well as a proper implementation of the CPU extensions within a particular framework, a fair comparison is not yet possible.
While it would be interesting to measure the performance differences using statistical analysis, this work focuses primarily on the processing time of the datasets, as well as training and evaluation time of models created by Create ML.
Performance Measurements
The performance measurements were conducted with the usage of specialized software that stress-tested the hardware's computational capabilities. The authors utilized common deep learning problems such as computer vision (classification) and regression, while using popular real-world datasets to test the viability of Apple's chips in the tasks presented in Section 2.3. The proper analyses, as well as the correct interpretation of results, are key after collecting enough measurements on a particular hardware platform. The results were converted and visualized in an easy-to-read and understandable form.
Materials and Methods
To ensure the repeatability and the ease of implementation of the models and datasets used within the research, the authors opted for the most native choices for the macOS-based platforms, which were readily available with a fairly low entry threshold. The analyses and comparisons were implemented using the Swift programming language [42], the Xcode Integrated Development Environment (IDE) [43], Xcode Playground (part of the Xcode IDE, introduced in 2014 [44], especially useful for rapid prototyping), and Create ML [41] (merged into a unified Apple ecosystem for creating, managing, and using machine learning models, with full support of the available hardware acceleration [45], as well as the ability to deploy the models onto mobile platforms). Swift is a high-level programming language developed by Apple, released in 2014 as a replacement for Objective-C. It is commonly used as the first-choice language for applications built for Apple platforms [42].
Xcode is an integrated development environment (IDE) designed by Apple for developing software for macOS, iOS, iPadOS, watchOS, and tvOS. It includes a suite of tools for developing software, and also provides access to a wide range of Software Development Kits (SDKs) and Application Programming Interfaces (APIs) that are required for building applications for Apple's platforms [43].
Xcode Playground is an interactive programming environment that allows developers to experiment with Swift code in an interactive way. It provides a lightweight environment for writing and running Swift code with live code execution. Xcode Playground is integrated within the Xcode IDE [44].
Create ML [41] is a framework and a collection of components and tools intended for easy preparation of machine learning models as well as their easy integration into custom applications. It features a GUI-based application for creating and training a model, which can be later distributed and used on other devices, including mobile applications. Create ML implements the Core ML framework to be able to benefit from the hardware it targets.
Core ML is Apple's machine learning framework [45], designed specifically to benefit from the hardware acceleration capabilities of processors used in Apple devices. The Core ML framework is said to enable optimization of on-device performance by also using the GPU, NPU (named Neural Engine), and optimization of memory usage and power consumption [45].
Model Creation
The measurements were conducted using the 'Benchmarker.playground' script, available (open-source) in [46]. For a detailed insight into how the experiment was carried out, please refer to [37]. Within the 'Benchmarker.playground' script, each model was created by the use of custom functions written in the Swift programming language. The appropriate datasets were passed as arguments, and the trained models were returned. Finally, the models were tested using custom testing functions. The execution time of each stage and function was measured, which allowed the comparison of devices on which the script was running to be made. All the results were logged into the console output. The process of training and evaluation of all models was repeated three times.
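The published script itself is written in Swift; purely as an illustration of the measurement pattern it follows (time each stage, repeat the whole train-and-evaluate cycle three times, log everything to the console), a Python sketch might look like the following. The stage names and placeholder workloads are assumptions for illustration, not part of the actual 'Benchmarker.playground'.

```python
import time

def timed(label, fn):
    """Run fn, print its wall-clock duration, and return (result, seconds)."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f} s")
    return result, elapsed

def run_benchmark(stages, repetitions=3):
    """stages: list of (label, callable); returns one {label: seconds} dict per repetition."""
    all_runs = []
    for rep in range(repetitions):
        rep_times = {}
        for label, fn in stages:
            _, rep_times[label] = timed(f"[rep {rep + 1}] {label}", fn)
        all_runs.append(rep_times)
    return all_runs

# Placeholder workloads standing in for Create ML training/evaluation calls.
log = run_benchmark([("train", lambda: sum(i * i for i in range(10_000))),
                     ("evaluate", lambda: sorted(range(1_000), reverse=True))])
```

Collecting one dictionary of stage timings per repetition mirrors how the console log is later compared across the four machines.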
Model Export as .mlmodel
Models created with the use of Create ML (whether implemented in the Playground sandbox or in an actual application) can be easily exported as a file [47]. This is carried out using Apple's Core ML framework file format-an '*.mlmodel' file [48].
The .mlmodel file contains the prediction methods of a machine learning model, including all of its configuration data and metadata [49]. These parameters were previously extracted from its training environment and then processed to be optimized for Apple device performance [50].
The .mlmodel file is organized into three sections:
1. Metadata-Stores information such as the author, license, and model version;
2. Interface-Defines the input and output features;
3. Parameters-Stores all values extracted during the model training.
To correctly interpret the input data and produce valid output predictions, features need to be defined and specified in the .mlmodel [51,52]. This includes "Metadata", such as the author, license, and model version, stored in the form of a dictionary [51,53,54].
The "Model description" information such as the names, data types, and shapes of the features are saved in the "Interface" module [51,52,54].
In the next step, the architecture of the model needs to be defined [51]. This involves the definition of the model's structure, including the number and type of layers, the activation functions, and other operations [50,51]. Then, the model parameters are defined [55]. These are values of variables and coefficients, including weights and biases of each layer [51,55,56].
The model metadata, interface, architecture, and parameters are encoded into a data structure called 'protobuf message definitions' [51,54,55]. The Protocol Buffer syntax allows encoding and decoding information about the Core ML model in any language that supports the Protocol Buffer serialization technology (including Python, C++, C#, or Java) [54,55]. 'Model.proto' is the essential file of the Core ML Model protobuf message definitions [57]. It describes the structure of the model, the types of inputs and outputs it can have, and metadata [54,57]. The file also includes the 'specification version', which determines the version of the Core ML format specification and the functionalities that it can provide [54,55,57,58]. Each version of the target device's operating system has its own way of implementing the model, so it is crucial to also include these in the model specification [58].
All of the model's protobuf message definitions are encoded into binary format, which can be deployed on Apple platforms and decoded while loading the model [51,55].
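The real serialization uses the Core ML protobuf schema defined in 'Model.proto'; as a language-neutral illustration of the ingredients described above (specification version, metadata, interface, parameters) and of a lossless encode/decode round trip, consider this simplified Python sketch. The field names are stand-ins, not actual Core ML message fields, and JSON replaces protobuf purely for readability.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelSpec:
    # Simplified stand-ins for the sections of a Core ML Model message.
    specification_version: int                       # gates available functionality
    metadata: dict = field(default_factory=dict)     # author, license, model version
    interface: dict = field(default_factory=dict)    # input/output names, types, shapes
    parameters: dict = field(default_factory=dict)   # weights and biases per layer

def encode(spec: ModelSpec) -> bytes:
    # Core ML encodes to binary protobuf; JSON is used here only for readability.
    return json.dumps(asdict(spec)).encode()

def decode(blob: bytes) -> ModelSpec:
    return ModelSpec(**json.loads(blob.decode()))

spec = ModelSpec(specification_version=4,
                 metadata={"author": "example", "license": "MIT"},
                 interface={"inputs": {"image": "Image"}, "outputs": {"label": "String"}},
                 parameters={"dense_1": {"weights": [0.1, 0.2], "bias": [0.0]}})
assert decode(encode(spec)) == spec  # lossless round trip
```

The round-trip assertion reflects the property the text describes: the same message definitions can be written by one tool and decoded when the model is loaded on-device.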
Datasets Used
The authors used four distinct datasets for their study. These datasets were used for different purposes: two for image classification, one for tabular classification, and one for tabular regression. Of these, one dataset was created by the authors, and the rest were obtained from the Kaggle [59] website.
The Animals Dataset
The Animals dataset [38] was downloaded from Kaggle.com [59]. It contains 29,071 images divided into a training subset and a testing subset. It is published under a Creative Commons license. The images come in different shapes and have three colour channels, and the animals are often only partially visible in a picture (e.g., only the head of an ostrich), while in other cases the whole body is portrayed. To set the image size for the network properly, it is essential to know what objects are in the images. One picture could present a large animal (e.g., an elephant) from a large distance, so that it appears small, while another could be a picture of a shrimp taken from a close distance and zoomed in.
Knowing that the depicted objects may vary in size, and the classes can be similar enough that differentiating between them requires a certain level of detail, it becomes justified to increase the input size of the neural network. Hence, it is important to take a good look at the data.
The training set consists of 22,566 images divided into 80 classes, making the problem a multiclass classification. The distribution of data in the training subset is presented in Figure 1. The testing set is made of 6505 images. Figure 2 portrays a random sample of 20 pictures of different classes from the Animals dataset.
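As a quick sanity check, the reported subset sizes are internally consistent with the dataset total of 29,071 images, and imply an average of roughly 282 training images per class:

```python
train_images, test_images, num_classes = 22_566, 6_505, 80

assert train_images + test_images == 29_071            # matches the reported total
per_class = train_images / num_classes                 # ≈ 282 images per class on average
train_share = 100 * train_images / (train_images + test_images)  # ≈ 78% training split
```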
The Payment Fraud Detection Dataset
The Online Payments Fraud Detection Dataset was published on Kaggle under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. The dataset consists of 5,080,807 entries in a ".csv" file, which translates into 493.5 MB. The data are divided into two classes (fraud or non-fraud), making it a binary classification problem. Every entry has nine features, as explained on the Kaggle's dataset webpage [39].
The Steam Reviews Dataset
The Steam Reviews Dataset 2021, obtained from Kaggle, represents the most recent dataset used in this study. It comprises approximately 21 million user reviews pertaining to approximately 300 games on the Steam platform. The dataset is available under the GNU GPL 2 license. To prepare the dataset for utilization in "Create ML", the authors conducted a data cleaning process using the Python language, along with the "Pandas" library. The cleaning procedure involved removing specific columns such as "comment text", "author", "creation date", and others. The resulting cleaned dataset was saved as "SteamReviewsCleaned.csv", resulting in a reduction in size from 3 GB to 2.15 GB. This dataset was utilized for a tabular regression problem within the context of this study.
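The paper performs this cleaning in Python with the Pandas library but does not list the exact calls; the following is a minimal sketch consistent with that description. The column names ('review', 'author.steamid', 'timestamp_created') are assumptions standing in for the columns the authors removed.

```python
import pandas as pd

def clean_reviews(src: str, dst: str) -> pd.DataFrame:
    """Drop free-text and identity columns, then save a smaller CSV."""
    df = pd.read_csv(src)
    # errors="ignore" tolerates columns that are absent from a given dump.
    df = df.drop(columns=["review", "author.steamid", "timestamp_created"],
                 errors="ignore")
    df.to_csv(dst, index=False)
    return df
```

Dropping the long free-text columns is what accounts for the reported reduction from 3 GB to 2.15 GB.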
The ClassifierData Dataset
This very small custom dataset was created by one of the authors. It was composed of four classes: "IPhone", "MacBook", "Apple Watch", and "AirPods". Every class contained 25 photos-19 in the training subset and 6 in the test subset. Every image was taken from different angles as well as in varied lighting. Some pictures were taken of objects held in hand, while others were taken while lying on the floor, table, or carpet. Similarly to the Animals dataset, Figure 3 presents a random sample of images from all four classes. The data distribution of both training and test subsets is visualized in Figure 4. The subset was structured in a Create ML-compliant format [47], i.e., as image files placed inside the class-related folders. All images were sampled using a mobile device, photographing the target at various angles and in various light conditions.
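As noted, Create ML's image classifier expects one folder per class, with that class's images inside it [47]. A small, illustrative Python helper that produces this layout could look as follows; the file names and paths are placeholders.

```python
import os, shutil, tempfile

def build_createml_layout(root, samples):
    """samples: iterable of (image_path, class_label).
    Copies each image into root/<class_label>/, the folder-per-class
    layout Create ML's image classifier expects."""
    for path, label in samples:
        class_dir = os.path.join(root, label)
        os.makedirs(class_dir, exist_ok=True)
        shutil.copy(path, class_dir)
    return sorted(os.listdir(root))  # class names Create ML will infer

# Tiny demo with an empty placeholder file standing in for a photo.
work = tempfile.mkdtemp()
img = os.path.join(work, "photo1.jpg")
open(img, "wb").close()
classes = build_createml_layout(os.path.join(work, "train"),
                                [(img, "IPhone"), (img, "AirPods")])
```

The class labels are inferred from the directory names, so no separate annotation file is needed for this kind of dataset.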
Framework and Hardware Used in the Trials
The study was conducted on four notebooks, each one running the macOS Ventura operating system version 13.2 [60], with the Xcode Integrated Development Environment, version 14.2 [61].
The primary objective of the research was to investigate the usability of modern laptops equipped with ARM-based M1/M2-series CPUs in popular machine learning tasks. Since the manufacturer of M1-and M2-equipped laptops declares that the presence of the NPU cores in the CPUs makes them useful and interesting in machine learning applications, and since there are many researchers willing to buy a suitable portable platform for everyday work, the authors have decided that it may be worthwhile and interesting to put the eligibility of the NPU-equipped CPUs in DL tasks to a test.
All examined machines were ARM-based Apple MacBook Pro notebooks. The first tested computer was the 2020 M1 MacBook Pro. The model started Apple's transition from Intel to ARM architecture [34]. It was equipped with the first version of the Apple M1 chip. The CPU included four high-performance and four energy-efficient cores. The chip was also equipped with an eight-core GPU. The first version of the Neural Engine, a 16-core neural processing unit (NPU), was also included in this chip [34]. The device had 8 GB of RAM. It is referred to as 'M1' in this work.
Another device from the M1 series was the MacBook Pro 2021. It included the M1 Pro chip, a strengthened version of the M1, which also featured an eight-core CPU, but with a different allocation of cores: six high-performance and two energy-saving.
The M2 series processor is an upgraded version of the previous M1 processor, boasting a speed increase of about 40%. It features eight cores, which are designed to be four performance cores and four efficiency cores [35]. Additionally, the processor includes 10 GPU cores and 16 Neural Engine cores. The M2-equipped laptop used in the research had 16 GB of RAM.
The fourth laptop used for the research was a MacBook Pro equipped with an Apple M2 Pro processor and 16 GB of RAM. This processor was composed of 10 cores, with 6 of them being performance cores and 4 being efficiency cores [36]. The processor also included 16 GPU cores and 16 Neural Engine cores.
Results
The most important result presented within this paper is the comparison of the computational performance of the M1 and M2 processors in ML tasks, presented in Section 3.1 and discussed in Sections 4 and 5. The comparison is made based on the performance (processing time) of ML models included within the 'Benchmarker.playground' project. Section 3.2 includes an additional, also interesting, analysis of a possible impact of the versions of macOS and Xcode on the ML tasks' processing time.
Measurement of the Impact of the Processor Model on the Model Creation Time
During the research, three measurements of model training and testing time were performed. The computer was consistently connected to a power source throughout the script execution to ensure uninterrupted performance and avoid any potential limitations caused by energy-saving features. This allowed us to benchmark the efficiency of M-series processors in machine learning tasks using Apple's ecosystem.
Running the Benchmark
The measurements were performed on four computers. The 'Benchmarker.playground' file was copied to the hard drive of each machine. Then, the file was opened in the Xcode environment and executed. During the tests, each computer was left without any additional tasks. When the running of the script had finished, the console output was saved to a '.txt' file.
A screenshot of the 'M1 Pro.txt' file with the console output log from the 'Benchmarker' playground executed on the M1 Pro Mac is presented in Figure 5.
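Once each run's console output is saved to a '.txt' file, the timings from the four machines can be collated with a small parser. This Python sketch assumes a simplified '<stage>: <seconds> s' log-line format for illustration; the actual playground output is formatted differently.

```python
import re

LINE = re.compile(r"^(?P<stage>[\w ]+):\s*(?P<secs>\d+(?:\.\d+)?) s$")

def parse_log(text):
    """Return {stage: seconds} for every timing line in a console dump."""
    times = {}
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            times[m.group("stage")] = float(m.group("secs"))
    return times

sample = "training: 8.622 s\nevaluation: 1.736 s\nsome other console noise"
# parse_log(sample) → {'training': 8.622, 'evaluation': 1.736}
```

Collecting the per-stage times into dictionaries makes building the comparison tables across the four machines a matter of merging a few parsed files.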
The Results of the Benchmark
The results of the benchmark performed on the ClassifierData dataset were similar in terms of overall time, except for the M1 Pro, which was over two times slower than the other processors. Each training took, on average, a different number of iterations (epochs); the means spanned from 11 to 14.667. The M1 achieved the best average result by a slight margin, outperforming the second-fastest (the M2 Pro) by 383 ms. The third average result was achieved by the M2, which lost about 70 ms to the Pro variant. The worst performer on the ClassifierData dataset was the M1 Pro; despite taking the second-lowest average number of iterations (11.333), it scored by far the worst time of 8.622 s. Every model trained on each chip achieved 100% for both training and validation accuracy. This test was the quickest one due to the small size of the dataset. Table 1 shows the time and accuracy results of the model training and testing process, performed on the 'ClassifierData' dataset using Create ML.
The multiclass classification test was performed by utilizing the Animals dataset of over 29,000 images, split into training and testing subsets. The results of the benchmark are presented in Table 2. The quickest of all tested processors while training the model on the Animals dataset was the M2 Pro, which took 169.7 s to complete the test. The second-fastest processor, the M2, took 186.689 s to train the model, which is 9% slower than the Pro variant; however, the evaluation time was essentially the same for both, at 40.796 s. The M1 Pro finished the training process in 193.219 s, which earned it third place. This result is 12% slower than the M2 Pro, and the evaluation time was approximately 5 s slower than both the M2 and M2 Pro. The slowest chip, the M1, achieved a result of 236.285 s; it was 28% slower than the fastest processor and was the only one that completed the training in over 200 s on average. The evaluation also took the longest, exceeding 47 s. All processors achieved similar training and validation accuracy, of about 88% and 86.5%, respectively.
Upon examining the results of the benchmark conducted on the PaymentFraud dataset, displayed in Table 3, it is apparent that the accuracy levels for all tested cases were comparable, with minor disparities emerging during the data analysis phase. During this stage, the M2 Pro processor exhibited the quickest performance, taking only 1.924 s, while the slowest was the M1 at 2.310 s. The M2 processor, on the other hand, completed the data analysis in 2.01 s, and the M1 Pro required 2.161 s.
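The percentage gaps quoted for the Animals training times (9%, 12%, and 28%) can be reproduced from the raw results; note that each gap appears to be computed relative to the slower chip's own time, (t − t_fastest)/t, rather than relative to the fastest time:

```python
# Animals training times in seconds, as reported in Table 2.
times = {"M2 Pro": 169.7, "M2": 186.689, "M1 Pro": 193.219, "M1": 236.285}
fastest = min(times.values())
# Gap relative to each chip's own time, matching the percentages in the text.
gaps = {chip: round(100 * (t - fastest) / t) for chip, t in times.items()}
# gaps → {'M2 Pro': 0, 'M2': 9, 'M1 Pro': 12, 'M1': 28}
```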
The most notable differences in processing time were found during the overall model building phase. The M2 processor was the fastest in this regard, finishing the model building task in 102.109 s. The M1 processor took 13% longer, completing the same assignment in 117.546 s, while the Pro version of the M1 took 146.641 s, which was 43% slower than the M2 processor. Interestingly, in this case, the M2 Pro processor proved to be the slowest, taking 151.659 s to complete the task, which was 48% slower than the M2's base version.
Table 4 displays the benchmark outcomes for the SteamReviewsCleaned dataset. Noticeably, the table does not present the maximum error and root-mean-square error findings for the training, validation, and test data. These results are excluded due to their consistency across all cases, as was shown in our preceding publication [37].
The M2 Pro processor boasted the swiftest processing time, taking only 7.981 s to complete the task, while the M1 Pro and M2 processors processed the data in nearly the same amount of time, clocking in at 8.143 s and 8.255 s, respectively. Meanwhile, the M1 proved to be the slowest, taking 9.276 s.
During the model building phase, the M2 Pro processor was once again the fastest, completing the task in only 12.545 s. The M2 processor followed closely, requiring 13.395 s to build the model. The M1 Pro took 14.713 s to finish the task, while the M1 took 15.545 s. With the exception of the M1, which took 2.023 s, all processors required less than 1.75 s to evaluate the model. The M2 Pro processor was once again the quickest in this task, taking only 1.596 s, while the M1 Pro and M2 processors achieved similar times of 1.736 s and 1.665 s, respectively.
The computer was not used for any other tasks during each execution of the script, and the device remained connected to the power source at all times to prevent any limitations due to energy-saving features. Tables 6 and 7 present the macOS-version comparison results for the 'ClassifierData' and 'Animals' datasets, Table 8 presents the results acquired for the PaymentFraud dataset, and the training times for the SteamReviewsCleaned dataset on various versions of the MacBook Pro with the M1 Pro processor are displayed in Table 9.
Discussion
The gathered results present the comparative computational performances of the Apple laptops equipped with four different M-family processors, including the most recent M2 Pro chip. All of the tested hardware (including the previous M1 generation) is perfectly capable of performing ML tasks that do not require processing millions of images or hundreds of gigabytes (or more) of data. Each and every dataset has been successfully analyzed and processed. Every created model had similar efficacy; regardless of the chip it was trained on, the results were satisfactory.
Three of the datasets used are publicly available; together with the hardware and software specifications provided in this research, this ensures that the results are reproducible and comparable for other researchers.
The chips were tested for machine learning applications with the use of Apple's Create ML. A rather surprising average result is the overall performance of the M2 Pro variant. It was outperformed by the base variant by approximately 9%; however, this was mainly due to poor performance of the more expensive variant on the PaymentFraud dataset. This result may affect someone's decision as to whether it is beneficial to increase their budget to buy the Pro chip or save money and buy the cheaper standard M2. The M1 Pro also had the same difficulties with the same dataset that its newer counterpart had, achieving a much worse time than the base version of the chip.
The research also included an evaluation of the script execution time on various macOS versions. The tasks were performed on the same MacBook Pro laptop, with different versions of the operating system and development environment.
The obtained results showed that the version of macOS has an impact on the script execution times. For the 'ClassifierData' dataset, almost all times were longer after updating the operating system from the previous major version (macOS 12.4) to the new major version (macOS 13.0.1). However, the installation of a system update with bug fixes and improvements (macOS 13.2) decreased the execution time to a value comparable with the results from the previous major version. In the case of the 'Animals' dataset, the data analysis time also changed with the version of the operating system, with macOS 13.0.1 being the fastest. The model training time was comparable in each test, which suggests that system updates have no impact on the training process.
The tests performed on the tabular datasets ('PaymentFraud' and 'SteamReviewsCleaned') showed no remarkable difference between the execution times of the training and evaluation processes. This suggests that the differences are visible mainly when working with image datasets.
Conclusions
Upon conducting an in-depth analysis of the collected results, our study revealed significant findings that may provide insights into the performance of distinct chips when employed for training and testing models using Create ML.
The results presented in the paper demonstrate that the M2 chip may exhibit superior performance compared to the M2 Pro (as shown in Table 5), implying that the M2 chip may be a favorable choice for tasks demanding efficient model creation.
While it is difficult to provide a definitive recommendation for future processors or operating systems due to their evolving nature, in this study, the authors proposed a methodology for evaluating the effectiveness of specific hardware architectures, which can be investigated by the researchers themselves, using the proposed [46] benchmark.
The observations also reveal that the 'Pro' series of respective chips (namely, M1 Pro and M2 Pro) do not meet the anticipated time-related performance efficiency for model creation using the 'Payment Fraud' dataset (see Table 3). Moreover, the processing performance of the 'M1 Pro' chip proved to be below average for the small 'ClassifierData' dataset, whereas the 'M1' chip exhibited surprisingly good performance on the same dataset (see Table 1). These observations indicate that certain characteristics of datasets can indeed impact the performance of specific chip models. The research provided evidence that the expected superiority of the 'Pro' variant should be challenged for model training, even when using Apple's own 'Create ML'. Lastly, the experimental comparative research resulted in the formulation of additional insights of minor significance: it was confirmed that the multiclass classification performance results were consistent with the CPU-generation-related expectations, and that the operating system version had an impact on the processing time, particularly in the case of image datasets.
The results presented within this study bring theoretical and managerial implications that extend beyond the immediate scope of hardware platform performance evaluation. The insights gained from comparing respective processors and their performance in machine learning tasks using Create ML shed light on the complexities and nuances of hardware-platform-specific characteristics. From a theoretical standpoint, these findings help to understand the impact of particular hardware choices on the efficiency and effectiveness of ML computation time. By examining the performance (and its variations across various chips), researchers can refine their theoretical models and develop more nuanced frameworks for leveraging the benefits of hardware acceleration for typical Machine Learning applications. On a managerial level, the research findings have substantial value for decision-makers who are considering hardware platforms for machine learning researchers and initiatives. The performance disparities observed among the tested chips help to highlight the benefits and importance of careful evaluation of the hardware requirements, based on the specific needs and characteristics of current machine learning projects.
Future Work
The research conducted in this study opens avenues for further exploration in the realm of multi-platform analysis (the comparison of Apple platforms against other, non-Apple, platforms). While this research focused on the performance evaluation of hardware platforms utilizing primarily Create ML, future studies could extend the analysis to encompass a broader range of machine learning frameworks, especially the most popular ones: TensorFlow and PyTorch. By conducting experiments using TensorFlow and PyTorch across different hardware platforms and operating systems, a more comprehensive understanding of the performance variations and platform compatibility can be obtained. Additionally, investigating the impact of different frameworks on the efficiency and effectiveness of model creation would contribute valuable insights to the field. Such multi-platform approaches will provide a more comprehensive assessment of hardware-platform-specific characteristics and guide researchers and practitioners in making informed decisions regarding the choice of frameworks and hardware configurations for their machine learning tasks.
Data Availability Statement:
The 'Benchmarker.playground' project, including all source code, is made available as open source on GitHub at https://github.com/dKasperek/Benchmarker (accessed 3 March 2023). The code is implemented to run three iterations, each one creating four ML models, using the following datasets: 'Animals' (available from [38]), 'PaymentFraud' ([39]), 'SteamReviews' ([40]), and 'ClassifierData' (available within the above-mentioned GitHub project). The datasets should be imported into the 'Resources' folder inside the Xcode Playground project.
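The benchmarking procedure described in the Data Availability Statement (three iterations, each timing the creation of several models per dataset) can be sketched in a language-agnostic way. The following Python sketch is an illustration only: the actual project is a Swift/Create ML Xcode playground, and `train_fn` here is a hypothetical stand-in for model creation.

```python
import time
from statistics import mean, stdev

def benchmark(train_fn, datasets, iterations=3):
    """Time a model-creation function across datasets and iterations.

    train_fn: callable taking a dataset and returning a trained model
    datasets: mapping of dataset name -> data object
    Returns {name: (mean_seconds, stdev_seconds)} of wall-clock training time.
    """
    results = {}
    for name, data in datasets.items():
        times = []
        for _ in range(iterations):
            start = time.perf_counter()
            train_fn(data)  # stand-in for e.g. a Create ML training call
            times.append(time.perf_counter() - start)
        results[name] = (mean(times), stdev(times))
    return results

# Trivial stand-in "training" function on a dummy dataset:
report = benchmark(lambda d: sorted(d), {"ClassifierData": list(range(1000))})
print(report)
```

Reporting a mean and standard deviation over repeated runs, as the original project does with three iterations, helps separate genuine chip-to-chip differences from run-to-run noise.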
Conflicts of Interest:
The authors declare no conflict of interest.
Properties of Cement-Bonded Particleboards Made from Canary Islands Palm (Phoenix canariensis Ch.) Trunks and Different Amounts of Potato Starch
Wood-cement panels are becoming increasingly widely used as prefabricated building materials. In order to increase the use of renewable resources as materials for industrial applications, the use of alternative plant fibres has been gaining interest. Additionally, it is assumed that new or better board properties can be achieved due to the different chemical and mechanical properties of such alternative sources of fibres. In south-eastern Spain, the Canary Islands palm (Phoenix canariensis) is widely used in urban landscaping. Plantations attacked by red palm weevils generate abundant plant waste that must be shredded and taken to authorised landfills. This paper discusses the use of particles of Canary Islands palm for manufacturing fibre panels containing 20% cement in relation to the weight of the particles, using different proportions of starch as a plasticiser. A pressure of 2.6 MPa and a temperature of 100 °C were used in their production. Density, thickness swelling, water absorption, internal bonding strength, modulus of rupture (MOR), modulus of elasticity (MOE), and thermal conductivity were studied. The mechanical tests showed that the MOR and MOE values increased with longer setting times, meaning that the palm particles were able to tolerate the alkalinity of the cement. The board with 5% starch had a MOR of 15.76 N·mm−2 and a MOE of 1872 N·mm−2 after 28 days. The boards with thicknesses of 6.7 mm had a mean thermal conductivity of 0.054 W·m−1·K−1. These boards achieved good mechanical properties and could be used for general use and as a thermal insulation material in building construction.
Introduction
Wood-cement boards for formwork, roofing, sandwich panels, wall coverings, partition walls, prefabricated houses, false ceilings, and flooring are becoming increasingly widely used in building construction. These are boards made of wood particles or cellulose fibres mixed with cement, water, and chemical additives.
Given the shortage of wood, the latest trends are aimed at using other plant fibres. Around the world, a large amount of plant waste is generated from various sources; thus, using this waste would contribute to its recycling and reutilisation.
The Canary Islands palm is a species of the family Arecaceae or Palmae, of the genus Phoenix, which is endemic to the Canary Islands (Spain). It forms a natural hybrid with the date palm (Phoenix dactylifera L.), meaning that there are many hybrid cultivars that are difficult to distinguish from one another. The Canary Islands palm is more vigorous than the date palm, and its stipe reaches a height of 20 m and a diameter of 40 to 50 cm.
The red palm weevil (Rhynchophorus ferrugineus O.) invaded the states of the Persian Gulf in the mid-1980s, causing serious damage to palm trees [1]. The nature of the plant, together with the transport of seed material, has added to the rapid development and spread of this pest over a short period of approximately a decade, and it now affects more than 60 countries in Europe, Africa, and the Middle East. The pest usually leads to death of the palm tree unless appropriate curative measures are taken. However, it is often not possible to take curative measures at the start of the attack, as it is difficult to detect at an early stage of infestation [2]. As in most Mediterranean countries, palm trees are widely used in urban landscaping in south-eastern Spain, and the Canary Islands palm (Phoenix canariensis) is one of the most abundant species in the area. Consequently, the numerous plantations affected by red palm weevil damage generate a large amount of plant waste that must be shredded and transferred to an authorised landfill. Using this waste is one way to help adopt sustainable solutions for the control and eradication of contaminated specimens while also producing benefits for the environment.
Different plant fibres have been tested with cements and cement mortars: sisal fibres, bamboo, coconut shell and natural jute [3], hemp fibres [4], sisal and banana fibres [5], date palm rachis fibres [6], Arundo donax L. [7], jute fibres [8,9], and waste jute fibres from the textile industry [10]. Some studies [11] concluded that in general, plant fibres suffer degradation problems with cement. Therefore, different methods have been proposed to modify plant fibres in order to prevent their degradation: alkaline treatment of jute fibres [12], silane treatment of sugar cane bagasse fibres [13], modification of kenaf fibres by various chemical treatments [14], and treatment of date palm rachis fibres in three alkaline solutions [7].
In tests conducted on plant fibre composites with cement [6], it was reported that they suffered a severe decrease in the modulus of rupture and modulus of elasticity after one year's exposure to temperate or tropical environments, concluding that these reductions could be attributed to the carbonation of the matrix followed by leaching and progressive microfissuration. Degradation has also been shown to be due to the carbonation of plant fibres [15].
In a review of research on the use of plant fibres in cement composite materials [16], it was deduced that cellulosic elements present great variability in their mechanical properties because their degradation is mainly due to alkaline degradation and fibre mineralisation. These mechanisms bring about changes in the chemical composition of the fibres, which in turn causes a reduction in strength and degradation of the polymer matrix and of the fibre/polymer matrix interfacial bond.
In general, the literature indicates that in composite materials made from cement reinforced with plant fibres, there was a decrease in the heat of hydration [13,17,18], which is attributed to different components of the plant fibres. Consequently, Vo and Navard [19] stated that selecting sources of biomass with a low content of these compounds would minimise these drawbacks.
A study of cement composites reinforced with natural fibres [20] showed that cement hydration would initially be improved by increasing the curing temperature, adding chemical accelerators, and using materials with a high surface area.
The addition of starch to cement has also been investigated. It has been suggested that a mixture of polysaccharides such as cellulose and starch was a good water retention agent [21]. These additives are also set retarding agents that improve the working time and modify cement hydration. Zhang et al. [22] conducted a study on the dispersion mechanism of sulphonated starch as a water-reducing agent for cement. Ferrández-García et al. [23] studied cement boards with different proportions of starch, concluding that their mechanical properties were suitable for use in load-bearing structures.
The Canary Islands palm trunk has been studied as a material for manufacturing particleboards, and it was shown that using 20% starch as an adhesive and a pressing time of 30 minutes in the hotplate press, the boards could be classified as Grade P2 [24] and used for manufacturing furniture, flooring, and false ceilings [25].
The objective of this study was to evaluate a new composite made from Canary Islands palm trunk biomass agglomerated with cement and starch, to analyse the physical, mechanical, and thermal properties of these panels, and to determine whether the palm particles degrade over time. Since these boards are considered products of high environmental value and long duration, the manufacture of such boards could help to decrease the atmospheric concentration of CO2 by fixing it and thus providing benefits for the environment.
Materials
The materials used were type CEM II/B-LL 32.5 N Portland cement, water, particles of Canary Islands palm trunk (Figure 1), and different proportions of potato (Solanum tuberosum L.) starch.
The Canary Islands palm biomass was obtained from palm trees infested with red palm weevil at the Higher Technical College of Orihuela at Universidad Miguel Hernández, Elche. The palm trunks were cut before being chopped and left to dry outdoors for 6 months. They were then shredded in a blade mill, and a vibrator sieve was used to select a particle size between 0.25 and 1 mm (Figure 1). The particles had a relative humidity of 54% and were left to air dry for an additional 3 months until a relative humidity of 10% was reached. Potato starch from the food industry was used as a plasticiser, with a purity of 90%. Chemically, starch is a mixture of two very similar polysaccharides, namely amylose and amylopectin. Potato starch typically contains large oval-shaped granules and has a gelatinisation temperature of 58–65 °C. The water was taken directly from the mains drinking water network, with an average temperature of 20 °C.
Manufacturing Process
The manufacturing process involved dry mixing the cement and palm particles with different proportions of starch (0%, 5%, and 10%). Subsequently, 10% water was sprayed onto the mixture, which was stirred for 10 minutes in a blender (LGB100, Imal, S.R.L., Modena, Italy) to homogenise it at 30 rpm.
The mat was formed in a mould of dimensions 600 mm × 400 mm and was subjected to pressure and heat in a hotplate press, applying a pressure of 2.6 MPa and a temperature of 100 °C for 2 or 3 h. The boards were then removed from the press and stacked horizontally at room temperature for the first 3 days of setting, before being positioned vertically and kept under ambient conditions in the laboratory. The approximate dimensions of the boards were 600 mm × 400 mm × 6.7 mm. Six types of board were manufactured, with different compositions and pressing times. The characteristics of each type of particleboard are shown in Table 1.
Eight days after they were prepared, the samples were cut to perform the tests required to determine the mechanical, physical, and thermal properties of each of the six types of board that were studied (Figure 2). Ten particleboards were manufactured for each type. The samples were kept for 24 h in a refrigerated cabinet (model Medilow-L, JP Selecta, Barcelona, Spain) at a temperature of 20 °C and relative humidity of 65% before performing the tests.
Table 1. Characteristics and composition in weight of the manufactured panels.
Methods
The morphology of the inside of the Canary Islands palm trunk was examined using a scanning electron microscope (SEM) (Hitachi model S3000N, Hitachi, Ltd., Tokyo, Japan) equipped with an X-ray detector (Bruker XFlash 3001, Billerica, MA, USA). Images were taken of fractured 5 mm × 5 mm cross-sections.
The method followed was experimental, conducting tests in the Materials Strength Laboratory of the Higher Technical College of Orihuela at Universidad Miguel Hernández, Elche. The values were determined according to the European standards established for wood particleboards [26].
Density [27], moisture content [28], thickness swelling and water absorption after 2 and 24 h immersed in water [29], internal bonding strength [30], and thermal conductivity and resistance [31] were measured 28 days after they were manufactured. The moisture content was measured with a laboratory moisture meter (model UM2000, Imal S.R.L, Modena, Italy). The water immersion test was carried out in a heated tank. The thermal conductivity and resistance tests were conducted with a heat flow meter (NETZSCH Instruments Inc., Burlington, MA, USA). A sample of each board of dimensions 300 mm × 300 mm × 6.7 mm was used for this test.
To assess the possible degradation of Canary Islands palm particles in contact with cement, the modulus of rupture (MOR) and the modulus of elasticity (MOE) were evaluated 8, 28, and 90 days after the boards were manufactured [32]. This test was performed on two samples of each board.
The mechanical tests were performed with a universal testing machine (model IB700, Imal, S.R.L., Modena, Italy), operated at a crosshead speed of 5 mm·min−1 for the bending test and 2 mm·min−1 for the internal bonding test, as specified by the applicable European standards [30,32].
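The bending test yields the MOR and MOE figures discussed in the results. The paper does not reproduce the formulas, so the following is a sketch of the standard three-point-bending relations used for wood-based panels (EN 310); the sample dimensions and load below are hypothetical.

```python
def modulus_of_rupture(f_max_n, span_mm, width_mm, thickness_mm):
    """MOR in N/mm^2 from three-point bending (EN 310 style):
    MOR = 3 * F_max * l / (2 * b * t^2)."""
    return 3.0 * f_max_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

def modulus_of_elasticity(df_n, da_mm, span_mm, width_mm, thickness_mm):
    """MOE in N/mm^2 from the slope (dF/da) of the linear part of the
    load-deflection curve: MOE = l^3 * dF / (4 * b * t^3 * da)."""
    return span_mm ** 3 * df_n / (4.0 * width_mm * thickness_mm ** 3 * da_mm)

# Hypothetical 50 mm wide, 6.7 mm thick sample on a 134 mm span (20x thickness):
mor = modulus_of_rupture(f_max_n=170.0, span_mm=134.0, width_mm=50.0,
                         thickness_mm=6.7)
print(round(mor, 1))  # → 15.2
```

A maximum load of about 170 N on such a sample already corresponds to a MOR in the range reported for these boards.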
The standard deviation was obtained for the mean values of the tests, and analysis of variance (ANOVA) was performed for a significance level of α < 0.05. The statistical analyses were performed using SPSS v. 26.0 software (IBM, Chicago, IL, USA).
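The ANOVA itself was run in SPSS; as a dependency-free illustration of what the test computes, the one-way F statistic can be obtained as below (the group values are invented). Converting F to a p-value additionally requires the F distribution (e.g., `scipy.stats.f.sf`).

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical property readings for three board types:
f = one_way_anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5])
print(f)  # → 3.0
```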
SEM Observations
The image of the longitudinal section of the Canary Islands palm trunk is shown in Figure 3, where it is possible to observe the typical features of the vascular bundles (fibres, vessels, and phloem embedded in parenchymatous tissue). The vascular bundles are covered by circular silica phytoliths with protruding cones that appear brighter in the SEM image due to their mineral composition. Silica phytoliths are composed of common silicates.
The silicon content may be related to the fact that the particles do not degrade in an alkaline medium [33], but this result was not conclusive in this study.
Physical Properties
The density and moisture content values are shown in Figure 4. When starch is added, the density of the boards decreases from 1083.652 kg·m−3 in the starch-free boards to 1039.259 kg·m−3 in the boards to which 10% starch is added. These can be classified as high-density boards, although the density is lower than that of wood-cement boards. The ANOVA showed that the density depends on the type of board, pressing time, and percentage of starch added.
The relative humidity (RH) of the boards decreases the longer they are kept in the press. The values obtained are between 1.73% and 3.29%, meaning that all the boards tested have a very low RH. According to the ANOVA performed, it can be observed that the RH depends on the type of board and the pressing time.
The results of the thickness swelling test (TS) are shown in Figure 5. In the water immersion test, it can be seen that after 2 h, all three types of board show similar thickness swelling values (14%). After 24 h, the starch-free boards have a mean TS value of 30.2%, those with 5% starch have a mean TS value of 28.8%, and those with 10% starch have a mean TS value of 25.2%.
Thickness swelling over 24 h was greater than that required by the regulations [24] for grade P3 (17%) in all the experimental panels manufactured in this study; therefore, adding a water-repellent substance during the manufacture of the board would improve this parameter.
Although statistically it does not appear to depend on the type of board, pressing time, or starch added, it would be advisable to carry out further tests, subjecting the boards to heat for longer, as is the case with industrial wood-cement boards, which take approximately 8 h to cure, applying pressures of 3 and 3.5 MPa at temperatures between 75 and 80 °C.
As shown in Figure 5, the behaviour of the six types of boards is similar in terms of water absorption (WA) after immersion for 2 h (WA ≈ 32%) and 24 h (WA ≈ 56%); there are no significant differences between the six types of board. It can be observed that the standard deviation is higher in the boards with 10% starch. This may indicate that not all of the starch has gelatinised during the manufacture of the boards, meaning that some samples absorbed more water than others in the test.
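Both the TS and WA percentages above are relative increases over the initial value of the sample. A minimal sketch (the sample numbers are hypothetical, chosen near the reported means):

```python
def swelling_percent(before, after):
    """Relative increase in percent, used both for thickness swelling
    (EN 317 style, thicknesses in mm) and for water absorption (masses in g):
    100 * (after - before) / before."""
    return 100.0 * (after - before) / before

# Hypothetical sample measured before and after 24 h immersion:
ts = swelling_percent(6.7, 8.7)    # thickness: 6.7 mm -> 8.7 mm
wa = swelling_percent(20.0, 31.2)  # mass: 20.0 g -> 31.2 g
print(round(ts, 1), round(wa, 1))
```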
Mechanical Properties
The internal bonding strength (IB) values are shown in Figure 6 and are very high. The boards with 10% starch reach mean values of 0.80 N·mm−2, and those without starch reach 0.58 N·mm−2; there is a large deviation. This may be due to competition for water between the cement, particles, and starch. The statistical analysis shows that the IB depends on the type of board, pressing time, and percentage of starch used.
The results of the bending test after 8, 28, and 90 days are shown in Figure 7. It can be observed that in all the types of board, the modulus of rupture (MOR) and modulus of elasticity (MOE) values increase over time, indicating that the palm biomass tolerates the alkalinity of the cement. The board with 5% starch and 2 h in the press had a MOR of 15.2 N·mm−2 and a MOE of 1,792.6 N·mm−2 after 28 days. In contrast, without starch, the MOR was 13.6 N·mm−2 and the MOE was 1,766.2 N·mm−2. Although the boards with 10% starch have good MOR and MOE values, there is no improvement with respect to the boards with 5% starch. This could be explained by the fact that not all the starch has gelatinised, as there may have been competition for water between the palm particles, cement, and starch. The MOR and MOE values depend on the type of board, time in the hotplate press, and amount of starch added, as can be seen in the ANOVA. The time the boards were in the press influences the properties throughout the setting process. The pressing time was 2 or 3 h, while industrial wood-cement boards remain in the press for 8 h. A longer pressing time appears to result in better mechanical properties, although this will need to be confirmed by further testing involving a production process with a longer pressing time.
All the types of board tested can be classed as Grade P2 particleboards [24], for general use in the manufacture of furniture, interior décor, and enclosures (vertical and horizontal) in dry conditions. In order to achieve better performance and to be classed as load-bearing boards, the TS value needs to be reduced.
One of the major problems for the strength of plant fibre boards with cement is the alkalinity of the matrix, which directly affects the durability of the fibres [34]. This can be seen in several studies where the percentage of cement applied is higher than that used in this work [5,6]. The results obtained show that the modulus of rupture (MOR) and modulus of elasticity (MOE) increase in all cases after 90 days of setting, based on which it can be stated that the fibres used have tolerated the alkalinity of the cement. This could be explained by the low cement content that has been used to manufacture the board, although it would be necessary to carry out further mechanical tests 365 days after its manufacture in order to confirm this tendency.
To reduce the effect of the cement's alkalinity, research [13] was carried out showing that treating sugar cane bagasse fibres with silane improved the durability of the compound. Similarly, it has been shown that the silicon content of giant reed [7] could have a beneficial effect on the durability of the reed. As shown in Figure 3, the trunk of the Canary Islands palm contains silicon [35], so it is possible that this compound may help to ensure that the particles do not degrade in an alkaline medium such as cement.
It is possible that the gelatinised starch serves to protect the palm particles, preserving them from degradation and assisting in cement hydration, as seen with other plasticisers [9] and polymers [36]. It is also possible that the composition of the Canary Islands palm trunk may offer better resistance to degradation than other plant fibres because, as seen in Figure 7, its bending strength increases over time, which will need to be confirmed by further tests.
Thermal Properties
Table 2 shows the thermal conductivity values obtained by other authors with boards manufactured from other organic fibres and for wood particleboards.
Canary Islands palm-cement boards have thermal properties that are in line with those obtained with boards manufactured with other species of palm trees, achieving a level of thermal insulation that is interesting for a board that is classed as high-density, which may be explained by the low relative humidity found in the manufactured boards. The greater density achieved in this study can be explained by the addition of cement to the board, as urea-formaldehyde was used as a binder for palm particles in other research [37,38]. Figure 8 shows the values achieved in terms of the thermal resistance of the boards; there were no significant differences between them. The mean thermal resistance was 0.105 m2·K·W−1, which is higher than the theoretical value obtained for wood particleboards (0.067 m2·K·W−1) with a density of 300 kg/m3, assuming a similar thickness to that used in this work (6.7 mm).
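The thermal resistance of a single homogeneous board follows from its thickness and conductivity as R = d/λ. The sketch below reproduces the theoretical wood-particleboard value quoted above; the 0.10 W·m−1·K−1 conductivity is inferred from that value and the 6.7 mm thickness, not stated by the authors.

```python
def thermal_resistance(thickness_m, conductivity_w_mk):
    """Thermal resistance R (m^2*K/W) of a homogeneous layer: R = d / lambda."""
    return thickness_m / conductivity_w_mk

# Theoretical wood particleboard, 6.7 mm thick, assumed lambda = 0.10 W/(m*K):
r = thermal_resistance(0.0067, 0.10)
print(round(r, 3))  # → 0.067
```

A higher R at the same thickness (0.105 m2·K·W−1 for the palm-cement boards) directly reflects the lower measured conductivity.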
The thermal properties of the boards manufactured in this work are better than those of commercial wood and wood-cement particleboards. Moreover, more energy is consumed in the manufacture of industrial wood-cement panels than for the palm-cement-starch panels: in this work, panels are subjected to a pressure of 2.6 MPa and a temperature of 100 °C for 2 or 3 h, whereas conventional wood-cement panels take approximately 8 h to cure at pressures of 3 and 3.5 MPa and temperatures between 75 and 80 °C.
Conclusions
The results show that Canary Islands palm-cement boards with good mechanical and thermal properties can be obtained with small amounts of cement (by applying heat and pressure).
The modulus of rupture (MOR) and modulus of elasticity (MOE) increase in all cases after 90 days of setting, meaning that the Canary Islands palm particles have not undergone degradation due to the alkalinity of the cement, although further testing is needed to establish the exact cause of this behaviour. Furthermore, with 5% starch as a plasticiser, the mechanical properties (MOR, MOE, and IB) and the density increase. The starch does not have a significant influence on the other properties of the board.
All the manufactured boards have adequate properties for general use in the manufacture of furniture, interior décor, and enclosures (vertical and horizontal) in dry environments. In particular, the low thermal conductivities achieved would allow them to be used as a thermal insulation material.
Further research is needed regarding the different proportions of Canary Islands palm particles, cement, water, and starch, and a longer time in the hot plate press to obtain a product with less thickness swelling after immersion in water.
The palm-cement-starch boards have a lower energy consumption than the industrial wood-cement boards that are currently manufactured.
Particles of Canary Islands palm trunk can be used as an alternative for manufacturing particle cement boards. The utilisation of these waste materials to manufacture products with a long useful life, such as particleboards, can be beneficial to the environment, as it is a method of carbon fixation and therefore contributes to reducing CO2 in the atmosphere.
Conflicts of Interest:
The authors declare no conflict of interest.
Micro-PINGUIN: microtiter-plate-based instrument for ice nucleation detection in gallium with an infrared camera
Ice nucleation particles play a crucial role in atmospheric processes; for example, they can trigger ice formation in clouds and thus influence their lifetime and optical properties. The quantification and characterization of these particles require reliable and precise measurement techniques. In this publication, we present a novel droplet freezing instrument to measure the immersion freezing of biotic and abiotic ice-nucleating particles within the temperature range of 0 to −25 °C. Immersion freezing of the samples is investigated using 384-well PCR plates with a sample volume of 30 µL. Nucleation events are detected with high precision using a thermal camera that records the increase in infrared emission due to the latent heat release. To maximize the thermal contact between the PCR plate and the surrounding cooling unit, we use a gallium bath as a mount
Introduction
Clouds play an important role in the Earth's radiative balance and climate. Cloud properties such as reflectivity and lifetime are, to a large extent, determined by the properties of atmospheric aerosols. Precipitation from mixed-phase and cold clouds that is initiated via the formation of ice particles significantly contributes to the global water cycle (Mülmenstädt et al., 2015). While homogeneous freezing of cloud droplets only takes place in cold clouds (< −37 °C), ice-nucleating particles (INPs) are required to initiate freezing in mixed-phase clouds (between 0 and −37 °C) (Murray et al., 2012). So far, only biological INPs have been shown to initiate ice formation at temperatures higher than −15 °C at atmospherically relevant concentrations (Murray et al., 2012). This can result in a large fraction (up to 83 % in the Arctic) of ice-containing clouds within this temperature range (Griesche et al., 2021).
To study the nature and concentration of INPs, a variety of droplet freezing techniques have been developed. These droplet freezing techniques essentially differ in (1) the sample volume, (2) the number of droplets investigated per run, (3) the cooling system, and (4) the method used for the detection of nucleation events and temperatures. An overview of various droplet freezing techniques is given by Miller et al. (2021). The sample volume used to investigate the ice nucleation efficiency has a big impact on the type of INPs that can be detected. Instruments with nanoliter and picoliter volumes are primarily used to investigate abundant INPs active at low temperatures (Peckhaus et al., 2016; Reicher et al., 2018; Chen et al., 2018; Budke and Koop, 2015). For these small volumes, the probability of contamination in the negative control is lower, which leads to low freezing temperatures, typically between −30 and −37 °C. Thus, activity can be investigated for samples that nucleate close to the temperatures at which homogeneous freezing is initiated. However, low-volume instruments require a high concentration of INPs in the sample for them to be detected. Therefore, small-volume instruments are less suitable for the analysis of high-temperature INPs, which are often present at low concentrations. In contrast, instruments with larger sample volumes allow for the study of rare INPs. However, with larger volumes, the presence of impurities in the water control becomes more likely, leading to a higher background freezing temperature. Consequently, it is challenging to study INPs that are active at low temperatures with large-volume droplet freezing techniques, as the freezing curves at lower temperatures start to overlap with the curves of the pure water background. For instruments with a sample volume of 50 µL, freezing events in the negative control are reported between −20 and −27 °C (Schiebel, 2017; Harrison et al., 2018; Miller et al., 2021; Beall et al., 2017; Barry et al., 2021; David
et al., 2019; Gute and Abbatt, 2020). To obtain a high quality of the freezing spectra, a sufficient number of droplets must be analyzed. This can be achieved by investigating a large number of droplets per run or by repeated experiments with the same substance. A recent modeling study has shown that a small number of droplets (< 100) leads to poor statistics and can cause misrepresentation of the underlying INP distribution (de Almeida Ribeiro et al., 2023). Furthermore, droplet freezing techniques differ in the method they use to cool the samples. The cooling system for the droplet freezing techniques is often composed of a liquid cooling bath (Gute and Abbatt, 2020; Chen et al., 2018; David et al., 2019; Miller et al., 2021) or thermostats circulating a cooling liquid through the cooling block (Beall et al., 2017; Schiebel, 2017; Kunert et al., 2018). Other instruments are based on a cold stage cooled by liquid nitrogen (Peckhaus et al., 2016), Peltier elements (Budke and Koop, 2015; Chen et al., 2018), or a Stirling-engine-based cryocooler (Harrison et al., 2018; Tobo et al., 2019). Another difference between the droplet freezing techniques is the way they determine the freezing of the samples. To detect the freezing events, several instruments use an optical camera combined with a temperature sensor to measure the freezing temperatures. As the temperature is measured only at positions where a temperature sensor is placed, gradients within the instrument lead to a reduced accuracy of the detected freezing points. To minimize these gradients, good thermal conductivity of the materials used between the cooling unit and the sample is of great importance. Further, the detection of the freezing temperatures with an optical camera is usually not based on the detection of the ice nucleation event but is instead based on the change in optical properties such as the brightness of the sample during the process of the whole droplet freezing. As a result, the detection of the nucleation
temperature based on changing optical properties is challenged by the fact that the total freezing time can take up to several minutes, particularly at larger volumes and temperatures relevant for biogenic INPs. Consequently, it is difficult to determine the exact starting point of ice nucleation using an optical camera. Harrison et al. (2018), for example, reported a freezing time of 100 s for a 50 µL droplet freezing at −12 °C. The delay between the nucleation event and the freezing of the whole droplet can thus result in an error of > 1.5 °C at −12 °C, assuming a cooling rate of 1 °C min−1, and larger errors are expected at higher nucleation temperatures that are relevant for biogenic INPs.
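The > 1.5 °C error cited above is simply the freezing duration multiplied by the cooling rate. A minimal check of that arithmetic:

```python
def nucleation_temperature_error(freezing_time_s, cooling_rate_c_per_min):
    """Worst-case offset (deg C) between the true nucleation temperature and
    the temperature at which the whole droplet appears frozen, if detection
    only triggers on complete freezing: delta_T = t_freeze * cooling_rate."""
    return freezing_time_s * cooling_rate_c_per_min / 60.0

# 100 s freezing time at a cooling rate of 1 deg C/min, as in the example:
print(round(nucleation_temperature_error(100, 1.0), 2))  # → 1.67
```

This is why detecting the latent-heat release itself (via the infrared camera) rather than the completed phase change removes a systematic error that grows with droplet volume.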
Despite the differences in the design of various droplet freezing techniques, the results they produce have to be comparable across the instruments. Therefore, a series of intercomparison studies with various instruments using compounds such as Snomax (Wex et al., 2015) and illite NX (Hiranuma et al., 2015) were carried out. Snomax is a commercially available product that consists of freeze-dried cell material of the ice-nucleation-active bacterium Pseudomonas syringae, with freezing temperatures as high as −2 °C, and illite NX is a mineral mix containing illite, kaolinite, quartz, carbonate, and feldspar that is ice-nucleation active at temperatures below −11 °C. Both substances were found to be suitable for intercomparison studies when taken from the same batch, used at similar concentrations, and stored only short-term. However, Polen et al. (2016) observed that the ice nucleation ability of very active proteins in the Snomax powder changes over time during storage in the freezer, which they suggest is due to aging, thus emphasizing the necessity of short storage times. The fact that the ice nucleation capacity of Snomax is unstable over time causes problems in intercomparison studies and leads to deviations in the measured ice nucleation activity that can span several orders of magnitude.
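Frozen-fraction spectra from droplet assays such as these are conventionally converted into cumulative INP concentrations with Vali's formula. The excerpt does not show this step, so the sketch below uses the standard form with invented well counts and the instrument's 30 µL well volume.

```python
import math

def cumulative_inp_per_litre(n_frozen, n_total, drop_volume_l):
    """Cumulative INP concentration per litre of suspension at temperature T,
    following Vali's formula: K(T) = -ln(1 - f_frozen) / V_drop."""
    f_frozen = n_frozen / n_total
    return -math.log(1.0 - f_frozen) / drop_volume_l

# Hypothetical spectrum point: 96 of 384 wells (30 uL each) frozen by T:
k = cumulative_inp_per_litre(96, 384, 30e-6)
print(k)  # roughly 9.6e3 INP per litre of suspension
```

The logarithmic form corrects for the fact that a well may contain more than one INP, which is exactly why large-volume instruments can resolve rare, high-temperature INPs.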
In this publication, we present a novel ice nucleation instrument, the MICROtiter-Plate-based instrument for Ice Nucleation detection in GalliUm with an INfrared camera (micro-PINGUIN). High accuracy is achieved by combining good thermal contact between the sample and the surrounding cooling unit with the detection of freezing events by an infrared camera. The working principle and the validation of micro-PINGUIN are described in the following sections. Furthermore, we address the challenges arising from inhomogeneities of the product and from aging effects and propose a possible solution for using Snomax as a suspension for intercomparison studies and reproducibility measurements.
Description of the instrument
Figure 1 shows a schematic drawing of the micro-PINGUIN instrument. It is composed of three main parts: the cooling unit, the camera tower with an infrared camera for the detection of freezing events, and the electronic components that control the instrument and measure the temperatures. A photograph of the instrument is provided in the Supplement (Fig. S1).
The cooling base of the micro-PINGUIN instrument (Fig. 2) is built up from several layers. Two Peltier elements, a vapor chamber, and a gallium bath are combined to achieve good thermal conductivity, reduce horizontal gradients, and optimize the cooling capacity of the instrument. The temperature and the cooling gradients are controlled by two PID-regulated Peltier elements (QC-241-1.6-15.0M; Quick-Cool, Germany). By applying voltage to the Peltier elements, a temperature difference is established between the two sides of each element. A water-cooling bath (Eiszeit 2000; Alphacool, Germany) is connected to the cooling unit and circulates precooled water (2 °C) through the water cooler base plates (Cuplex kryos NEXT sTRX4 FC; Aqua Computer, Germany), thereby removing the heat generated on the lower side of the Peltier elements. The Peltier elements are positioned within a copper base, to which they are connected by thermal pads. Above the lower copper plate, a vapor chamber distributes the temperature evenly and thus minimizes horizontal temperature gradients within the instrument. A second copper base is positioned above the vapor chamber. It contains a fixed-point cavity for the temperature measurement with the infrared camera and a Pt100 temperature probe (RTDCAP-100A-2-P098-050-T-40; OMEGA, Denmark) in the same position to achieve precise temperature measurements, as described in detail in Sect. 2.3. The upper copper base contains a gallium bath that melts at around 30 °C. A 384-well PCR plate (384 PCR plate full skirt; Sarstedt, Germany) is submerged in the melted gallium bath, which solidifies when cooled to room temperature, thereby establishing thermal contact between the PCR plate and the cooling unit. The freezing events are detected with a thermal camera (FLIR A655sc/25° lens; Teledyne FLIR, US) that is mounted in a black-painted camera tower positioned above the PCR plate. A continuous flow (10 L min−1) of air with a low relative humidity (< 10 % RH)
is circulated within the camera tower to keep the humidity low and to avoid the condensation of water vapor on the PCR plate, which would interfere with the experiments. The instrument is controlled by custom-made software (Ice Nucleation Controller). By default, a cooling run is started at 10 °C with a cooling rate of 1 °C min−1 until the final temperature of −30 °C is reached.
The role of the gallium bath
For an optimal cooling performance of the instrument, materials with high thermal conductivity are used. Aluminum is a commonly used material in PCR-plate-based instruments (Schiebel, 2017; Kunert et al., 2018; Beall et al., 2017; Hill et al., 2014), as it has good thermal conductivity and is easy to shape, which makes it suitable for the PCR plate mount. However, a thin layer of air inevitably forms between the PCR plate and the aluminum plate, and because of the insulating properties of air, the thermal contact between the cooling system and the samples is hampered. To maximize thermal contact with the sample while minimizing the manufacturing effort for the PCR mounting plate, we used gallium as a mount for the PCR plate (Fig. S2 in the Supplement). Gallium is a metal with a low melting temperature (29.8 °C) and high thermal conductivity (29.3-37.7 W m−1 K−1) (Prokhorenko et al., 2000). In the micro-PINGUIN instrument, we use gallium to connect the 384-well PCR plates to the cooling system. By reversing the polarity on the Peltier elements, the instrument can be heated to 40 °C, causing the gallium to melt. The PCR plate is then inserted into the liquid gallium and the instrument is cooled to 10 °C. During this process, the gallium solidifies in close contact with the PCR plate, and any excess air is pushed out. To achieve uniform contact for the whole plate, a predetermined weight is placed on top of the PCR plate during the solidification of the gallium. After mounting the PCR plate, 30 µL of sample per well is distributed using an automatic eight-channel pipette (PIPETMAN P300; Gilson, US).
https://doi.org/10.5194/amt-17-2707-2024 | Atmos. Meas. Tech., 17, 2707-2719, 2024
For future versions of this instrument, we plan small modifications that exploit further advantages of the gallium mount. As heating can have an impact on the ice nucleation activity of the samples, the wells are usually filled with the suspension after the
heating-cooling cycle. However, this procedure can become an additional advantage of using gallium as a mount for the PCR plate. The use of gallium as a heat-conductive medium allows for the application of precise heat treatments of the samples by controlling the temperature of the gallium bath. Consequently, previously measured plates could be remeasured after heating the gallium to the desired temperatures. Heat treatments are a commonly used method to differentiate between biogenic and inorganic INPs, based on the assumption that the ice nucleation activity of biogenic INPs decreases when they are heated to sufficiently high temperatures (an overview of studies using heat treatments is given in Daily et al., 2022). The approach presented here could substitute for traditional heating methods such as ovens or water baths. This would not only simplify the experimental process but also facilitate accurate and reproducible heat treatments for ice nucleation activity studies. Further, when using gallium as a mount for the PCR plates, the instrument is not limited to one type of PCR plate. Small modifications to the instrument allow the use of 96-well plates instead of 384-well plates, thus extending the range of sample volumes and INP concentrations that can be investigated.
The airflow system
During initial tests of the micro-PINGUIN instrument, freezing temperatures as high as −13 °C were observed for the negative control. Similar freezing temperatures were found for Milli-Q water, tap water, and ultrapure water for molecular work and were not affected by filtration or autoclaving of the water, indicating that these high freezing temperatures are not caused by impurities in the water. During the experiments, the condensation of water vapor on the copper base and the PCR plate was observed. Tests with a flow of compressed air with low humidity (< 10 % RH) passing through the camera tower showed that condensation was avoided during the experiment and that the freezing temperatures decreased with increasing airflow until T50 temperatures, corresponding to the temperature at which 50 % of the droplets are frozen, of around −25 °C were reached (Fig. 3). These background freezing temperatures are common for freezing experiments with volumes in the microliter range. Other instruments using volumes of 50 µL (Schiebel, 2017; Harrison et al., 2018; Miller et al., 2021) reported comparable frozen fraction curves for their negative controls. These airflow experiments indicated that under high-humidity conditions, the freezing was caused by condensed water on the plates instead of INPs in the suspension. Thus, we decided to apply a flow of dry air, injected at the top of the camera tower, to lower the humidity in the micro-PINGUIN instrument. Before each run, the camera tower is flushed with a high flow of dry air (20 L min−1). The flow is reduced to 10 L min−1 during the measurement to minimize the disturbance of the samples and the introduction of warm air. We measured the relative humidity in the camera tower for this procedure and found that a flow of 10 L min−1 is sufficient to maintain a low relative humidity during the experiment. Further, we evaluated the sample loss due to evaporation and found that this factor is negligible, as
only 0.36 % of the liquid was lost during an experiment. With this procedure, the T50 temperatures of the negative control were usually as low as −25 °C. Other droplet freezing techniques likewise apply a flow of dry air or N2 to the instrument to avoid frost formation (Schiebel, 2017; Budke and Koop, 2015).
Temperature measurement and detection of freezing events with a thermal camera
The temperature of the micro-PINGUIN instrument is measured with a thermal camera and a Pt100 temperature probe as a reference. The thermal camera detects the infrared radiation emitted by an object, in this case a microtiter plate well, and converts it into a visual image. As objects with a higher temperature emit more infrared radiation than objects with a lower temperature, and as the temperature increases due to latent heat release once the droplet nucleates, this technique can be used to measure the freezing temperatures of the samples. The camera is sensitive within a wavelength range of 7.5 to 14.0 µm and has a resolution of 640 × 480 pixels. However, while the thermal camera has high relative precision for the temperature reading (0.06 °C; Appendix A6), it has a low absolute precision of ±2 °C; therefore, a fixed-point cavity is used as a reference measurement. The fixed-point cavity is a copper tube with an angled bottom and a black inner surface. The radiation measured by the camera is scattered inside the cavity, which allows for a precise reading of the actual temperature, T_cavity. The Pt100 reference temperature probe is positioned directly against the fixed-point cavity and therefore measures the temperature, T_Pt100, at the same position in the upper copper base. This temperature measurement is used as a reference, and its offset is applied to the temperature reading of the camera during the cooling experiment, T_camera:

T = T_camera + (T_Pt100 − T_cavity).

Further, the Pt100 temperature measurement serves as the input for the PID-regulated Peltier elements during the heating and cooling of the instrument. During the freezing of the sample, latent heat is released because of the phase change from water to ice. The initial phase of freezing, when ice crystals start to form, is a fast process resulting in an immediate temperature increase in the sample to 0 °C. While this phase change is ongoing, the temperature stays at 0 °C and a plateau forms. When the sample is completely
frozen, the release of latent heat stops, and the droplet cools down to the ambient temperature. This results in a characteristic temperature profile for a freezing event, as shown in Fig. 4b. The length of this plateau at 0 °C depends, among other factors, on the temperature at which nucleation is initiated. For nucleation events close to 0 °C, which is the case for some highly active biogenic INPs, the temperature of the system when a droplet nucleates can differ significantly from the temperature of the system when the same droplet is completely frozen (Fig. S3 in the Supplement). Using an infrared camera for the detection of the freezing event, we take this immediate temperature increase upon nucleation as the freezing temperature of the sample. This is an advantage compared to freezing point detection based on the change in the optical properties of the droplet. As long as the phase change is ongoing, the changes in the optical properties of a droplet are minor and may be difficult to identify; often, the change in optical properties is large enough to be detected only once the droplet is completely frozen. Such variations in freezing point detection are especially crucial when investigating highly active INPs such as biogenic INPs.
Thus, in the micro-PINGUIN instrument, a thermal camera captures an image of the PCR plate every 5 s, and after each run, the data are processed by custom-made software. This is done as follows: (1) a grid is created by the user, making sure that the location of every well is marked in the program (Fig. 4a). (2) The temperature profile is then processed for each well, and the freezing event is detected as a change in the slope of the temperature of each well (Fig. 4b).
As the change in temperature is smaller for freezing events close to 0 °C, such events can cause problems in the automatic recognition of the nucleation temperature. By default, a freezing event is recognized when the temperature gradient shows a deviation in the temperature profile that is larger than 2 times the standard deviation. This value can be lowered to identify nucleation events at high temperatures.
To avoid false detections, the value with the largest deviation in temperature is always used as the freezing temperature.
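The detection criterion described above, a step-to-step temperature change that deviates from the mean gradient by more than 2 standard deviations, with the largest jump taken as the freezing event, can be sketched as follows (an illustration of the criterion, not the instrument's actual software; the names and the synthetic profile are ours):

```python
import numpy as np

def detect_freezing_index(temps: np.ndarray, n_sigma: float = 2.0):
    """Return the frame index just before the latent-heat jump, or None.

    A candidate frame is one whose temperature step exceeds the mean step
    by more than n_sigma standard deviations; among candidates, the frame
    with the largest deviation is used, mirroring the text's rule for
    avoiding false detections."""
    grad = np.diff(temps)
    mu, sigma = grad.mean(), grad.std()
    candidates = np.where(grad - mu > n_sigma * sigma)[0]
    if candidates.size == 0:
        return None
    return int(candidates[np.argmax(grad[candidates])])

# Steady cooling (one frame every 5 s) interrupted by a latent-heat spike:
profile = np.array([-5.0, -5.1, -5.2, -5.3, -2.0, -2.0, -2.1])
print(detect_freezing_index(profile))  # 3: the droplet nucleated at -5.3 C
```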
Data analysis
Methods to detect droplet freezing events rely on dividing samples into multiple equal volumes and observing the freezing of these volumes at varying temperatures (yielding freezing curves) while maintaining a consistent cooling rate. However, it is important to note that droplet freezing assays have limitations, as they distinguish solely between the liquid and frozen states of the droplets. Once the most active INPs in a mixture of INPs initiate freezing and prompt droplet crystallization, the influence of less active INPs is hidden. To comprehensively investigate the freezing characteristics of highly active samples across a broad spectrum of temperatures, it is necessary to study dilutions of the sample. Thereby, the most active INPs are diluted out, enabling the analysis of the INPs that are only active at lower temperatures. Calculating the quantity of INPs from the proportion of frozen droplets requires considering the particle concentration within the solution, the volume of the droplets, and the dilution factor. Following the approach by Vali (1971) and assuming time independence of freezing, the cumulative spectrum, K(T), corresponding to the number of sites active above temperature T per unit sample volume, is described by

K(T) = [ln(N_tot) − ln(N_tot − N_f(T))] / V,

where V is the droplet volume, N_tot is the total number of droplets, and N_f(T) is the number of frozen droplets at a given temperature. The number of INPs per unit mass of material, n_m(T), is then calculated using the dilution factor, d, and the particle mass concentration, c_m:

n_m(T) = K(T) · d / c_m.
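Under these definitions, the cumulative spectrum and the mass normalization can be computed as follows (a sketch; the placement of d and c_m follows our reading of the text, and all numbers in the example are illustrative):

```python
import numpy as np

def cumulative_spectrum(n_frozen, n_total, drop_volume_ml):
    """Vali (1971) cumulative spectrum K(T): number of ice-nucleating
    sites active above temperature T per unit suspension volume."""
    return (np.log(n_total) - np.log(n_total - n_frozen)) / drop_volume_ml

def inp_per_mass(k_t, dilution_factor, mass_conc_mg_ml):
    """Number of INPs per mg of material, n_m(T), from K(T) using the
    dilution factor d and the particle mass concentration c_m."""
    return k_t * dilution_factor / mass_conc_mg_ml

# Example: 96 of 384 droplets (30 uL each) frozen at temperature T,
# with dilution factor d = 100 and mass concentration c_m = 0.01 mg/mL:
k = cumulative_spectrum(96, 384, 0.030)
print(round(inp_per_mass(k, 100, 0.01)))
```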
Measurement accuracy
The measurement accuracy of the micro-PINGUIN instrument is determined relative to a calibrated temperature standard. The individual components that contribute to the temperature uncertainty of the instrument are listed in Table 1 and were examined in separate experiments. A detailed description of the measurements and analysis is given in Appendix A1 to A6. We found that the largest contribution to the uncertainty of the instrument is the vertical gradient within the well. As the freezing temperature of the sample is determined by the infrared camera measurement, which is a surface-sensitive technique, any vertical gradient in the well leads to an uncertainty in the temperature measurement. The total vertical gradient was 0.20 °C at 0 °C and increased by 0.015 °C per degree when the temperature was lowered. Given that the temperature measurements are performed at the surface, all temperature readings should be corrected by half the vertical gradient at a given temperature, resulting in a symmetrical contribution. Thus, the freezing temperatures determined within the experiments are corrected by a temperature correction, T_correction, equal to half the vertical gradient evaluated at ΔT, the difference between the room temperature, T_R (22 ± 1 °C), and the surface temperature, T_S, measured by the infrared camera in the well (ΔT = T_S − T_R). The vertical gradient was not dependent on the cooling rate in the range between 0.3 and 3 °C min−1 (Fig. S4 in the Supplement), and thus we conclude that the gradient is not attributable to poor thermal conductivity between the individual parts of the instrument but rather to the warm air above the sample surface.
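The surface correction can be sketched as follows, using the gradient parameterization stated above (0.20 °C at 0 °C, growing by 0.015 °C per degree of further cooling); the closed form is our reading of the text, not the paper's exact elided equation:

```python
def vertical_gradient(surface_temp_c: float) -> float:
    """Total vertical gradient in the well: 0.20 C at a surface temperature
    of 0 C, growing by 0.015 C per degree of further cooling (values as
    stated in the text; closed form is our assumption)."""
    return 0.20 + 0.015 * max(0.0, -surface_temp_c)

def temperature_correction(surface_temp_c: float) -> float:
    """Half the vertical gradient, since the IR camera senses only the
    droplet surface."""
    return 0.5 * vertical_gradient(surface_temp_c)

print(round(temperature_correction(-10.0), 3))  # 0.175
```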
The individual uncertainty contributions, combined in quadrature, result in an overall temperature-dependent standard uncertainty (k = 1) for measurements with micro-PINGUIN:

u(T) = [δT_VG(T)² + δT_T² + δT_TL² + δT_TC² + δT_TR² + δT_CR² + δT_NUC² + δT_CD²]^(1/2).

Exemplary uncertainty and correction values for different temperatures are given in Table A1. The horizontal gradient of the instrument was below the sensitivity of the infrared camera (< 0.06 °C) and was therefore not included in the calculations. Further factors, such as the deviation of the fixed-point cavity from a black body or the thermal anchoring between the fixed-point cavity and the Pt100 temperature probe, are considered to have only a minor impact on the accuracy of the instrument and were therefore not investigated in detail here.
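Combining independent contributions in quadrature is the standard procedure for a k = 1 uncertainty budget; a minimal sketch with placeholder values (not the entries of the paper's Table 1):

```python
import math

def combined_standard_uncertainty(components_c):
    """Root-sum-of-squares combination of independent standard-uncertainty
    contributions (k = 1), as used for the budget in Table 1."""
    return math.sqrt(sum(u * u for u in components_c))

# Placeholder contributions in deg C (vertical gradient, NUC, camera
# repeatability, lens distortion, Pt100 terms), for illustration only:
u = combined_standard_uncertainty([0.10, 0.15, 0.05, 0.06, 0.02])
print(round(u, 3))
```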
Ice nucleation activity of Snomax and illite NX
Snomax
The characterization of the micro-PINGUIN instrument involved the use of extensively researched materials, and the obtained results were compared with those from established ice nucleation instruments. As part of the INUIT (Ice Nuclei research UnIT) initiative, comparative assessments of various ice nucleation instruments were conducted employing Snomax (Wex et al., 2015) and illite NX (Hiranuma et al., 2015). Snomax is commercially available, consists of the freeze-dried cell material of the ice-nucleation-active bacterium Pseudomonas syringae, and is ice-nucleation-active already at temperatures as high as −2 °C (Wex et al., 2015). As the activity of Snomax was proposed to decrease over time, a new batch was ordered from the manufacturer and stored at −20 °C until usage. Care was taken that the Snomax batch was not subjected to many temperature changes, as recommended by the manufacturer. To cover the temperature range of INPs that are active at lower temperatures, illite NX powder was used. Illite NX powder consists of different minerals, including illite, kaolinite, quartz, carbonate, and feldspar. As the same batch of illite NX is used as in the INUIT intercomparison study, our results can be directly compared to those reported by Hiranuma et al. (2015). Measurements with Snomax suspensions of concentrations ranging from 10−2 to 10−7 mg mL−1 were repeated three times. The results are shown in Fig. 5a alongside the data acquired by Wex et al. (2015). The measurements obtained with the micro-PINGUIN instrument show freezing of Snomax at temperatures as high as −3.5 °C. At −12 °C, the concentration of INPs reaches a plateau at 10⁹ INPs per mg of Snomax. At the knee point around −10 °C, the INP concentrations obtained in this study are slightly below the values of previous measurements with other instruments, but the plateau reached by the curve is in agreement with that reported by Wex et al.
(2015). We noted significant discrepancies in repeated Snomax measurements, even when employing the identical instrument and the same batch of Snomax. As Snomax contains not only the freeze-dried cells of P. syringae bacteria but also fragments of the cell membrane, remains of the culture medium, and other unknown material, the number of INPs can vary within the prepared suspension, leading to large variations in the measured freezing curves. Consequently, minor disparities in the freezing spectra measured by diverse instruments are expected if different suspensions are used. Furthermore, the variance in ice nucleation activity could potentially stem from the utilization of distinct batches of Snomax in the two studies, variations within the substrate, or a marginal reduction in Snomax activity due to storage. We could significantly improve the reproducibility of the measurements using aliquots of a Snomax suspension that were stored frozen until the measurements were performed, as shown in Fig. 5b. This observation points to the key role that substance heterogeneity plays in measurement reproducibility.
The lower onset freezing temperature in Fig. 5b is attributed to the large variability between freshly prepared Snomax suspensions and not to a decrease in activity upon freezing. We evaluated the impact of freezing and thawing on the suspension and found that the variations are within the measurement uncertainty (Figs. S5 and S6 in the Supplement). Thus, we propose the use of Snomax suspensions that are prepared in advance and stored frozen in aliquots for reproducibility measurements and further instrument intercomparison studies. The reproducibility of measurements using frozen aliquots is further discussed in Sect. 3.3.
Illite NX
The number of INPs normalized to the surface area, n_s,BET(T), measured for illite NX suspensions of concentrations between 10 and 0.1 mg mL−1 is shown in Fig. 6 in comparison with data from other devices analyzing illite NX suspensions. The spectrum n_s,BET(T) was derived from the n_m(T) spectrum following the approach by Hiranuma et al. (2015):

n_s,BET(T) = n_m(T) / θ,

with the specific surface area, θ, obtained from gas adsorption measurements (BET-derived surface area) of 124.4 m² g−1. We assume the same surface area, as we used an aliquot sourced from the same batch of illite NX as the one used by Hiranuma et al. (2015). The data displayed in Fig. 6 are in agreement with the results of other instruments (Hiranuma et al., 2015; Beall et al., 2017; Harrison et al., 2018; David et al., 2019). Thus, our data extend the range of INP concentrations reported for illite NX suspensions.
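The normalization n_s,BET(T) = n_m(T)/θ reduces to a unit conversion; a sketch (the function name is ours; θ = 124.4 m² g−1 as stated above):

```python
def ns_bet(n_m_per_mg: float, theta_m2_per_g: float = 124.4) -> float:
    """Surface-area-normalized INP density n_s,BET(T) = n_m(T) / theta.

    n_m is given per mg of material, so the BET specific surface area
    theta (m2 per g) is first converted to m2 per mg."""
    theta_m2_per_mg = theta_m2_per_g / 1000.0
    return n_m_per_mg / theta_m2_per_mg

# n_m = 1e6 INPs per mg maps to roughly 8e6 sites per m2 of illite NX:
print(f"{ns_bet(1.0e6):.3g}")
```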
Reproducibility of the measurements
To assess the consistency of measurements conducted with the micro-PINGUIN instrument, successive experiments were carried out employing an identical suspension. This approach aimed to mitigate the influence of dilution errors and variations within the substrates that we used in the tests. Initial trials involving Snomax, illite, and feldspar suspensions revealed an aging phenomenon over the course of the day, despite storing the suspensions in the refrigerator between the individual experiments. As a result, the suspensions were either freshly prepared immediately prior to each experiment or divided into aliquots that were frozen for preservation until needed. In the latter case, the samples were thawed just before conducting the ice nucleation experiment, and all measurements were executed within the same day to minimize disparities in freezer storage time. To characterize this procedure, Snomax was chosen as the test substance due to its biogenic origin and the notable deviations in previous measurements. Figure 7 shows the mean value and standard deviation of the fraction frozen curves for three measurements with concentrations between 10−2 and 10−7 mg mL−1 of Snomax. The standard deviations of the experiments range from 0.006 to 1.191 °C, with outliers for exceedingly high and low INP counts, respectively. Bacterial ice-nucleating proteins show distinct freezing behavior depending on the size of the proteins and can be divided into different classes (Turner et al., 1990; Yankofsky et al., 1981; Hartmann et al., 2013; Budke and Koop, 2015). The results of these reproducibility measurements are in agreement with the measurement uncertainty determined earlier. Further, we could demonstrate that the reproducibility of the measurements is greatly improved when suspensions are stored frozen in aliquots. Storage of the sample for 4 months at −20 °C resulted in a slightly higher standard deviation for the freezing curves, but no clear reduction
in ice nucleation activity was observed. The freezing spectra were partly within the standard deviation of the three initial measurements, while other dilutions showed slightly higher or lower freezing temperatures (Figs. S7 and S8 in the Supplement). Overall, the reproducibility was improved compared to freshly prepared Snomax suspensions. The grey data points in Fig. 6 represent measurements from comparable instruments (Hiranuma et al., 2015; Harrison et al., 2018; Beall et al., 2017; David et al., 2019).
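A reproducibility check of this kind can be summarized, for example, by the spread of the T50 values across repeated runs; a sketch with synthetic freezing curves (all names and values are illustrative, not measured data):

```python
import numpy as np

def t50(temps_c, frozen_fraction):
    """Interpolate the temperature at which 50% of the droplets are frozen.
    frozen_fraction must be monotonically increasing for np.interp."""
    return float(np.interp(0.5, frozen_fraction, temps_c))

# Three synthetic repeated runs on thawed aliquots of one suspension:
temps = np.array([-4.0, -5.0, -6.0, -7.0, -8.0])
runs = [np.array([0.00, 0.20, 0.50, 0.90, 1.00]),
        np.array([0.00, 0.25, 0.55, 0.90, 1.00]),
        np.array([0.05, 0.20, 0.45, 0.85, 1.00])]
t50s = [t50(temps, ff) for ff in runs]
print(round(float(np.std(t50s)), 3))  # run-to-run spread of T50 in deg C
```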
Conclusion
We developed a novel ice nucleation instrument that supports accurate nucleation temperature detection within the temperature range of 0 to −25 °C. The distinctive feature of this instrument is the utilization of a gallium bath, which acts as a platform holding the PCR plates with the samples. The gallium bath ensures tight contact between the sample and the surrounding cooling unit and thus results in good thermal conductivity. Further, the freezing events are detected with high precision by an infrared camera based on a sudden rise in temperature following the nucleation event. This facilitates the recognition of nucleation events instead of completed freezing events, further reducing the uncertainty in the assigned freezing temperatures. The instrument was thoroughly analyzed for its reproducibility and the accuracy of its temperature measurements and can therefore be used for reliable and intercomparable ice nucleation studies.

A1 Vertical temperature gradient (δT VG)

The vertical temperature profile measurements were performed with a thin thermistor (PSB-S9 Thermistor, PB9-43-SD6) to minimize the disturbance of the measurement by the thermistor and to allow for measurements at several depths in the well. The wells of the 384-well PCR plate were filled with 30 µL of sterile filtered Milli-Q water, and the thermistor was mounted on a micromanipulator positioned above the well. The first measurement was performed with the thermistor approximately 1 mm below the water surface. The cooling experiment was started at 1 °C min−1 until −15 °C, while the temperatures measured with the small thermistor inside the well and with a reference temperature probe were recorded by the instrument. After the cooling cycle, the instrument reached a steady-state temperature at around 2 °C (the temperature of the cooling water), and the thermistor was lowered by 1.5 mm using the micromanipulator. The gradient measurements were performed both in the center and at a corner of the 384-well PCR plate. The temperature profiles were recorded for several depths
and then evaluated relative to the reference temperature probe. As some freezing events occurred during the measurements, a linear regression analysis was performed for the temperature profiles in the temperature range between 0 and −6 °C to determine the vertical gradient of the instrument.
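The regression step can be sketched as follows, using a synthetic profile whose offset mimics the reported gradient (0.20 °C at 0 °C, growing by 0.015 °C per degree of cooling; names and data are illustrative):

```python
import numpy as np

def fit_vertical_gradient(t_ref_c, t_well_c, lo=-6.0, hi=0.0):
    """Linear fit of the in-well temperature against the reference probe,
    restricted to the 0 to -6 C window used in the text; the intercept
    estimates the offset (vertical gradient) over the probed depth."""
    mask = (t_ref_c >= lo) & (t_ref_c <= hi)
    slope, intercept = np.polyfit(t_ref_c[mask], t_well_c[mask], 1)
    return slope, intercept

# Synthetic profile: well reads 0.20 C warmer at 0 C, with the offset
# growing by 0.015 C per degree of cooling below 0 C:
t_ref = np.linspace(2.0, -15.0, 200)
t_well = t_ref + 0.20 + 0.015 * np.clip(-t_ref, 0.0, None)
slope, intercept = fit_vertical_gradient(t_ref, t_well)
print(round(intercept, 2))  # recovers the 0.20 C offset at 0 C
```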
We observed slightly gentler gradients at the corner of the plate than in the center, and thus the steepest gradient was used to estimate the total vertical gradient of the instrument. The vertical gradient contribution in the center well was measured to be 0.20 °C at 0 °C and increased by 0.015 °C per degree with lower temperatures (Fig. A1). Given that the temperature measurements are done at the surface, all temperature readings should be corrected by half the vertical gradient at a given temperature, resulting in a symmetrical contribution. During the measurements, we observed that the temperature reading was slightly different if the thermistor was touching the wall of the well, probably due to the different thermal conductivity of the plastic material. Due to the conical shape of the wells, it was not possible to lower the thermistor further without making contact with the plastic wall. Because of these limitations, the gradient measurement covers only a depth of approximately 3 mm and would ideally be extrapolated to cover the full depth of the well. To be conservative in our evaluation, we assume a linear relationship and therefore multiply the vertical gradient by a factor of 3/8, as the sample in the well has a depth of approximately 8 mm. The temperature correction due to the vertical gradient, T_correction, is obtained from this scaled gradient, where ΔT is the difference between the room temperature, T_R (22 ± 1 °C), and the surface temperature, T_S, measured by the infrared camera in the well (ΔT = T_S − T_R). The contribution of the vertical gradient, represented as a standard uncertainty, δT VG, is derived from this correction.

A2 Pt100 uncertainty (δT T), long-term drift (δT TL), and Pt100 calibrator uncertainty (δT TC)

The temperature measurement of the Pt100 temperature probe is based on a change in its resistance as a function of temperature and is measured using a National Instruments measurement module (NI-9219; National Instruments,
US). Calibration correction parameters were determined using an AMETEK reference temperature calibrator (RTC-157; AMETEK, US) with an external reference temperature probe (Pt100 resistance probe STS 200 A915; AMETEK, US) within the temperature range of 35 to −35 °C. The micro-PINGUIN Pt100 temperature probe was submerged in ethanol inside the RTC calibrator, and data were recorded for 5 min in steady-state conditions at 5 °C intervals. This resulted in a maximum residual mean error of 0.0081 °C within the temperature range of 35 to −35 °C. Additionally, the uncertainty of the calibration device, δT TC, which is given by a calibration certificate (δT TC = 0.02 °C), must be considered. After a 2-month operation time, the temperature probe will be recalibrated to determine the long-term drift of the temperature reading, δT TL. The manufacturer guarantees a long-term stability better than 0.05 °C per 5 years. Initially, the micro-PINGUIN instrument was equipped with a thermistor for the reference temperature measurement. However, during the detailed examination of the uncertainty, we noticed that the thermistor had a high long-term drift. Thus, the thermistor was replaced by the aforementioned Pt100 temperature probe. The vertical gradient measurements and the examination of the infrared camera repeatability, distortion, and non-uniformity correction were conducted with the previous thermistor probe; however, the results are not influenced by the exchange of the reference temperature probe. The measurements of both sensors are accurate, and the exchange was made only because of the better long-term stability of the Pt100 probe.
A3 Pt100 repeatability (δT TR)
After the calibration, the stability of the Pt100 temperature probe reading was examined by recording the temperature in steady-state conditions at 0 °C for 3 min. The standard deviation of the temperature reading was calculated to be 0.0016 °C and is used to derive the temperature probe repeatability contribution, δT TR.

A4 Thermal camera repeatability (δT CR)

For accurate results, the camera's manufacturer recommends powering up the camera before the measurement. We evaluated the camera warm-up time by recording the deviation between the temperatures measured by the infrared camera and the reference temperature probe at the fixed-point cavity (calibration offset). This experiment was performed in steady-state conditions (room temperature) for 1.5 h after powering the camera. Improved stability of the temperature measurement was found after a 40 min operating time of the camera. The repeatability of the temperature reading by the infrared camera is calculated as the standard deviation of the temperature reading in steady-state conditions recorded after the warm-up period. The standard deviation of the temperature readings is ±0.05 °C.
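Both repeatability contributions reduce to the standard deviation of a steady-state record; a minimal sketch (the readings are synthetic, not measured values):

```python
import statistics

def repeatability(readings_c):
    """Population standard deviation of steady-state temperature readings,
    used as a repeatability contribution (e.g., delta T_TR or delta T_CR)."""
    return statistics.pstdev(readings_c)

# Synthetic steady-state record at 0 C (illustrative values only):
readings = [0.001, -0.001, 0.002, 0.000, -0.002, 0.001, -0.001, 0.000]
print(round(repeatability(readings), 4))
```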
A5 Non-uniformity correction (δT NUC)
The infrared camera constantly records its internal temperature and performs a non-uniformity correction to account for minor detector drifts that occur over time due to internal temperature changes. We observed that the temperature measured before and after this correction can differ slightly. Thus, we evaluated this impact by manually triggering several non-uniformity corrections while recording the temperature in steady-state conditions (room temperature). The non-uniformity correction contribution was measured to be 0.15 °C.
A6 Thermal camera distortion (δT_CD)
The inhomogeneity of the camera lens has an impact on the temperature measurements across the PCR plate. To evaluate this contribution, the camera was attached to a movable plate and the black-body radiation of the fixed-point cavity was measured for several positions of the camera under steady-state conditions. The offset between the infrared camera reading and the temperature measured by the temperature probe at each position gives an estimate of the lens distortion of the camera. The average temperature offset for each position was calculated over a 2 min measurement period with an image frequency of 120 images per minute to minimize the impact of repeatability contributions. The lens distortion contribution was measured to be 0.06 °C.
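Taken together, the individual contributions above (δT_TC, δT_TR, δT_CR, δT_NUC, δT_CD) are combined into an expanded measurement uncertainty (Tables 1 and A1). The sketch below illustrates the standard root-sum-square combination under the simplifying assumption that all contributions are uncorrelated and already expressed as standard uncertainties; the actual budget applies distribution-specific divisors per Table 1, so the numbers here are illustrative only.

```python
import math

# Standard uncertainty contributions quoted in the text (in °C).
# Treating each directly as a standard uncertainty is a simplifying
# assumption; the paper weights each by its assigned distribution.
contributions = {
    "calibration_device_dT_TC": 0.02,
    "probe_repeatability_dT_TR": 0.0016,
    "camera_repeatability_dT_CR": 0.05,
    "non_uniformity_dT_NUC": 0.15,
    "lens_distortion_dT_CD": 0.06,
}

# Combined standard uncertainty: root sum of squares of uncorrelated inputs.
u_combined = math.sqrt(sum(u ** 2 for u in contributions.values()))

# Expanded uncertainty with coverage factor k = 2 (approx. 95 % coverage).
k = 2
U_expanded = k * u_combined

print(f"combined standard uncertainty: {u_combined:.4f} °C")
print(f"expanded uncertainty (k=2):    {U_expanded:.4f} °C")
```

The non-uniformity correction dominates this simplified budget, which is consistent with it being the largest single contribution listed above.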
Figure 2. Schematic drawing of the cooling base. The red and blue tubes connected to the water-cooled base indicate the circulation of cooled water, which removes the heat generated by the Peltier elements.
Figure 3. Effect of a dry airflow on the freezing behavior of the negative control (Milli-Q water). These freezing curves were obtained without flushing the camera tower with dry air prior to the experiment.
Figure 4. (a) Mask created for the freezing point detection. The yellow circles mark the pixels taken for the analysis. (b) Temperature profile of the droplet marked in green. The red line indicates the point in time with the highest temperature gradient.
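The freezing-onset criterion in the caption above (the point in time with the highest temperature gradient, reflecting the latent-heat release on freezing) can be sketched as follows. The synthetic temperature trace and the sampling interval are invented for illustration; only the max-gradient idea comes from the text.

```python
import numpy as np

def freezing_onset_index(temps, dt=0.5):
    """Return the sample index of the steepest temperature rise.

    Freezing releases latent heat, so the onset appears as the largest
    positive time derivative of the droplet temperature. `temps` is a
    1-D array of droplet temperatures (°C); `dt` is the sampling
    interval in seconds (a hypothetical value, not from the text).
    """
    rates = np.diff(temps) / dt  # forward-difference dT/dt
    return int(np.argmax(rates))

# Synthetic trace (illustration only): steady supercooling ramp with an
# abrupt latent-heat warming starting at sample 80.
t = np.arange(0, 60, 0.5)
temps = -0.3 * t
temps[80:] += 12.0

onset = freezing_onset_index(temps)
print(onset)
```

In practice this would be applied per droplet to the pixel-averaged temperature profile extracted with the mask in panel (a).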
Figure 5. (a) Number of INPs per mg Snomax measured for suspensions with 10−2 to 10−7 mg mL−1 of Snomax prepared freshly on 3 different days. The data were binned in 0.5 °C temperature bins. Grey data points represent the results from various ice nucleation instruments investigated by Wex et al. (2015). The dashed line is based on the CHESS model by Hartmann et al. (2013). (b) The same as in panel (a), but with a Snomax suspension that was prepared once and stored frozen in aliquots; the measurement was repeated on freshly thawed aliquots three times.
identified two classes of INPs for Snomax: the highly active but less abundant class A INPs nucleate ice at around −3.5 °C, while class C INPs are frequently observed but nucleate at a lower temperature of −8.5 °C. Reproducibility at freezing temperatures between −7 and −10 °C was better than at higher temperatures, likely because class C INPs are far more prevalent than class A INPs, resulting in a more homogeneous distribution of class C INPs across droplets. The mean standard deviation for this dataset is 0.20 °C, leading to an estimated reproducibility of ±0.20 °C for the micro-PINGUIN instrument. We suggest that this is a conservative estimate due to the difference in the prevalence of class A and class C INPs in the sample. Thus, while the standard deviations observed for the temperature range where class A INPs show predominant activity reflect a combined effect of the technical reproducibility of our instrument and the inhomogeneity of the INPs across the droplets, the standard deviations observed for the temperature range where class C INPs show predominant activity primarily reflect the technical reproducibility of our instrument.
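The cumulative INP concentrations per mg of Snomax plotted in Figure 5 are conventionally derived from measured frozen fractions with Vali's (1971) formula. The sketch below shows that standard conversion; the droplet volume and suspension concentration used here are hypothetical example values, not taken from the text.

```python
import math

def inps_per_mg(frozen_fraction, droplet_volume_ml, mass_conc_mg_per_ml):
    """Cumulative INPs per mg of material at a given temperature.

    Standard Vali (1971) conversion: n(T) = -ln(1 - f(T)) / V_drop,
    normalized by the suspended mass concentration.
    """
    if not 0 <= frozen_fraction < 1:
        raise ValueError("frozen fraction must be in [0, 1)")
    n_per_ml = -math.log(1.0 - frozen_fraction) / droplet_volume_ml
    return n_per_ml / mass_conc_mg_per_ml

# Hypothetical example: 30 µL droplets of a 1e-5 mg/mL Snomax suspension
# in which 50 % of the droplets have frozen at some temperature.
n_m = inps_per_mg(frozen_fraction=0.5,
                  droplet_volume_ml=30e-3,     # 30 µL expressed in mL
                  mass_conc_mg_per_ml=1e-5)
print(f"{n_m:.3g} INPs per mg")
```

Repeating this for every 0.5 °C temperature bin yields the cumulative spectra shown in Figure 5; normalizing by BET surface area instead of mass gives the n_s,BET spectra of Figure 6.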
Figure 6. (a) Number of INPs per surface area of illite NX (orange data). The grey data points represent measurements from comparable instruments (Hiranuma et al., 2015; Harrison et al., 2018; Beall et al., 2017; David et al., 2019). The black line shows the fit for all suspension measurement techniques in the logarithmic representation, and the dashed blue line shows the fit from Atkinson et al. (2013) for K-feldspar multiplied by a factor of 0.000014, as discussed in Hiranuma et al. (2015). (b) The right panel shows the same data, zooming in on the range 10−1 < n_s,BET < 10^5 m−2.
Figure 7. (a) Fraction frozen for Snomax suspensions with concentrations between 10−2 and 10−7 mg mL−1. The data for 10−5 and 10−6 mg mL−1 are not shown for illustrative purposes. Data points represent the mean values of three measurements, and the horizontal error bars indicate the standard deviation between these three measurements. (b) Standard deviations for the different concentrations are shown as box plots with 25th and 75th percentiles.
Figure A1. Vertical gradient measurements for well G13 in the middle of the 384-well PCR plate. (a) Temperature profiles measured for three positions in the well. (b) Deviation of the temperature from the mean temperature reading at 0 °C for the three depths.
Table 1. Quantities contributing to the temperature uncertainty of micro-PINGUIN. The standard uncertainty was determined experimentally, and the contribution to the uncertainty is estimated by taking the distribution of the uncertainty into account.
Based on the reproducibility experiments, we recommend that Snomax suspensions be prepared in advance and stored frozen in aliquots for future reproducibility and instrument intercomparison measurements.
Table A1. Measurement uncertainty and temperature corrections for various temperatures at 5 °C steps. The measurement uncertainties are expanded to a coverage of 95 %.
Melatonin Confers Plant Cadmium Tolerance: An Update
Cadmium (Cd) is one of the most injurious heavy metals, affecting plant growth and development. Melatonin (N-acetyl-5-methoxytryptamine) was discovered in plants in 1995 and has since been shown to act as a multifunctional molecule alleviating abiotic and biotic stresses, especially Cd stress. Endogenously triggered or exogenously applied melatonin re-establishes redox homeostasis by improving the antioxidant defense system. It can also affect Cd transport and sequestration by regulating the transcripts of genes related to the major metal transport systems, as well as by increasing glutathione (GSH) and phytochelatins (PCs). Melatonin activates several downstream signals, such as nitric oxide (NO), hydrogen peroxide (H2O2), and salicylic acid (SA), which are required for plant Cd tolerance. Similar to the physiological functions of NO, hydrogen sulfide (H2S) is also involved in abiotic stress-related processes in plants. Moreover, exogenous melatonin induces H2S generation in plants under salinity or heat stress. However, the involvement of H2S in melatonin-induced Cd tolerance is still largely unknown. In this review, we summarize the progress in understanding the physiological and molecular mechanisms regulated by melatonin in plants under Cd stress. The complex interactions between melatonin and H2S in the acquisition of Cd stress tolerance are also discussed.
Introduction
Heavy metal pollution is the most widespread contamination resulting from anthropogenic activities in the world [1]. It has raised concerns about harmful risks to human health via metal transfer along the food chain [2]. Among the heavy metals, cadmium (Cd) is a toxic element that poses a hazard to living organisms, causing, for example, renal tubular dysfunction and bone disease [3]. In plants, Cd disturbs a range of important biochemical, morphological, physiological, and molecular processes, resulting in chlorosis and stunted growth [4,5]. Cd stress decreases the chlorophyll content, net photosynthetic rate, stomatal conductance, intracellular CO2 concentration, and transpiration rate [4][5][6]. Cd stress induces the excess accumulation of reactive oxygen species (ROS), mainly due to the imbalance between ROS generation and scavenging [7,8]. Increased concentrations of ROS further induce lipid peroxidation and oxidative damage, destroying plant membranes, macromolecules, and organelles [7,8]. Additionally, excessive bioaccumulation of Cd in plants inhibits Fe and Zn uptake, and disrupts the uptake and transport of K, Ca, Mg, P, and Mn [9]. In response to Cd stress, plants have evolved complex biochemical and molecular mechanisms that modulate ROS homeostasis and Cd compartmentation and chelation [7,[10][11][12]. Plant hormones (ethylene, salicylic acid (SA), abscisic acid (ABA), jasmonic acid (JA), auxin, brassinosteroids (BRs), and strigolactones (SLs)) and signaling molecules (nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S)) also participate in these responses.

Melatonin can be degraded by two distinct routes: non-enzymatic and enzymatic transformations [17]. Transgenic tomato (Solanum lycopersicum) plants expressing the rice gene encoding indoleamine 2,3-dioxygenase (IDO) showed reduced melatonin levels [41]. Thus, the pathway by which melatonin is converted to N1-acetyl-N2-formyl-5-methoxykynuramine (AFMK) exists in plants.
Tan and Reiter speculated that AFMK is the product of melatonin's interaction with ROS generated during photosynthesis [39]. This might reflect the important role of melatonin in detoxifying accumulated ROS. In addition, the melatonin hydroxylation metabolites 2-hydroxymelatonin (2-OHMel) and cyclic 3-hydroxymelatonin (c3-OHMel) have been identified in plants. Their formation is attributed to melatonin 2-hydroxylase (M2H) and melatonin 3-hydroxylase (M3H), respectively [42][43][44]. Singh et al. suggested that N-nitrosomelatonin (NOmela) likely serves as a nitric oxide (NO) carrier that participates in redox signal transduction [45]. Nevertheless, Mukherjee questioned whether NOmela could serve as an intracellular NO reserve in plants, given its reactive and unstable nature [46]. The processes of NOmela formation and transport are not fully understood and should be thoroughly investigated. In addition, whether 5-methoxytryptamine (5-MT), formed by melatonin deacetylation, is of physiological importance remains to be investigated in plants.
Melatonin Acts as a Master Regulator in Plant Abiotic Stress
As a master regulator, melatonin plays important roles in plant tolerance to abiotic stresses, such as heavy metals, drought, salinity, cold, heat, waterlogging, and pesticides [19,[47][48][49][50][51][52]. This review schematically summarizes the melatonin-mediated responses to abiotic stresses in plants (Figure 2). Melatonin levels are strongly induced by the above unfavorable conditions. For instance, the endogenous melatonin level in Arabidopsis wild-type plants was increased in response to salt stress [47]. The loss-of-function atsnat mutant of the AtSNAT gene showed lower endogenous melatonin content and greater sensitivity to salinity stress [47]. Cold stress induced melatonin accumulation by upregulating the relative expression of ClASMT in watermelon plants [49]. In tomato seedlings, Cd stress induced COMT1 expression, thereby improving the accumulation of melatonin [22]. The transcription factor heat shock factor A1a (HsfA1a) bound to the COMT1 gene promoter and activated the transcription of the COMT1 gene under Cd stress [22]. However, the post-translational regulation of melatonin biosynthesis genes and the modification of related proteins remain largely unknown and should be elucidated in the future.
Figure 2. The roles of melatonin in plant tolerance to abiotic stress. Melatonin content of plants increases significantly in response to abiotic stresses, such as heavy metals, salinity, drought, heat, cold, waterlogging, and pesticides. It confers plant tolerance via multiple mechanisms, including ROS or RNS scavenging, decrease of toxic compounds, increase of photosynthetic efficiency, interaction with hormones, and secondary metabolite biosynthesis. ROS, reactive oxygen species; RNS, reactive nitrogen species.
Melatonin confers plant tolerance via multiple mechanisms, including increased photosynthetic efficiency, ROS or RNS scavenging, decrease of toxic compounds, interaction with hormones, and secondary metabolite biosynthesis (Figure 2). Melatonin stimulated stomatal conductance and improved photosynthesis, thus enhancing tolerance to water-deficit stress in grape cuttings [53]. Likewise, photosynthetic efficiency was maximized by higher rates of CO2 assimilation and stomatal conductance after application of melatonin [54]. Several stresses can induce ROS or RNS accumulation, causing oxidative damage to plants [55]. In this case, melatonin re-establishes the redox balance by activating enzymatic antioxidant defense systems, as well as the ascorbate-glutathione (AsA-GSH) cycle [56]. In plants, the Salt-Overly Sensitive (SOS) pathway mediates ionic homeostasis and contributes to salinity tolerance [57]. This pathway comprises three crucial genes, Salt-Overly Sensitive1 (SOS1), Salt-Overly Sensitive2 (SOS2), and Salt-Overly Sensitive3 (SOS3), which function together to initiate transport of Na+ out of the cell or to activate other transporters, leading to the sequestration of Na+ in the vacuole [58].
Melatonin reduced ion toxicity and improved salinity tolerance via the SOS pathway [47]. ABA and H2O2/NO signal transduction pathways were also modulated for plant tolerance in response to abiotic stress [47,48,56,59]. In addition, melatonin can increase primary and secondary metabolites, including amino acids, organic acids, and sugars, thus improving plant cold tolerance [60].
Melatonin Improves Cd Tolerance in Plants
It has been found that Cd contamination affects ecosystems, causing stress and toxicity in plants.
Melatonin plays a key role in protecting plants from Cd stress. Table 1 summarizes evidence that Cd treatment up-regulates the transcripts of melatonin biosynthesis genes, such as TDC, T5H, SNAT, ASMT, and COMT, in Arabidopsis thaliana, Oryza sativa L., Solanum lycopersicum, Triticum aestivum L., Nicotiana tabacum L., and Agaricus campestris [59,[61][62][63][64][65][66][67]. Consequently, melatonin contents are significantly increased. Notably, four M2H genes, involved in melatonin degradation, were also induced [65]. Byeon et al. suggested that melatonin degradation and melatonin synthesis occur in parallel, and that the melatonin metabolite 2-hydroxymelatonin also acts as a signaling molecule in plant stress tolerance [65]. As melatonin catabolism is complicated, other pathways and the roles of their metabolites should be investigated in plants under Cd stress. Most studies showed that melatonin alleviated Cd-induced seedling growth inhibition, including reductions in biomass (fresh weight and dry weight) and root length [19]. Melatonin improved the photosynthesis rate (Pn), transpiration rate (E), intracellular CO2 concentration, and stomatal conductance (Gs) upon Cd stress in Nicotiana tabacum L. [6]. Melatonin-enhanced stomatal opening and conductance capacity ultimately favored photosynthesis in plants. Melatonin also prevented the degradation of chlorophyll and carotenoid molecules in Chinese cabbage seedlings [68]. Similarly, application of melatonin improved chlorophyll content and the maximum quantum efficiency of photosystem II (Fv/Fm) in wheat plants [20]. In chloroplasts, the superoxide anion (O2·−) in photosystem I (PSI) is generated from two molecules of O2 with two electrons from photosystem II (PSII), and is disproportionated to H2O2 by superoxide dismutase (SOD) [69]. The better physiological status of melatonin-treated plants under Cd stress can aid chlorophyll protection, improve photosynthesis, and maintain redox homeostasis against oxidative damage.
Melatonin scavenges the above ROS mainly through two pathways upon Cd stress. Antioxidant enzymes, such as APX, CAT, SOD, POD, GPX, GR, DHAR, and monodehydroascorbate reductase (MDHAR), play key roles in melatonin-decreased ROS overproduction; their functions have been confirmed in the above plant species. For example, exogenously applied melatonin counterbalanced H2O2 and MDA accumulation by enhancing APX, CAT, SOD, and POD activities under Cd stress [77]. Enzymes involved in the ascorbate-glutathione (AsA-GSH) cycle, such as DHAR, MDHAR, and GR, were also involved in melatonin-mediated ROS balance in safflower (Carthamus tinctorius L.) seedlings [80]. In addition, melatonin interacted with ROS by improving antioxidant levels, including GSH, AsA, and dehydroascorbate (DHA) [80]. Other studies reported that melatonin could also increase proline, anthocyanin, flavonoid, and sugar contents in response to Cd-induced oxidative stress [18,64,77,79]. These impacts of melatonin on Cd-induced oxidative stress are summarized in Table 2.
Melatonin Regulates Cadmium Uptake and Translocation
In general, Cd is taken up by plant roots from the soil, then transported to shoots through the xylem and phloem, and eventually accumulates in grains [87]. Several processes regulate Cd accumulation in plants, including apoplastic influx of Cd, cell wall adsorption, transport across the plasma membrane into the cytoplasm, xylem loading, vacuolar sequestration, and energy-driven transport [88]. Natural resistance-associated macrophage proteins (NRAMPs) may be involved in several of these processes, such as uptake, intracellular transport, translocation, and metal detoxification in various plants [89,90]. Moreover, Cd is also transported through Zn, Fe, and Ca transporters, including Zn-regulated transporter (ZRT)- and Fe-regulated transporter (IRT)-like proteins (ZIPs), yellow stripe-like transporters (YS1/YSL), and low-affinity calcium (Ca) transporter 1 (LCT1) [91]. ABC transporters (e.g., PDR8), metal tolerance proteins (MTPs), cation diffusion facilitators (CDFs), and P1B-type heavy metal ATPases (HMAs) take part in Cd homeostasis [92][93][94]. Furthermore, GSH and its derivatives, phytochelatins (PCs), bind Cd, and the complexes are transported into vacuoles by ATP-binding cassette subfamily C proteins (ABCCs) [95,96]. HMA3 and the CDF transporter family are also involved in the transfer of Cd-PC complexes into the vacuole [97,98]. Other high-affinity chelators, including metallothioneins (MTs), organic acids, and amino acids, play multiple roles in the detoxification of Cd [99].
Recent studies have shown that melatonin regulates Cd homeostasis in plants. Exogenous application of melatonin reduced Cd contents in both roots and leaves of Raphanus sativus L. and Brassica pekinensis (Lour.) Rupr. plants [68,84]. Melatonin significantly decreased Cd contents in the leaves, but not in the roots, of Oryza sativa L., Carthamus tinctorius L., and Solanum lycopersicum [61,76,80,81]. However, melatonin increased and decreased Cd contents in roots and shoots of Malva parviflora, respectively [18]. These results suggest that the effect of melatonin on the translocation factor (ratio of shoot to root Cd content) differs among plant species. Melatonin reduced the transcripts of metal transporter-related genes (iron-regulated transporter1 (OsIRT1), iron-regulated transporter2 (OsIRT2), heavy metal ATPase2 (OsHMA2), heavy metal ATPase3 (OsHMA3), natural resistance-associated macrophage protein1 (OsNramp1), natural resistance-associated macrophage protein5 (OsNramp5), and low-affinity cation transporter1 (OsLCT1)) in leaves, but not in the roots, of Oryza sativa L. under Cd stress [81]. Expression of YSLs and HMAs was down-regulated by melatonin, thereby reducing Cd entry into the roots of Raphanus sativus L. [84]. In addition, the Metallothionein 1 (RsMT1) gene was involved in melatonin-conferred Cd tolerance in transgenic tobacco [84]. In roots of Brassica pekinensis (Lour.) Rupr. plants, the IRT1 transcript was down-regulated significantly by melatonin application [68], and Cd content in root tissues was consequently reduced. These impacts of melatonin on Cd uptake and translocation are summarized in Table 3. Characterizing the biological roles of these metal transporter genes will therefore contribute to understanding melatonin-mediated Cd homeostasis and detoxification.
Other Regulators Are Involved in Melatonin-Mediated Cd Tolerance
It has been widely reported that NO plays a crucial role in regulating various plant physiological processes [100]. Previous studies found that Cd treatment increased NO production, which promoted Cd accumulation through IRT1 up-regulation [101,102]. Exogenous melatonin alleviated Cd toxicity by reducing NO accumulation and IRT1 expression in Brassica pekinensis (Lour.) Rupr. [68]. By contrast, melatonin triggered endogenous NO production and enhanced Cd tolerance via increased activities of antioxidant enzymes in wheat seedlings [20]. Moreover, melatonin can be nitrosated to NOmela by four nitrosating entities at the N atom of the indole ring [46], and it has been suggested that NOmela can release NO. NO-induced S-nitrosation is an important redox-based post-translational modification involved in plant responses to abiotic stress [103,104]. Thus, the complex interactions between melatonin and NO in Cd resistance should be further investigated. Another important signaling element, salicylic acid (SA), alleviated Cd toxicity by affecting Cd distribution, antioxidant defense activities, and photosynthesis [105][106][107]. Amjadi et al. found a possible synergistic interaction between melatonin and SA in reducing Cd uptake and modulating the ascorbate-glutathione cycle and glyoxalase system [80].
A Possible Role for H2S in Melatonin-Mediated Tolerance against Cd Stress
Acting as a signaling molecule, NO interacts with other molecules (H2O2, CO, and H2S) to mediate plant growth and development, as well as abiotic stress responses [100]. Among these molecules, H2S is also involved in almost all physiological plant processes [27,100]. To date, there has been considerable research on the role of NO in melatonin-modulated plant abiotic stress tolerance; however, the functions of H2S remain largely unknown. Precise analysis of the collaboration between H2S and melatonin will become a research hotspot and provide deeper insight into melatonin-mediated signaling mechanisms.
H2S Action in Plant Tolerance against Cd Stress
H2S acts as a signaling molecule modifying various metabolic processes in plants, especially under Cd stress (Figure 3; [27]). Endogenous H2S production was induced via the expression of LCD, DCD, and DES1 under Cd stress [108][109][110]. SA, methane (CH4), and the WRKY DNA-binding protein 13 (WRKY13) transcription factor were suggested to be involved in this process [30,111,112]. H2S regulated the activities of key enzymes and the AsA-GSH cycle involved in ROS homeostasis to alleviate Cd-induced oxidative stress [113][114][115][116][117][118][119][120]. For example, H2S enhanced the activities of antioxidant enzymes, such as POD, CAT, APX, and SOD, and thereby decreased ROS accumulation [120]. Similarly, it also markedly increased AsA and GSH levels and the redox status (AsA/DHA and GSH/GSSG) to improve rice Cd resistance [114,116].
Increasing evidence demonstrates that H2S also regulates Cd uptake and translocation in plants [30,117,119,121]. H2S enhanced the expression of genes encoding metallothioneins (MTs) and phytochelatin synthase (PCS) in Arabidopsis roots [117]. H2S thereby increased metal chelator synthesis, contributing to Cd detoxification by binding the trace metal. In addition to enhancing the expression of these genes, the protective effect of H2S was attributed to a decrease in Cd accumulation associated with the expression of Cd transporter genes, such as PCR1, PCR2, and PDR8 [30]. Exogenous application of NaHS weakened the expression of the NRAMP1 and NRAMP6 genes and intensified the expression of Cd homeostasis-related genes (CAX2 and ZIP4) to enhance Cd tolerance in foxtail millet [122].
A number of studies report that H2S can interact with other signaling molecules, such as SA, proline, MeJA, Ca, and NO, during plant responses to Cd stress (Figures 3 and 4; [111,122,123]). H2S acted as a downstream molecule of SA-transmitted signals to regulate Cd tolerance in Arabidopsis [111]. The endogenous production of proline and MeJA, enhanced by the H2S donor NaHS, responded significantly to Cd stress in foxtail millet [122,123]. H2S also improved CaM gene expression and controlled the combination of Ca2+ and CaM, which act as signal transducers [33].
There exists a complicated and synergistic relationship between H2S and NO in the response to Cd stress in plants (Figure 4; [115,118,124,125]). Exogenous NO and H2S application increased Cd tolerance in plants [115,124,126]. Subsequent pharmacological experiments proved that the H2S donor NaHS triggered NO production, which might act as a signal for the alleviation of Cd-induced oxidative damage in alfalfa seedling roots [124]. Conversely, H2S production activated by NO is essential in the Cd stress response of bermudagrass [115]. As a second messenger, Ca acts both upstream and downstream of the NO signal, and crosstalk of Ca and NO regulated cysteine and H2S to mitigate Cd toxicity in Vigna radiata [126]. Moreover, application of sodium nitroprusside (SNP), a donor of NO, increased H2S generation and thus enhanced Cd stress tolerance in wheat [118]. However, this protective effect was reversed by hypotaurine (HT), a scavenger of H2S [118]. These results suggest that H2S and NO can function in a coordinated way in certain signaling cascades in plants under Cd stress.
Figure 4. Increasing evidence shows that melatonin and H2S each act downstream of NO in responses to Cd stress (green arrows). It has also been suggested that NO acts downstream of melatonin or H2S to improve Cd tolerance (orange arrows). The combination of melatonin, NO, and H2S might be responsible for melatonin-triggered signal transduction in plant Cd tolerance via decreased Cd accumulation, GSH synthesis and metabolism, decreased ROS-induced oxidative stress, and improved photosynthesis. Red arrows, as yet largely unknown. Cd, cadmium; NO, nitric oxide; H2S, hydrogen sulfide; GSH, glutathione; ROS, reactive oxygen species; Pn, photosynthesis rate; Gs, stomatal conductance; E, transpiration.
Crosstalk of Melatonin and H 2 S in Plants
The interaction between melatonin and H 2 S plays a beneficial role in abiotic stress responses [32]. Exogenous melatonin regulated endogenous H 2 S homeostasis by modulating L-DES activity in salt-stressed tomato cotyledons [31]. Moreover, an endogenous H 2 S-dependent pathway was involved in melatonin-mediated salt stress tolerance in tomato seedling roots [34]. Synergistic effects of melatonin and H 2 S regulated K + /Na + homeostasis and reduced excessive accumulation of ROS by enhancing the activity of antioxidant enzymes. Inhibition of H 2 S by HT reversed melatonin-modulated heat tolerance by inhibiting photosynthesis, carbohydrate metabolism, and the activity of antioxidant enzymes in wheat [36]. Recent investigations have revealed that melatonin-induced pepper tolerance to iron deficiency and salt stress was dependent on H 2 S and NO [118]. It was further confirmed that H 2 S and NO jointly participate in melatonin-mediated salt tolerance in cucumber [35]. Together, these results suggest that H 2 S acts as a downstream signaling molecule of melatonin. Combined with the roles of H 2 S and melatonin in alleviating Cd stress, it is reasonable to speculate that H 2 S is also involved in melatonin-mediated Cd tolerance in plants (Figure 4).
As mentioned above in Section 3, GSH plays a critical role in plant Cd tolerance. It is synthesized from glutamate, cysteine and glycine by γ-glutamyl cysteine synthetase (γ-ECS, encoded by the GSH1/ECS gene) and glutathione synthetase (GS, encoded by the GSH2/GS gene) [127]. The reaction catalyzed by GSH1 is the rate-limiting step of GSH biosynthesis [128]. Cd stress induced the transcripts of GSH1 and GSH2 in Arabidopsis, as well as ECS and GS in Medicago sativa [114,129-131]. It has been suggested that H 2 S can be quickly incorporated into cysteine and subsequently into GSH [132]. Application of NaHS re-established (h)GSH homeostasis by further strengthening the up-regulation of the ECS and GS genes [114]. Similar results were also found in strawberry and cucumber plants [133,134]. Interestingly, exogenous melatonin also increased GSH content by inducing the transcript level of SlGSH1 in tomato [75]. Hence, there might be a connection between H 2 S and melatonin in regulating GSH homeostasis at the transcriptional level. This provides an interesting direction for further research on the complex interactions between melatonin and H 2 S in improving plant Cd tolerance.
Conclusions and Future Prospects
Recent studies have strongly indicated that melatonin, a multifunctional molecule, regulates Cd tolerance in plants. To further promote related research in plant Cd tolerance, this review summarizes the regulatory roles and mechanisms of melatonin in the response to Cd stress. Melatonin reduces Cd damage mainly by re-establishing redox homeostasis and decreasing Cd accumulation, but the underlying mechanisms remain to be determined. Intriguingly, melatonin has been proposed to be a phytohormone owing to the identification of the putative receptor CAND2/PMTR1 [135], although there is still debate on whether it is a bona fide melatonin receptor [136]. More importantly, additional receptor gene(s) should be characterized, which will be critical for precisely understanding the melatonin signal transduction pathway in plants responding to Cd stress.
Currently, the role of NO as a signaling molecule in melatonin-mediated Cd tolerance has been revealed; likewise, H 2 S serves as a key messenger in plant resistance to Cd stress. However, the effects of H 2 S have been less explored, which has prevented a precise analysis of the cooperation between H 2 S and melatonin. Recently, we presented the underlying mechanisms of H 2 S action and its multifaceted roles in plant stress responses [137]. Hence, it would be interesting to fully evaluate the effects of H 2 S-based signaling on regulating melatonin-induced Cd tolerance. As directions for future research, biochemical and genetic characterization of H 2 S-producing proteins and persulfidation signaling is needed and will shed more light on the integration of H 2 S and melatonin signaling during Cd stress.
Pharmacological, Genetic and 'Omics' Approach to Understand the Crosstalk of H 2 S-Melatonin during Cd Stress
Various pharmacological, enzyme activity, and gene expression investigations have revealed crosstalk between H 2 S and melatonin in response to salt and heat stress in tomato and cucumber [34-36]. Exogenous melatonin induced H 2 S generation by activating L-cysteine desulfhydrase (L-CDes) activity, which is encoded by LCD, DCD, and DES1 [31,35]. The interaction of H 2 S and melatonin then enhanced antioxidant defense and regulated carbohydrate metabolism and ion homeostasis [34-36,118]. Similar pharmacological experiments, with effective concentration ranges of 1-200 µM for exogenous melatonin, 10-100 µM for 4-chloro-DL-phenylalanine (p-CPA, a melatonin synthesis inhibitor), 10-100 µM for hypotaurine (HT, an H 2 S scavenger), and 10-100 µM for NaHS (an H 2 S donor), could be used to investigate H 2 S-melatonin crosstalk during Cd stress. Furthermore, genetic materials with altered melatonin and H 2 S levels, such as snat, comt, lcd, dcd, and des1 mutants, should be used to explore their possible roles.
H 2 S is a critical signal mediator in plant responses to Cd stress [111,112,115]. However, there is still an urgent need to elucidate the interactions of H 2 S with other signaling molecules in melatonin-mediated Cd tolerance. With the advent of transcriptomic and proteomic analyses, the intrinsic regulatory mechanisms by which the melatonin-H 2 S interaction regulates various biological processes can be revealed. For example, the expression of genes and proteins related to GSH synthesis and metabolism, redox homeostasis, and hormone biosynthesis pathways might be used to establish a model system for deciphering their signaling interaction.
The Potential Role of Persulfidation Driven by H 2 S in Melatonin-Mediated Cd Tolerance
Recently, it was found that H 2 S-mediated post-translational modification (PTM, persulfidation) of protein cysteine residues (to RSSH) is an important mechanism by which plants adapt to external environments [27,32]. Protein persulfidation causes various changes in the structures, activities, and subcellular localizations of candidate proteins [138,139]. These proteins are mainly involved in plant growth and development, abiotic stress responses, and carbon/nitrogen metabolism [138]. For example, H 2 S production regulated the persulfidation of the NADPH oxidase RBOHD at Cys825 and Cys890, improving its ability to produce the H 2 O 2 signal [140]. It also led to the persulfidation of ABSCISIC ACID INSENSITIVE 4 (ABI4) at Cys250 and of SnRK2.6, helping to reveal the function of H 2 S in complex signal-transduction systems [141-143]. By contrast, residue Cys32 of APX can be persulfidated, thereby enhancing its activity [144]. Therefore, persulfidation might become a promising direction for investigating the roles of H 2 S in melatonin-mediated Cd tolerance in plants. In conclusion, progress on the various physiological and molecular mechanisms regulated by melatonin is still insufficient, and future studies along the above lines should be used to unveil the regulatory mechanisms of the melatonin and H 2 S signaling pathways in plant Cd tolerance.
Conflicts of Interest:
The authors declare no conflict of interest.
Internet-Supported Multi-User Virtual and Physical Prototypes for Architectural Academic Education and Research
This book presents a collection of international perspectives on distance learning and distance learning implementations in higher education. The perspectives are presented in the form of practical case studies of distance learning implementations, research studies on teaching and learning in distance learning environments, and conceptual and theoretical frameworks for designing and developing distance learning tools, courses and programs. The book will appeal to distance learning practitioners, researchers, and higher education administrators. To address the different needs and interests of audience members, the book is organized into five sections: Distance Education Management, Distance Education and Teacher Development, Distance Learning Pedagogy, Distance Learning Students, and Distance Learning Educational Tools.
Introduction
Even though the development of concepts and tools for Internet-based academic research and education can be traced back to the 1960s, only in the last decade have concepts such as the Internet of Things, Ambient Intelligence (AmI), and ubiquitous computing (ubicomp) introduced a technological as well as conceptual and methodological paradigm shift, implying a general infiltration of Internet-based concepts and tools not only into academic education and research but also into everyday life. In this context, Hyperbody at Delft University of Technology (DUT) has over the last decade been developing hardware and software prototypes for Internet-supported academic education and research, tested in practical experiments implemented mainly in international workshops and lectures. This chapter presents and discusses Hyperbody's past, present and future development and use of software and hardware applications for virtual academic education and research in architectural and urban design, within the larger framework of contemporary conceptual, methodological and technological advancements.
Concepts and tools
The development of concepts and tools for Internet-based academic research and education, such as Engelbart's proposal for using computers to augment human skills (1962), Sutherland's Sketchpad as the first graphical user interface for computers (1963), the establishment of the Internet (1969) and the development of interactive learning environments, enabled universities to offer accredited graduate programs through online courses. While at the time virtual and physical interaction were separated in corresponding educational programs, in the last decade concepts such as the Internet of Things, Ambient Intelligence (AmI), and ubiquitous computing (ubicomp) have introduced a conceptual and methodological paradigm shift, manifested in the blurring of boundaries between physical and virtual and the continuous integration of Internet-based concepts and tools into academic everyday life. In this context, ubiquitous computing refers to the integration of information processing into everyday objects and activities (Weiser, 1988), while the Internet of Things consists of uniquely identifiable (tagged) objects (Things) and their virtual representations, inventoried and connected in an Internet-like structure (Ashton, 1999; Magrassi et al., 2001). Envisioning Ambient Intelligence (AmI) as a physical environment that incorporates digital devices in order to support people in carrying out daily activities, using information and intelligence contained within the network connecting these devices (Zelkha et al., 1998), Protospace (http://www.hyperbody.nl/protospace), developed at Hyperbody (Fig. 1), can be seen as an embedded, networked hardware and software system exhibiting characteristics of Ambient Intelligence.
This implies that interactive, context-aware (sensor-actuator) sub-systems are embedded into the spatial environment in such a way that they are context- and user-aware, collecting and mapping data on users' movement and behaviour in relation to physical space; they are, when needed, tailored to individual needs, and they are furthermore adaptive, responding to user and environmental changes, and even anticipatory, as for instance during the interactive lectures and workshops described in the following sections. Considering that information processing has meanwhile been increasingly integrated into physical spaces, everyday objects and human activities (Weiser, 1988), and that ubiquitous computing (ubicomp) has become prevalent in everyday life, the following sections aim to critically assess what these developments offer architectural academic education and research, and thus to reveal what challenges remain in their development and application.
Methodologies and processes
In the late 1990s, Internet-based academic education and research focused on education methodologies and applied technologies for educators, students and researchers separated by time, distance, or both (Daniel, 1998; Loutchko et al., 2002). In this context, the requirement for synchronous interaction among participants who are virtually present at the same time while physically remote was implemented via synchronous videoconferencing and live streaming, telephone, and web-based VoIP. In addition, participants could access educational and research materials on their own schedule, as well as communicate by means of asynchronous information exchange such as e-mail correspondence, message exchange (forums), and audio-video recording and replay.
Other more sophisticated methods would include online three-dimensional (3D) virtual worlds providing synchronous and asynchronous interaction as well as collaboration.
Such systems would, obviously, require high-tech hardware and software equipment and would offer the possibility of flexibly accommodating users' time and space constraints, while reducing demand on institutional infrastructure such as buildings. Educators, students and researchers attending virtual sessions would exchange and acquire knowledge asynchronously by reading documents from the database or studying videos, for instance, and synchronously by discussing problems, reviewing case studies, or actively participating in workshops. Communication in the synchronous virtual study room is, therefore, conceived as a collaborative study and research experience in which participants interact in real-time with peers through web-conferencing and 3D gaming.
In the last decade, 3D gaming environments inspired researchers from different disciplines to develop computer-supported collaborative environments such as Protospace, in which problem-based studying and researching is implemented. This enables physically and virtually present students and researchers to investigate and solve problems by working in groups, wherein participants identify what they know and what they need to know, and learn how to bridge the gap between the two by searching and accessing information from worldwide-available databases, which may lead to finding solutions.
In such a context, the role of the instructor, educator, or team leader is to facilitate the process by suggesting appropriate references and instigating critical discussions, while open-ended, ill-defined problems addressed in collaborative group work drive the process (Armstrong, 1991). By exploring various strategies in order to understand the nature of the problem to be solved, by investigating the constraints on and options for its resolution, and by acknowledging eventual differing viewpoints, participants learn to negotiate between competing, even contradictory, resolutions. These approaches, implemented in Internet-supported multi-user, collaborative, game-like environments such as Protospace, are well suited for addressing architectural and urban planning problems in education and research.
Such approaches are, however, no longer confined to classical concepts of Internet-based education such as distance learning; instead, they have started to permeate academic everyday life (Bier, 2011). Internet-based interaction is no longer employed for distance learning only but is integrated into the daily interaction and information and knowledge exchange between students, educators and researchers. As learning environments are increasingly accessed in various contexts and situations, ubiquitous learning increasingly replaces distance learning (Bomsdorf, 2005).
Ubiquitous learning (u-Learning) represents a relevant advancement in the development of distance learning; its obvious potential results from the enhanced possibility of accessing content and computer-supported collaborative environments at any time and place. Furthermore, u-Learning enables a seamless combination of virtual environments and physical spaces and allows embedding individual learning activities in everyday life, so that learning activities are freed from scheduling and spatial constraints, becoming pervasive and ongoing, prevalent within a large, diverse community consisting of students, educators, social communities, researchers, etc.
Applications in education and research
Within the larger context described in the preceding sections, Hyperbody has over the last decade been developing Internet-supported applications for academic education and research employing interactive 3D game technology. As an Internet-based, multi-user game environment, Protospace enables real-time collaborative architectural and urban design and has been tested in graduate education and research, including the Internet-based postgraduate program E-Archidoct, offered in collaboration with 14 European universities from 2008 to 2010.
Fig. 2. Rapid seamless CAD-CAM prototyping and CNC fabrication implemented with students in Protospace.
Protospace is, basically, a compound of software and hardware applications for virtual academic education and research, equipped with multi-channel immersive audio, multi-screen projection, ubiquitous sensing, wireless input-output devices and Computer Numerically Controlled (CNC) facilities (incorporating a laser cutter and a large mill). It facilitates Internet-supported collaborative design and development of interactive architectural components, supports seamless Computer-Aided Design and Manufacturing (CAD-CAM) workflows, and enables the implementation of non-standard, complex geometries (Fig. 2) in architecture.
Protospace therefore incorporates virtual drafting and modeling as well as physical prototyping tools with shared database capabilities, so that changes made by one design team member are visible to all others, allowing design, evaluation, and dialogue to take place concurrently in real-time. Team members physically and/or virtually participate in a seamless process of design, prototyping, and review, establishing a feedback loop between conceptualization and production.
Software and hardware prototypes
In addition to CNC machines allowing for CAD-CAM production of 1:1 or scaled architectural prototypes, Protospace incorporates Virtual Reality (VR) hardware and software devices (Fig. 3) enabling the implementation of specific tasks such as geometrical and behavioral manipulation.
CAD-CAM processes
Internet-supported collaborative processes such as CAD-CAM design and fabrication workflows are implemented in Protospace by means of commercial and non-commercial software applications, some of which were developed from scratch at Hyperbody. For architectural and urban design, Hyperbody employs parametric software such as Virtools, Max MSP, Rhino-Grasshopper and Generative Components. These CAD applications are coupled with CAM facilities in order to allow the seamless production of physical prototypes from virtual models.
With respect to their use in education, in the case of E-Archidoct, for instance, Protospace software applications were integrated in addition to the Internet-based individual and collaborative exchange between students and teachers facilitated by the open-source Modular Object-Oriented Dynamic Learning Environment (Moodle), which was incorporated into the E-Archidoct website.
Students were, basically, introduced to parametric software such as Virtools, Grasshopper and Generative Components employed in architectural and urban design projects. Liu's project (Fig. 4), for instance, applies a parametric definition for the development of multiple designs. Parametric manipulation implied, among others, the use of the marching cubes algorithm, which constructs surfaces from numerical (scalar field) values; furthermore, programmatic considerations were parametrically defined with respect to function in relation to volume and orientation in 3D space, etc. CAD structural analysis employing MIDAS/Gen implied that data on forces, moments and stresses was used to determine the placement and dimensions of the main and secondary structure, while the final design was physically prototyped by means of CAM. In addition to CAD-CAM processes in which designers (tutors, students and researchers) may partake physically or virtually, Protospace facilitates interactive presentations that allow a physically and virtually present audience to attend and interact with the presenters and their presentations in real-time.
Interactive lectures
Non-linear screen presentations are set up on multiple screens, and non-linear talks follow a paradigm in which the audience is enabled to select, from predefined content clusters, specific topics, images, and/or movies, which the speaker then presents and discusses.
Following principles of Ambient Intelligence, some of the multiple screen projections may be influenced by the audience: the physically present audience can alter the course of the presentation by using laser pointers or by triggering light and/or pressure sensors integrated into the floor and walls, while an audience from all over the world follows and interacts with the presentation via Internet-based interfaces (Fig. 5). In this context, distinct clusters of content are marked with keywords indicating when audience input is expected or required, while physically and virtually present lecturers introduce and discuss content by means of multimedia and videoconference presentations.
Multi-user (3D) games
As one of the most relevant Protospace applications, Virtools allows the development of Internet-supported, multi-user (3D) games. Known as serious games (Zyda, 2005), such games have a primary purpose other than entertainment, as they are designed for the purpose of solving an architectural or urban design problem. They are accessible via a game interface on the Internet by multiple users who can interact with the virtual environment in real-time. Their development in Virtools is based on the separation of objects, data and behaviors, employing an intuitive user interface with a real-time visualization window and graphical programming. This allows programming with spatial arrangements of text and graphic symbols, whereas screen objects are treated as entities that can be connected with lines representing relations (Fig. 6). Virtools' behavior engine runs both custom and out-of-the-box behaviors; the behaviors relevant for architectural and urban design at Hyperbody are swarm behaviors relying on principles of self-organization (Reynolds, 1987). Swarm behaviors are collective behaviors exhibited by natural or artificial agents that aggregate together, exhibiting motion patterns at group level (Mach & Schweitzer, 2003). These behaviors are emergent, arising from simple rules followed by individuals (agents).
Protospace applications such as the Virtual Operation Room (VOR) and Building Relations (BR) employ swarm behaviors in order to address issues such as interactivity in architecture (Bier et al., 2006) and the automated placement of programmatic units in 3D space, either at city or at building scale (Bier, 2007). VOR is, basically, an interactive environment allowing participants to playfully begin to understand interactivity principles, which are then applied in design using Protospace's design interface. In order to incorporate behaviors and interactively change geometry in real-time, VOR employs the self-organization principles of swarms, enabling elements of the structure to respond to external changes. According to Oosterhuis (2006), swarm architecture implies that all building components operate like intelligent agents, and the swarm is, in this context, of special interest: self-organizing swarms go back to Reynolds' computer program developed in 1986, which simulates the flocking behavior of birds. The rules according to which the birds move are simple: maintain a minimum distance to neighbors (1), match velocity with neighbors (2) and move towards the center of the swarm (3). These rules are local, establishing the behavior of one member in relationship to its immediate vicinity.
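The three local flocking rules can be sketched in a few lines of code. The following is a minimal illustration only, not Reynolds' original program or the Virtools implementation; all names and parameter values (neighbourhood radius, rule weights, time step) are arbitrary assumptions chosen for the example.

```python
import math

class Boid:
    """Minimal agent for a Reynolds-style (1987) flocking simulation."""
    def __init__(self, pos, vel):
        self.pos = list(pos)  # [x, y, z]
        self.vel = list(vel)

def dist(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def step(boids, neighbour_radius=10.0, min_dist=2.0, dt=0.1,
         w_sep=1.5, w_align=1.0, w_coh=1.0):
    """One tick applying the three local rules: (1) separation,
    (2) velocity matching, (3) cohesion towards the local centre."""
    new_vels = []
    for b in boids:
        nbrs = [o for o in boids
                if o is not b and dist(b.pos, o.pos) < neighbour_radius]
        if not nbrs:
            new_vels.append(list(b.vel))
            continue
        sep = [0.0, 0.0, 0.0]
        align = [0.0, 0.0, 0.0]
        coh = [0.0, 0.0, 0.0]
        for o in nbrs:
            d = dist(b.pos, o.pos)
            if d < min_dist:          # rule 1: keep a minimum distance
                for i in range(3):
                    sep[i] += (b.pos[i] - o.pos[i]) / max(d, 1e-6)
            for i in range(3):
                align[i] += o.vel[i]  # rule 2: match neighbours' velocity
                coh[i] += o.pos[i]    # rule 3: move towards their centre
        n = len(nbrs)
        new_vels.append([b.vel[i]
                         + w_sep * sep[i]
                         + w_align * (align[i] / n - b.vel[i])
                         + w_coh * (coh[i] / n - b.pos[i])
                         for i in range(3)])
    for b, v in zip(boids, new_vels):  # apply all updates simultaneously
        b.vel = v
        for i in range(3):
            b.pos[i] += v[i] * dt
```

Because each rule only consults an agent's neighbourhood, the group-level motion patterns described above emerge without any global controller, which is precisely what makes the approach attractive for interactive structures such as VOR.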
Similar to Reynolds' flocking rules, VOR's icosahedral geometry employs rules governing the movement of its vertices, controlled as follows: (1) keep a certain distance to neighboring vertices; move faster if you are further away; (2) try to be at a certain distance from your neighbors' neighbors; move faster if you are further away. These rules aim to establish a desired state of equilibrium, implying that VOR aims to organize itself into the primary icosahedral structure. Under exterior influences, VOR executes geometrical-spatial transformations according to rule (3): try to maintain a certain distance to the avatar, the avatar being an embodiment of the user in this multi-user virtual reality (Fig. 7). VOR, as a multi-user interactive environment, is a computer simulation of an imaginary system, a game that enables users to perform operations on the simulated system while showing effects in real-time. Basically, VOR consists of responsive environments with which the user interacts via input devices such as mouse, keyboard, and/or joystick, allowing for intuitive maneuvering and navigation. Similarly, all functional units pertaining to a building can be seen as flocking agents striving to achieve an optimal spatial layout (Bier et al., 1998; Bier et al., 2006). In this context, spatial relations between functional units can be described as rules according to which all units organize themselves into targeted spatial configurations (Fig. 8). This approach is particularly suitable for the functional layout of medium to large structures: while the architect might find it difficult to keep an overview of all functions and their attributed volumes and preferential locations, functional units can easily swarm towards locally optimal configurations.
Basically, the programmatic distribution of functions in architectural design deals with the placement of functions in 3D space; in this context, building components such as rooms have neither fixed dimensions nor pre-defined positions in space. Attempts to automate the layout process incorporate approaches to spatial allocation that define the available space as an orthogonal 2D grid and use an algorithm to allocate each rectangle of the grid to a particular function. Other strategies break the problem down into parts such as topology and geometry: while topology refers to the logical relationships between layout components, geometry refers to the position and size of each component of the layout. A topological decision, for instance, that a functional unit is adjacent to another specific functional unit restricts the geometric coordinates of that unit relative to the other (Michalek et al., 2002; Bier et al., 2007).
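As a rough illustration of the grid-based allocation strategy described above, the sketch below greedily places rectangular functional units on an orthogonal 2D grid, combining a geometry constraint (no overlap) with a topology term (Manhattan distance to units that should be adjacent). It is a hypothetical toy, not BR or any Hyperbody application; the function name and data shapes are invented for this example.

```python
from itertools import product

def allocate(grid_w, grid_h, functions, adjacency):
    """Greedy spatial allocation of functional units on an orthogonal 2D grid.

    functions: dict name -> (w, h) footprint in cells, placed in given order.
    adjacency: set of (a, b) name pairs that should end up close together.
    Returns dict name -> (x, y) lower-left cell of each placed unit."""
    occupied = set()
    placed = {}
    for name, (w, h) in functions.items():
        # topology constraint: units this one should sit close to
        partners = [b if a == name else a
                    for a, b in adjacency if name in (a, b)]
        best, best_cost = None, float("inf")
        for x, y in product(range(grid_w - w + 1), range(grid_h - h + 1)):
            cells = {(x + i, y + j) for i in range(w) for j in range(h)}
            if cells & occupied:
                continue  # geometry constraint: no overlap
            cost = sum(abs(x - placed[p][0]) + abs(y - placed[p][1])
                       for p in partners if p in placed)
            if cost < best_cost:
                best, best_cost = (x, y), cost
        if best is None:
            raise ValueError(f"no room left for {name!r}")
        x, y = best
        occupied |= {(x + i, y + j) for i in range(w) for j in range(h)}
        placed[name] = best
    return placed
```

The split mirrors the topology/geometry decomposition cited above: the adjacency pairs constrain *which* units attract each other, while the grid scan decides *where* each footprint can legally sit.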
Based on a similar strategy, BR generates solutions for complex layout problems in an interactive design process. Furthermore, it operates in 3D space and therefore represents an innovative approach to semi-automated design processes. The BR database establishes connectivities between different software and functions as a parameter pool containing geometric and functional data. BR is used interactively and in combination with other software to achieve non-deterministic designs. It is a design support system, since it supports the user (designer) in the functional layout process rather than prescribing a solution.
At urban scale, applications developed by Hyperbody have been addressing space allocation in a similar way to BR: while at building scale spaces are allocated within a building, at urban scale buildings and building clusters are allocated within an urban area (Fig. 9). The allocation principle is, however, the same: functions and their dedicated volumes swarm within the urban context towards locally optimal spatial configurations (Jaskiewicz, 2010). These applications are of obvious interest for Internet-supported education and research, as they enable interactive, collaborative, multi-user (virtually and physically present) design and fabrication sessions.
Users' interaction in education and research
The classical concept of distance learning, implemented when time and distance separate educators and students, is increasingly replaced by Internet-supported systems employed to assist daily academic interaction even when researchers, educators and students are not separated by time or distance. This results from a conceptual and methodological paradigm shift implying an ongoing change towards pervasive computing and ubiquitous education and research, as discussed in the previous sections.
Taking the Internet as the starting point, Castells (1996) extrapolated the expected development of such a networked system towards becoming pervasive, permeating everyday life. The purpose of Internet-supported systems is, therefore, no longer only to bridge time and distance but to support everyday academic education and research by incorporating ubiquitous, interactive devices into physical space. In this context, Protospace can be seen as a prototype for pervasive computing that is integrated into physical space and employed in academic daily life, with interaction between users and data taking place in a networked, Internet-supported, embedded system.
In such a networked system, users are connected with other users, multimedia databases and applications enabling the reading and editing of data, sensing-actuating, and computing, in such a way that users interact physically and virtually as needed in a physical, digitally augmented environment.
By integrating concepts such as Autonomous Control (Uckelmann et al., 2010), the Internet of Things is envisioned as a network in which self-organized virtual and physical agents are able to act and interact autonomously with respect to context and environmental factors. Such context awareness (Gellersen et al., 2000) implies data collection and information exchange, and thus communication, between users and the physical environment. It may involve the acquisition of data on users' habits, emotions, bodily states, social interactions, and regular and spontaneous activities, as well as context data on spatial location, infrastructure, available resources, and physical conditions such as noise, light, and temperature. Information exchange between the physical (sentient) environment and users may, however, imply not only accommodating but also challenging interactions.
As a context-aware system, Protospace is concerned with the acquisition of context data by means of sensors, as mentioned before, the interpretation of the data collected by those sensors, and the triggering of accommodating and challenging actions in response to that interpretation; responses may involve the operation of electric lighting, sun shading, and projection screens, depending on local and global needs. Furthermore, Protospace's context awareness also addresses activity recognition, as implemented in the interactive lectures and CAD-CAM sessions described in the previous sections.
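The sense-interpret-actuate loop described above can be sketched as a simple rule table. This is a hypothetical illustration of the pattern only, not Protospace's actual control software; all sensor names, thresholds, and actuator commands are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str    # e.g. "light", "pressure", "noise"
    value: float   # normalised to 0..1

# Hypothetical rule table: predicate on the fused context -> actuator command.
RULES = [
    (lambda ctx: ctx.get("light", 1.0) < 0.2, "lights:on"),
    (lambda ctx: ctx.get("light", 0.0) > 0.8, "shading:close"),
    (lambda ctx: ctx.get("pressure", 0.0) > 0.5, "screen:wake"),
]

def interpret(readings):
    """Fuse raw readings into a context dict (here: latest value per sensor)."""
    ctx = {}
    for r in readings:
        ctx[r.sensor] = r.value
    return ctx

def respond(readings):
    """Sense -> interpret -> actuate: return all triggered commands."""
    ctx = interpret(readings)
    return [cmd for pred, cmd in RULES if pred(ctx)]
```

In a real deployment the interpretation step would do far more than pick the latest value (filtering, fusion, activity recognition), but the separation of acquisition, interpretation, and actuation is the structural point.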
Discussion and outlook
The ongoing fusion of the physical and the virtual, reflected in the convergence of the Internet, mobile communication systems, and advanced human-computer interaction technologies, generates a reality-virtuality continuum (Milgram, 1994) containing all possible degrees of real and virtual conditions, so that the distinction between reality and virtuality becomes blurred.
In this context, Protospace as a reality-virtuality continuum facilitates not only interactive design and fabrication sessions but also interactive presentations for Internet-based graduate programs (E-Archidoct), becoming a relevant platform for study and research by virtually connecting students, educators, and researchers from all over the world. This implies that Protospace connects users and data physically and virtually in such a way that activities in academic education and research are enhanced by the inherent use of the knowledge, software, and hardware applications incorporated into it and the multiple interaction modes available.
As technologies evolve and pervasive forms increasingly emerge, permeating all aspects of academic everyday life, concepts such as distance learning are gradually replaced by ubiquitous education and research implemented in sentient, interactive environments. The traditional divide between formal (physical) and informal (virtual) contexts of education and research is blurred. Technological as well as social, cultural, and institutional changes mean that learning, studying, and researching are possible across spatial and temporal barriers. Internet-supported academic education and research thus implies that the physical environment with integrated, networked, interactive devices, such as Protospace, increasingly incorporates aspects of context-awareness, adaptation, and anticipation (Zelkha et al., 1998; Aarts et al., 2001), supporting virtual and physical everyday academic activities.
Such systems may show, however, as in the case of the E-Archidoct program, that only a limited number of students can participate successfully in such a program. Reasons for this may be found not only in technological requirements but also in methodological constraints: students and educators from all over Europe participating in E-Archidoct were confronted with one of the main barriers to such virtual collaborative interaction, namely the difficulty of achieving agreement when diverse viewpoints, cultural boundaries, and different working and cognitive learning styles exist (Dirckinck-Holmfeld, 2002).
Furthermore, students' limited access to the necessary software and hardware, as well as insufficient know-how in dealing with it, was an additional problem: some design assignments within E-Archidoct, for instance, required software and hardware to which not all students had access. In addition, local technical support (for tutors and students) was needed to ensure successful participation in the program; this, however, could not always be afforded.
Future developments of virtual and physical systems for Internet-based and Internet-supported education therefore require access for all participants to software and hardware, as well as the development of computer literacy and technology know-how among them. This may be implemented by educating students and researchers in the use of Internet-based facilities before they start a specific education and research program, but it also implies the development of user-friendly software.
However, interaction models, whether menu-driven or based on a graphical user interface (GUI), are improving and are increasingly supported by devices such as mobile phones, radio-frequency identification tags, and GPS (Global Positioning System) receivers. As these devices grow smaller, more connected, and more integrated into spatial environments, so that only multimodal user interfaces remain perceivable to users (Aarts et al., 2001), Hyperbody investigates and further develops their use for academic education and research. Protospace, for instance, may operate in the future as a physical and virtual laboratory enabling users to conduct physical experiments remotely from other geographical locations. The benefits of such remote laboratories are known in engineering education (Ferreira et al., 2010) and include: (1) relaxation of time constraints and 24/7 accessibility; (2) relaxation of geographical constraints and independence from the physical locality of researchers; (3) reduction of material costs through the sharing of laboratory costs and the avoidance of start-up costs for new laboratories; and (4) enhanced sharing of knowledge, expertise, and experience.
In this context, research and education on architectural and urban design and production may then be implemented with students, educators, and experts from all over the world, interacting virtually and physically by means of multimodal interfaces and collaboratively working in a multi-user gaming environment that enables them to access and manipulate the same data on a common server, synchronously or asynchronously, and even to implement CAD-CAM experiments remotely.
The relevant question for the future seems to be, therefore, not whether intelligent, sentient environments may be built, but how these environments may be employed as instruments for enhanced, distributed problem solving (Bowen-James, 1997; Novak, 1997), how ubiquitous education and training may be implemented in programs that promote digital democracy and literacy by bridging the digital divide (Norris, 2001), and how intelligence may be embedded into the physical environment in order to be made available to users.
Evolution of Microstructure and Mechanical Properties of LM25–HEA Composite Processed through Stir Casting with a Bottom Pouring System
Aluminum matrix composites reinforced with CoCrFeMnNi high entropy alloy (HEA) particulates were fabricated using the stir casting process. The as-cast specimens were investigated by X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), and transmission electron microscopy (TEM). The results indicated that flake-like silicon particles and HEA particles were distributed uniformly in the aluminum matrix. TEM micrographs revealed the presence of both the matrix and reinforcement phases, and no intermetallic phases were formed at the interface between them. Hardness and tensile strength increased with increasing HEA content. The Al 6063–5 wt.% HEA composite had an ultimate tensile strength (UTS) of approximately 197 MPa with reasonable ductility (around 4.05%). The LM25–5 wt.% HEA composite had a UTS of approximately 195 MPa; however, the percent elongation decreased to roughly 3.80%. When the reinforcement content increased to 10 wt.% in the LM25 composite, the UTS reached 210 MPa, and the elongation was confined to roughly 3.40%. The fracture morphology changed from dimple structures to cleavage planes on the fracture surface as the HEA weight percentage increased. The LM25 alloy reinforced with HEA particles showed enhanced mechanical strength without a significant loss of ductility; this composite may find application in the marine and shipbuilding industries.
Introduction
Metal matrix composites (MMCs) are replacing monolithic alloys in various structural and other applications due to their enhanced properties, such as high specific strength and improved mechanical and tribological behavior. Aluminum (Al) based MMCs are widely used for structural applications in the automotive, aerospace, and chemical industries, owing to their high strength-to-weight ratio, high elastic modulus, wear resistance, and corrosion resistance [1,2]. To enhance the mechanical properties of Al-based composites, they are reinforced with Al2O3, B4C, SiC, TiB2, or TiC. However, these ceramic-reinforced Al-based composites have major drawbacks, including particle agglomeration, particle fragmentation, porosity, and cracking [3]. In addition, the large difference in thermal expansion coefficient between the ceramic particles and the metal matrix, the inferior wettability at the interface, and reactions at the interface reduce the properties of MMCs [4,5].
To overcome the above drawbacks and improve the structural properties of Al-based composites, a new reinforcement system with metallic elements was investigated [6]. Instead of a single metal, a multi-component system was preferred as reinforcement due to its superior mechanical, thermal, and corrosion properties. Yeh et al. [7] termed these multi-component systems high entropy alloys (HEAs). HEAs are a new class of materials designed as alloys comprising five or more principal elements, with concentrations ranging from 5% to 35% [8]. In HEAs, the high mixing entropy enables the formation of random solid-solution phases instead of intermetallic phases; therefore, HEAs exhibit superior plasticity. In addition, the many elements of different atomic radii contained in HEAs result in severe lattice distortion, which helps them possess high strength [9]. Various beneficial properties of HEA systems have been reported, such as increased strength and hardness, good corrosion and oxidation resistance, wear and fatigue resistance with good thermal stability, magnetic properties, and increased stability at elevated temperatures. In addition, the four core effects of HEAs, i.e., high entropy, sluggish diffusion, severe lattice distortion, and the cocktail effect, affect phase transformation, microstructure, and mechanical properties more significantly than in low entropy alloys [10]. Compared with conventional ceramic reinforcements, the CoCrFeMnNi HEA particles used as reinforcement in the present work were expected to exhibit better mechanical strength without losing ductility.
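The role of mixing entropy mentioned above can be made concrete with the ideal-solution formula ΔS_mix = −R Σ xᵢ ln xᵢ, which reduces to R ln(n) for an equiatomic n-component alloy. The short sketch below evaluates it for the five-component equiatomic CoCrFeMnNi system.

```python
# Ideal configurational entropy of mixing for an equiatomic n-component
# alloy: delta_S = -R * sum(x_i * ln x_i) = R * ln(n) for x_i = 1/n.
# For the equiatomic CoCrFeMnNi (Cantor) alloy, n = 5.
import math

R = 8.314                 # gas constant, J/(mol K)
n = 5                     # number of principal elements
delta_S = R * math.log(n)
print(round(delta_S, 2))  # -> 13.38 J/(mol K), i.e. about 1.61 R
```

A value of about 1.61 R comfortably exceeds the 1.5 R threshold commonly used to classify an alloy as "high entropy", which is why random solid solutions are favored over intermetallics in such systems.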
Preliminary work has been carried out considering HEAs as a reinforcement phase in MMCs. Wang et al. [11] used FeNiCrCoAl3 particles in 2024 Al matrix composites and obtained a compressive strength of 710 MPa. Liu et al. [12] reported the influence of a transition layer in AlCoCrFeNi HEA particle-reinforced Al matrix composites; using transmission electron microscopy, the authors observed that the interface layer had a face-centered cubic (fcc) structure. In another study, an AZ91D matrix composite coating with AlCoCrCuFeNi HEA as reinforcement was prepared through laser surface forming [13]; the HEA reacted with the magnesium alloy matrix and formed a new phase, which resulted in a significant improvement in the wear resistance of the coating. Chen et al. [14] used the powder metallurgy route to fabricate a copper matrix composite with AlCoNiCrFe as the reinforcement phase and found that the yield strength of the composites was enhanced by approximately 160% compared with that of the Cu matrix, while the elongation increased by approximately 15%. Ananiadis et al. [15] studied the microstructure and corrosion performance of Al matrix composites reinforced with refractory (MoTaNbVW) HEA particulates; the composite was fabricated through the powder metallurgy route, and increasing the volume of the reinforcing phase enhanced the composite's hardness. Similarly, numerous reports have been published on composites in which an HEA is used [16-20]. The majority of these reports adopt powder metallurgy as the preferred production route, since reinforcing particles may be easier to introduce by powder metallurgy than by casting [21-23]. Some reports also involve additive manufacturing as a next-generation process for fabricating such composites [24,25]. However, casting is one of the most cost-effective methods compared with the novel additive manufacturing routes.
Hence, the present work used the cost-effective casting method to process and fabricate HEA-reinforced metal matrix composites. Moreover, HEA particles are reported to show sluggish diffusion; hence, they will neither readily absorb moisture nor oxidize easily, which avoids any processing of the reinforcement powders before their introduction into the melt. In addition, since the HEA particles are structurally stable, they are not expected to react with the matrix when introduced as reinforcement, and since all the elements in the HEA are metallic in nature, they offer better wettability than ceramic reinforcements. Considering all these advantages, CoCrFeMnNi HEA particles were used as reinforcement in the present work, and LM25 Al alloy was taken as the matrix. LM25 is a cast aluminum alloy with superior mechanical strength and improved corrosion resistance; it is generally used in packaging, chemical, marine, and mobility engineering, for wheels, cylinder blocks and heads, and other engine and body castings. The current work focuses on the fabrication of Al-MMCs by a stir casting process with a bottom pouring system. The various properties of the resultant composites were studied in detail.
Materials and Methods
The metal matrix composites were fabricated and synthesized in a bottom pouring stir casting unit. Crystalline CoCrFeMnNi HEA particles were used as the reinforcement phase, and commercially pure Al and LM25 alloy were used as the matrix materials. The matrix material (Al/LM25) was melted in a mechanized induction-type stir casting furnace (Figure 1) with a tapered bottom and a bottom pouring facility (Indfurr, Chennai, India) at 800 °C. To achieve the desired dispersion in the matrix, the HEA reinforcement particles were then introduced into the melt, and the melt was stirred at 400 rpm for 5-10 min. The parameters were chosen partly on the basis of the literature and partly on prior casting experience, especially in the fabrication of composites. Once optimum stirring was realized and the reinforcement particles were dispersed uniformly in the melt, the nozzle at the bottom of the furnace was opened to allow the melt to fill the rectangular die of 120 mm × 150 mm × 20 mm placed at the bottom of the furnace. Sufficient time was allowed for the composite material to solidify and cool down before sampling for metallographic preparation.
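As a rough illustration of the charge sizes implied by the die dimensions above, the sketch below estimates the matrix melt mass and the HEA powder addition for a 5 wt.% composite. The LM25 density used (~2.68 g/cm³) is a typical handbook value assumed for illustration, and melt losses and riser volumes are ignored.

```python
# Rough charge calculation for the casting described above: mass of
# matrix melt needed to fill the 120 x 150 x 20 mm die, and the HEA
# powder addition for a 5 wt.% composite. The LM25 density is an
# assumed handbook value; losses during melting/pouring are ignored.
die_volume_cm3 = 12.0 * 15.0 * 2.0           # 360 cm3
lm25_density = 2.68                           # g/cm3, assumed
melt_mass_g = die_volume_cm3 * lm25_density   # ~965 g of matrix alloy
hea_mass_g = 0.05 * melt_mass_g / 0.95        # powder so HEA = 5 wt.% of total
print(round(melt_mass_g), round(hea_mass_g, 1))  # -> 965 50.8
```

The division by 0.95 accounts for the fact that the 5 wt.% target refers to the total composite mass (matrix plus reinforcement), not the matrix mass alone.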
Representative composite pieces were cut from the cast bar, subjected to standard metallographic sample preparation techniques, and finally etched with Keller's reagent. The microstructures of the composites were observed under an Olympus GX41 inverted microscope. A field-emission scanning electron microscope (FESEM, Gemini-300, ZEISS, Jena, Germany) with a combined energy-dispersive X-ray spectroscopy (EDS) feature identified elemental composition as well as surface morphology. The X-ray diffraction analysis was carried out using a Rigaku Ultima IV XRD unit (Rigaku, Stuttgart, Germany) with Cu-Kα radiation (λ = 1.5406 Å), operated at 30 mA and 40 kV, recording diffraction patterns at a scan rate of 0.01° over a 2θ range of 20° to 100° to confirm the crystal structure and phases produced. In addition, the specimens underwent a microhardness test at a load of 0.2 kg and a dwell time of 5 s using a Shimadzu HMV-G20 microhardness tester; the values indicated are an average of the 6 to 8 readings taken per specimen.
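As a quick sanity check on the diffraction settings above, the sketch below counts the points recorded over the 2θ sweep and applies Bragg's law to the stated Cu-Kα wavelength. The example peak position (2θ ≈ 38.47°, near the Al (111) reflection) is an illustrative assumption, not a measured value from this work.

```python
# Quick checks on the XRD settings described above: number of points in
# a 20-100 degree 2-theta sweep at a 0.01-degree step, and Bragg's law
# (d = lambda / (2 sin theta)) for an assumed Al (111) peak position.
import math

two_theta_min, two_theta_max, step = 20.0, 100.0, 0.01
n_points = round((two_theta_max - two_theta_min) / step) + 1  # 8001 points

wavelength = 1.5406   # Cu-K-alpha, Angstrom
two_theta = 38.47     # assumed Al (111) peak position, degrees
d = wavelength / (2 * math.sin(math.radians(two_theta / 2)))
print(n_points, round(d, 2))  # -> 8001 points; d ~ 2.34 Angstrom
```

The recovered d-spacing of about 2.34 Å matches the Al (111) interplanar spacing (a/√3 for a ≈ 4.05 Å), which is how measured peak positions are mapped to phases in such an analysis.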
Tensile tests were carried out on micro tensile dog-bone specimens at a strain rate of 5 × 10−4 s−1 according to the ASTM-E08-16 standard using a Tinius Olsen tabletop tensile testing machine (model H25KL, TINIUS OLSEN, Redhill, UK). The fractography of the fractured tensile specimens was evaluated by scanning electron microscopy (SEM) using a TESCAN microscope (Oxford). Specimens were prepared for transmission electron microscopy (TEM) as 0.1 mm foils through a repeated series of disk and emery polishing. From these, a 3 mm piece was cut with a disk punch, placed on a grinder wheel, and reduced in thickness with a disk grinder. The resulting thin foil was placed in a Gatan precision ion polishing system (PIPS) comprising double Penning ion guns with a beam diameter of 350 µm. The beam diameter was adjusted with argon gas, and ion milling was carried out until good electron transparency was obtained. Vacuum pressure was maintained on the order of 8-9 × 10−6 Torr. The TEM images of the specimens were examined with a JEOL JEM 2100 instrument (JEOL, Freising, Germany) at an accelerating voltage of 20 kV and very high magnification.

Evolution of Microstructure

Figure 2 shows the distribution of reinforcing particles within the Al alloy matrix for the various composite systems. It can be observed that the actual reinforcement content is in good agreement with the nominal composition, i.e., the higher the nominal composition, the greater the actual reinforcing particle content. Concerning the particle distribution, it can be observed from Figure 2 that the distribution is homogeneous for the different composite systems. In the Al 6063-5 wt.% HEA (Figure 2a), the HEA particles were distributed uniformly throughout the Al matrix. In the LM25-5 wt.% HEA (Figure 2b), HEA particles and flake-like silicon particles were distributed nearly uniformly in the Al matrix. In the LM25-10 wt.% HEA (Figure 2c), HEA particles and flake-like silicon particles (silicon being one of the major elements present in the LM25 alloy used as the matrix material in the present work) were distributed nearly uniformly in the Al matrix, although the silicon particle percentage slightly decreased compared with the LM25-5 wt.% HEA. For all three composite systems, the various particles distributed in the matrix are indicated with arrows.
In the LM25-10 wt.% HEA (Figure 2c), HEA particles and flake-like silicon particles (silicon being one of the major elements present in LM25 alloy, which was used as the matrix material in the present work) were distributed nearly uniformly in the Al matrix. Still, the silicon particles' percentage slightly decreased when compared with the LM25-5 wt.% HEA. For all the three composite systems, the various particles distributed in the matrix are indicated with arrows. Figure 3 shows the scanning electron microscopy images of Al-based alloy composites with varying HEA content. The secondary electron mode and the backscattered electron mode images (EBSD) are shown. Regarding the distribution of reinforcement in the matrix, it was observed that the HEA particles were nearly homogeneously distributed in all three different composite systems. Figure 3a,b correspond to the Al 6063-5 wt.% HEA; in this composite, HEA particles with an irregular polygonal shape were distributed in the Al matrix, identified and indicated in Figure 3. Figure 3c,d correspond to the LM25-5 wt.% HEA; in this composite, the flake-like silicon particles and the reinforcement phase of HEA particles were nearly equally distributed in the Al matrix due to their similar weight percentage (5 to 6%). Figure 3e,f correspond to the LM25-10 wt.% HEA; in this composite, the flake-like silicon particles and the reinforcement phase of HEA particles were nearly uniformly distributed, with the domination of reinforcement HEA particles due to their high weight percentage (10 wt.%) when compared with the Si in the Al matrix. No significant pores were observed. At the same time, the interfacial detachment was not visible. In addition, there was no substantial evidence of a reaction between the matrix and the reinforcement particles at the interfacial areas. This observation is critical because it indicates an absence of potential brittle intermetallic phases at the interface. As reported by Lekatou et al. 
[26], the presence of intermetallic steps at the interfacial area can cause deterioration in the properties of the composite.
Evolution of Microstructure
in the Al matrix, identified and indicated in Figure 3. Figure 3c,d correspond to the LM25 5 wt.% HEA; in this composite, the flake-like silicon particles and the reinforcement phas of HEA particles were nearly equally distributed in the Al matrix due to their simila weight percentage (5 to 6%). Figures 3e and 3f correspond to the LM25-10 wt.% HEA; in this composite, the flake-like silicon particles and the reinforcement phase of HEA parti cles were nearly uniformly distributed, with the domination of reinforcement HEA parti cles due to their high weight percentage (10 wt.%) when compared with the Si in the A matrix. No significant pores were observed. At the same time, the interfacial detachmen was not visible. In addition, there was no substantial evidence of a reaction between th matrix and the reinforcement particles at the interfacial areas. This observation is critica because it indicates an absence of potential brittle intermetallic phases at the interface. A reported by Lekatou et al. [26], the presence of intermetallic steps at the interfacial are can cause deterioration in the properties of the composite. Energy dispersive spectroscopy mapping with line scanning images of Al-based HEA composites with varying weight percentages are shown in Figures 4-6. The line scan were measured in the vicinity of a bulk reinforcing particle and the matrix to reveal th elemental distribution. The reinforcing particle consisted of the elements related to th refractory HEA system used in the present work. Al, Co, Cr, Fe, Mn, and Ni elements wer homogeneously distributed in the HEA particles and retained their HEA composition even though stir casting through a bottom pouring system took place. No significant in terfacial reaction was observed along with the interface of the HEA particles and the ma trix. In EDS, the K factor's magnitude characterizes the element's content, and its calcula tion is made according to the ratio standard. 
If its value indicates less, then the intensity of the component can be eliminated. Figure 4 shows energy dispersive spectroscopy mapping with line scanning image of the Al 6063-5 wt.% HEA. From the EDS elemental mapping images, we confirmed tha Energy dispersive spectroscopy mapping with line scanning images of Al-based HEA composites with varying weight percentages are shown in Figures 4-6. The line scans were measured in the vicinity of a bulk reinforcing particle and the matrix to reveal the elemental distribution. The reinforcing particle consisted of the elements related to the refractory HEA system used in the present work. Al, Co, Cr, Fe, Mn, and Ni elements were homogeneously distributed in the HEA particles and retained their HEA composition even though stir casting through a bottom pouring system took place. No significant interfacial reaction was observed along with the interface of the HEA particles and the matrix. In EDS, the K factor's magnitude characterizes the element's content, and its calculation is made according to the ratio standard. If its value indicates less, then the intensity of the component can be eliminated. mapping with line scanning images of LM25-5 wt.% HEA. From the EDS elemental mapping images, we confirmed that elements present in the given composite consisted of approximately 95% Al-matrix (roughly 86% Al, 8% Si, and 1% Mg), a nearly equal distribution of Co, Cr, Fe, and Mn elements with a composition of approximately 1%, and a negligible amount of Ni in the selected area. Based on the line scanning images, it was observed that the distribution of Co, Cr, Fe, Mn, and Ni in the dispersion phase and the matrix was nearly identical. Figure 6 shows EDS mapping with line scanning images of the LM25-10 wt.% HEA. 
From the EDS elemental mapping images, we confirmed that the elements present in the given composite consisted of approximately 94% Al matrix (roughly 85% Al, 8% Si, and 1% Mg) with a nearly equal distribution of Cr and Fe (around 2%) and Mn and Ni (around 1%) in the selected area. Based on the line scanning image, it was observed that the distribution of Co, Cr, Fe, Mn, and Ni in the dispersion phase and the matrix was nearly identical, with higher contrast when compared to the LM25-5 wt.% HEA. Figure 7 presents the transmission electron microscopy images of the Al-based HEA composites with varying reinforcement content. As observed in the figure, no interfacial reaction was visible between the CoCrFeMnNi HEA and the matrix. In addition, it is evident from the selected area electron diffraction (SAED) patterns that no intermetallic phases formed at the CoCrFeMnNi HEA and matrix boundaries. The matrix and the reinforcement phases are identified and indicated in Figure 7 (reinforcement HEA with a rectangular shape and matrix with a circular shape). Similar observations were made by Chen et al. [8]. No reaction was observed between the Cu matrix and AlCoNiCrFe HEA reinforcement at the interface, and the HEA particles in the Cu matrix had an average size of 20 nm. Karthik et al. [27] also reported no significant diffusion between the constituent elements of the HEA particles and the Al matrix or vice versa. They mentioned that the CoCrFeNi HEA system was thermally stable and did not undergo significant grain growth. Figures 7a,b correspond to the bright-field image and its corresponding selected area electron diffraction (SAED) pattern, respectively, of the Al 6063-5 wt.% HEA composite. In this composite, it was observed that the HEA particles were distributed nearly uniformly in the Al matrix. There was no evidence for \the interfacial reaction and the formation of intermetallic compounds between the matrix and HEA reinforcement. 
Figures 7c and 7d correspond to the bright-field image and its corresponding selected area electron diffraction (SAED) pattern, respectively, of the LM25-5 wt.% HEA composite. In this composite, it was observed that the HEA particles and Si particles were distributed nearly uniformly in the Al matrix. In this case, there was also no evidence for the interfacial reaction and the formation of intermetallic compounds between the matrix and HEA reinforcement. Figures 7e and 7f correspond to the bright-field image and its corresponding selected area electron diffraction (SAED) pattern, respectively, of the LM25-10 wt.% Figure 4 shows energy dispersive spectroscopy mapping with line scanning images of the Al 6063-5 wt.% HEA. From the EDS elemental mapping images, we confirmed that the elements present in the given composite consisted of roughly 97% Al matrix, Cr, Fe, and Mn elements with a nearly equal distribution and composition around 1%, and Co and Ni elements, which were negligible in the selected area. Based on the line scanning images, it was observed that the distribution of Co, Cr, Fe, Mn, and Ni in the dispersion phase and the matrix was nearly identical. Figure 5 shows energy dispersive spectroscopy mapping with line scanning images of LM25-5 wt.% HEA. From the EDS elemental mapping images, we confirmed that elements present in the given composite consisted of approximately 95% Al-matrix (roughly 86% Al, 8% Si, and 1% Mg), a nearly equal distribution of Co, Cr, Fe, and Mn elements with a composition of approximately 1%, and a negligible amount of Ni in the selected area. Based on the line scanning images, it was observed that the distribution of Co, Cr, Fe, Mn, and Ni in the dispersion phase and the matrix was nearly identical. Figure 6 shows EDS mapping with line scanning images of the LM25-10 wt.% HEA. 
From the EDS elemental mapping images, we confirmed that the elements present in the given composite consisted of approximately 94% Al matrix (roughly 85% Al, 8% Si, and 1% Mg) with a nearly equal distribution of Cr and Fe (around 2%) and Mn and Ni (around 1%) in the selected area. Based on the line scanning image, it was observed that the distribution of Co, Cr, Fe, Mn, and Ni in the dispersion phase and the matrix was nearly identical, with higher contrast when compared to the LM25-5 wt.% HEA. Figure 7 presents the transmission electron microscopy images of the Al-based HEA composites with varying reinforcement content. As observed in the figure, no interfacial reaction was visible between the CoCrFeMnNi HEA and the matrix. In addition, it is evident from the selected area electron diffraction (SAED) patterns that no intermetallic phases formed at the CoCrFeMnNi HEA and matrix boundaries. The matrix and the reinforcement phases are identified and indicated in Figure 7 (reinforcement HEA with a rectangular shape and matrix with a circular shape). Similar observations were made by Chen et al. [8]. No reaction was observed between the Cu matrix and AlCoNiCrFe HEA reinforcement at the interface, and the HEA particles in the Cu matrix had an average size of 20 nm. Karthik et al. [27] also reported no significant diffusion between the constituent elements of the HEA particles and the Al matrix or vice versa. They mentioned that the CoCrFeNi HEA system was thermally stable and did not undergo significant grain growth. Figure 7a,b correspond to the bright-field image and its corresponding selected area electron diffraction (SAED) pattern, respectively, of the Al 6063-5 wt.% HEA composite. In this composite, it was observed that the HEA particles were distributed nearly uniformly in the Al matrix. There was no evidence for \the interfacial reaction and the formation of intermetallic compounds between the matrix and HEA reinforcement. 
Figure 7c,d correspond to the bright-field image and its corresponding selected area electron diffraction (SAED) pattern, respectively, of the LM25-5 wt.% HEA composite. In this composite, it was observed that the HEA particles and Si particles were distributed nearly uniformly in the Al matrix. In this case, there was also no evidence for the interfacial reaction and the formation of intermetallic compounds between the matrix and HEA reinforcement. Figure 7e,f correspond to the bright-field image and its corresponding selected area electron diffraction (SAED) pattern, respectively, of the LM25-10 wt.% HEA composite. In this composite, it was observed that the HEA particles and Si particles were distributed nearly uniformly in the Al matrix, with a higher HEA reinforcement content than Si particles; no interfacial reaction or formation of intermetallic compounds occurred. Yang et al. [28] studied the interface morphology between AlCoCrFeNi particles and a 5083 Al matrix. In their study, they reported that the HEA-5083 Al composite exhibited superior interfacial integrity, and neither micropores nor microcracks were formed at the interface. Similar results were observed in the present work, and there was no major diffusion observed.
Microhardness Studies
The hardness variation for the different composites is shown in Figure 8. It can be observed that with the enhancement of the reinforcement content, the hardness increased. In the case of the Al 6063-5 wt.% HEA, the hardness increased by nearly 15% compared with the microhardness of Al 6063 (54 HV0.20) reported by Najafi et al. [29]. In the case of the LM25-5 wt.% HEA, the hardness increased by approximately 41% compared with the cast LM25 alloy (55 HV0.20). In the case of the LM25-10 wt.% HEA composites, an approximately 67% increase in hardness was observed when compared with the cast LM25 alloy. Similar results were reported by Praveen Kumar et al. [30], where AA 2024 reinforced with 15% HEA showed a 62% improvement in hardness compared to monolithic AA 2024 alloy. They reported that this increase in hardness may be influenced by the reinforcement particles, the refined grain size of the Al alloy matrix, interparticle distance, bonding at the matrix-reinforcement interface, constrained dislocation movement, and higher dislocation density [31].
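The percentage increases above translate into absolute hardness values by simple arithmetic. A minimal sketch (the resulting values are derived here from the stated baselines and percentages, not taken from the paper's measurements):

```python
def hardness_after_increase(base_hv: float, pct_increase: float) -> float:
    """Return the hardness after a given percentage increase over the base alloy."""
    return base_hv * (1 + pct_increase / 100)

# Baselines stated in the text: Al 6063 = 54 HV0.20 [29], cast LM25 = 55 HV0.20
al6063_5hea = hardness_after_increase(54, 15)   # ~62.1 HV0.20
lm25_5hea = hardness_after_increase(55, 41)     # ~77.6 HV0.20
lm25_10hea = hardness_after_increase(55, 67)    # ~91.9 HV0.20
```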
X-Ray Diffraction Analysis
The X-ray diffractograms of the Al6063-5 wt.% HEA, LM25-5 wt.% HEA, and LM25-10 wt.% HEA composite samples are represented in Figure 9. It can be observed that in all cases, two different phases can be identified: an FCC phase with intense high peaks corresponding to the Al matrix and a BCC phase of significantly lower peak intensities corresponding to the HEA reinforcing particles. Compared to the LM25-5 wt.% HEA sample, the peak intensities of the BCC phase are higher in the LM25-10 wt.% HEA sample, corroborating the presence of higher amounts of HEA reinforcement particles. As reported elsewhere [32], the crystallite size and microstrain were calculated using the Williamson-Hall (WH) technique of X-ray diffraction line profile analysis (XRDLPA). The dislocation density corresponding to crystallite size and microstrain values was calculated [32], and the corresponding values are depicted in Table 1. Table 1 shows that the crystallite size for the Al 6063-5 wt.% HEA was 36 nm, LM25-5 wt.% HEA was 37 nm, and LM25-10 wt.% HEA was 35 nm. The dislocation density and lattice strain values increased with an increase in the reinforcement content. Similar behavior was reported by Kumar et al. [33] in their study on Al 7075-Al2O3 MMCs, where the authors concluded that the hardness of the composites was enhanced with increased filler content.
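As a concrete illustration of the Williamson-Hall analysis cited from [32], the sketch below fits β·cosθ against 4·sinθ to separate crystallite size and microstrain, and estimates the dislocation density as δ = 1/D², a common simplified form; the exact expressions used in [32] may differ, and the default wavelength (Cu Kα) and any peak data fed to this function are assumptions, not the study's measured profiles:

```python
import numpy as np

def williamson_hall(two_theta_deg, beta_rad, wavelength_nm=0.15406, K=0.9):
    """Williamson-Hall analysis: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).

    two_theta_deg : peak positions (2-theta, degrees)
    beta_rad      : instrument-corrected peak breadths (radians)
    Returns (crystallite size D in nm, microstrain eps, dislocation density in nm^-2).
    """
    theta = np.radians(np.asarray(two_theta_deg) / 2)
    y = np.asarray(beta_rad) * np.cos(theta)   # beta * cos(theta)
    x = 4 * np.sin(theta)                      # 4 * sin(theta)
    slope, intercept = np.polyfit(x, y, 1)     # linear W-H fit
    D = K * wavelength_nm / intercept          # crystallite size (nm)
    eps = slope                                # microstrain (dimensionless)
    delta = 1.0 / D**2                         # dislocation density estimate
    return D, eps, delta
```

On exact synthetic data generated for a 36 nm crystallite size, the fit recovers the input values.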
Figure 9. The X-ray diffraction patterns of the Al-based metal matrix composites with various HEA contents. However, the elongation decreased to roughly 3.90%. When the reinforcement content was increased to 10 wt.% in the LM25 alloy, the UTS reached approximately 210 MPa, and the elongation was confined to around 3.30%. Similar results were reported in a study by Li et al. [34], in which the mechanical properties of Al-based MMCs with different contents of AlFeNiCrCoTi HEA (4, 5, and 6 wt.%) were investigated. The authors found that intermetallic compound strengthening, solid solution strengthening, and grain refinement enhanced the strength of the Al-based MMCs. When the concentration of HEA was 4 wt.%, the UTS was 142 MPa, and it increased to 170 MPa for 5 wt.% HEA. Figure 10b shows the variation of the mechanical properties versus sample designation. From Figure 10b, it was observed that YS and UTS increased and the elongation decreased with increasing HEA reinforcement. The values of the various properties are indicated in Figure 10b, and the corresponding mechanical property values are depicted in Table 2. In their work, Karthik et al. [27] reported that the high strength and hardness of the HEA reinforcement result in better tensile and compressive properties.
A similar result was obtained in the present work, where in the case of the Al 6063-5 wt.% HEA composite, the YS and UTS increased by approximately 86% and 51%, respectively, and the percent elongation decreased by 50% compared with the as-cast LM25 alloy. Schuh et al. [35] attained similar results through arc melting and drop-casting processes for the fabrication of an equiatomic CoCrFeMnNi HEA. The casting underwent high-pressure torsion, which induced significant grain refinement in the coarse-grained material, resulting in a grain size of approximately 50 nm; as a result, the strength was enhanced to 1.95 GPa. In the case of the LM25-5 wt.% HEA composite, YS and UTS increased by approximately 74% and 50%, respectively, and the percent elongation decreased by approximately 51%. For the LM25-10 wt.% HEA composite, YS and UTS increased by approximately 84% and 61%, respectively, and the percent elongation decreased by around 60%. Similar results were observed in a study by Wang et al.
[36], where high-quality aluminum-matrix CuZrNiAlTiW high-entropy alloy (HEA) composites were fabricated by mechanical alloying and spark plasma sintering (SPS). The HEA powders in the as-milled condition conformed to a single body-centered cubic (BCC) solid-solution phase, and NiAl-rich B2 and WAl12 phases formed in the sintered composites. This was attributed to the high concentration gradient of Al between the matrix and the HEA reinforcement.
Mechanical Properties
The spark plasma sintered Al bulk exhibited lower microhardness and strength compared with the Al-HEA composite. Of the three fabricated Al metal matrix composites presented here, the LM25-10 wt.% HEA composite exhibited superior properties due to its higher HEA reinforcement content compared to the Al 6063-5 wt.% HEA and LM25-5 wt.% HEA. Its YS was slightly reduced; however, its UTS increased by nearly 6% and its percent elongation decreased by approximately 19% in comparison with the Al 6063-5 wt.% HEA. Furthermore, its YS and UTS values increased by nearly 6% and 7%, respectively, and its percent elongation decreased by roughly 15% compared to the LM25-5 wt.% HEA composite. Yang et al. [28] reported the fabrication of a 5083 Al matrix composite reinforced by submerged friction stir processing (SFSP). When compared to the base metal, the submerged friction stir processed HEA-5083Al composites exhibited yield stress (YS) and ultimate tensile strength (UTS) enhanced by 25% and 32%, respectively, and good ductility (18.9%). In the present work, the obtained results were similar, with a slight enhancement in the properties. Figure 11 represents the scanning electron microscopy images of the fracture surfaces. Figure 11a shows the fracture image of the Al 6063-5 wt.% HEA composite. The primary fracture mode in the specimens was ductile, with reduced cavitation; in addition, a reduction of dimples in the fracture and slight cleavage planes (indicated by arrows) were also observed. Figure 11b shows the fracture image of the LM25-5 wt.% HEA composite; it can be observed that with the addition of HEA, a cleavage plane appeared on the fracture surface along with the presence of dimples. Figure 11c shows the fracture image of the LM25-10 wt.% HEA composite; as the HEA content was increased from 5 wt.% to 10 wt.%, the cleavage plane appeared on the fracture surface and the number of dimples was significantly reduced. Similar results were reported in a study by Li et al.
[34], in which they found that the fracture surface of pure Al contained many uniform dimples. With the inclusion of HEA particles in the Al matrix, the dimple size was reduced. For the composite containing 6 wt.% HEA, the fractography revealed blocky intermetallic compounds, and cracks were generated at the tip of the intermetallic compound due to high-stress concentration [34].
Conclusions
• Al 6063 and LM25 matrix composites reinforced with CoCrFeMnNi HEA particles were fabricated through stir casting with a bottom pouring system.
• Optical and scanning electron microscopy images revealed that the reinforcement particles were distributed homogeneously.
• From XRD phase analysis, two different phases were observed. The peaks with higher intensities were identified as the FCC phase and correspond to Al. The peaks with significantly lower intensities correspond to the reinforced HEA particles and have a BCC structure.
• Some mechanical properties, such as microhardness, yield strength, and ultimate tensile strength, were increased with increased HEA reinforcement content. However, ductility was decreased with an increase in HEA reinforcement content.
• As HEA content was increased, the fracture surface revealed a cleavage plane and a significant reduction in the number of dimples, corroborating the mechanical test results.
Informed Consent Statement: Not applicable.
Data Availability Statement:
This data is a part of an ongoing study, and the data will be made available on reasonable request.
(Re-)Design of a Demonstration Model for a Flexible and Decentralized Cyber-Physical Production System (CPPS)
Cyber-physical production systems (CPPS) enable completely new possibilities in the factory of the future through the connectivity between the digital world and the physical production system. It has so far been difficult to transfer these advantages and concepts to students as well as professionals in a clear and practice-oriented manner. The Fraunhofer Italia team has built a demonstration model of a flexible and decentralized CPPS for showcase purposes. As part of an improvement of the existing system, Axiomatic Design (AD) was applied as the scientific design theory. Starting from the identification of the Customer Attributes, Functional Requirements were derived, and the Design Parameters for the new design were defined by means of the AD decomposition and mapping process. In this paper, the application of AD for product improvement is described step by step based on the example of the redesign of the CPPS demonstration model.
Introduction
Cyber-Physical Systems (CPS) and other Industry 4.0 technologies are the enablers for new business models, which have the potential to be disruptive [1]. The term "Industry 4.0" is held up in research and industry as a bright vision to revolutionize production management and the factory of the future. After the mechanization, electrification and computerization of industrial production, we are now at the beginning of a new epoch in production, where web technology, intelligent automation and digitalization support the development of CPS [2].
Many companies in various industries have reorganized their production in the recent past, following the principles of Lean Production [3,4] or even taking advantage of novel production strategies such as Agile Manufacturing [5] and Mass Customization [6], and thereby increasing flexibility and achieving significant progress in productivity and in readiness for delivery [7].
New Industry 4.0 technologies and CPS enable completely new opportunities for the realization of highly efficient and responsive production systems for the production of individual products on demand. Industry 4.0 technologies facilitate the fabrication of customized products and thus, the concept of mass customization [8].
The majority of enterprises are still quite sceptical regarding the vision of Industry 4.0. In addition, large enterprises tend to feel better prepared than small enterprises. In other words, SMEs still show deficits compared to large enterprises [9]. Thus, Industry 4.0 represents a special challenge for businesses, but also for education. One possibility to train employees as well as students in Industry 4.0 technologies is learning factories. Learning factories can make a substantial contribution toward the understanding of Industry 4.0. Workplace-related scenarios can be mapped, providing practical learning. This process enables participants to transfer learned knowledge directly to their own workplace [10]. In recent years, a large number of learning factories have been created for the purpose of knowledge transfer [10,11,12]. However, it is not always possible to train and show these emerging concepts directly in a real factory environment. Therefore, demonstration models are a popular alternative and complementary solution, where the concepts and technologies can be demonstrated and explained in a miniaturized way to employees and students.
The paper shows an Axiomatic Design (AD) based redesign of the demonstration model of a flexible and decentralized Cyber-Physical Production System (CPPS). The demonstration model created by Fraunhofer Italia is intended to facilitate the knowledge transfer of Industry 4.0 concepts to project partners, industrial firms as well as students from schools and universities. The paper is structured as follows: after this introduction, the authors summarize the theoretical background on CPPS and demonstration models. Afterwards, the current design of the demonstration model is described in detail in section 3. In section 4, the authors show an AD-based approach starting with the collection of the Customer Attributes (CAs) and defining the Functional Requirements (FRs). After this, they derive the Design Parameters (DPs) of the new model through an AD decomposition and mapping process. In section 5, they illustrate the redesigned concept for the demonstration model and close the paper with a brief summary and an outlook for future research activities.
Theoretical background
The theory section presents a brief overview and the state of the art regarding CPPS as well as the purpose and use of demonstration models. The vision of Industry 4.0 tries to respond to today's challenges, such as [14]:
• Demand for improved efficiency, especially in developed countries, in terms of productivity and energy efficiency,
• Trend towards individualisation and small batch sizes,
• Need to create horizontal value networks extending across businesses.
In order to achieve these goals, Industry 4.0 foresees four pillars [17]: interconnection of humans and machines via the Internet, information transparency between the physical world and its virtual model, autonomous decisions at the lowest possible level (decentralised), and finally, technical assistance in decision making and problem solving to help humans cope with the ever more complex CPPS. Across all pillars, standards, security, safety and human-machine interfaces play a significant role.
The term Industry 4.0 is often being used by suppliers and governmental bodies, and on technical components that are sold by numerous suppliers. However, its practical use in factories is impeded by shortcomings in, for example, the lack of common standards for both horizontal and vertical integration, models for controlling complex structures, and qualified personnel [18]. Consequently, companies, especially small and medium sized enterprises (SMEs), are often forced to postpone the comprehensive adoption of the concepts of Industry 4.0.
Industry 4.0 Labs and Demonstration Models
Demonstration in a safe environment ("sandboxing") overcomes part of the obstacles in implementing Industry 4.0 concepts in production, because it does not affect the operational production. This encourages the experimental use of advanced methods for controlling production and innovative technologies, and creates a breeding ground of regional innovation by training the involved people. If such an infrastructure is located in an independent institution like a Fraunhofer Institute, open to all interested companies, its effectiveness is maximised and a positive impact on the regional companies can be achieved.
These benefits were already recognised particularly in Germany, where application examples, laboratories and test centres are united through the network "Plattform Industrie 4.0" [19]. This platform counts almost 284 test examples for Industry 4.0. Such labs and application centres benefit from a favourable ecosystem, e.g., universities with relevant research focus, or suppliers of technologies. For example, in Kaiserslautern, the German Research Centre for Artificial Intelligence (DFKI) partnered up with the local technical university and well-known companies such as Siemens, Bosch, SAP and several others to create a supplier-independent Smart Factory lab for innovative technologies, control architectures and components, as well as consulting services for interested companies [20]. The initiative fostered a successful ecosystem, and soon several other companies participated in the initiative.
Demonstration is an important step towards the implementation of Industry 4.0 concepts in companies [20], and smart factory labs are an obvious step to make. The major R&D challenges for demonstration models are summarized in [20]. In order to enhance its usefulness, a demonstration model needs the ability to demonstrate the potential of Industry 4.0 to the interested audience and to support research in one or more of the fields listed above.
Current design of the Demonstration Model
The current model was built by a team of Fraunhofer Italia researchers and students to demonstrate the potentials of a cyber-physical production system in the factory of the future. The autonomous vehicles are equipped with analogue sensors, an IR distance sensor for collision avoidance, an apple-eject mechanism, and a control algorithm (line follower) with routing capability and the ability to detect and take crossings. In addition, a user interface was developed to receive orders from a computer terminal. Visitors may enter their name and a personal message to be engraved on the apple and confirm the order through an individual RFID card or tag. Before the order is executed, the message needs approval by a human operator. When ready, a screen informs the visitor to pick up the apple. Visitors need to present their RFID card in order to start the delivery of the apple.
Re-design of the CPPS Demonstration Model using Axiomatic Design (AD)
After the first successes of the demonstration model at the Long Night of Research, it should now be presented to a larger audience; the plan is to disassemble the model, transport it in an efficient way, and build it up on-site at schools, SMEs, universities or other facilities or events. For this purpose, the design is revised: further functions should be integrated and, at the same time, the existing design is analysed critically regarding design flaws and possible improvements. To this end, a workshop was held in the presence of an AD expert.
AD was developed by Nam P. Suh in the mid-1970s in the pursuit of developing a scientific, generalized, codified, and systematic procedure for design. The scientific theory gets its name from two axioms in AD that have to be respected. The first is the Independence Axiom: maintain the independence of the functional elements, i.e., avoid coupling in the system (e.g., avoiding dependencies between the DPs and other FRs). The second is the Information Axiom: minimize the information content, i.e., select the solution with the least information content, which has the highest probability of success [21]. In order to apply these axioms, parallel functional and physical hierarchies are constructed, the latter containing the physical design solutions. The impact of AD is that the designer learns how to construct large design hierarchies quickly that are more structured, thus freeing more time for mastering applications [22].
In the initial workshop, requirements and so-called CAs were collected. Based on this input, FRs and Cs were defined, and Design Parameters for a redesign were derived in an AD top-down decomposition and mapping process.
Workshop to define Customer Attributes (CAs)
In the workshop, the research team collected the requirements and needs and categorized them in the following groups [23]:
a) Constraints (Cs) are usually hard limits or values (minimum, maximum, between).
b) Functional Requirements (FRs) help the designer in the determination of the sub-level requirements and related design solutions. They should be independent from each other to comply with axiom one, reduce the complexity of the system design, and characterize the functional needs of the artefact.
c) Non-Functional Requirements (non-FRs) focus on "how" the artefact should be (usually "should be" together with an adjective) and can influence functional requirements.
The following CAs could be identified (see Table 1). The design matrix on the first level is decoupled and shows the dependencies between the solutions (DPs) and the functional requirements (FRs):

{FR1}   [X 0 0 0 0] {DP1}
{FR2}   [X X 0 0 0] {DP2}
{FR3} = [0 0 X 0 0] {DP3}     (1)
{FR4}   [X 0 X X 0] {DP4}
{FR5}   [0 0 0 X X] {DP5}

DP1 also has influence on FR2 and FR4, and DP3 has influence on FR4. The same holds for DP4 and FR5. These off-diagonal interactions, showing a coupling of DPs and other FRs, are explained in more detail in Section 4.3.6 on the different levels. Figure 5 shows the FR-DP tree of the highest hierarchical levels.
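The (de)coupling check behind the Independence Axiom can be automated. The sketch below classifies a design matrix as uncoupled (diagonal), decoupled (lower-triangular in the given FR/DP ordering), or coupled, and applies it to the first-level matrix described above; the X-pattern follows the stated off-diagonal interactions, while the 0/1 encoding is ours:

```python
import numpy as np

def classify_design_matrix(A):
    """Classify an Axiomatic Design matrix (rows = FRs, columns = DPs):
    'uncoupled' if diagonal, 'decoupled' if lower-triangular in the
    given ordering, 'coupled' otherwise."""
    A = np.asarray(A, dtype=bool)
    off_diagonal = A & ~np.eye(len(A), dtype=bool)
    if not off_diagonal.any():
        return "uncoupled"
    if not np.triu(A, k=1).any():
        return "decoupled"
    return "coupled"

# First-level matrix: diagonal plus DP1->FR2, DP1->FR4, DP3->FR4, DP4->FR5
A1 = [[1, 0, 0, 0, 0],
      [1, 1, 0, 0, 0],
      [0, 0, 1, 0, 0],
      [1, 0, 1, 1, 0],
      [0, 0, 0, 1, 1]]
print(classify_design_matrix(A1))  # -> decoupled
```

Note that a stricter classifier would also try row/column permutations before declaring a matrix coupled; here the ordering is taken as given.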
Top-Down decomposition and mapping process
The decomposition process of top-level FRs and DPs aims to transform the abstract requirements into more concrete parameters that are close to the daily practice and therefore relevant for implementation.The FR-DP pairs on the highest hierarchical level represented in Figure 5 are a starting point for the top-down decomposition and mapping process in AD.The decomposition is performed separately for each of the FR-DP pairs shown in Figure 5 to obtain a better understanding of the process.
FR1-DP1 - Industrial Robotics
Industrial robotics can be integrated using a lightweight robot for picking apples from the container and loading them on the vehicles. Starting from FR1, further FRs and DPs of the successive hierarchical level can be defined as follows:
FR1.1: Localize and identify the apple for flexible feeding.
FR1.2: Grasping of sensitive products.
DP1.1: Lightweight robot combined with a vision system for bin picking of the apple.
DP1.2: Flexible gripper for complex and sensitive products.
The design matrix shows an uncoupled design:

{FR1.1} = [X 0] {DP1.1}
{FR1.2}   [0 X] {DP1.2}     (2)

For flexible feeding, the research team can use existing equipment. A mobile station with a mounted UR3 lightweight robot combined with a camera system allows flexible feeding without additional investments. For grasping the apples, a flexible and sensitive gripper is required to avoid damage to the product.
FR2-DP2 - Scalability
Scalability is a major requirement of modern production systems. While in the current demonstration model the number of vehicles is fixed, a buffer should be created in the redesigned demonstration model. All other stations (laser engraving and the robotized loading station) can be scaled in their performance sufficiently rapidly.
Vehicles not needed in periods with low demand can be parked in the buffer to reduce energy consumption in the system, while they are called automatically when demand is rising. FR2 can be decomposed further into sub-level FRs and DPs, and the design matrix shows an uncoupled design:

{FR2.1} = [X 0] {DP2.1}
{FR2.2}   [0 X] {DP2.2}     (3)

To realize the buffer line, some more NFC pads have to be made for integration in the demonstration model. In addition, the dimensions (length) of the buffer line shall be defined according to the maximum number of vehicles, in order to guarantee the expected performance of the model during visitor presentations.
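Dimensioning the buffer line according to the maximum number of vehicles, as required above, reduces to simple geometry. The sketch below is a hypothetical calculation: the vehicle length and safety gap are placeholder values, not measurements from the model:

```python
def buffer_line_length(max_vehicles: int, vehicle_len_mm: float, gap_mm: float) -> float:
    """Minimum buffer-line length so that max_vehicles can park in a row,
    with a safety gap between consecutive vehicles."""
    if max_vehicles < 1:
        return 0.0
    return max_vehicles * vehicle_len_mm + (max_vehicles - 1) * gap_mm

# Hypothetical example: four parked vehicles of 150 mm with 50 mm gaps
print(buffer_line_length(4, 150.0, 50.0))  # -> 750.0
```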
FR3-DP3 - Interaction User-System
To increase the attention and know-how transfer during visitor presentations, the model should allow for interaction between the visitor/user and the CPPS. In the current demonstration model, visitors may create an individual order for writing an individual text on an apple by using a desktop station or their smartphone. Any order requires approval by a supervisor (in order to avoid inappropriate texts). To avoid injuries and malfunctions of the production equipment, the visitor cannot touch the vehicles or any other stations, and the finished apple is ejected to a withdrawal tray, where the visitor may grasp it without interfering with the vehicle itself. In the current system, a quality check is missing for the simulation of a complete production process. Further, in addition to the visualization on a screen, the result of the quality check as well as the availability of the apple at the delivery station should be sent to the visitor via app. Thus, these new functions should be integrated in the redesigned demonstration model. FR3 can be decomposed into the following lower level FRs and DPs. The design matrix shows a decoupled design:

{FR3.1}   [X 0 0 0 0] {DP3.1}
{FR3.2}   [0 X 0 0 0] {DP3.2}
{FR3.3} = [0 X X 0 0] {DP3.3}     (4)
{FR3.4}   [0 0 0 X 0] {DP3.4}
{FR3.5}   [0 0 0 X X] {DP3.5}

In addition to the diagonal interactions, DP3.2 also influences FR3.3. This influence occurs because remote order creation via smartphone in DP3.2 allows visitors to write a text on the apple. To avoid non-suitable texts during demonstrations, the text needs to be checked manually for its compliance (FR3.3). Furthermore, DP3.4 also shows an influence on FR3.5. The result of the quality check at the laser engraving station (DP3.4) determines if information can be transmitted and visualized to the user/visitor (FR3.5).
While DP3.1 to DP3.3 were already part of the original model, the other two DPs shall be implemented in the redesigned demonstration model.
FR4-DP4 - Decentralized Control
The decentralized control as well as the traceability of mass customized products in the demonstration model is currently solved by the use of intelligent (NFC technology) and autonomous vehicles. FR4 can be decomposed into the following lower level FRs and DPs. The design matrix shows an uncoupled design:

{FR4.1} = [X 0] {DP4.1}
{FR4.2}   [0 X] {DP4.2}     (5)

The choice for NFC is motivated by the fact that it provides superior robustness against the electromagnetic disturbances typical for an industrial production environment, and by its well-defined range of operation, which allows its use not only for communication purposes but also for unambiguous position detection. The same holds for the modular vehicles built with standard components; hence, there are no design changes in this FR-DP pair.
FR5-DP5 - User Experience
The demonstration model should allow the visitor to follow the steps in the production process from the point of view of the product. In the current demonstration model this function was not integrated. There is no need to decompose this FR-DP pair any further, as DP 5 can be implemented through a standard camera mounted on the vehicle. The livestream of the camera shall be transmitted to the information screen and allows visitors to follow the production process from loading the apple, rotating the apple into the right position, laser engraving, and ejection at the delivery station.
Overall design matrix and summary
The decomposition and mapping process helped the research team to better structure the requirements and to systematically derive the physical design solutions (DPs) without increasing the complexity of the design. In developing the final design concept, the constraints identified in section 4.2 shall be respected. Figure 6 summarizes the overall design matrix at the first and second level in the Axiomatic Design software Acclaro DFSS.
As shown in the decoupled first-level design matrix in equation (1), some DPs influence other FRs. The overall design matrix in Figure 6 explains this influence at the lower levels. The maximum performance of the selected industrial lightweight robot in DP 1.1 also determines the maximum length of the buffer line for scaling up the capacity of the system (FR 2.2). Further, DP 1.1 influences FR 4.1, as the industrial robot needs information on the position of the vehicles in order to pick the apple and place it on the vehicles. DP 3.5 also affects FR 4.1, as the status notification at the delivery station needs position data of the vehicles. The last off-diagonal interaction is between DP 4.1 as well as DP 4.2 and FR 5. Both DPs (NFC as communication technology and the use of autonomously navigating vehicles) influence the possibilities to visualize the production from the point of view of the product. The demo model is built up in a modular way, costs less than 20 k€ and occupies less than 6 m². Furthermore, the n-FRs are fulfilled: with bin picking and final quality control, a complete production process of a local product is simulated, and the demo model will be realized with the support of students at Fraunhofer Italia.
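The Independence Axiom reasoning above can be made concrete with a small sketch. The following hypothetical Python snippet classifies a design matrix (rows = FRs, columns = DPs) as uncoupled, decoupled, or coupled in the given FR/DP ordering; the example matrix is not equation (1) verbatim but encodes only the first-level interactions named in this section (DP1→FR2, DP1→FR4, DP3→FR4, DP4→FR5) on top of the diagonal.

```python
import numpy as np

def classify_design_matrix(A):
    """Classify an Axiomatic Design matrix (rows = FRs, cols = DPs).

    uncoupled: only diagonal entries are non-zero
    decoupled: triangular in the given FR/DP ordering
    coupled:   anything else
    """
    A = np.asarray(A, dtype=bool)
    if not (A ^ np.diag(np.diag(A))).any():
        return "uncoupled"
    if not np.triu(A, k=1).any() or not np.tril(A, k=-1).any():
        return "decoupled"
    return "coupled"

# Assumed first-level matrix: X on the diagonal plus the off-diagonal
# influences described in the text (DP1->FR2, DP1->FR4, DP3->FR4, DP4->FR5).
A = [[1, 0, 0, 0, 0],
     [1, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1]]
print(classify_design_matrix(A))  # prints: decoupled
```

Since the matrix is lower triangular, the FRs can still be satisfied independently if the DPs are fixed in the right order, which is exactly the decoupled-design argument made above.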
Concept of the redesigned and improved CPPS Demonstration Model
The analysis in section 4 revealed a few weaknesses of the original demonstrator design, which the redesign addresses:
• Operation of the demonstrator required at least two people: one person for the actual operation in the reserved area and one for interacting with the visitors. This aspect is improved by inserting a bin-picking robotic arm into the layout, in line with DP 1.1.
• The manual placement of the apples did not require the rotation mechanism if the operator oriented the apple correctly when loading the vehicle. The bin-picking gripper (result of DP 1.2) likewise provides this functionality, and consequently a holding cup substitutes the now obsolete rotation mechanism on the vehicles.
• Vehicles could not be automatically added to the production process for rapid scalability, i.e., immediate, decentralized adjustment of the production rate; this had to occur manually and relied on the operator recognizing the current demand. As suggested by DP2, a dedicated buffer line improves this situation. The required areas are obtained by re-designing the available board area. In the back, the lateral boards now extend to the full length and the track layout also considers the central area. As a positive side effect, the overall capacity of the demonstrator was increased.
• In the waiting queues, empty and occupied vehicles were mixed in the two parking lanes. This not only violates the independence axiom, but also impedes an independent management of the vehicles required for the loading area and the delivery point, which resulted in unnecessarily frequent relocations of the vehicles and complex queue management.
• The table was not designed for optimal mobile use.
The setup required additional structural elements to carry the tables, and the cabling underneath did not allow a quick and reliable setup. In line with C3, the tables were put on their own support structure and equipped with wheels. Further, the boards are held together with two wing nuts, which do not even require tools for mounting and dismounting.
Similarly, each board is pre-wired, and the electrical interconnection of all low-voltage systems can be realized with RJ45 patch cables from board to board. While a field bus interconnected the NFC pads, other low-level I/Os were excluded from that system, resulting in heterogeneous wiring and unwieldy complexity. Newly developed devices on the field bus substituted these non-standard components. Also, underneath the boards, the mechanical placement of the field bus devices underwent a unification process. Whereas previously each device was screwed into a particular position, rails were installed to accommodate the field bus components. Now a displacement along the rails is easily possible for all components.
• The vehicles and the NFC pads have been subject to continuous improvement since the public presentation of the demonstrator. The field bus system and the associated communication protocol were continuously expanded. The NFC library was completely rewritten for improved reliability and usability. Also, the precision of the line follower algorithm was improved, significantly reducing lateral motion.
• As one of the major goals of the demonstrator is to reach public attention for our research, the non-functional aspects also deserve attention, such as an attractive presentation. Previously, the information screen merely informed the visitors when products were ready and showed videos from a fixed camera inside the laser housing, the latter mainly due to safety concerns. It turned out that the videos attracted much attention, though vehicles rested only briefly inside the laser. As suggested by DP5, the presentational aspect improves significantly by transmitting live video from a camera mounted on a vehicle. The resources on the vehicles previously used for the rotational mechanism may now be conveniently used for video transmission. Most of the improvements are reflected in the layout of the demonstrator, which is illustrated in Figure 7.
In the redesigned demonstration model there is a clear distinction between vehicles actively waiting for jobs (green), actively waiting for delivery (purple), and on standby in the buffer line (blue). The loading mechanism not only grips the apples and places them on the vehicles, but also aligns them so that the rotation mechanism of the vehicles becomes obsolete. Hence, the vision process is now part of the loading. The operator is relieved from repetitive loading and may assist visitors. In addition, visitors may place orders remotely.
Conclusion and outlook for future research
The present research activity has shown that axiomatic design is a powerful tool not only for new designs, but also for analysing and improving existing designs. It helped particularly in sharpening the focus on the original objectives, which had become fuzzy during the implementation phase of the demonstrator. Furthermore, the systematic exercise of applying axiomatic design theory also identified gaps in the previous design, such as the lack of quality control, the missing buffer line, and redundancy in functions (rotation mechanism, manual placement). Overall, the demonstrator benefits significantly from the activity: its capacity increased just by changing the topology of the tracks, the control algorithms were simplified, and the presentational quality improved by including a bin-picking robotic arm and live videos directly from the vehicles. Finally, the demonstrator gained in efficiency, as fewer people are required for operation and setup.
There remain some points for improvement:
• The overall weight of the demonstrator is unchanged, which hinders its portability. Particularly the laser (safety housing, fume extractor unit and compressor) is massive, and in the future lighter alternatives may be explored.
• The reliability of the vehicle motion control turned out to be limited without position feedback, as the wheels slip on the surface, an effect which increases with usage. The use of inertial measurement units could improve the accuracy of movement, particularly at crossings.
• When developing the updated setup, a variety of possible topologies was found. Without proper means of simulating their performance, the authors had to estimate the performance based on their experience.
A quantitative confirmation of this result, as well as the exploration of other possible topologies (e.g. a full mesh topology, Figure 8), is of interest to the authors and may be addressed in future work. The results of this research activity are also applicable to ongoing research projects, such as "DeConPro", which shares many of the objectives of the demonstrator. Its focus is on decentralized control, with the realization of a model factory with industrial-grade components.
In the research project "SME 4.0", two work packages will develop design concepts for highly adaptable and intelligent CPPS for SMEs. As the working title already reveals, the research focuses on the development of new concepts and the adaptation of existing approaches that are especially suitable for SMEs.
Fig. 1. Increasing (relative) popularity of the search term "Industry 4.0" on Google since 2013 (the term was particularly popular in Germany, Italy, Japan, India and the UK) [16]. The values indicate the search interest relative to the highest point in the chart in the specified period.
The model was presented for the first time in public during the Long Night of Research in September 2016 at the Free University of Bolzano. The model mainly aims to demonstrate the following concepts of a typical, modern factory of the future to students and SMEs:
• Flexible Transport System
• Intelligent Workpiece Carrier
• Decentralized Control
• Digital Interconnection
• Efficient Human-Machine Interface.
Fig. 2. System architecture of the demonstration model. Yellow areas around the black tape indicate the boundaries of vehicle navigation; purple rectangles represent the dimensions of the vehicles; dotted circles indicate the manoeuvring area around crossings, which should be respected by the other vehicles.
Figures 2 and 3 illustrate the system architecture and the real demonstration model, consisting of:
• Laser engraving head
• Safety housing (custom design)
• Fume extraction unit
• Air compressor
• NFC (near field communication) pads
• Vehicles for product transport.
The vehicles presented in Figures 3 and 4 consist of a commercial robotic platform with 4 DC motors, an Arduino-compatible controller board, an internally developed apple-spin mechanism (the apple symmetry axis aligns with the spin axis), a custom-built line sensor with 8
Fig. 4. Semi-autonomous vehicle: schematic view with major subsystems, including the NFC (near-field communication) connection between the vehicle and stationary parts (pads).
FR 3.1 Prevent direct intervention by the user in unsafe areas.
FR 3.2 Create individual order in-situ or remotely.
FR 3.3 Compliance check of incoming orders.
FR 3.4 Quality check after processing.
FR 3.5 Inform user/visitor about the order progress.
DP 3.1 Separation of unsafe areas (e.g. through acrylic glass screen).
DP 3.2 Order creation (individual text on the apple) at the order terminal or via smartphone (app).
DP 3.3 Approval by supervisor on a monitor screen.
DP 3.4 Camera system at laser engraving station to compare the result with the text in the order.
DP 3.5 Notification to the visitor after laser engraving station and at delivery station.
FR 4.1 Vehicles shall be aware of their position and communicate with the CPPS.
FR 4.2 Bring mass customized products decentralized to their next processing station.
DP 4.1 NFC technology for both communication and location awareness.
DP 4.2 Autonomously navigating vehicles with their own drive, routing capability and controller for every work piece carrier.
Fig. 8. Full mesh topology allowing dynamic reorganisation of the layout at the cost of several NFC pads. Its benefit and the potential reduction shall be analysed with adequate means of simulation in the future.
Table 1. Customer Attributes (CAs).
n-FR 1 Products should have a local link with the region of South Tyrol.
n-FR 2 The model should be realized and run with the support of students for training purposes.
n-FR 3 The demonstration model should show a complete production process.
Finally, the remaining CAs were associated with high-level Functional Requirements, deriving Design Parameters:
FR 1 Apply advanced industrial robotics in the CPPS.
FR 2 Allow automatic scaling of capacity up and down.
FR 3 Ensure safety during operation of personnel, visitors and equipment.
FR 4 Event-based dynamic control and monitoring of a production line for mass customized products.
FR 5 Visualize the production from the point of view of the product.
DP 1 Lightweight robot and vision system for bin picking at the loading station.
DP 2 Buffer for waiting vehicles and automatic call.
DP 3 Safe user-interface between user/visitor and the cyber-physical system.
DP 4 Intelligent and autonomous vehicles driven by a decentralized control architecture.
DP 5 Camera system on the work piece carrier.
The Lightweight Autonomous Vehicle Self-Diagnosis (LAVS) Using Machine Learning Based on Sensors and Multi-Protocol IoT Gateway
This paper proposes the lightweight autonomous vehicle self-diagnosis (LAVS) using machine learning based on sensors and an internet of things (IoT) gateway. It collects sensor data from in-vehicle sensors and converts the sensor data into sensor messages as it passes through the protocol buses. The converted messages are divided into header information, sensor messages, and payloads, which are stored separately in an address table, a message queue, and a data collection table. In sequence, the sensor messages are converted to the message type of the other protocol and the payloads are transferred to an in-vehicle diagnosis module (In-VDM). The LAVS reports the diagnosis result to the Cloud or a road side unit (RSU) via the internet of vehicles (IoV) and to drivers via Bluetooth. The design of the LAVS requires the following two modules. First, a multi-protocol integrated gateway module (MIGM) converts sensor messages for communication between two different protocols, transfers the extracted payloads to the In-VDM, and performs IoV to transfer the diagnosis result and payloads to the Cloud through wireless access in vehicular environments (WAVE). Second, the In-VDM uses a random forest to diagnose parts of the vehicle, and delivers the results of the random forest as an input to a neural network to diagnose the total condition of the vehicle. Since the In-VDM uses both for self-diagnosis, it can diagnose a vehicle efficiently. In addition, because the LAVS converts payloads into WAVE messages and uses IoV to transfer the WAVE messages to an RSU or the Cloud, it prevents accidents in advance by rapidly informing drivers of the vehicle condition.
Introduction
Due to the introduction of the Fourth Industrial Revolution, information and communication technology (ICT) has been applied in various industries and is being developed further. Among them, the automobile industry is actively utilizing internet of things (IoT) technology for autonomous vehicles. In addition, many automotive companies around the world are busy developing their technology through various competitions and interactions.
Autonomous driving technology is divided into levels 0-5. Level 0 means no autonomous driving. Level 1 is a state where one autonomous driving technology is applied. In level 2, an advanced driver assistance system (ADAS) on the vehicle can itself actually control both steering and braking/accelerating simultaneously under some circumstances; the human driver must continue to pay full attention at all times and perform the rest of the driving tasks. Level 3 is incomplete but is capable of autonomous driving and requires the driver to be ready to drive in an emergency. Level 4 is the stage in which the automatic driving system (ADS) of a vehicle performs all the driving tasks and monitors the driving environment in specific situations; in these situations the driver does not need to pay attention.
Maurizio et al. proposed a new combinatorial optimization problem, the gateway location problem (GLP), that arises from the framework of rule-based risk mitigation policies for the routing of hazardous materials vehicles. The GLP consists of locating a fixed number of check points (so-called gateways) selected out of a set of candidate sites and routing each vehicle through one assigned gateway in such a way that the sum of the risks of the vehicle itineraries is minimized. The paper addresses a GLP preparatory step, that is, how to select candidate sites, and it investigates the impact of different information-guided policies for determining such a set. All policies consist of selecting a ground set and sampling it according to a probability distribution law. A few criteria are proposed for generating ground sets as well as a few probability distribution laws. A deterministic variant based on a cardinality-constrained covering model is also proposed for generating candidate site sets [9].
Lee et al. proposed a synchronization mechanism for FlexRay and Ethernet audio video bridging (AVB) networks that guarantees a high quality of service. Moreover, this study uses an in-vehicle network environment that consists of FlexRay and Ethernet AVB networks using an embedded system, which is integrated and synchronized by the gateway. The synchronization mechanism provides timing guarantees for the FlexRay network that are similar to those of the Ethernet AVB network. Figure 1 shows the use of the Ethernet AVB switch to communicate between the event-based CAN protocol and the timing-based FlexRay protocol [10].
Aljeri et al. proposed a reliable quality of service (QoS) aware and location-aided gateway discovery protocol for vehicular networks, named fault-tolerant location-based gateway advertisement and discovery. One of the features of this protocol is its ability to tolerate gateway router and/or road vehicle failures. Moreover, this protocol takes into consideration the QoS requirements specified by the gateway requesters; furthermore, the protocol ensures load balancing on the gateways as well as on the routes between gateways and gateway clients [11].
Duan et al. proposed a software-defined networking (SDN) enabled 5G VANET. With the proposed dual cluster head design and dynamic beamforming coverage, both the trunk link communication quality and the network robustness of vehicle clusters are significantly enhanced. Furthermore, an adaptive transmission scheme with selective modulation and power control is proposed to improve the capacity of the trunk link between the cluster head and the base station. With cooperative communication between the mobile gateway candidates, the latency of traffic aggregation and distribution is also reduced [12].
Ju et al. proposed a novel gateway discovery algorithm for VANETs, providing an efficient and adaptive location-aided and prompt gateway discovery mechanism (LAPGD). Here, all vehicles go across selected mobile gateways to access 3G networks instead of using a direct connection. The algorithm aims to ensure every vehicle is capable of finding its optimal gateway, to minimize the total number of gateways selected in VANETs, and to guarantee the average delay of packets within an allowable range [13].
Jeong et al. proposed "An Integrated Self-diagnosis System (ISS) for an Autonomous Vehicle based on an Internet of Things (IoT) Gateway and Deep Learning". The ISS collects data from the sensors of a vehicle, diagnoses the collected data, and informs the driver of the result. The ISS considers the influence between vehicle parts by using deep learning when diagnosing the vehicle. By transferring the self-diagnosis information and by managing the time to replace the car parts of an autonomous driving vehicle safely, the ISS reduces loss of life and overall cost [14]. They also proposed "A Lightweight In-Vehicle Edge Gateway (LI-VEG)" for the self-diagnosis of an autonomous vehicle. The LI-VEG supports rapid and accurate communication between in-vehicle sensors and a self-diagnosis module and between in-vehicle protocols. The LI-VEG has higher compatibility and is more cost-effective because it applies a software gateway to the OBD, compared to a hardware gateway. In addition, it can reduce the transmission error and overhead caused by message decomposition because of its lightweight message header [15].
Random-Forest
Mu used a random forest algorithm to increase customer loyalty through investigations of customer statistics and dynamic and enterprise service attributes. As a result, appropriate steps were taken to improve the accuracy of forecasts of customer loyalty and to prevent customer losses [16]. Figure 2 shows the high-level architecture of the proposed system. Though identifying eating-related gestures using wrist-worn devices is a viable application of the watch, the focus of that work is to explore the idea of using audio to detect eating behavior based on bites, rather than swallows as other works have done. Kalantarian et al. described signal-processing techniques for the identification of chews and swallows using the built-in microphone of smart watch devices. In addition, the goal is to evaluate the potential of smartwatches as a platform for nutrition monitoring. Thus, the signal-processing technology uses random forest classifiers to classify sounds in a given environment. Random forests classify sounds based on a certain number of samples [17].
Huang et al. proposed a classification algorithm based on local cluster centers (CLCC) for data sets with few labeled training data. The experimental results on UCI data sets show that CLCC achieves competitive classification accuracy compared to other traditional and state-of-the-art algorithms, such as sequential minimal optimization (SMO), adaptive boosting (AdaBoost), random tree, random forest, and co-forest [18].
Kalantarian et al. proposed a probabilistic algorithm for segmenting time-series signals, in which window boundaries are dynamically adjusted when the probability of a correct classification is low. Time-series segmentation refers to the challenge of subdividing a continuous stream of data into discrete windows, which are individually processed using statistical classifiers to recognize various activities or events. The algorithm improves the number of correctly classified instances from a baseline of 75% to 94% using the random forest classifier [19].
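As a loose illustration of this dynamic-window idea (a sketch, not the authors' actual algorithm), the snippet below trains a random forest on fixed windows of a synthetic two-class signal and then shrinks a window whenever the classifier's predicted probability falls below a threshold; the features, window sizes, threshold, and signal are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stream: activity 0 then activity 1, with different means.
signal = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
labels = np.array([0] * 500 + [1] * 500)

def feats(seg):
    # Simple per-window statistical features (illustrative choice)
    return [seg.mean(), seg.std(), seg.min(), seg.max()]

W = 50
X = [feats(signal[i:i + W]) for i in range(0, len(signal) - W + 1, W)]
y = [int(round(labels[i:i + W].mean())) for i in range(0, len(signal) - W + 1, W)]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def segment(sig, w=W, min_w=20, thresh=0.8, step=10):
    """Greedy segmentation: shrink the window while the classifier is unsure."""
    out, i = [], 0
    while i + min_w <= len(sig):
        cur = min(w, len(sig) - i)
        while cur - step >= min_w:
            proba = clf.predict_proba([feats(sig[i:i + cur])])[0]
            if proba.max() >= thresh:
                break  # confident: window likely covers a single activity
            cur -= step   # uncertain: boundary may straddle two activities
        cls = int(clf.predict([feats(sig[i:i + cur])])[0])
        out.append((i, i + cur, cls))
        i += cur
    return out

segments = segment(signal)
```

Each emitted tuple is `(start, end, predicted_class)`; windows straddling the activity change are shortened until the forest is confident, which is the mechanism the surveyed paper credits for the accuracy improvement.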
Tahani et al. applied three data mining algorithms, namely the self-organizing map (SOM), C4.5, and random forest, to adult population data from the Ministry of National Guard Health Affairs (MNGHA), Saudi Arabia, to predict diabetic patients using 18 risk factors. Health care data is often huge, complex, and heterogeneous because it contains different variable types as well as missing values. Therefore, data extraction using data mining was applied. Random forest achieved the best performance compared to the other data mining classifiers [20].
Al-Jarrah et al. proposed a semi-supervised multi-layered clustering model (SMLC) for network intrusion detection and prevention tasks. SMLC has the capability to learn from partially labeled data while achieving a detection performance comparable to that of a supervised machine learning (ML)-based intrusion detection and prevention system (IDPS). The performance of the SMLC is compared with a well-known semi-supervised model (tri-training) and supervised ensemble ML models, namely random forest, bagging, and AdaBoostM1, on two benchmark network intrusion datasets, NSL and Kyoto 2006+. SMLC demonstrates detection accuracy comparable to that of the supervised ensemble models [21].
Meeragandhi et al. evaluated the performance of a set of classifier algorithms of rules (JRip, decision table, PART, and OneR) and trees (J48, random forest, REP tree, and NB tree). Based on the evaluation results, the best algorithms for each attack category were chosen and two classifier algorithm selection models were proposed. The classification models used the data collected from knowledge discovery databases (KDD) for intrusion detection. The trained models were then used for predicting the risk of attacks in a web server environment or by any network administrator or security expert [22]. Huang et al. proposed an approach to diagnose broken rotor bar failures in a line start-permanent magnet synchronous motor (LS-PMSM) using random forests. The transient current signal during the motor startup was acquired from a healthy motor and a faulty motor with a broken rotor bar fault. They extracted 13 statistical time-domain features from the startup transient current signal, and used these features to train and test a random forest to determine whether the motor was operating under normal or faulty conditions. For feature selection, the feature importances from the random forest were used to reduce the number of features to two. The results showed that the random forest classifies the motor condition as healthy or faulty with an accuracy of 98.8% using all features and with an accuracy of 98.4% using only the mean-index and impulsion features. This approach can be used in industry for online monitoring and fault diagnosis of LS-PMSM motors, and the results can be helpful for the establishment of preventive maintenance plans in factories [23].
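The feature-importance workflow described for the LS-PMSM study can be sketched as follows. Everything here is synthetic and illustrative: the signals, the fault signature, and the feature subset are invented stand-ins, not the published data or the 13 published features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def startup_current(faulty):
    # Invented stand-in for a startup transient current recording
    t = np.linspace(0.0, 1.0, 400)
    s = np.sin(2 * np.pi * 50 * t) + rng.normal(0.0, 0.3, t.size)
    if faulty:
        s += 0.6 * np.sin(2 * np.pi * 7 * t)  # fake low-frequency fault component
    return s

def time_domain_features(s):
    # A handful of statistical time-domain features (illustrative subset)
    m = s.mean()
    return [m, s.std(), s.min(), s.max(), np.abs(s).mean(),
            ((s - m) ** 3).mean(), ((s - m) ** 4).mean()]

y = np.array([0] * 100 + [1] * 100)          # 0 = healthy, 1 = broken rotor bar
X = np.array([time_domain_features(startup_current(f)) for f in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

# Train on all features, then keep only the two most important ones,
# mirroring the importance-based reduction described above.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
top2 = np.argsort(rf.feature_importances_)[-2:]
rf2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr[:, top2], ytr)
print("all features:", rf.score(Xte, yte), "top-2:", rf2.score(Xte[:, top2], yte))
```

On well-separated data the two-feature model loses little accuracy, which is the effect the surveyed paper reports for the mean-index and impulsion features.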
Overview
In this section, Figure 3 shows the structure of the LAVS, which has two key modules: the multi-protocol integrated gateway module (MIGM) and the in-vehicle diagnosis module (In-VDM). First, the MIGM supports communication between the internal protocols of a vehicle and transmits the payloads of sensor messages collected from the vehicle to the In-VDM. To improve the accuracy and processing speed of vehicle diagnosis, the In-VDM applies a random forest to the part self-diagnosis and a neural network to the total self-diagnosis. It performs the part diagnosis of the vehicle independently of the Cloud and uses the results of this part diagnosis as input for the total diagnosis of the vehicle.
The Multi-Protocol Integrated Gateway Module (MIGM)
Since the MIGM supports in-vehicle communication between two protocols, transfers the payloads to the In-VDM rapidly, and works in the OBD-II in software rather than hardware, it improves the speed of the self-diagnosis. Figure 4 shows the structure and functions of the MIGM, which consists of four sub-modules. The first, the message interface sub-module (MIS), acts as an interface between the sensors and the MSS. The second, the message storage sub-module (MSS), manages the messages transferred from the MIS and the messages converted in the MCS. The third, the message conversion sub-module (MCS), converts the messages transferred from the MSS into destination protocol messages. The fourth, the WAVE message generation sub-module (WMGS), packages the vehicle condition diagnosed in the In-VDM and the payloads used for the diagnosis into a WAVE message and transfers the WAVE message to an RSU or the Cloud through the internet of vehicles (IoV). If the MIGM receives a sensor message from an electronic control unit (ECU), the message is processed as follows. First, the ECU transmits the sensor messages to hardware devices (transceiver and controller) through the FlexRay, CAN, and media oriented systems transport (MOST) buses. Second, the MIS transfers the sensor messages to the MSS. Third, the MSS divides the received sensor messages into header information, sensor messages, and payloads. The received header information is stored in an address table (1), the sensor messages are stored in a message queue (2), and the payloads are stored in a data collection table (3). The MSS transfers the header information of the address table and the sensor messages to the MCS (4,5), and the payloads measured at the same time in the data collection table to the In-VDM (6).
Fourth, the MCS converts the sensor message transferred from the MSS into a destination protocol message and transmits the converted message back to the MSS (7); the process for message reception is the reverse. Fifth, if the MSS of the MIGM receives the diagnosis result from the In-VDM, the received result is stored in the data collection table (8). The self-diagnosis data stored there is transferred to the WMGS together with the payloads used for self-diagnosis (9). The WMGS converts the received data collection table information into WAVE messages and uses the IoV to transfer the WAVE messages to the neighboring RSU and the Cloud (10).
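The three storage targets in the third step above can be sketched as a small bookkeeping class. This is a minimal illustration in Python; the class and field names (`SensorMessage`, `MessageStorage`) are assumptions, not identifiers from the paper.

```python
# Sketch of the MSS bookkeeping: an incoming sensor message is split into
# header information (address table), the raw message (message queue),
# and its payload (data collection table).
from dataclasses import dataclass, field

@dataclass
class SensorMessage:
    source: str          # source node address (header information)
    destination: str     # destination node address (header information)
    priority: int        # lower value = higher priority (assumed scheme)
    payload: bytes       # measured sensor values

@dataclass
class MessageStorage:
    address_table: list = field(default_factory=list)    # (1) header info
    message_queue: list = field(default_factory=list)    # (2) raw messages
    data_collection: list = field(default_factory=list)  # (3) payloads

    def store(self, msg: SensorMessage) -> None:
        # One incoming message populates all three structures.
        self.address_table.append((msg.source, msg.destination))
        self.message_queue.append(msg)
        self.data_collection.append(msg.payload)

mss = MessageStorage()
mss.store(SensorMessage("ECU-Engine", "In-VDM", priority=1, payload=b"\x5a\x10"))
```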
A Design of a Message Interface Sub-Module (MIS)
The MIS acts as an interface between the hardware and the MSS. The hardware comprises the transceiver and controller that send messages to and receive them from each protocol bus. If the transceiver receives messages from a protocol bus or an actuator and transfers the received messages to a controller, the controller stores the serial bits of the messages in the MCU. The MIS then transfers the messages of the controller to the MSS. Figure 5 shows the hardware structure of a transceiver and controller. When a message translation is completed, the message has to be transferred to a destination bus. At this time, the MIS receives the messages translated in the MSS and transfers them to the controller. The controller delivers the messages to a protocol bus or an actuator through the transceiver.
A Design of a Message Storage Sub-Module (MSS)
The MSS manages the messages received from the MIS. Figure 4 shows three functions (address table, message queue, and data collection table) in which the MSS manages messages as follows.
First, the header information (destination address, source address, etc.) is stored in the address table. When sensor messages are converted in the MCS, the address table helps them be converted rapidly and is used to detect errors. Figure 6 shows the address table in detail.
Second, the message queue consists of an input message queue and an output message queue. The input message queue is one that stores the messages transferred from the MIS. The MSS transfers the messages to the MCS according to message order within the queue. The order of messages within the input message queue is decided by the priority of messages. The output message queue is one storing the messages transferred from the MCS and the MSS transfers the messages of the output message queue to the MIS according to priority.
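The priority-ordered behavior of the input and output message queues described above can be sketched with a binary heap. The concrete priority scheme (a lower number meaning higher priority) and the class name are assumptions for illustration.

```python
# Sketch of a priority-ordered message queue: messages leave the queue
# by priority, not arrival order, as the MSS does for the MCS and MIS.
import heapq

class PriorityMessageQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving FIFO order within a priority

    def push(self, priority: int, message: str) -> None:
        heapq.heappush(self._heap, (priority, self._counter, message))
        self._counter += 1

    def pop(self) -> str:
        # Returns the highest-priority (lowest number) message.
        return heapq.heappop(self._heap)[2]

inq = PriorityMessageQueue()
inq.push(3, "MOST audio frame")
inq.push(1, "CAN brake status")
inq.push(2, "FlexRay steering angle")
```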
Third, the payloads of the received sensor messages are stored in the data collection table and then transferred to the In-VDM. If the In-VDM completes the self-diagnosis of an autonomous vehicle, the MSS stores the diagnosed result in the data collection table.
A Design of a Message Conversion Sub-Module (MCS)
The MCS converts sensor messages to the message type of the other protocol by receiving the messages of the input message queue and the address information of the address table in the MSS. When the messages are converted, the MCS uses the address information of the address table. The MIGM maps the values of the address table, which is generated per protocol pair, rapidly to the message fields of the other protocol. For example, the address tables used between protocols such as FlexRay to CAN, MOST to FlexRay, etc. are generated separately. The field values of the address table are generated anew whenever messages are converted. Table 2 shows an example of the address table used when a MOST to CAN conversion is done. When MOST messages are converted to CAN messages, the attributes of the address table are not changed, but their values change according to the message state. Figure 6 shows the conversion of a MOST message to CAN messages using the address table of Table 2.
In Figure 6, the Destination Addr(address) field of a MOST message is mapped to the destination node IDs of CAN messages through the destination address field in the address table, and the Source Addr(address) field of a MOST message is mapped to the source node IDs of CAN messages through the source address field in the address table. Since the Fblock/Inst/Fkt ID and OP Type fields of a MOST message are not used in CAN messages, they are not stored in the address table. The Tel(telephone) ID field of a MOST message is mapped to the service type IDs of CAN messages through the current message number and message ID fields of the address table. The Synchronous and Asynchronous fields of a MOST message are specific to MOST and are converted to the data field of CAN messages without using the address table. Since the Control and Trailer fields of a MOST message are not used for CAN messages, they are not converted. The CAN messages generate their own cyclic redundancy check (CRC) and acknowledgement (ACK) fields.
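The field mapping described above can be sketched as a lookup against a per-protocol-pair address table. All identifiers and numeric values below are illustrative assumptions, not values from Table 2.

```python
# Sketch of the MOST-to-CAN conversion path: addresses and the Tel ID
# are resolved through the address table, the Synchronous/Asynchronous
# area is copied straight into the CAN data field, and the unused MOST
# fields (Fblock/Inst/Fkt ID, OP Type, Control, Trailer) are dropped.
def most_to_can(most_msg: dict, address_table: dict) -> dict:
    return {
        # addresses resolved through the address table
        "dest_node_id": address_table["destination"][most_msg["dest_addr"]],
        "src_node_id": address_table["source"][most_msg["src_addr"]],
        # Tel ID -> CAN service type via the address table
        "service_type": address_table["tel_id"][most_msg["tel_id"]],
        # sync/async area becomes the CAN data field (no table lookup)
        "data": most_msg["sync_async"],
        # CRC and ACK are generated by the CAN controller itself,
        # so they are not part of the conversion.
    }

table = {
    "destination": {0x0172: 0x12},
    "source": {0x0101: 0x01},
    "tel_id": {0x0: 0x7},
}
can = most_to_can(
    {"dest_addr": 0x0172, "src_addr": 0x0101, "tel_id": 0x0,
     "sync_async": b"\xde\xad"},
    table,
)
```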
A Design of a WAVE Message Generation Sub-Module (WMGS)
The WMGS generates a WAVE message after receiving information from the data collection table in the MSS. Table 1 shows an example of the data collection table. Figure 7 shows the structure of a generated WAVE message. Since the WAVE message is not converted to a message type of other protocols, its message values have to be set anew. The physical layer convergence protocol (PLCP) preamble of the WAVE message consists of the same 10 short training symbols and two long training symbols. The PLCP preamble uses the same bits as that of the Ethernet. Since the WAVE message uses orthogonal frequency division multiplexing (OFDM), it needs an OFDM signal field whose RATE means a frequency division rate. The frequency division rate is decided according to the size of a message. The Reserved bit of the OFDM signal field represents an address that receives messages. The LENGTH of the OFDM signal field represents the length of a message. The Parity of the OFDM signal field is used to examine errors, and the Tail of the OFDM signal field marks the end of the OFDM signal field. The DATA field consists of a service field, a PLCP service data unit (PSDU) field carrying the data, a tail field marking the message end, and pad bits for examining errors of the DATA field [24]. After the payloads and diagnosed result measured at the same time are converted to the PSDU of the WAVE message, the WMGS transfers the converted message to the neighboring RSU or the Cloud.
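The OFDM signal field layout described above can be sketched as a bit-string builder. The exact bit widths used here (4-bit RATE, 1 reserved bit, 12-bit LENGTH, 1 parity bit, 6 tail bits) follow common IEEE 802.11 OFDM conventions and are assumptions, since the text does not give them.

```python
# Sketch of assembling the 24-bit OFDM SIGNAL field of a WAVE message:
# RATE + reserved + LENGTH + even parity + tail, per 802.11 conventions.
def build_signal_field(rate_bits: str, length: int) -> str:
    reserved = "0"
    length_bits = format(length, "012b")[::-1]  # 12-bit LENGTH, LSB first
    body = rate_bits + reserved + length_bits   # first 17 bits
    parity = str(body.count("1") % 2)           # even parity over the body
    tail = "000000"                             # 6 tail bits end the field
    return body + parity + tail                 # 24 bits total

signal = build_signal_field("1101", length=100)
```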
The In-Vehicle Diagnosis Module (In-VDM)
Figure 8 shows the structure of the In-VDM, which consists of two sub-modules. The first, the random-forest part-diagnosis sub-module (RPS), generates a random-forest model for each part of a vehicle and diagnoses the parts of a vehicle by using the generated random-forest model. The second, the neural network vehicle-diagnosis sub-module (NNVS), generates a neural network model and diagnoses the total condition of the vehicle by using the results of the RPS as input.
A Design of the Random-Forest Part-Diagnosis Sub-Module (RPS)
The RPS learns a random-forest model using training data and outputs conditions by part. Algorithm 1 shows the process by which the RPS generates a random-forest model.
Algorithm 1. The process generating a random-forest model.
    Input: training data X, Y, W
        X = set of payloads
        Y = set of results of the training data
        W = set of weights
    initialize weights W: w_i^(1) = 1/N
    for t = 1 to T do
        make a subset S_t from the training data
        ΔG_max = −∞
        sample a feature f from the sensors randomly
        for k = 1 to K do
            S_n = the current node
            split S_n into S_l and S_r by f_k
            compute the information gain ΔG
        end for
        if ΔG_max = 0 or the maximum depth is reached then
            store the probability distribution P(c|l) in a leaf node
        else
            generate a split node recursively
        end if
        if the training of the decision tree is finished then
            estimate the class label: ŷ_i = arg max P_t(c|l)
        end if
        compute the error rate of the decision tree: ε_t = Σ_{i: y_i ≠ ŷ_i} w_i
        compute the weight of the decision tree: α_t = (1/2) log((1 − ε_t)/ε_t)
        if α_t > 0 then
            update the weights of the training data
        end if
    end for
The training data is composed of X, Y, and W. X is the set of payloads and is represented in the following formula (1).
X = x_1, x_2, x_3, x_4, . . . , x_N. (1)
In formula (1), X means the sensor data configured for each part. For example, X_Engine is a set of payloads that can represent the engine condition. Y contains the result values judging whether a part condition is normal or not. Y is organized by part, as in X; each element takes either a normal value or an abnormal value, as in the following formula (2). Y = y_1, y_2, y_3, y_4, . . . , y_N. (2) W means the weight of each training datum and is represented in the following formula (3). W = w_1, w_2, w_3, w_4, . . . , w_N. (3)
All w_i are initialized as 1/N at the beginning, where N represents the number of training data. The RPS generates a decision-making tree by extracting variables according to weight and composes a random-forest model by modifying each variable and tree weight. The decision-making trees composing the random-forest model are built using the information gain function ΔG, computed from the Gini function. The RPS repeats this process until the decision-making tree reaches a fixed depth or until ΔG becomes 0. The decision-making trees are generated as follows.
First, ΔG_max is set to −∞ for the decision-making tree generation, and the decision-making tree generation sub-module generates a subset S_t from the training data.
Second, it selects one of all the sensors randomly. Third, it classifies the training data S_n of the current node into S_l and S_r and computes ΔG as in formula (4).
ΔG = G(S_n) − (|S_l|/|S_n|) G(S_l) − (|S_r|/|S_n|) G(S_r), (4)
where S_l is the set of the left child node of S_n and S_r is the set of the right child node of S_n. G(S), the Gini index, is computed as in formula (5).
G(S) = 1 − Σ_j P(c_j)². (5)
In formula (5), the probability P(c_j) of the class c_j is computed as in formula (6).
P(c_j) = (Σ_{i ∈ c_j} w_i) / (Σ_i w_i). (6)
In formula (6), w means the weight of each variable. Figure 9 shows a decision-making tree generated by using three variables, as follows.
Figure 9. The decision-making tree generated based on temperature, voltage, and fuel spray.
First, 50 payloads of a normal vehicle and 50 payloads of an abnormal vehicle were used as training data for the RPS, and temperature, fuel spray, and voltage were selected as the variables.
Second, the RPS computes the Gini index with the total training data, each variable, and the information gain ΔG. The computation is done as follows.
At the beginning, the Gini index of the training data without classification criteria has to be measured only once. The actual value of the Gini index is obtained as follows.
Once the Gini index of the training data is obtained without the classification criteria, the Gini index of each variable is obtained with the classification criteria. The following formulas are used to measure the Gini index when the condition of a vehicle is classified according to the temperature classification criterion.
G(S_l^temp) = 1 − (48/60)² − (12/60)²
G(S_r^temp) = 1 − (7/40)² − (33/40)²
Here, the 60 used as the denominator in the G(S_l^temp) formula means the 60 of 100 vehicles whose temperature reading is normal. The 48 used as a numerator in the G(S_l^temp) formula means the 48 of those 60 vehicles whose total condition is normal, and the 12 means the 12 of the 60 vehicles whose total condition is abnormal. The 40 used as the denominator in the G(S_r^temp) formula means the 40 of 100 vehicles whose temperature reading is abnormal. The seven used as a numerator in the G(S_r^temp) formula means the seven of those 40 vehicles whose total condition is normal, and the 33 means the 33 of the 40 vehicles whose total condition is abnormal. In the same way, the RPS computes the Gini index of the fuel spray and voltage. Since the information gain for the temperature is the biggest value, the first classification node is decided to be the temperature. Once the first classification node is decided, the RPS computes the Gini indices of the other variables again: the Gini index of the fuel spray is computed after the vehicles have been classified by temperature, and it is compared with the Gini index of the voltage computed under the same condition.
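The temperature split above can be checked numerically. The counts (a 60/40 split with 48/12 and 7/33 outcomes over 50 normal and 50 abnormal training vehicles) are taken from the text; the helper name is illustrative.

```python
# Reproducing the worked temperature split: Gini of the unsplit training
# data, Gini of each child, and the resulting information gain.
def gini2(good, bad):
    n = good + bad
    return 1.0 - (good / n) ** 2 - (bad / n) ** 2

g_root = gini2(50, 50)   # training data without classification criteria
g_left = gini2(48, 12)   # 60 vehicles with normal temperature
g_right = gini2(7, 33)   # 40 vehicles with abnormal temperature
dG_temp = g_root - (60 / 100) * g_left - (40 / 100) * g_right
print(round(dG_temp, 4))  # → 0.1925
```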
Once the Gini indices of the fuel spray and voltage are computed, the RPS computes the information gain for them using formula (4).
Here, because ΔG_V is the biggest value, the second classification node is decided to be the starting voltage. In this way, when the information gain is 0 or a decision-making tree reaches a fixed depth, the decision-making tree is complete.
Since the RPS extracts properties randomly whenever each decision-making tree is generated, all the generated decision-making trees are different. Figure 10 shows a decision-making tree generated with variables different from those of Figure 9. The RPS computes a weight by using the error rate of a decision-making tree and generates a random-forest model based on boosting. First of all, in formula (7), the RPS computes the error rate ε_t of the decision-making tree by comparing the diagnosed condition with the result of the training data (i : y_i ≠ ŷ_i).
ε_t = (Σ_{i: y_i ≠ ŷ_i} w_i) / (Σ_i w_i), (7)
where ŷ_i is the result of the random-forest model when the training data is entered into it and y_i represents the vehicle condition stored in the training data. That is, ε_t divides the sum of the weights of the cases with ŷ_i ≠ y_i by the total sum of the weights. Next, the RPS computes the weight change rate α_t by using formula (8).
α_t = (1/2) log((1 − ε_t)/ε_t). (8)
Figure 10. The decision-making tree generated based on the engine temperature, driving time, and air pressure.
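Formulas (7) and (8) can be sketched directly in code. The 100-sample toy data with two misclassified vehicles is an illustrative assumption.

```python
# Weighted error rate of a tree (formula (7)) and the resulting tree
# weight alpha_t (formula (8)), as in AdaBoost-style boosting.
import math

def error_rate(weights, y_true, y_pred):
    wrong = sum(w for w, y, p in zip(weights, y_true, y_pred) if y != p)
    return wrong / sum(weights)

def tree_weight(eps):
    return 0.5 * math.log((1 - eps) / eps)

w = [0.01] * 100                        # initial weights 1/N, N = 100
y_true = ["good"] * 50 + ["bad"] * 50
y_pred = ["good"] * 48 + ["bad"] * 52   # two "good" vehicles misclassified
eps = error_rate(w, y_true, y_pred)     # 2/100 = 0.02
alpha = tree_weight(eps)                # positive, so the tree is kept
```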
If the weight change rate was computed, the weight is modified in formula (9).
In Figure 11, if this process is repeated T times and T decision-making trees are generated, the RPS computes the variance of each tree and selects only p trees in ascending order of the variance value. The RPS generates the random-forest model with the selected p trees. Figure 10 shows how to compose the T trees by using the weight of a decision-making tree: the former part of Figure 10 generates a decision-making tree by sampling the training data as a subset, and the latter part modifies the weight by using the error rate ε_t and the weight change rate α_t. For example, to compute the weight, the RPS computes the error rate of a tree through the six terminal nodes in Figure 10.
Here, the denominator represents the number of total vehicles in a terminal node, and the numerator is decided according to the result of the terminal node. If the result of the terminal node is "GOOD", the numerator represents the number of abnormal vehicles in the terminal node; if the result is "BAD", the numerator represents the number of normal vehicles in the terminal node. For example, 2/14 represents the error rate of the leftmost terminal node in Figure 9. The denominator 14 represents the number of total vehicles in the terminal node. Since the result of the terminal node is "GOOD", the numerator 2 represents the number of abnormal vehicles. The RPS then computes the weight change rate α_t.
The RPS modifies the weight w with formula (9). Since the decision-making tree of Figure 9 was generated with 100 vehicles, the weight of each of the 100 vehicles is initially set to 1/100. The weights of the 100 vehicles are then modified according to the result of each terminal node and the weight change rate α_t.
If T trees are generated by repeating these processes, the RPS composes a random-forest model from a fixed number of trees by computing the variance of the trees.
The RPS generates the final random-forest model and computes the part condition by multiplying the probability P_t(c|x) of each decision-making tree by the decision-making tree weight α_t. Formula (10) is used to compute the vehicle condition ŷ_t in each decision-making tree.
ŷ_t = α_t · P_t(c|x). (10)
If the probability of each decision-making tree is computed, the RPS selects the highest probability as the final probability ŷ.
ŷ = arg max_t (ŷ_t). (11)
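One reading of formulas (10) and (11) — each tree's score is its weight times its highest class probability, and the forest returns the class of the best-scoring tree — can be sketched as follows. The tree weights and per-tree probabilities are made-up values, and this interpretation of the arg max is an assumption.

```python
# Combining per-tree predictions: y_t = alpha_t * P_t(c|x) per tree
# (formula (10)), final class from the best-scoring tree (formula (11)).
def forest_predict(trees, x):
    best_score, best_class = float("-inf"), None
    for alpha, predict_proba in trees:
        probs = predict_proba(x)
        cls = max(probs, key=probs.get)
        score = alpha * probs[cls]       # formula (10)
        if score > best_score:           # formula (11): argmax over trees
            best_score, best_class = score, cls
    return best_class

trees = [
    (1.9, lambda x: {"good": 0.8, "bad": 0.2}),
    (0.7, lambda x: {"good": 0.3, "bad": 0.7}),
    (1.2, lambda x: {"good": 0.6, "bad": 0.4}),
]
label = forest_predict(trees, x={"temperature": 92})
```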
After the RPS has diagnosed the parts, it informs the driver of the vehicle's part conditions and transfers the diagnosis results to the NNVS.
A Design of a Neural Network Vehicle-Diagnosis Sub-Module (NNVS)
After the RPS diagnoses parts of a vehicle, the NNVS learns a neural network model using the result of the RPS and diagnoses the total condition of the vehicle by using the learned neural network model. Figure 12 shows an example of the input and output of the NNVS.
The RPS result represents a probability value; however, it lies between −1 and 1 because the probability value is multiplied by −1 when the RPS determines that the part is abnormal. For example, in Table 3, the value of the engine condition from the RPS is 0.251, and because the engine has been diagnosed as faulty, −0.251 is delivered to the NNVS. Table 3 is used as an input to the NNVS in Figure 12. Algorithm 2 represents the process by which the NNVS learns the neural network. The NNVS uses the results of the RPS as an input, which indicates the condition of the vehicle parts. That is, the number of NNVS input nodes is equal to the number of RPS outputs, and the number of NNVS output nodes is 1. Formula (12) represents the set of inputs to the NNVS, I.
I = i_1, i_2, i_3, i_4, . . . , i_n. (12)
Here, n is the number of parts diagnosed by the RPS. The NNVS generates a neural network model consisting of three hidden layers. Formula (13) represents the nodes of the hidden layers, and Formula (14) represents the weights that connect adjacent nodes.
The weights are initialized using the Xavier initialization [25]. Formula (15) represents the weights that are initialized using the Xavier initialization.
Here, p_in represents the number of nodes in the input layer connected to Z, and p_out the number of nodes in the output layer connected to Z. The NNVS uses tanh as its activation function for accurate and fast computation, and mean squared error as its loss function. Formula (16) represents tanh and Formula (17) the mean squared error.
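Formulas (15)-(17) are not reproduced in the extracted text; in the standard forms the description implies, they are:

```latex
% Xavier (Glorot) uniform initialization, Formula (15):
W \sim U\!\left[-\frac{\sqrt{6}}{\sqrt{p_{\mathrm{in}} + p_{\mathrm{out}}}},\;
                 \frac{\sqrt{6}}{\sqrt{p_{\mathrm{in}} + p_{\mathrm{out}}}}\right]

% tanh activation, Formula (16):
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}

% Mean squared error over n training examples, Formula (17):
E = \frac{1}{n} \sum_{i=1}^{n} \left(d_i - y_i\right)^2
```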
In Formula (17), d denotes the training data and y the result of the neural network model. The NNVS is trained using back-propagation and uses gradient descent to update the weights.
When the NNVS completes its neural network model learning, it diagnoses the total condition of the vehicle using the learned model. An NNVS result between 0.4 and 1 indicates that the vehicle is in good condition, a result between −0.4 and 0.4 indicates that the vehicle is in bad condition but still capable of driving, and a result between −1 and −0.4 indicates that the vehicle is in a dangerous condition. The NNVS delivers this result to the driver so that the driver can accurately understand the total condition of the vehicle.
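The three output bands above map directly to a small classifier; note the text does not specify which band the exact boundary values 0.4 and −0.4 belong to, so the boundary handling here is an assumption.

```python
def vehicle_condition(nnvs_output):
    """Map an NNVS output in [-1, 1] to the three condition bands
    described in the text. Boundary values (0.4, -0.4) are assigned
    to the lower band; the paper does not specify this."""
    if nnvs_output > 0.4:
        return "good"
    if nnvs_output >= -0.4:
        return "bad but capable of driving"
    return "dangerous"

print(vehicle_condition(0.9))   # good
print(vehicle_condition(-0.8))  # dangerous
```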
The Performance Analysis
This section presents the performance analysis of the MIGM and the In-VDM. To analyze the performance of the MIGM, it was compared with an existing in-vehicle gateway by measuring conversion time and error rate when 4000 messages were converted to other protocol messages. To analyze the performance of the In-VDM, it was compared with a multi-layer perceptron (MLP) and a long short-term memory (LSTM) network by measuring computation time and accuracy as the number of test data sets was varied.
The MIGM Performance Analysis
To compare the performance of the MIGM with an existing in-vehicle gateway, conversion time and error rate were measured while 4000 messages were converted to other protocol messages. The experiment covered conversion of CAN messages to FlexRay and MOST messages and vice versa. Both the existing in-vehicle gateway and the MIGM were implemented in C, and the experiment was conducted on a host PC. Figure 13a shows that the MIGM improved conversion time over the existing in-vehicle gateway by 33.3% for CAN-to-FlexRay, 20.9% for CAN-to-MOST, 29.2% for FlexRay-to-CAN, and 31.3% for MOST-to-CAN conversion; on average, the MIGM improved conversion time by about 28.67%. Figure 13b shows the error rates: for CAN-to-FlexRay, 1.55% for the existing gateway versus 1.55% for the MIGM; for CAN-to-MOST, 1.66% versus 0.45%; for FlexRay-to-CAN, 1.94% versus 1.59%; and for MOST-to-CAN, 2.59% versus 2.53%. Overall, the error rate of the MIGM was lower than that of the existing in-vehicle gateway by about 0.5%, and conversions from CAN to other protocols performed about 1% better than the other cases.
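The quoted average can be reproduced directly from the four per-conversion figures:

```python
# Per-conversion improvement of MIGM conversion time over the
# existing in-vehicle gateway, from Figure 13a (percent).
improvements = {
    "CAN-to-FlexRay": 33.3,
    "CAN-to-MOST": 20.9,
    "FlexRay-to-CAN": 29.2,
    "MOST-to-CAN": 31.3,
}
avg = sum(improvements.values()) / len(improvements)
print(f"average conversion-time improvement: {avg:.2f}%")  # ~28.67%
```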
The In-VDM Performance Analysis
To analyze the performance of the In-VDM, three experiments were conducted. In two experiments, the NNVP was compared with a multi-layer perceptron (MLP) and a long short-term memory (LSTM) network in computation time and accuracy, while in the other experiment the RPS was compared with a support vector machine (SVM) and a fuzzy classifier in test loss. In the experimental environment, the number of test data sets was 100, 150, 200, 300, 400, and 500, the processor speed was 3.20 GHz, and the RAM size was 16 GB. Figure 14a shows that the computation time of the NNVP improved over that of the MLP and the LSTM by 44.894% and 62.719%, respectively, as the number of test data sets increased from 100 to 500. Figure 14b shows that the accuracy of the NNVP was higher by about 1% than that of the MLP but similar to that of the LSTM on average. Since there was little difference in accuracy but the NNVP was more efficient in computation time, the NNVP was more suitable than the MLP and the LSTM for vehicle self-diagnosis using payloads. Figure 15 shows test data loss and over-fitting when the RPS, the SVM, and the fuzzy classifier were used to diagnose parts of a vehicle. The RPS had a loss similar to the SVM and about 0.2 less than the fuzzy classifier; however, since the SVM over-fitted, the RPS was most suitable for part diagnosis of vehicles.
Conclusion
The LAVS for autonomous vehicle self-diagnosis proposed in this paper consists of the MIGM, which provides communication both between in-vehicle protocols and between the diagnostic results and the server, and the In-VDM, which performs part self-diagnosis and total vehicle self-diagnosis. The In-VDM in turn consists of the RPS for part diagnosis and the NNVS for total vehicle diagnosis. The LAVS guarantees the compatibility of in-vehicle protocols through the MIGM and vehicle self-diagnosis through the In-VDM.
The conversion time of the MIGM improved over that of the existing in-vehicle gateway by about 28.67% on average, and its error rate was lower by about 0.5%. The computation time of the NNVP improved over that of the MLP and the LSTM by 44.894% and 62.719%, respectively, and its accuracy was about 1% higher than that of the MLP and similar to that of the LSTM on average. The RPS had a test loss similar to the SVM and about 0.2 less than the fuzzy classifier, while the SVM over-fitted. Therefore, the LAVS is well suited not only to in-vehicle communication but also to part diagnosis and total diagnosis of vehicles.
In addition, this paper contributes the following. First, the safety problem will be a major obstacle to the adoption of autonomous vehicles; if the self-diagnosis of autonomous vehicles solves this problem, it will greatly contribute to their adoption by changing customer perception. Second, an autonomous vehicle executes its self-diagnosis independently,
A PDMS-Based Microfluidic Hanging Drop Chip for Embryoid Body Formation
The conventional hanging drop technique is the most widely used method for embryoid body (EB) formation. However, this method is labor intensive and limited by the difficulty of exchanging the medium. Here, we report a microfluidic chip-based approach for high-throughput formation of EBs. The device consists of microfluidic channels with 6 × 12 opening wells in polydimethylsiloxane (PDMS) supported by a glass substrate. The PDMS channels were fabricated by replica molding from an SU-8 mold. Droplet formation in the chip was tested under different hydrostatic pressures to obtain the optimal operating pressures for wells with 1000 μm diameter openings. The droplets formed at the opening wells were used to culture mouse embryonic stem cells, which subsequently developed into EBs in the hanging droplets. The device also allows medium exchange in the hanging droplets, making it possible to perform immunochemistry staining and characterize EBs on chip.
Introduction
Owing to their ability to self-renew and differentiate into any cell type in the body, embryonic stem cells (ESCs) have been widely studied for potential uses in tissue engineering and cell therapy [1][2][3]. When the factors that maintain the stemness of ES cells are removed, ES cells spontaneously aggregate into a three-dimensional sphere called an embryoid body (EB), which includes all three germ layers (endoderm, mesoderm, and ectoderm) and can be induced to differentiate into different cell types [2][3][4][5][6]. Many cell culture approaches have been applied to forming EBs, including suspension culture in bacterial-grade dishes [7][8][9][10], culture in methylcellulose semisolid media [10,11], suspension culture in low-adherence vessels [12][13][14], culture in spinner flasks [15], and culture in vessel bioreactors [16]. When cultured in bacterial-grade dishes, ES cells adhere to each other spontaneously and do not attach to the plastic surface [7][8][9][10]. However, the size and shape of the resulting EBs are heterogeneous, so they lose synchrony during differentiation. When ES cells are cultured in methylcellulose semisolid media [11,17], each single ES cell can be isolated in the methylcellulose matrix and develop into an EB. The disadvantage of this technique is that handling a semisolid solution with pipettes is difficult. ES cells can also be cultured in low-adherence vessels: reagents such as proteoglycan [12], pluronic [13], and 2-methacryloyloxyethyl phosphorylcholine (MPC) [14] are coated on the dish or plate to prevent cell adhesion so that the ES cells aggregate into EB spheres. Culturing ES cells in spinner flasks or vessel bioreactors [14,16] is preferable for scalable culture of EBs, but the culture conditions (e.g., cell density, hydrodynamic force, etc.)
of the devices need to be carefully optimized, as they can affect the proliferation and differentiation of the EBs. Recently, microfabricated devices containing microchannels [18][19][20] and microwells [21][22][23] have also been used for EB formation. They represent an attractive approach owing to their ability to precisely control the size of the formed EBs, which can be easily observed on the devices.
Despite the wide selection of methods described above, the conventional hanging drop method is currently still the most widely used approach for EB formation because it is easy to perform in the laboratory and has minimal equipment and material requirements [24]. Briefly, this method uses a manual pipette to drip individual small droplets of ES cell-containing medium onto a Petri dish lid, which is then inverted to hang the droplets; the cells spontaneously fall to the bottom of the droplets by gravity and gather at the apexes owing to the concave shape of the droplets. The size of the EBs can be controlled by adjusting the density of the cell suspension used to form the droplets. The uniqueness of this method lies in the fact that the cells are cultured at the liquid-air interface of the droplet, eliminating cell contact with a solid substrate, which in other methods requires a treatment to become non-adherent to cells. However, the hanging droplet method is limited by the difficulty of exchanging medium in the droplets and by the manual operation required for each individual droplet. Previously, a hollow-sphere soft lithography approach was used to make a novel device capable of hosting 500 µL of medium in each hanging droplet, which allowed the medium to be exchanged for EB culture for more than 10 days [25]. However, the throughput of the process is still limited because each droplet has to be handled individually. To solve this problem, a modified 384-well plate device has been developed for hanging drop culture, which can be scaled up with the assistance of a robotic arm system [26].
In this paper, we report the development of a microfluidic chip-based method for hanging droplet EB culture. Our approach utilizes a microstructure design (microchannels with opening wells) and microfluidic controls to achieve high-throughput culture of the cells. In our proof-of-concept experiment, we designed and fabricated a 6 × 12 arrayed micro-hanging-drop chip in which a large number of hanging droplets can be formed simultaneously without pipetting the droplets individually. We have also developed a setup and operation procedure for successful on-chip culture of EBs, as demonstrated by their growth curves and expression of pluripotency markers.
Chip Design and Experimental Process and Setup
The arrayed microfluidic hanging drop (µHD) device is composed of PDMS microfluidic channels with 6 × 12 opening wells on a glass slide, as shown in Figure 1A. The microchannels are 1200 µm wide and 120 µm tall. The opening wells are 1000 µm in diameter and 130 µm tall (see Supplementary Figure S1 for the detailed design parameters). The entire operation was performed inside a laminar flow hood to keep the device sterile. The chip's inlet and outlet holes were initially connected to a syringe of 75% ethanol solution and an empty syringe, respectively. As shown in Figure 1B, the ethanol solution was introduced into the microchannel by manually pushing the inlet syringe while withdrawing the outlet syringe. The inlet syringe was then replaced with a syringe containing cell culture medium to displace the ethanol solution using the same manual push-and-withdraw procedure. Subsequently, the inlet tubing was replaced with tubing connected to an Eppendorf tube containing an embryonic stem cell suspension at 3 × 10^5 cells/mL, and the outlet tubing was replaced with tubing connected to a plastic tube containing 100 µL of medium. The chip was then flipped so that the openings faced down, placed in a Petri dish, and moved to the stage of a microscope to observe the microchannel area while cells were introduced into the chip. The ES cell suspension was injected into the microchannel by hydrostatic pressure, generated by elevating the cell suspension tube 30 cm and lowering the outlet tube 30 cm with respect to the level of the chip. Note that the flipped chip was supported by two PDMS blocks such that there was a ~0.4 cm gap between the chip and the dish to avoid contact between the substrate and the formed droplets. The dish was also filled with 4 mL of 1× PBS solution and covered with a lid to prevent medium evaporation. The duration of the cell suspension loading step was 15 min, which allowed the cell suspension to fill the microchannel and the cells to fall into the wells. Subsequently, the cell suspension tube was replaced with a medium-containing reservoir, and both the inlet and outlet medium reservoirs were adjusted to the same height (Figure 1C, a = b = 1.45 cm) to stop the flow and apply the hydrostatic pressure needed to maintain the concave droplet shape for EB formation (Figure 1C). The device was then placed in a cell culture incubator to allow the ES cells to aggregate in the droplets. Note that after 3 h, the height of the medium reservoirs was lowered to 1.4 cm (a = b = 1.4 cm) to reduce the hydrostatic pressure and thus the chance of bursting the droplets during the subsequent culture period. The cell culture medium was replaced daily. To replace the medium in the microchannel, 100 µL of medium was withdrawn from the inlet tube and 100 µL of fresh medium was added to it. Then 100 µL of medium was withdrawn from the outlet tube and the outlet tube was lowered to a medium height of 0.2 cm (b = 0.2 cm) to generate a hydrostatic pressure-driven flow from the inlet to the outlet tube, thereby exchanging the medium in the microchannel. This setup was kept for 2 h (inside a cell culture incubator), before 100 µL of medium in total was added to the two tubes to adjust the medium levels back to the same height of 1.4 cm (a = b = 1.4 cm).
Molecules 2016, 21, 882
Hanging Drop Formation
In order to control droplet formation, the droplet height under various hydrostatic pressures was measured in a µHD chip with 1000 µm diameter opening wells. Figure 2A shows the side view of the formed water droplets at the bottom of the microchannel. The relationship between droplet height and applied hydrostatic pressure is shown in Figure 2B. In the tested pressure range, the formed droplet height is linearly proportional to the hydrostatic pressure, and the maximum droplet height of ~281 µm is achieved under a hydrostatic pressure of 147 N/m², above which the formed droplets are less stable and prone to bursting. For our cell culture experiment, a hydrostatic pressure of 98 N/m² was chosen, because it was large enough to form a proper concave droplet shape for cells to concentrate at the droplets' apexes while keeping the droplets more robust.
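The pressures quoted above can be related to the reservoir heights used in the setup via P = ρgh, assuming the droplet pressure is set by a water column of density ρ ≈ 1000 kg/m³; the helper below is an illustrative sketch, not the authors' calculation.

```python
RHO = 1000.0  # density of water/medium, kg/m^3 (assumed)
G = 9.81      # gravitational acceleration, m/s^2

def hydrostatic_pressure(height_m):
    """Pressure (N/m^2) exerted by a liquid column of the given height (m)."""
    return RHO * G * height_m

# A ~1.0 cm column corresponds to the 98 N/m^2 operating pressure
# chosen in the text, and a ~1.5 cm column to the ~147 N/m^2 limit.
print(hydrostatic_pressure(0.010))  # ~98.1 N/m^2
print(hydrostatic_pressure(0.015))  # ~147.2 N/m^2
```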
Demonstration of Solution Exchange in μHD Chip
One of the limitations of the hanging drop method is the difficulty of exchanging the medium of the droplets. The use of microchannels makes it easy to exchange the medium in the hanging droplets of the μHD chip. To demonstrate and characterize this process, the μHD chip was first set up in the hydrostatic pressure-driven flow configuration with both reservoirs filled with water. The height difference between the inlet and outlet reservoirs was set to 1.2 cm to produce a hydrostatic pressure-driven flow from inlet to outlet, followed by adding 20 μL of blue dye solution into the inlet reservoir in order to observe the advancement of the flow and measure the time needed to replace the solution in the μHD chip. The top-view images (Figure 3A-C) show the progress of the front line of the blue dye solution in the microchannel at 0, 60, and 100 min after the blue dye was added to the inlet reservoir. The result shows that the water in the microchannel can be replaced by the blue dye solution in 100 min.
EB Formation
For EB formation, ES cells were introduced into the microchannel, followed by elevating the inlet and outlet medium reservoirs to produce a hydrostatic pressure for maintaining the concave droplet shape for the ES cells to aggregate.In our study, cell suspension at 3 × 10 5 cells/mL density was introduced into the microchannels, resulting in 150~400 cells in each well.Figure 4A shows the images of EBs formed in 72 hanging droplets at the 72 opening wells after 1 day of culture.Due to the concave shape of the droplets, the formed EBs were located at the center of each well (Figure 4A,B).It was observed that some droplets could contain more than one EB in it.Therefore, we have also calculated the number of EB in the droplets.As shown in Figure 4D, 83.8% of the wells had a single EB in each well, whereas the rest (16.2%) of the wells contained a single EB with satellite EBs whose size were less than 40 μm (>98%).The size of the single EBs was calculated by measuring their diameters using the ImageJ software.The mean diameter (R) of the EB was determined according to the following equation: R = (a × b) 1/2 , where a and b are two orthogonal diameters of the EB. Figure 4E shows the size distribution of the formed EBs in the opening wells.The percentage of EBs was 11.6%, 77.9%, 10.5% in the diameter range of 40~80 μm, 80~120 μm, 120~160 μm, respectively.The data shows that most of the EBs were 80 to 120 μm in diameter.The EBs size was comparable to that of EBs formed by using conventional hanging drop method (Figure 4E).The average coefficient of variance (CV) for the diameter of the EBs is calculated about 15.9%.EBs were then observed after 1 day to 3 day.The ES cells formed EBs which continued to be proliferated.The sizes of EBs were measured every day to obtain the growth rates of the EBs. 
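The sizing procedure above — geometric-mean diameter from two orthogonal ImageJ measurements, followed by the coefficient of variance — can be sketched in a few lines of Python (the measurement values below are hypothetical, not data from the paper):

```python
import math

def eb_diameter(a_um, b_um):
    """Mean EB diameter R = (a * b)**0.5, where a and b are two
    orthogonal diameters measured in ImageJ (micrometers)."""
    return math.sqrt(a_um * b_um)

# Hypothetical (a, b) measurements for a few EBs, in micrometers.
measurements = [(95.0, 110.0), (80.0, 88.0), (130.0, 118.0)]
diameters = [eb_diameter(a, b) for a, b in measurements]

mean_d = sum(diameters) / len(diameters)
std_d = math.sqrt(sum((d - mean_d) ** 2 for d in diameters) / len(diameters))
cv_percent = 100.0 * std_d / mean_d  # coefficient of variance (CV)
print(f"mean diameter = {mean_d:.1f} um, CV = {cv_percent:.1f}%")
```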
Figure 5E shows the growth curves of EBs cultured in the μHD chip and the conventional system. The EBs in the μHD system grew similarly to EBs cultured by the conventional method.
Immunostaining of EBs
After three days of EB culture, 500 μL of medium was manually injected into the microchannel to burst the droplets and collect the EBs in a Petri dish. The EBs were then stained with SSEA-1 antibody and 4′,6-diamidino-2-phenylindole (DAPI). Figure 6A shows confocal images of immunostained EBs formed by the conventional hanging drop method and by the microfluidic chip. The positive SSEA-1 green fluorescence staining indicates that the pluripotency of the EBs was maintained with both methods. DAPI staining was used to visualize the cell nuclei. Note that because the μHD chip allows solution exchange in the hanging droplets, we were also able to perform the immunochemistry staining of EBs on chip (Figure 6B). This unique feature of the μHD chip can eliminate the cell-transferring steps required in the conventional hanging drop method, and thus avoid potential cell loss and damage.
Fabrication Process
The microfluidic device was fabricated in polydimethylsiloxane (PDMS) formed by mixing prepolymer (Sylgard 184, Dow Corning, Auburn, MI, USA) at a ratio of 1:10 base to curing agent, using a modified soft lithography method. Briefly, a master mold with two-layer features was made using conventional photolithography with SU-8 (MicroChem, Newton, MA, USA) on a silicon wafer. The channel (layer 1) was 1200 μm wide and 120 μm high, and the extruding features (layer 2) were 130 μm tall cylinders with 1000 μm diameter. The master was then used as a mold and placed on a thin PDMS sheet on a glass plate. PDMS prepolymer was then poured onto the master, followed by applying a fluoropolymer-coated polyester (PE) sheet (Scotchpak 1022, 3M, St. Paul, MN, USA) and a glass plate to sandwich the master between the two glass plates, which served to apply even pressure to squeeze out excess PDMS. The thin PDMS was then cured at 65 °C for 24 h and removed from the master with the PE sheet without distortion [27]. The membrane was then bonded onto a glass slide after being treated with oxygen plasma (AP300, Nordson March, Westlake, OH, USA). The PE sheet was then removed from the PDMS. Two PDMS blocks (10 mm × 10 mm × 2 mm) with punched holes (I.D. = 0.75 mm) were bonded to the PDMS to serve as inlet and outlet for connecting to tubing (I.D. = 0.01 inch, AAQ04091 Tygon Microbore Tubing, Buckeye Container Co., Wooster, OH, USA). Prior to use, the devices were sterilized under UV light for 30 min.
Droplet and EB Measurement
As shown in Supplementary Figure S2, the chip was connected to two plastic reservoirs (I.D. 14.5 mm, 6 cm high, 5 mL) via two pieces of tubing (I.D. = 0.01 inch, AAQ04091 Tygon Microbore Tubing) and placed on two PMMA plates under a microscope (Nikon SMZ1500, Buckeye Container Co., Wooster, OH, USA) equipped with a CCD camera (Qimaging, Surrey, BC, Canada). A mirror (5 cm × 7 cm) was set beside the chip and tilted at a 45° angle so that the side view of the droplets could be captured by the CCD camera through the microscope. Droplet images were captured from three experiments, and from each image the heights of 5 droplets were measured using software (ImageJ, NIH, Bethesda, MD, USA). The exerted hydrostatic pressure at the opening was calculated using the hydrostatic pressure equation p = ρgH, where p is pressure (N/m²), ρ is the density of the liquid (water: 1000 kg/m³), g is the acceleration of gravity (9.8 m/s²), and H (H = A − C) is the height of fluid (m). Images of cells in the μHD chip were captured using an inverted fluorescence microscope (Eclipse Ti, Nikon, Melville, NY, USA) equipped with a CCD camera (RT3, Spot, Sterling Heights, MI, USA).
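The pressure formula is easy to check numerically; the column heights below are illustrative choices (not values from the paper) that happen to reproduce the 98, 127.4 and 147 N/m² pressures quoted for Figure 2:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.8             # acceleration of gravity, m/s^2

def hydrostatic_pressure(height_m):
    """p = rho * g * H, pressure (N/m^2) at the well opening
    for a fluid column of height H = A - C (in metres)."""
    return RHO_WATER * G * height_m

# Illustrative column heights in millimetres.
for h_mm in (10.0, 13.0, 15.0):
    p = hydrostatic_pressure(h_mm / 1000.0)
    print(f"H = {h_mm:.0f} mm -> p = {p:.1f} N/m^2")
```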
Medium Exchange Experiment
The degree of solution replacement in the microchannel was evaluated by the intensity of the blue dye measured in the droplet areas. Specifically, the images were first converted to 8-bit color images and then to grayscale images. The Image Calculator function in ImageJ was then used to subtract the background image (i.e., the image taken at 0 min) from every image. The gray values at the well areas were then measured from the calculated images. A grayscale value of 0 was regarded as 0% blue dye replacement, whereas the maximum value was regarded as 100% blue dye replacement. The normalized intensity was obtained by dividing the gray value of interest by the maximum grayscale value. The wells were divided into 12 columns (six wells in one column, C1 to C12 from the inlet side to the outlet side of the chip), and the averaged normalized intensities of each column were plotted. The experiment was repeated three times.
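The normalization steps can be expressed compactly with NumPy; the grid below is synthetic data standing in for well-averaged gray values (the function and variable names are ours, not from the paper):

```python
import numpy as np

def normalized_dye_intensity(frame, background):
    """Subtract the t = 0 background image from a grayscale frame and
    scale to [0, 1]: 0 = 0% and 1 = 100% blue-dye replacement."""
    diff = frame.astype(float) - background.astype(float)
    diff = np.clip(diff, 0.0, None)  # negative differences clipped to 0
    max_val = diff.max()
    if max_val == 0:
        return np.zeros_like(diff)
    return diff / max_val

# Synthetic 6 x 12 grid: six wells per column, columns C1..C12.
rng = np.random.default_rng(0)
background = rng.integers(0, 10, size=(6, 12))
frame = background + rng.integers(0, 200, size=(6, 12))

norm = normalized_dye_intensity(frame, background)
column_means = norm.mean(axis=0)  # averaged per column, as plotted
```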
Figure 1 .
Figure 1. Illustration of the microfluidic hanging drop chip, and its setup and operation. (A) Photo of the PDMS μHD chip (openings facing up) with opening diameter of 1000 μm. Scale bar is 1 cm; (B) Schematic illustration of the μHD chip operation. The microchannel is first filled with 75% ethanol, followed by introducing cell suspension into the microchannel by hydrostatic pressure-driven flow and allowing cells to descend to the bottom of the droplets. The docked cells centralized and aggregated at the center of the droplet bottom due to the concavity of the droplets and grew into spheroids; (C) Illustration of the μHD chip setup. The μHD chip was placed in a 10 cm dish containing 1× PBS in the bottom of the plate to prevent medium evaporation from the hanging droplets.
Figure 2 .
Figure 2. Measured droplet heights under various hydrostatic pressures in the μHD chip with 1000 μm diameter opening wells. (A) Side-view photographs of a portion of the μHD chip showing protruding droplets of different heights (top: average height = 142.8 μm at 98 N/m²; middle: average height = 215.6 μm at 127.4 N/m²; bottom: average height = 281.2 μm at 147 N/m²); (B) the relationship between droplet height and hydrostatic pressure. Scale bar is 500 μm.
Molecules 2016, 21, 882
Figure 3 .
Figure 3. Demonstration and characterization of solution exchange in the μHD chip. (A-C) Photographs showing the solution replacement process (blue dye replacing DI water) at 0 min, 60 min and 100 min, respectively. Scale bar = 1 mm; (D) photograph of the whole chip taken at 80 min after the replacement process started. Scale bar = 4 cm; (E) the relationship between the normalized intensity of the blue dye solution and the time after the replacement process started.
Figure 4 .
Figure 4. EB formation after 1 day. (A) Images of the top view of the chip after 1 day of culture; (B,C) close views of one EB; (D) the comparison between the numbers of wells with a single EB and with satellite EBs. The photo shows that the satellite EBs are less than 40 μm; (E) the size distribution of the EBs.
Figure 5A-D shows the images of EB formation in four wells during three days of culture. The ES cells formed EBs, which continued to proliferate. The sizes of the EBs were measured every day to obtain their growth rates. Figure 5E shows the growth curves of EBs cultured in the μHD chip and the conventional system. The EBs in the μHD system grew similarly to EBs cultured by the conventional method.
Figure 5 .
Figure 5. EB growth observation for 3 days. (A-D) Images of EBs taken at day 0 (A); day 1 (B); day 2 (C) and day 3 (D) after cell seeding; (E) the growth curves of the EBs cultured in the μHD chip and the conventional hanging drop system. The scale bar is 200 μm.
Figure 6 .
Figure 6. Immunochemistry staining of EBs. (A) Confocal images of SSEA-1 and DAPI stained EBs formed by the conventional hanging drop method and the μHD chip. Scale bar = 50 μm; (B) images of on-chip immunochemistry stained EBs. Scale bar = 100 μm.
"Engineering"
] |
Threading dislocation lines in two-sided flux array decorations
Two-sided flux decoration experiments indicate that threading dislocation lines (TDLs), which cross the entire film, are sometimes trapped in metastable states. We calculate the elastic energy associated with the meanderings of a TDL. The TDL behaves as an anisotropic and dispersive string with thermal fluctuations largely along its Burger's vector. These fluctuations also modify the structure factor of the vortex solid. Both effects can in principle be used to estimate the elastic moduli of the material.
Recent two-sided flux decoration experiments have proven to be an effective technique to visualize and correlate the positions of individual flux lines on the two sides of Bi₂Sr₂CaCu₂O₈ (BSCCO) thin superconductor films [1,2]. This material belongs to the class of high-Tc superconductors (HTSC), whose novel properties have aroused considerable attention in the last few years [3]. Due to disorder and thermal fluctuations, the lattice of rigid lines, representing the ideal behavior of the vortices in clean conventional type II superconductors, is distorted. Flux decoration allows quantification of the wandering of the lines as they pass through the sample. The resulting decoration patterns also include different topological defects, such as grain boundaries and dislocations, which in most cases thread the entire film.
Decoration experiments are typically carried out by cooling the sample in a small magnetic field. In this process, vortices rearrange themselves from a liquid-like state at high temperatures, to an increasingly ordered structure, until they freeze at a characteristic temperature [4,5]. Thus, the observed patterns do not represent equilibrium configurations of lines at the low temperature where decoration is performed, but metastable configurations formed at this higher freezing temperature. The ordering process upon reducing temperature requires the removal of various topological defects from the liquid state: Dislocation loops in the bulk of the sample can shrink, while threading dislocation lines (TDLs) that cross the film may annihilate in pairs, or glide to the edges. However, the decoration images still show TDLs in the lattice of flux lines. The concentration of defects is actually quite low at the highest applied magnetic fields H (around 25G), but increases as H is lowered (i.e. at smaller vortex densities). Given the high energy cost of such defects, it is most likely that they are metastable remnants of the liquid state. (Metastable TDLs are also formed during the growth of some solid films [6].) Generally, a good correspondence in the position of individual vortices and topological defects is observed as they cross the sample. Nevertheless, differences at the scale of a few lattice constants occur, which indicate the wandering of the lines. Motivated by these observations, we calculate the extra energy cost associated with the deviations of a TDL from a straight line conformation. The meandering TDL behaves like an elastic string with a dispersive line tension which depends logarithmically on the wavevector of the distortion. By comparing the experimental data with our results for mean square fluctuations of a TDL, it is in principle possible to estimate the elastic moduli of the vortex lattice. 
Hence, this analysis is complementary to that of the hydrodynamic model of a liquid of flux lines, used so far to quantify these coefficients [10]. On the other hand, the presence of even a single fluctuating TDL considerably modifies the density correlation functions measured in the decoration experiments. The contribution of the fluctuating TDL to the long-wavelength structure factor is also anisotropic and involves the shear modulus, making it a good candidate for the determination of this coefficient.
In the usual experimental set-up, the magnetic field H is oriented along the z axis, perpendicular to the CuO planes of the superconductor. The displacements of the flux lines from a perfect triangular lattice at a point (r, z) are described, in the continuum elastic limit, by a two-dimensional vector field u(r, z). The corresponding elastic free-energy cost is given in Eq. (1), where ∇ = (∂ x x̂ + ∂ y ŷ), and c 11 , c 44 , and c 66 are the compression, tilt, and shear elastic moduli, respectively.
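The display equation for Eq. (1) itself is elided here; assuming the standard local continuum elasticity of a vortex lattice, a plausible form (our reconstruction, not verified against the original paper) is:

```latex
F[\mathbf{u}] \;=\; \frac{1}{2}\int d^{2}r\,dz\,
\Big[\, c_{11}\,(\nabla\cdot\mathbf{u})^{2}
     \;+\; c_{66}\,(\nabla\times\mathbf{u})^{2}
     \;+\; c_{44}\,(\partial_{z}\mathbf{u})^{2} \Big]
```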
Due to the small magnetic fields involved in the experiments, non-local elasticity effects [3] are expected to be weak, and will be neglected for simplicity. In addition, at the temperatures corresponding to the freezing of decoration patterns, disorder-induced effects should be small, and will also be ignored.
To describe a dislocation line, it is necessary to specify its position within the material, and to indicate its character (edge or screw) at each point. The latter is indicated by the Burger's vector b, which in the continuum limit is defined by ∮_L du = −b, with L a closed circuit around the dislocation [11]. For the TDLs in our problem, the Burger's vectors lie in the xy-plane, and the line conformations are generally described by the position vectors R_d(z) = R(z) + zẑ (see Fig. 1). Unlike a vortex line, the wanderings of a TDL are highly anisotropic: In an infinite system with a conserved number of flux lines, fluctuations of the TDL are confined to the glide plane containing the Burger's vector and the magnetic field. The hopping of the TDL perpendicular to its Burger's vector (climb) involves the creation of vacancy and interstitial lines, as well as the potential crossing of flux lines [7,8]. These defects are very costly, making TDL climb unlikely, except, for instance, in the so-called supersolid phase, in which interstitials and vacancies are expected to proliferate [9]. Nevertheless, for a sample of finite extent, introduction and removal of flux lines from the edges may enable such motion. This seems to be the case in some of the decoration experiments where the number of flux lines is not the same on the two sides [1]. In order to be completely general, at this stage we allow for the possibility of transverse fluctuations in R(z), bearing in mind that they may be absent due to the constraints.
We decompose the displacement field u into two parts: u s (r − R(z), z), which represents the singular displacement due to a TDL passing through points R d (z) in independent two-dimensional planes; and u r , a regular field due to the couplings between the planes. By construction, the former is the solution for a two-dimensional problem with the circulation constraint [11], while the latter minimizes the elastic energy in Eq.(1), and is consequently the solution to Here, R(k z ) is the Fourier transform of R(z); R ⊥ and R stand for its components perpendicular and parallel to the Burger's vector, respectively; and The above expressions are obtained after integrating over q, with a long-wavevector cutoff Λ at distances of the order of the flux-line lattice spacing, below which the continuum treatment is not valid. If we also take into account a short-wavevector cutoff Λ * due to finite sample area, the dependence of the kernels on k z has different forms. For values of k z ≪ Λ * min(c 66 , c 11 )/c 44 , the logarithms in Eqs. (3)(4) reduce to the constant value 2 ln(Λ/Λ * ), and both kernels are simply proportional to k 2 z . In the opposite limit, if k z ≫ Λ max(c 66 , c 11 )/c 44 all the logarithms can be approximated by the first term of their Taylor expansion, and A ⊥ and A are independent of k z . In between these extremes, the form of the kernels is globally represented through Eqs. (3)(4). In practice, the smallest wavevector k z that can be probed experimentally is limited by the finite thickness of the sample, and is ultimately constrained (by the measured values of c 11 , c 66 and c 44 ) to the last two regimes. From Eqs. (2-4), we conclude that the TDL behaves as an elastic string with a dispersive line tension (ǫ d ∝ ln k z ), indicating a non-local elastic energy. (A single flux line shows a similar dispersive behavior, as pointed out by Brandt [12].) 
Equilibrium thermal fluctuations of a TDL are calculated from Eq.(2), assuming that one can associate the Boltzmann probability e −∆H/kB T to this metastable state. After averaging over all possible configurations of R(k z ), the mean square displacements are obtained as respectively, where L is the thickness of the film. In terms of the function these quantities satisfy the simple relation Thus, even if the TDL is allowed to meander without constraints, its fluctuations are anisotropic, as A ⊥ (k z ) = A (k z ) only for c 11 = c 66 . In HTSC materials, c 66 ≪ c 11 , so that A ⊥ (k z ) > A (k z ), limiting fluctuations largely to the glide plane.
In real space, the width of the TDL depends on quantities such as |R(L)| 2 c11 ≡ 1/L dk z /2π |R(k z )| 2 c11 . Its dependence on the film thickness L follows from Eq.(6), as where we have defined l 1 ≡ b c 44 /c 11 , and d o is a shortdistance cutoff along the z-axis. For length-scales below d o , the layered nature of the material is important. In order to estimate typical fluctuations for the TDL, we assume that c 66 ≪ c 11 , and approximate Eq.(4) by its leading behavior. In this limit, |R | 2 ≃ 2 |R| 2 c66 , with |R| 2 c66 defined as in Eq.(6), after replacing c 11 with c 66 . A behavior similar to Eq.(8) is obtained for |R| 2 c66 , with a corresponding crossover length l 6 ≡ b c 44 /c 66 . Thus, longitudinal fluctuations of the TDL are approximately constant for samples thinner than l 6 , and grow as L/ ln L for thicker samples. If allowed, transverse fluctuations follow from Eq. (7), whose leading behavior for small c 66 gives, |R ⊥ | 2 ∼ |R| 2 c11 . In Fig.2 we have plotted |R| 2 c11 and 2 |R| 2 c66 as a function of the thickness L. Both quantities are very sensitive to the elastic coefficients. We have considered the values of c 11 = 2.8 × 10 −2 G 2 , c 44 = 8.1G 2 , and c 66 = 9.6 × 10 −3 G 2 reported in Ref. [2] for a sample decorated in a field of 24G, which shows a single dislocation. The Burger's vector is equal to the average lattice spacing a o = 1µm, and the short-distance cutoff in the plane is taken to be of the same order of magnitude. The crossover lengths introduced turn out to be l 1 ∼ 17µm, and l 6 ∼ 29µm, so that the experimental sample thickness (L ∼ 20µm) approximately falls into the constant regime in Fig.2. From the top curve in Fig.2, we estimate |R | 2 1/2 ∼ 3µm for T ∼ 80K, with an uncertainty factor of about √ 10 due to, for instance, the uncertain values of temperature, and both the in-plane and perpendicular cutoffs. 
If unconstrained, transverse fluctuations of the TDL are smaller, and given by |R ⊥ | 2 1/2 ∼ 1µm, in the same regime. The question mark in Fig.2 is a reminder that once these fluctuations exceed a lattice spacing, proper care must be taken to account for constraints, and their violation by defects or surface effects. As discussed in Ref. [2], the values of c 11 and c 66 measured in the experiments are about three orders of magnitude smaller than the theoretical predictions from Ginzburg-Landau theory. If we use the latter values in our computations, the crossover lengths become much shorter and TDL fluctuations are reduced by three orders of magnitude! Due to this sensitivity, analysis of transverse and longitudinal fluctuations of TDLs in twosided decoration experiments should provide a complementary method for determining the elastic moduli. Unfortunately, the films studied so far are in the short distance regime where details of the cutoff play a significant role. Experiments on thicker films are needed to probe the true continuum limit.
TDLs also produce anisotropies in the flux line density n(r, z), and the corresponding diffraction patterns. Neutron scattering studies can in principle resolve the full three dimensional structure factor S(q, k z ) = |n(q, k z )| 2 , although only a few experiments are currently available for different HTSC materials [14,15]. Two-sided decoration experiments also provide a quantitative characterization of the two-dimensional structure factors calculated from each surface, as well as the correlations between the two sides of the sample.
The diffraction pattern from a vortex solid has Bragg peaks at the reciprocal lattice positions. Unbound dislocations modify the translational correlations; a finite concentration of dislocation loops can drive the long wavelength shear modulus to zero, while maintaining the long-range orientational order [13]. The resulting hexatic phase has diffraction rings with a 6-fold modulation, which disappears in the liquid phase. In all phases, the diffuse scattering close to q = 0 is dominated by the long wavelength density fluctuations, which are adequately described by n = ∇ · u, leading to S(q, k z ) ∼ |q · u(q, k z )| 2 .
The contribution of equilibrium density fluctuations (from longitudinal phonons) to Eq.(9) has the form [10] where A is the sample area. This contribution is clearly isotropic, and independent of the shear modulus in the solid phase. (The anisotropies of the solid and hexatic phases are manifested at higher orders in q.) For a sample of finite thickness, the phonon contribution in rather general situations including surface and disorder effects, was obtained in Ref. [10], as is the 2D structure factor of each surface, while R(q, L) measures the correlations between patterns at the two sides of the film. The above results were used in Ref. [2] to determine the elastic moduli c 11 and c 44 of the vortex array, at different magnetic fields. However, the decoration images used for this purpose have the appearance of a solid structure with a finite number of topological defects. We shall demonstrate here that the presence of a single trapped TDL modifies the isotropic behavior in Eq. (10). We henceforth decompose the displacement field u(r) into a regular phonon part u o , and a contribution u d = u s + u r from the meandering TDL described by R(z). The overall elastic energy also decomposes into independent contributions To calculate the average of any quantity, we integrate over smoothly varying displacements u o , and over distinct configurations of the dislocation line R(z). Thus the structure factor in Eq.(9) becomes a sum of phonon and dislocation parts.
The results of Eq. (5) can be used to calculate the contribution from a fluctuating TDL, with q_∥ ≡ q · b/b and q_⊥ the component perpendicular to the Burgers vector. The first term on the r.h.s. of Eq. (11) corresponds to the straight TDL, and vanishes in the liquid state with c_66 = 0. The next two terms on the r.h.s. result from the longitudinal and transverse fluctuations of the TDL, respectively. (The latter is absent if the TDL is constrained to its glide plane.) The dislocation part is clearly anisotropic, and the anisotropy involves the shear modulus c_66. Thus, after inverting the k_z transform in Eq. (11), the TDL contribution to the structure factors calculated from the two-sided decoration experiments can also be exploited to obtain information about the elastic moduli.
In conclusion, we have calculated the energy cost of meanderings of a TDL in the flux lattice of a HTSC film such as BSCCO. Flux decoration experiments indicate that such metastable TDLs are indeed frequently trapped in thin films in the process of field cooling. We have estimated the thermal fluctuations of a TDL in crossing the sample, as well as its contribution to the structure factor. Both effects can in principle be used to estimate the elastic moduli of the vortex solid. However, there are strong interactions between such defects, which need to be considered when a finite number of TDLs of different Burgers vectors are present. The generalization of the approach presented here to more than one TDL may provide a better description of the experimental situation. From the experimental perspective, it should be possible to find samples with a single trapped TDL, providing a direct test of the theory. Other realizations of TDLs can be found in grown films [6], and may also occur in smectic liquid crystals. It would be interesting to elucidate the similarities and distinctions between the defects in these systems.
MCM acknowledges financial support from the Direcció General de Recerca (Generalitat de Catalunya). MK is supported by the NSF Grant No. DMR-93-03667. We are grateful to D.R. Nelson for emphasizing to us the constraints on transverse motions of TDLs. We have benefited from conversations with M.V. Marchevsky, and also thank Z. Yao for providing us with the raw decoration images in Refs. [1,2].
Gravitational dynamics from entanglement “thermodynamics”
In a general conformal field theory, perturbations to the vacuum state obey the relation δS = δE, where δS is the change in entanglement entropy of an arbitrary ball-shaped region, and δE is the change in “hyperbolic” energy of this region. In this note, we show that for holographic conformal field theories, this relation, together with the holographic connection between entanglement entropies and areas of extremal surfaces and the standard connection between the field theory stress tensor and the boundary behavior of the metric, implies that the geometry dual to the perturbed state satisfies Einstein’s equations expanded to linear order about pure AdS.
Introduction
Since the first connections between gravity and thermodynamics were realized in the study of black hole physics [1][2][3], various attempts have been made to derive Einstein's equations from the thermodynamics of some underlying degrees of freedom, starting with Jacobson's intriguing paper [4] (see also [5,6]). With the AdS/CFT correspondence [7,8], the underlying degrees of freedom for certain theories of gravity with AdS asymptotics have been explicitly identified as the degrees of freedom of a conformal field theory. It is thus interesting to ask whether the Einstein's equations in the gravitational theory can be derived from some thermodynamic relations for the CFT degrees of freedom.
In this note, following [9][10][11][12][13] we demonstrate that at least to linear order in perturbations around pure AdS, Einstein's equations do follow from a relation dE = dS closely related to the First Law of Thermodynamics, but where the entropy S is the entanglement entropy of a spatial region in the field theory, and E is a certain energy associated with this region. A key point is that dS and dE can be defined and the relation dS = dE shown to hold for arbitrary perturbations around the vacuum state; thus, the relation is more general than the ordinary first law which applies only in situations of thermodynamic equilibrium.
The specific relation we employ, which we write as δS_A = δE^hyp_A (equation (1.1) below), was derived recently by Blanco, Casini, Hung, and Myers in [13]. Here A represents a ball-shaped spatial region, δS_A represents the change in entanglement entropy of the region A
JHEP04(2014)195
relative to the vacuum state, and δE^hyp_A represents the "hyperbolic" energy of the perturbed state in the region A: the expectation value of an operator which maps to the Hamiltonian of the CFT on hyperbolic space times time under a conformal transformation that takes the domain of dependence of the region A to H^d × time. We review the derivation of this relation in section 2 below.
For holographic conformal field theories, each side of (1.1) has an interpretation in the dual gravity theory. Assuming that the perturbed state |Ψ⟩ corresponds to some weakly-curved classical spacetime, the entanglement entropy S_A may be calculated (at the leading order in 1/N to which we work) via the Ryu-Takayanagi proposal [9] and its covariant generalization [10] as the area of an extremal surface in the bulk, as we review in section 3.1. In section 3.2, we recall that the energy δE^hyp_A can be calculated from the asymptotic behavior of the metric. Thus, the field theory relation δS_A = δE^hyp_A translates to a constraint on the dual geometry.
In section 4, we show that this constraint is precisely that the bulk metric corresponding to |Ψ⟩ must satisfy Einstein's equations to linear order in the perturbation around pure AdS (the geometry corresponding to the CFT vacuum state). That solutions of Einstein's equations satisfy δS_A = δE^hyp_A has already been shown in [13] (see also the related earlier work [12,14,15]). For completeness, we provide an alternate demonstration of this in section 4.1. In section 4.2, we go the other direction, showing that any perturbation to pure AdS satisfying δS_A = δE^hyp_A must satisfy Einstein's equations. This requires more than simply reversing the arguments of section 4.1 (or of [13]). In particular, demanding that δS_A = δE^hyp_A for all ball-shaped spatial regions A in a particular Lorentz frame only places mild constraints on the metric, determining the combination H_xx + H_yy in terms of the other components. It is only when we demand that δS_A = δE^hyp_A in an arbitrary Lorentz frame (i.e. for ball-shaped regions on arbitrary spatial slices) that the full set of linearized Einstein's equations is implied.
In appendix A, we give an alternative proof that Einstein's equations imply δS A = δE hyp A that is perhaps more straightforward, but assumes that the metric is analytic. We conclude in section 5 with a discussion.
Entropy-energy relation
In this section, we review the relation δS_A = δE^hyp_A, derived by Blanco, Casini, Hung, and Myers in [13] as a special case of an inequality that follows from the positivity of relative entropy.
General expression for variation of the entanglement entropy. Consider a CFT on R^{d,1} in some state |Ψ⟩. Choosing a spatial region A, define ρ_A to be the reduced density matrix associated with this region for the state |Ψ⟩, obtained by tracing out the degrees of freedom in the complement of A. From this, we can define the modular Hamiltonian H_A by ρ_A = e^{−H_A}, i.e. H_A ≡ −log ρ_A.
For general states, this modular Hamiltonian is not related to the usual Hamiltonian, and cannot be written as the integral of a local density. We now consider an arbitrary variation of the state |Ψ⟩. To first order, the change in entanglement entropy S_A for the region A is δS_A = tr(δρ_A H_A), where we have used the fact that tr(δρ_A) = 0, a consequence of assuming that the density matrix has a fixed normalization. Here H_A is the original modular Hamiltonian associated with the density matrix ρ_A for the original state. Thus, we have the general relation δS_A = δ⟨H_A⟩, valid in any spatial region A for arbitrary perturbations of an arbitrary state.
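The first-order relation δS_A = tr(δρ_A H_A) can be checked numerically for a finite-dimensional density matrix. The construction below (dimension, random seed, mixing with the identity to keep the spectrum away from zero) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dm(d):
    """Random density matrix (positive, unit trace) of dimension d."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho).real

def entropy(rho):
    """Von Neumann entropy S = -tr(rho log rho)."""
    w = np.linalg.eigvalsh(rho)
    return float(-(w * np.log(w)).sum())

d = 4
# Mix with the identity so eigenvalues stay well away from zero.
rho = 0.5 * random_dm(d) + 0.5 * np.eye(d) / d
drho = random_dm(d) - random_dm(d)   # Hermitian, traceless perturbation

# Modular Hamiltonian H_A = -log(rho_A)
w, V = np.linalg.eigh(rho)
H = -(V @ np.diag(np.log(w)) @ V.conj().T)

eps = 1e-5
dS = (entropy(rho + eps * drho) - entropy(rho - eps * drho)) / (2 * eps)
dE = np.trace(drho @ H).real          # first-order change in <H_A>
print(abs(dS - dE))                   # agreement to first order
```

The agreement relies on exactly the two facts used in the text: tr(δρ_A) = 0 and H_A = −log ρ_A evaluated in the unperturbed state.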
"Thermodynamic" relation for perturbations around the vacuum state. We now specialize to the case where |Ψ⟩ is the vacuum state, and the region A is a ball of radius R. In this case, the domain of dependence of the ball-shaped region can be mapped by a conformal transformation to hyperbolic space times time. As shown in [11], such a transformation maps the vacuum density matrix for the region A to the thermal density matrix e^{−βH_hyp} for the hyperbolic space theory, where the temperature is related to the hyperbolic space curvature radius R_hyp by β = 2πR_hyp. In this case H_hyp is the integral of the local operator T^00_hyp over hyperbolic space. Mapping back to the ball-shaped region of Minkowski space, it follows [11] that the modular Hamiltonian can be written as a weighted integral of the energy density over the ball, where T^00 is the energy density operator for the CFT and r is a radial coordinate centered at the center of the ball. In this case, the variation in the expectation value of the vacuum modular Hamiltonian H^vac_A under a small perturbation away from the vacuum state is equal to the change in the "hyperbolic" energy of the region, δ⟨H^vac_A⟩ = δE^hyp_A. Thus, the general relation (2.1) gives δS_A = δE^hyp_A, (2.3) reminiscent of the First Law of Thermodynamics. We emphasize however that the entanglement entropy S_A can be defined for any state, in contrast to the usual thermodynamic entropy which applies to equilibrium states. Thus, (2.3) represents a much more general result.
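The weighted integral described here is the standard ball modular Hamiltonian of [11]; written out (a hedged reconstruction of the elided display, with a state-independent additive constant):

```latex
H^{\mathrm{vac}}_A \;=\; 2\pi \int_{|\vec x| < R} d^{d}x\; \frac{R^{2} - r^{2}}{2R}\, T^{00}(\vec x) \;+\; \mathrm{const},
\qquad
\delta E^{\mathrm{hyp}}_A \;=\; \delta\langle H^{\mathrm{vac}}_A\rangle .
```

The 2π prefactor is consistent with the hyperbolic temperature β = 2πR_hyp quoted above.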
Gravitational implications of dS = dE in holographic theories
Let us now consider the case of a holographic conformal field theory on Minkowski space, whose states correspond to asymptotically AdS spacetimes in some quantum theory of gravity. In this case, each side of the relation δS_A = δE^hyp_A has a straightforward gravitational interpretation. As we review below, the left side may be calculated using the Ryu-Takayanagi proposal [9,10], while the right side can be calculated from the asymptotic form of the metric. The equality of these quantities represents a constraint on the gravitational dynamics implied by the dual field theory. In the next section, we show that this constraint is precisely equivalent to Einstein's equations linearized about AdS.
Gravitational calculation of dS
According to the Ryu-Takayanagi proposal [9] and its covariant generalization [10], the entanglement entropy S_A for a state with a geometrical gravity dual is proportional to the area of the extremal co-dimension two surface Ã in the bulk whose boundary coincides with the boundary of the region A on the AdS boundary, S_A = Area(Ã)/4G_N. The surface Ã is an extremum of the area functional. Starting from pure AdS, with metric ds² = (L²/z²)(dz² + dx_i dx_i − dt²), (3.1) the extremal surface ending on the spatial boundary sphere of radius R is described by the spacetime surface t = const, z² + |x|² = R². (3.2) We now consider a small metric perturbation G = G_AdS + δG. In this case, the extremal surface changes, and the new area is that of the shifted surface evaluated in the perturbed metric,
where the variation δX will be of order δG. Since the original surface was extremal, the terms involving δX vanish at first order; the variation of the surface gives rise to changes in the area that start at order δG². To find the order δG variation of the area, we need only evaluate the area functional of the original surface, expanded to linear order in δG. We find that the first-order change in area is an integral of δG over the original surface, where we have used lower-case letters to represent pullbacks to the extremal surface. Thus, for a field theory state |Ψ⟩ close to the vacuum state with dual geometry described by (3.3), the change in the entanglement entropy for region A relative to the vacuum state is given by an integral of the metric perturbation over the original extremal surface Ã. Using the explicit metric (3.1) and parameterizing the extremal surface (3.2) by the boundary coordinates x_i, we obtain finally the expression (3.5) used below.
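One can verify symbolically that the hemisphere anchored on the boundary disk extremizes the area functional in the pure AdS_4 metric. The sketch below (using sympy) assumes the static surface can be parameterized as z(r), and evaluates the Euler-Lagrange residual of the area functional on z(r) = √(R² − r²) at sample points:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

r, R, L = sp.symbols('r R L', positive=True)
z = sp.Function('z')(r)

# Area density (per unit polar angle) of a bulk surface z(r) in AdS4,
# ds^2 = (L^2/z^2)(dz^2 + dx_i dx_i - dt^2), anchored on a boundary disk.
lagrangian = (L**2 / z**2) * r * sp.sqrt(1 + sp.diff(z, r)**2)

# Euler-Lagrange equation of the area functional
el = euler_equations(lagrangian, [z], [r])[0].lhs

# Candidate extremal surface: the hemisphere z(r) = sqrt(R^2 - r^2)
hemi = sp.sqrt(R**2 - r**2)
residual = (el.subs(sp.Derivative(z, (r, 2)), sp.diff(hemi, r, 2))
              .subs(sp.Derivative(z, r), sp.diff(hemi, r))
              .subs(z, hemi)
              .subs({R: 1, L: 1}))

f = sp.lambdify(r, residual)
vals = [abs(f(x)) for x in (0.2, 0.5, 0.8)]
print(max(vals))  # numerically zero: the hemisphere is extremal
```

This is the statement behind (3.2): in pure AdS the RT surface for a ball is a hemisphere, and perturbing the surface only changes the area at second order.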
Gravitational calculation of dE
General asymptotically AdS spacetimes with a Minkowski space boundary geometry may be described using Fefferman-Graham coordinates by a metric of the form (3.6), where pure AdS, dual to the CFT vacuum, corresponds to H_μν = 0. With this parametrization, the expectation value t_μν of the field theory stress-energy tensor is simply related to the asymptotic metric [16,17]. Thus, we may write the change in the hyperbolic energy (2.2) relative to the vacuum state as an integral (3.7) of the boundary value of H over the region A.
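In Fefferman-Graham form, the two statements above read schematically (a hedged reconstruction; the normalization of t_μν is the one quoted in section 4.2 for the 2+1-dimensional case, H^EE_μν(z=0) = (16πG_N/3) t_μν):

```latex
ds^{2} \;=\; \frac{L^{2}}{z^{2}}\Big( dz^{2} + \big(\eta_{\mu\nu} + H_{\mu\nu}(z,x)\big)\, dx^{\mu} dx^{\nu} \Big),
\qquad
t_{\mu\nu} \;=\; \frac{3}{16\pi G_{N}}\, H_{\mu\nu}(z=0,x).
```

With these conventions δE^hyp_A is a weighted integral of H_μν(z=0) over the boundary ball, as stated.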
Derivation of linearized Einstein's equations from dE = dS
We are now ready to demonstrate that, using the holographic dictionary reviewed in the previous section, the CFT relation δS_A = δE^hyp_A is equivalent to the constraint that the metric corresponding to the perturbed CFT state satisfies Einstein's equations to linear order. For
clarity, we focus on the case of 2+1 dimensional conformal field theories, corresponding to gravitational theories with four non-compact dimensions. However, the result can also be proven for general higher-dimensional theories.
Using the results (3.5) and (3.7), the CFT relation δS_A = δE^hyp_A implies that for a disk of any radius R centered at any point (x_0, y_0) on the boundary, the integral (4.1) over the bulk extremal surface must equal the integral (4.2) over the z = 0 surface, where we have absorbed a factor of 1/8G_N R to define δŜ(R, x_0, y_0) and δÊ(R, x_0, y_0) (we drop the hats from now on). We will now show that this equality is true for all disks in all Lorentz frames if and only if the bulk metric satisfies Einstein's equations to linear order in H. As shown in [13], these are equivalent to the set of equations (4.3) that arise by plugging the Fefferman-Graham form of the metric (3.6) into the zz, zμ, and μν components of Einstein's equations, respectively, and using the fact that H is regular at z = 0. In (4.3), the last equation is equivalent to saying that each component of z³H must satisfy the Laplace equation on the AdS background.
Proof that δS = δE for solutions of Einstein's equations
We begin by showing that solutions of the linearized Einstein's equations obey the equality δS = δE. This has already been checked in section 3.1 of [13] by demonstrating the result for a complete basis of solutions to the equations (4.3). In this section, we offer an alternative proof that does not require using an explicit basis of solutions. A third proof, which is perhaps more straightforward but assumes a series expansion of H, is given in appendix A. Using the equations (4.3), we obtain the identities (4.4).
We would like to use the last equation to eliminate H_xy from (4.1). However, we have H_xy rather than ∂_x∂_y H_xy in (4.1). To make progress, we begin by differentiating δS with respect to x_0 and y_0 (the coordinates of the center of the boundary disk). Using (4.4), it is straightforward to check that the resulting expression is equal to the integral over the extremal surface of an exact form dA, where A is defined for all (x, y, z, t). By Stokes' theorem, this equals the integral of A over the boundary of the extremal surface; here we have used the fact that all other terms in A vanish for z = 0. Similarly, we find that ∂_{x_0}∂_{y_0} δE may be written as the integral of an exact form dÂ for a suitable choice of Â.
Again, using Stokes' theorem, this reduces to the integral of (3/2)Â over the boundary. We conclude that for any H satisfying Einstein's equations, δS − δE = C_x + C_y, where C_x and C_y are some functionals linear in H that do not depend on y_0 or x_0 respectively. Now, consider the class of functions H that vanish for sufficiently large x_0² + y_0² at the time t = 0 where we evaluate δS and δE. In this case, fixing any x_0 and taking y_0 → ∞, or fixing any y_0 and taking x_0 → ∞, the left side must vanish. For this to be true on the right side, both C_x and C_y must be constant (as functions of x_0 and y_0), with C_x + C_y = 0. Thus, the right side vanishes for any H that vanishes as x_0² + y_0² → ∞. But more general H can be written as linear combinations of such functions, and since the right side is a linear functional in H, it must vanish for all H. This completes the argument that δS_A = δE^hyp_A for solutions of Einstein's equations.
Proof that δS = δE implies the linearized Einstein's equations
In this section, we go the other direction to show that the relation δS = δE implies that the metric satisfies Einstein's equations to linear order, i.e. that the equivalence of (4.1) and (4.2) implies the relations (4.3).
Given the boundary stress tensor t_μν, let H^EE_μν be the corresponding metric perturbation that follows from Einstein's equations, i.e. the solution of (4.3) satisfying H^EE_μν(0, t, x, y) = (16πG_N/3) t_μν. We will show that there is no other H with these boundary conditions for which δS = δE in all frames of reference.
Suppose there were another H for which δS = δE for all disk-shaped regions in all Lorentz frames. Then the difference Δ = H − H^EE must satisfy Δ_μν(z = 0, t, x, y) = 0 (4.8), and the difference of the corresponding δS and δE integrals must vanish (4.9) for arbitrary R, x_0, and y_0, and in an arbitrary Lorentz frame.
Let us first see the consequences of demanding this result in a fixed frame. To begin, we note that (4.9) may be expanded in powers of R using a basic integral I_{n,m_x,m_y}; we find that (4.9) becomes a sum of terms of the form Δ^{(n)}(x_0, y_0)(I_{n,m_x,m_y} − I_{n,m_x+1,m_y}) (4.12). The vanishing of the terms at order R^{N+2} implies a set of relations (4.13), where the C coefficients can be read off from (4.12). As examples, the first few equations give
We see that this set of equations completely determines the combination Δ_xx + Δ_yy at each order in z in terms of the lower order terms in the expansion of Δ. However, apart from the constraint (4.8) on the boundary behavior (equivalent to Δ^{(0)}_μν = 0), the remaining elements of Δ_μν are completely unconstrained.
To constrain Δ_μν further, we need to use the requirement that the relation (4.9) should hold in an arbitrary Lorentz frame. Thus, for each choice of reference frame, we will have equations analogous to (4.13). Specifically, consider a general boost with velocity β_i; in the equations for a general frame of reference obtained by such a boost, the left sides in (4.13) will be replaced by their boosted counterparts (up to an overall constant factor). Now, consider the general version of the second equation in (4.13) (the first equation already holds by (4.8)). For a fixed x_0 and y_0, this requires the vanishing of a polynomial in β_i for all values of β_i; thus, the polynomial must be identically zero. At order β⁰, this gives Δ^{(1)}_ii(t, x_0, y_0) = 0 as we had before. At order β, we get Δ^{(1)}_it(t, x_0, y_0) = 0. At order β², the remaining component is also forced to vanish, so we have Δ^{(1)}_μν = 0. We can now continue to analyze the remaining equations in (4.13) in turn. Supposing that we have shown Δ^{(k)}_μν = 0 for k < n, the general version of the nth equation in (4.13) requires the vanishing of a polynomial in β_i whose coefficients involve Δ^{(n)}, since the right hand side in (4.13) will be zero. Repeating the analysis above, we conclude that Δ^{(n)}_μν = 0. By induction, this holds for all n, so we have shown that Δ_μν = 0, completing the proof.
Discussion
In this paper, we have seen that to linear order in perturbations about the vacuum state, the emergence of gravitational dynamics in the theory dual to a holographic CFT is directly related to a general relation satisfied by CFT entanglement entropies on ball-shaped regions. This relation is closely related to the First Law of Thermodynamics, but is more general since it applies to arbitrary perturbations of the state rather than perturbations for which the system remains in thermal equilibrium.
While the CFT relation (1.1) is an exact equivalence, we have made use of this relation only at the leading order in 1/N where the entanglement entropy maps over to the extremal surface area. This corresponds to working in the classical limit in the bulk. According to [18], 1/N corrections to the CFT entanglement entropy correspond to bulk quantum corrections including the entropy of entanglement of bulk quantum fields across the extremal surface. It will be interesting to understand the implications of the CFT relation (1.1) beyond the classical level in the bulk, but we leave this for future work.
The derivations in section 4 were written specifically for the case of four-dimensional gravity. However, the proof given in [13] that Einstein's equations imply δS = δE, and our method of proof in section 4.2 that δS = δE implies the linearized Einstein's equations, work for general dimensions. The linearized Einstein's equations we derived are for the metric components in the field theory directions and radial direction of the bulk. Any additional fields in the gravitational theory, including metric components in any compactified directions, are not constrained by the CFT relation we have considered. At linear order, the equations for these fields decouple from the linearized Einstein's equations for the metric in the non-compact directions. Thus, we can say that the universal relation δS = δE is equivalent to the universal sector of the linearized bulk equations.
Our results do not imply that all holographic theories are dual to gravitational theories whose metric perturbations satisfy Einstein's equations. In this paper, we assumed that entanglement entropies are related to areas via the usual Ryu-Takayanagi formula, and that the stress-energy tensor in the dual field theory is related to the asymptotic form of the metric. In more general theories, the entanglement entropy may correspond to a more complicated functional of the bulk geometry and the relation between the stress tensor and asymptotic metric may be modified. In these cases, we expect that the bulk equations will be different, for example involving α′ corrections with higher-derivative terms. However, it may be possible following the methods in this paper to derive the linearized version of these more general equations given a particular choice for the holographic entanglement entropy formula and the holographic formula for the stress tensor.
It will be interesting to see whether the first non-linear corrections to Einstein's equations in the bulk are equivalent to some simple property of entanglement entropies.
Finally, we comment on the relation to the work of Jacobson [4], which partly motivated our investigations. Jacobson realized that Einstein's equations could be derived from the
assumption that the energy flux through a part of any bulk Rindler horizon gives rise to a proportional local change in area of this horizon. Interpreting the area as an entropy, such a relation looks like the first law of thermodynamics. However, in Jacobson's work, it was not clear why areas of segments of an arbitrary bulk Rindler horizon (not necessarily associated with any black hole) should correspond to an entropy, so the origin of the thermodynamic relation remained mysterious.
In our case, the "thermodynamic relation" dS = dE is an exact quantum relation (i.e. not really thermodynamics) derived to hold for the underlying fundamental degrees of freedom associated with our gravitational system. Thus, while our final result (in contrast to Jacobson's work) applies so far only at the linearized level, the starting point is well understood. In detail, the bulk interpretation of our dS = dE relation is somewhat different from Jacobson's starting point (the bulk surfaces/horizons we deal with are global rather than local and the energy has a different interpretation), but the two relations were similar enough to motivate the question of whether Einstein's equations could be derived from the first law of [13].
A.2 Checking that solutions of Einstein's equations satisfy δS = δE
Using these expansions, it is straightforward to verify that any solution of the linearized Einstein's equations (4.3) satisfies δE = δS, as was done originally in [13] and by another alternative approach in section 4.
Thus, we have verified that δS = δE for linearized solutions of Einstein's equations, providing an alternate argument to the one in [13].
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Compact Modeling of Allosteric Multisite Proteins: Application to a Cell Size Checkpoint
We explore a framework to model the dose response of allosteric multisite phosphorylation proteins using a single auxiliary variable. This reduction can closely replicate the steady state behavior of detailed multisite systems such as the Monod-Wyman-Changeux allosteric model or rule-based models. Optimal ultrasensitivity is obtained when the activation of an allosteric protein by its individual sites is concerted and redundant. The reduction makes this framework useful for modeling and analyzing biochemical systems in practical applications, where several multisite proteins may interact simultaneously. As an application we analyze a newly discovered checkpoint signaling pathway in budding yeast, which has been proposed to measure cell growth by monitoring signals generated at sites of plasma membrane growth. We show that the known components of this pathway can form a robust hysteretic switch. In particular, this system incorporates a signal proportional to bud growth or size, a mechanism to read the signal, and an all-or-none response triggered only when the signal reaches a threshold indicating that sufficient growth has occurred.
S1.1: Analysis of the Zds1-PP2A interaction subsystem
Recall that Z, P are the active monomer concentrations of Zds1 and PP2A respectively, Z̄, P̄ the total monomer concentrations, and p, z the fractions of modified sites. From the MF framework, estimate Z = Z̄h₁(z), P = P̄h₂(p), where h_i(x) = e^{γ_i x}/(δ_i + e^{γ_i x}). The Zds1/PP2A dimer is assumed to be active exactly when both monomers are active, so if D is the active dimer concentration and D̄ the total, then D/D̄ = (Z/Z̄)(P/P̄) = h₁(z)h₂(p), that is, D = D̄h₁(z)h₂(p).
The differential equations for this system follow from mass action kinetics. Notice that the dynamics of z depends on D, the active dimer. The dynamics of p depends on the concentration of the active Rho1/Pkc1 dimer C, which is treated for now as a given constant. One can eliminate the variables z_in, p_in and replace them with z, p using the conservation equations z_in + z = 1, p_in + p = 1. Similarly, one can use the mass conservation of Zds1 and PP2A, Z̄ + D̄ = S_Z, P̄ + D̄ = S_P, to eliminate Z̄ and P̄ from the system. Here S_Z, S_P are the total amounts of Zds1 and PP2A including monomer and dimer forms. Finally, the active dimer D can be written in terms of the other variables as described above. In the new equations, the variables governed by the first and third equations converge over time towards constants independent of the other variables. Solving for p at steady state leads to a constant value p̄. The quadratic equation for D̄ at steady state has two solutions; however, one of them is larger than S_Z, S_P and can be discarded. Thus over time D̄ converges towards a steady state value D̄₀ independent of C.
Putting it all together, the system is reduced to a single differential equation for z. As stated in the main text, the solutions of the system converge towards the steady state whenever z does, and one can obtain bistability depending on parameter values. The differential equation for z can be rewritten so that the two terms on the right hand side are described in Figure 3B as separate functions, and the steady states correspond to the intersections of the two graphs. By comparing the two functions on either side of each steady state, one can determine that the first and third steady states are stable, and that the second steady state is unstable. This is a general property of the system whenever there are three steady states, since the sigmoidal shape of h₁(z) is preserved under other parameter values, and therefore when multiplied by the constant h₂(p̄) and the function (1 − z), the qualitative shape of the graph is preserved. Therefore one can conclude that the Zds1/PP2A interaction system is bistable whenever there are three steady states.
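The graphical steady-state argument can be illustrated numerically. With a sigmoidal h of the form given above and hypothetical rate constants (alpha, beta, gamma, delta below are illustrative, not fitted values from the paper), the balance between a sigmoidal activation term and a linear decay term can have three intersections, giving bistability:

```python
import numpy as np

def h(x, gamma, delta):
    """Sigmoidal MF activation h(x) = e^{gamma x} / (delta + e^{gamma x})."""
    return np.exp(gamma * x) / (delta + np.exp(gamma * x))

gamma, delta = 20.0, np.exp(10.0)   # hypothetical: half-activation at z = 0.5
alpha, beta = 1.0, 0.3              # hypothetical lumped rate constants

def dzdt(z):
    # Activation term alpha*h(z)*(1-z) balanced against decay beta*z.
    return alpha * h(z, gamma, delta) * (1.0 - z) - beta * z

z = np.linspace(0.0, 1.0, 2001)
f = dzdt(z)
# Count sign changes of dz/dt: each one is a steady state.
n_roots = int(np.sum(np.sign(f[:-1]) * np.sign(f[1:]) < 0))
print(n_roots)  # three steady states: two stable, one unstable
```

The outer two intersections are stable and the middle one unstable, exactly as argued from the shapes of the two curves.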
The canonical output of the core model is the active PP2A/Zds1 dimer concentration D. This value can be calculated from the equation above, but more easily from the steady state values of z using the relation D = D̄₀h₁(z)h₂(p̄). This is the equation used to plot the graphs of C versus D in Figure 3C.
S1.2: Analysis of the Rho1-Pkc1 interaction subsystem
This system involves the proteins Rho1 and Pkc1, denoted R and K respectively in their active forms. Total Rho1 and Pkc1 monomer concentrations, regardless of whether active or inactive, are R̄, K̄, while k denotes the fraction of modified Pkc1 sites. The MF approximation and several conservation of mass equations are used, and mass action kinetics yields a system of differential equations describing the behavior of this system. In the following, the variables k_in and K̄ are eliminated and replaced by 1 − k and S_K − C̄, respectively. A steady state analysis of this system is now carried out. Even though the system contains multiple variables, it is still possible to reduce all equations to a single 1D equation at steady state. After setting all left hand sides to zero, notice that the equation for k only depends on C (and the fixed input parameter D̄). One can solve it to find k in terms of C, where we introduce the notation Q_i = q_{−i}/q_i for every i.
Next, the steady state equations for R_in and R depend linearly on the variables R_in, R.
The determinant of the linear system is the denominator of the resulting expression, and it is larger than zero (S_K − C̄ = K̄ ≥ 0), so that the solution is always unique. For convenience, call Y the denominator of this expression. Replacing into the equation for C̄, we have Y dC̄/dt = q_4(S_K − C̄)[q_6 q_{−4} C + q_{−9} q_{−4} C + q_6 q_9 v] − Y q_{−4} C̄. By multiplying out the right hand side, the first two terms of each summand cancel, so that Y dC̄/dt = q_4 q_6 q_9 (S_K − C̄)v − q_{−4} q_{−6} q_{−9} C + q_{−4} q_{−9} C(q_6 + q_{−9}). Dividing through by q_4 q_6 q_9, multiplying both sides by h_3(k), and using the identity C = C̄h_3(k), one obtains an equation for C alone. Finally, replacing k by its expression in terms of C gives the steady state condition. In order to determine the stability of the steady states, notice that h_3(k)Y/(q_4 q_6 q_9) > 0, even though this expression might depend on k and C. By comparing the signs of the two summands on the right hand side of the expression for dC/dt, one can determine when the steady states are stable or unstable. In the example in Figure 3D, once again one can see that out of the three steady states, two are stable and one is unstable.
S1.3: Rule-Based Derivation of MWC
We begin by introducing in detail the assumptions that lead to the MWC model in the context of multisite phosphorylation. Recall that a substrate P is assumed to be phosphorylated nonsequentially by a kinase E, that the phosphorylation is cooperative, and that the number of sites is large enough that each individual phosphorylation has a small effect on substrate activation. The substrate may be either active or inactive at any given time, due to structural transitions or the binding of the substrate to another molecule. Based on these assumptions, we start with a basic model containing a large number of variables and reduce it to the MWC model.
This also relates our modeling framework to the approach known as rule-based modeling, where a series of chemical reactions is defined using a streamlined algorithm, and high-powered computing is used to handle the resulting large number of variables.
Suppose that J ∈ {0,1}^n is any vector of n zeros and ones indicating the phosphorylation state of the substrate, also known as its phosphoform. A_J and I_J indicate the concentrations of the active and inactive protein phosphoforms in state J, respectively. For any J, K ∈ {0,1}^n, we say that J ≺ K if J_i ≤ K_i for every i and |K| = |J| + 1, that is, the K-phosphoform has exactly one more phosphorylation than the J-phosphoform. Define the chemical reactions displayed in Figure S1A, where J is any fixed index and K is such that J ≺ K. That is, every A_J can transition into I_J and back, and A_J can also become phosphorylated at any of its remaining locations at a rate proportional to E. Every additional phosphorylation makes the transition from active to inactive substrate slightly more difficult, since the transition rate is multiplied by ε^{|J|}, where ε < 1. Since each phosphorylation is assumed to have a small effect, we can assume that ε is close to 1.
Often there are so many reactions in such an approach that a mathematical analysis is not possible, but it can be carried out in this case. Putting together all reactions, we obtain a rate equation for every index J. The idea is to cluster together many variables A_J by carrying out a change of variables: define A_i = Σ_{|J|=i} A_J and I_i = Σ_{|J|=i} I_J. The rate equations for A_0, A_n, and I_i for i = 0…n can be calculated in the same way, and the full system reduces to exactly the system of equations for A_i, I_i in the diagram shown in Figure 2B. In this way, MWC is a sequential shorthand model for a nonsequential system with 2 × 2^n variables. The same framework can be used if the sites are modified through any covalent enzymatic reaction such as acetylation or methylation. They could also be non-covalent ligand binding sites, such as in a receptor complex, with a background ligand E in large enough amount that its concentration is not affected by its binding to the receptor. Recall that to calculate the activation function h(x), in the main text we assumed an approximate balance at steady state for the reaction of activation and inactivation after i phosphorylations. After calculating the fraction of active sites with this number of phosphorylations, this yields the activation function with γ = −n ln ε and δ = L_1 / L_2. The balance between A_i and I_i is guaranteed for the parameters as defined above since the MWC system satisfies the property of detailed balance [1]. However, it can also be approximately satisfied when detailed balance fails, especially when the transition rates L_1, L_2 are large compared with other parameters. Detailed balance is usually satisfied for systems that conserve energy, such as multisite ligand binding.
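The exactness of this lumping can be checked numerically. The sketch below (with toy parameter values that are assumptions for illustration, not taken from the text) integrates the full nonsequential system of 2 × 2^n phosphoform variables and the reduced sequential chain with the same Euler scheme, then compares the lumped sums A_i = Σ_{|J|=i} A_J against the reduced variables:

```python
from itertools import product

# Toy parameters (assumed for illustration; not from the text)
n = 3                 # number of phosphorylation sites
kE = 1.2              # phosphorylation rate constant times kinase level E
L1, L2 = 0.8, 1.5     # active->inactive and inactive->active base rates
eps = 0.9             # cooperativity factor, close to 1

def simulate_full(T=5.0, dt=1e-3):
    """Simulate all 2 * 2^n phosphoform variables A_J, I_J."""
    states = list(product([0, 1], repeat=n))
    A = {J: 0.0 for J in states}
    I = {J: 0.0 for J in states}
    A[(0,) * n] = 1.0                    # start fully active, unphosphorylated
    for _ in range(int(T / dt)):
        dA = {J: 0.0 for J in states}
        dI = {J: 0.0 for J in states}
        for J in states:
            m = sum(J)
            # active <-> inactive; inactivation slowed by eps^|J|
            flip = L1 * eps**m * A[J] - L2 * I[J]
            dA[J] -= flip
            dI[J] += flip
            # phosphorylation of each remaining site (active form only)
            for s in range(n):
                if J[s] == 0:
                    K = J[:s] + (1,) + J[s + 1:]
                    dA[J] -= kE * A[J]
                    dA[K] += kE * A[J]
        for J in states:
            A[J] += dt * dA[J]
            I[J] += dt * dI[J]
    # lump phosphoforms by number of phosphorylations
    Ai = [sum(A[J] for J in states if sum(J) == i) for i in range(n + 1)]
    Ii = [sum(I[J] for J in states if sum(J) == i) for i in range(n + 1)]
    return Ai, Ii

def simulate_reduced(T=5.0, dt=1e-3):
    """Simulate the 2(n+1) lumped MWC variables A_i, I_i."""
    A = [0.0] * (n + 1)
    I = [0.0] * (n + 1)
    A[0] = 1.0
    for _ in range(int(T / dt)):
        dA = [0.0] * (n + 1)
        dI = [0.0] * (n + 1)
        for i in range(n + 1):
            flip = L1 * eps**i * A[i] - L2 * I[i]
            dA[i] -= flip
            dI[i] += flip
            if i < n:                    # (n - i) sites left to phosphorylate
                dA[i] -= kE * (n - i) * A[i]
                dA[i + 1] += kE * (n - i) * A[i]
        for i in range(n + 1):
            A[i] += dt * dA[i]
            I[i] += dt * dI[i]
    return A, I

Af, If_ = simulate_full()
Ar, Ir = simulate_reduced()
max_err = max(max(abs(a - b) for a, b in zip(Af, Ar)),
              max(abs(a - b) for a, b in zip(If_, Ir)))
```

Because the lumping is an exact linear reduction, the two trajectories agree up to floating-point roundoff, not merely approximately.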
Systems that require energy to function, such as phosphorylation, which requires ATP, are known as "dissipative" and do not necessarily satisfy this property, in which case the transition rates may play a more important role when compared with other rates in the system. In the case of the cell cycle checkpoint proteins, the standing assumption is that their activity is determined by internal transitions or by binding to a regulatory protein, which would need to be compared with, e.g., the rate of the different enzymatic reactions.
Assessment of Cassia siamea stem bark extracts toxicity and stability in time of aqueous extract analgesic activity
1 Laboratoire de Biochimie et Pharmacologie, Faculté des Sciences de la Santé, Université Marien NGOUABI, Brazzaville, B.P. 69, Congo. 2 Centre d’Etude et de Recherche Médecins d’Afrique (CERMA), B.P. 45, Brazzaville, Congo. 3 Laboratoire de Pharmacologie, Centre d’Etudes sur les Ressources Végétales (CERVE), B.P.1249, Brazzaville, Congo. 4 Institut de Chimie des Substances Naturelles (ICSN-CNRS), 1 Avenue de la Terrasse-Bat 27, 91198 Gif-sur-Yvette Cedex, France. 5 Service de Parasitologie et Mycologie, Hôpital de Rangueil, CHU Toulouse, 1 Avenue Jean Poulhes, TSA 50032, 31059 Toulouse Cedex 9, France. 6 Laboratoire de Chimie de Coordination (LCC-CNRS), 205 Route de Narbonne, 31077 Toulouse Cedex 4, France.
INTRODUCTION
Cassia siamea Lam (Fabaceae) is a medicinal plant widely used in Asia, Africa, Australia and South America against constipation, malaria, and associated conditions such as fever and jaundice (Ahn et al., 1978; Nsonde et al., 2009; Kaur et al., 2006). A decoction of bark is given to diabetic patients, while a paste is used as a dressing for ringworm and chilblains. The roots are used in antipyretic preparations, and the leaves are present in remedies for constipation, hypertension, insomnia, and asthma (Ahn et al., 1978). Its antioxidant properties help protect against heart disease in people who consume it regularly as food (Kaur et al., 2006). Antiplasmodial, analgesic and anti-inflammatory activities have been validated by pharmacological studies (Gbeassor et al., 1989; Nsonde et al., 2005; Mbatchi et al., 2006), as well as antioxidant and antihypertensive activities (Kaur et al., 2006), laxative activity (Elujoba et al., 1989), sedative activity (Thongsaard et al., 1996; Sukma et al., 2002), antibacterial activity (Lee et al., 2014) and hepatoprotective activity (Kannampalli et al., 2005; 2007). Insecticidal activity of Cassia siamea extracts and pure compounds was recently reported (Kamara et al., 2011; Mamadou et al., 2014).
Given its ethnomedical and traditional uses and its pharmacological, nutritional and phytochemical properties, C. siamea has proved to be a plant of great interest and a therapeutic food. Since this study's ultimate goal is the development of an improved traditional plant medicine, it is essential to investigate thoroughly the toxicity of active extracts in order to guide any form of exploitation of this plant in medicine or the pharmaceutical industry.
This study aims to assess the cytotoxicity, the acute and subacute toxicity of aqueous and alcoholic extracts of C. siamea stem bark, and the stability of the lyophilised aqueous extract.
Plant material
Stem barks of C. siamea Lam were collected from Mindouli (Pool, Congo) in May 2007 and authenticated by the botanists of the Centre d'Etudes sur les Ressources Végétales (CERVE), Congo Brazzaville. A voucher specimen has been deposited at the Herbarium of the botanic laboratory under reference number 128/16/01/1960/coll:P.Sita.
Preparation of extracts
Dried and powdered stem barks (1000 g) were successively extracted for 48 h by maceration at room temperature with petroleum ether (CSE1), dichloromethane (CSE2) and ethanol (CSE3). For each organic extract, 5 L of solvent was used, three times successively. All organic extracts were concentrated to dryness under reduced pressure in a rotary evaporator to give the following yields: 0.62, 0.92 and 0.8% (w/w), respectively. The marc resulting from the extraction with ethanol was extracted for 10 min in boiling water (1.5 L). The cooled aqueous extract was centrifuged (7000 rpm for 30 min) and concentrated under reduced pressure with a rotavapor before lyophilization to give CSE4 (yield: 1.1%).
Animals
Male and female Wistar rats (150 to 250 g) obtained from the Health Science Faculty of Brazzaville were used. They were housed under standard conditions (25 ± 5°C, 40 to 70% RH, 12 h light/dark cycle) and fed with a standard feed and water ad libitum. The rules of ethics published by the International Association for the Study of Pain (Zimmermann, 1983) were observed.
Acute toxicity study
Wistar rats, male and female, were divided into groups of ten animals. The control groups received distilled water p.o. (10 ml/kg). The aqueous (CSE4), ethanol (CSE3), petroleum ether (CSE1) and dichloromethane (CSE2) extracts were given p.o. at doses of 100, 400, 1000, 2000, 2600 and 3000 mg/kg to the treated groups. The mortality rate within a 72 h period was determined, and the LD50 was estimated according to the method described by Miller and Tainter (1944).
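As a rough illustration of this log-dose/probit procedure, the LD50 can be estimated by regressing probit-transformed mortality on log10(dose). The mortality counts below are hypothetical values chosen for the sketch, not the paper's raw data:

```python
from statistics import NormalDist
from math import log10

# Hypothetical dose-mortality data (illustrative only, not the paper's numbers)
doses = [100, 400, 1000, 2000, 2600, 3000]   # mg/kg
dead = [0, 1, 3, 6, 8, 10]                   # deaths out of n per group
n_per_group = 10

def miller_tainter_ld50(doses, dead, n):
    """Probit-regression LD50 estimate in the spirit of Miller & Tainter."""
    nd = NormalDist()
    xs, ys = [], []
    for d, k in zip(doses, dead):
        p = k / n
        # standard correction for 0% and 100% mortality groups
        if p == 0.0:
            p = 1 / (4 * n)
        elif p == 1.0:
            p = (4 * n - 1) / (4 * n)
        xs.append(log10(d))
        ys.append(5 + nd.inv_cdf(p))         # probit = 5 + z-score
    # least-squares line: probit = a + b * log10(dose)
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return 10 ** ((5 - a) / b)               # dose at which probit = 5 (50%)

ld50 = miller_tainter_ld50(doses, dead, n_per_group)
```

With these made-up counts the estimate lands near 1000 mg/kg, i.e. in the same range as the paper's reported CSE1/CSE2 values.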
Cytotoxicity
The cytotoxicity of the extracts was tested on KB cells (cancerous cells of the human epidermis) and Vero cells (kidney cells from the African green monkey) using the protocol described by Mbatchi et al. (2006). These cells were grown in a culture medium consisting of RPMI 1640 (Gibco BRL, Paisley, Scotland), 25 mM HEPES, 30 mM NaHCO3 (Gibco BRL) and 5% fetal calf serum (Boehringer, Germany). Cultures were kept in an incubator at 37°C and 5% CO2. For the assessment of cytotoxicity, cells were distributed on 96-well Costar culture plates, 2 × 10^4 cells diluted in a volume of 100 µl being added to each well. The controls were cultured without extracts. Cell growth was estimated by the incorporation of tritiated hypoxanthine (3H) after 72 h of incubation. The results of the test groups were compared with controls. The reference used was taxotere (10 µg/ml) provided by Aventis Pharma (Anthony, France). The tests were performed in triplicate.
Subchronic toxicity
This study concerns only the aqueous extract. This extract was chosen because of its popularity in traditional medicine, its ease of preparation, and the pharmacological effects against malaria, oedema and pain that the study has highlighted (Nsonde et al., 2010).
Forty Wistar rats, males and females, were divided into eight groups of five rats each. These eight groups were then divided into two lots of four groups each. The control lot was treated with 10 ml/kg/day of distilled water per os for up to 39 days. The test lot was treated with 200 mg/kg/day of aqueous extract (CSE4) per os for up to 39 days. Evaluation of toxicity was performed at days 7, 21, 28 and 40. At each assessment, animals were subjected to ether anaesthesia and a volume of blood was collected at the pre-orbital plexus. A portion of blood was collected in vacutainer tubes with lithium heparin for determination of haematocrit. A drop of each blood sample was deposited on a strip for the determination of glycaemia. The remaining blood was centrifuged at 3000 rpm for 10 min. Serum aliquots were collected for enzymatic analysis (King, 1965b; Panigrahi et al., 2014). The biochemical parameters measured were AST (aspartate aminotransferase), ALT (alanine aminotransferase), creatinine and ALP (alkaline phosphatase) (Luximon-Ramma et al., 2001).
Noble organs (liver, heart, kidneys, and brain) were removed, weighed and observed macroscopically. The weights of the animals were determined at each evaluation. AST (GOT) and ALT (GPT) were assayed by the method described by Wooten (1964). This method is based on the ability of the enzymes to form pyruvate, which reacts with 2,4-dinitrophenylhydrazine in hydrochloric acid to give the hydrazone. The hydrazone turns into an orange complex in alkaline medium, and the colour was measured spectrophotometrically at 540 nm. Alkaline phosphatase (ALP) was measured by the method described by King (1965a), based on the determination of phenol liberated by enzymatic hydrolysis of phenyl disodium phosphate at pH 10. The liberated phenol was estimated by spectrophotometry at 640 nm. Creatinine forms a photometric complex with alkaline picrate; a kinetic assay at 490 nm helped to overcome interference.
Study of the stability of analgesic activity in time
The study aimed at assessing, at intervals over 2 years, the analgesic activity of the aqueous extract (CSE4). CSE4 takes the form of a lyophilised powder, kept away from heat and maintained under the standard temperature and pressure conditions of the laboratory. It was placed in a glass bottle that could be closed tightly after use, the whole being wrapped in aluminium foil. The analgesic activity of C. siamea Lam stem bark on animal paw pressure was studied according to the Randall and Selitto method (1957), using an analgesimeter (Ugo Basile 1740, Italy). Wistar rats were assigned to three groups of five animals each. The first group received distilled water (10 ml/kg) p.o. The second group received CSE4 at the dose of 200 mg/kg p.o. The third group, which served as reference, received morphine (2 mg/kg). Analgesic activity was measured 1 h after administration of the test and standard drugs (Winter et al., 1962; Abena et al., 2003). In rats subjected to pressure on the left paw, the study measured the intensity of the pain threshold that triggers withdrawal of the animal's paw and determined the reaction time (Woolfe and MacDonald, 1944).
Statistical analysis
Values were expressed as mean ± S.E.M. Statistical significance for analgesic activity was calculated using a one-way analysis of variance (ANOVA). Significant differences between means were determined by Duncan's multiple-range test. Values of p < 0.05 were considered significant.
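The F statistic underlying such a one-way ANOVA can be sketched as follows; the reaction-time values passed in at the end are made up purely for illustration, not taken from the study:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    k = len(groups)                       # number of groups
    N = sum(len(g) for g in groups)       # total number of observations
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Hypothetical reaction times (s) for three treatment groups
f_stat = one_way_anova_f([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0],
                          [7.0, 8.0, 9.0]])   # = 27.0
```

The resulting F value would then be compared against the F distribution with (k − 1, N − k) degrees of freedom at the chosen significance level.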
Study of acute toxicity
No mortality was observed in the groups of rats treated with the aqueous (CSE4) and ethanol (CSE3) extracts. LD50 values for the petroleum ether (CSE1) and dichloromethane (CSE2) extracts were estimated at 1300 and 1275 mg/kg, respectively. The minimum lethal dose and maximum tolerated dose were 1000 and 1600 mg/kg for CSE1 and CSE2. Signs of toxicity included dyspnoea and disorders of motor function.
Cytotoxic study
The values from the cytotoxicity study are given in Table 1. CSE3 and CSE4 are completely devoid of toxicity against the KB and Vero cell lines, and the two other extracts present very low cytotoxicity (at most 15% inhibition of cell growth).
Subchronic toxicity
There is no difference in organ weights, haematocrit or macroscopic appearance of organs between the control and test groups (Figure 1). There is a very significant and progressive increase in the body weight of the animals treated daily for 40 days as compared to controls (Figure 2). Except for the glucose level, which increases significantly from day 21, there is no significant difference in biochemical parameters (Table 2).
Stability of analgesic effect
Kept under ordinary conditions, away from heat and light, the lyophilised aqueous extract powder retained its analgesic property for over two years, as can be seen in Figures 3a and b.
DISCUSSION AND CONCLUSION
The toxic or beneficial effects of a drug have an intensity that depends on the dose and on its plasma or tissue concentration. Toxic effects on organs are characterized by hypertrophy of these organs or, when severe, by weight loss. The aqueous extract (CSE4) showed no toxicity affecting the weight of the noble organs: heart, kidney, liver, pancreas and brain. Some biochemical parameters (AST, ALT, ALP) are among the most sensitive markers employed in the diagnosis of hepatic lesions, owing to their localization in the cytoplasm of liver cells. They are released into the blood stream after cellular damage in the liver (Pradeep et al., 2007; Sallie et al., 1991). A high transaminase level is seen as an indicator of liver damage, and elevation of serum ALT is considered the most sensitive indicator (Zimmerman et al., 1993; Ha et al., 2001; Huseby et al., 1993). The serum levels of AST, ALT and ALP show that the aqueous extract does not induce the release of cytoplasmic enzymes into the blood; under these conditions, CSE4 therefore shows no observable liver toxicity. Moreover, antioxidant and hepatoprotective properties of this plant have also been highlighted (Kusamran et al., 1998; Kaur et al., 2007).
Creatinine is a very good indicator of glomerular function, and an increase in the blood creatinine level is monitored to diagnose possible renal dysfunction (Pradeep et al., 2005). The aqueous extract used under these conditions does not affect the kidney. The increase in blood glucose from the 21st day is probably due to a dysfunction of glucose metabolism; the weight gain could then be explained by the increase in blood glucose, as observed in type II diabetes. These results are completely opposite to those found in the study of the subchronic toxicity of barakol, one of the active molecules isolated from this plant, which showed an increase in serum ALT and AST levels, weight loss and a reduction in glucose (Ayutthaya et al., 2005; Pumpaisalchai et al., 2003).
Because of this pharmacological stability, the lyophilised aqueous extract of C. siamea bark could be considered as an enhanced phytomedicine, after a complete pharmacological study to understand the mechanism of the weight gain induced by this extract. To avoid any disturbance of glucose metabolism, the study advocates the use of this aqueous plant extract in treatments limited to seven days, or in repeated treatments at reduced intervals, for non-diabetic patients, and caution in any treatment of diabetic patients.
Figure 1.
Figure 1. Effect of the lyophilised aqueous extract of C. siamea stem bark on organ weights of rats subjected to subchronic treatment for 40 days; values are expressed as mean ± SD of five animals.
Figure 2. Figure 3.
Figure 2. Weight gain of animals chronically treated with Cassia siamea stem bark aqueous extract (CSE4) for 40 days. The results are expressed as geometric mean ± standard deviation, n = 5.
Table 1.
Cytotoxicity of C. siamea bark extracts on KB and Vero cell lines.
Table 2.
Effect of the lyophilised aqueous extract of C. siamea stem bark on blood serum biochemistry in rats; values are expressed as mean ± standard deviation of five animals, ***p < 0.001 compared with control.
Design of Complexly Graded Structures inside Three-Dimensional Surface Models by Assigning Volumetric Structures
An innovative approach for designing complex structures from STL datasets, based on novel software for assigning volumetric data to surface models, is reported. The software allows realizing unique complex structures using additive manufacturing technologies. Geometric data as obtained from imaging methods, computer-aided design, or reverse engineering, which exist only in the form of surface data, are converted into volumetric elements (voxels). Arbitrary machine data can be assigned to each voxel, thereby enabling the implementation of different materials, material morphologies, colors, porosities, etc. within given geometries. The software features an easy-to-use graphical user interface and allows simple implementation of machine data libraries. To highlight the potential of the modularly designed software, an extrusion-based process as well as a two-tier additive manufacturing approach for short fibers and a binder are combined to generate three-dimensional components with complex grading on the material and structural level from STL files.
Introduction
Additive manufacturing technologies are based on the layered construction of material into a finished component.
With these methods, different materials cannot be applied within a part, in particular not within one layer.
In the case of the open-space or layer construction methods, it is possible to use various materials [23,24].
However, the limitations for parts with complex material and structural composition lie in the properties of the file formats [25]. The most common file format in generative production methods is the STL format (standard tessellation language/standard triangulation language) [26][27][28]. In niche applications, the AMF format (additive manufacturing format) and the OBJ format (object format) are used as well [29,30].
A sphere is used to illustrate the file formats. The STL file format provides only surface information and uses triangles, often referred to as vertices, to represent the surfaces, as shown in Figure 1(a). In the AMF or OBJ format, the surface information can also be supplemented with properties (material, textures, or metadata), emphasized by a red color scheme in the triangles on the top of the sphere in Figure 1(b). STL files as well as AMF or OBJ files represent geometric bodies exclusively by means of surface information. Regardless of whether the files are stored as surface or solid bodies, they contain no volume data. Basically, the objects are hollow inside and have "outer walls" that are infinitesimally thin. The only difference in solid bodies is the representation of a filled body. Figure 2(a) shows a graphic representation of the previously discussed sphere cut in half using the software Blender (Blender Foundation). As in all other software for editing or displaying STL files or similar formats (Netfabb, Cura, Slic3r, and Repetier, amongst others) as well as CAD software (FreeCAD, SolidWorks, AutoCAD, and CATIA, amongst others), only the surface of the structure can be addressed, as it is simply impossible to select or click on structures other than the surface triangles.
This points up the decisive limitation of all surface-based file formats: within surface-approximated geometries, for example from computed tomography (CT) recordings, magnetic resonance tomography (MRT) representations, or 3D scans, property assignments cannot be implemented.
In CAD programs, however, objects that contain volume information can be designed and stored in the AMF or OBJ format, but again have to be regarded as individual surfaces approximated by triangles. It is thus possible to realize structures with grading on the material or structural level, supposing that they are designed from scratch, as the two-color sphere in Figure 2. However, assigning properties within previously defined or given bodies as obtained from CT scans, MRI scans, or radiographs for the determination of defect geometries in regenerative medicine, or from 3D scans in reverse engineering, cannot be carried out in these programs because of the surface-based representation and the related restrictions. Figure 3(a) shows a CT scan of the lower spine and an STL file derived from that scan containing geometry information of a lumbar vertebra exhibiting a complex geometry. Substantial differences in the structural composition of the STL file can be observed in comparison with the anatomy of a vertebra (Figure 3(c)). The STL file features a hollow body and shows almost no structures in the area of the spongy bone in the center of the vertebral body, and infinitesimally thin walls instead of the dense cortical bone around the spongy bone structure.
There is no software yet available to fill certain regions with different materials or structural variations within STL, AMF, or OBJ files of parts with complex geometry.
Software for Accessing the Inner Structure of Surface-Based Bodies
In addition to surface-based file formats, bodies can also be represented by volumetric elements (voxels). Graphic representations of spheres in voxel format with assigned metadata are shown in Figure 4. This approach is mostly known from video games such as Minecraft (Mojang/Microsoft Studios, 2009) and Blade Runner (Virgin Interactive, 1997) or simulations [34] to represent terrain features, and is also widely used in medical imaging formats such as DICOM® [35,36]. However, voxel formats are not used for designing implants for regenerative medicine, prosthetic components, or 3D printing applications in reverse engineering, as slicer software is usually developed for STL files. Hence, while it is possible to 3D print or display complex geometries with different surfaces, it is not possible to realize grading on the material or structural level inside the structures.
To access the inner structure of surface-based bodies, novel software for segmenting the structures into voxels and manipulating them is developed. The software is capable of processing STL files in ASCII format and is developed in C# within the development environment Visual Studio Community (Microsoft Corp.). Figure 5 shows the graphical user interface (GUI) that was created within the Windows Presentation Foundation (WPF) framework. After importing a file, it is automatically converted into the AMF format and can be viewed, rotated, and zoomed in and out on the "AMF" tab (Figure 5(a)). The "Slice" tab, as visualized in Figure 5(b), is used to slice the body and to additionally implement a rectangular grid. Slicing thickness and grid size can be adjusted arbitrarily and separately. In default mode, the grid size matches the slicing thickness.
Thus, the grid subdivides the body into cubic voxels. Generating the slice data for a graphical representation of the contours of the body requires the consideration of several special cases related to the surface representation by means of triangles. During the generation of a single slice, necessarily some triangles are cut by the section plane, as evident in Figure 6(a). In total, there are 10 cases describing the situational relations between section plane and triangles [37]. While simple cases such as triangles located completely above or below the section plane are easy to process, five specific cases have to be considered more closely (Figure 6(b)):

(1) All three edges are located on the section plane
(2) Exactly two edges are located on the section plane
(3) One edge is located above, one below, and one on the section plane
(4) One edge is located above/below the section plane and two edges are located on the other side
(5) Exactly one edge is located on the section plane

These cases have a significant effect on the calculation of the intersection points. The coordinates of the edges of all triangles describing the edited body are stored in a separate class (triangle class) within the software. They are processed during slice data generation and serve for intersection point calculation. After a certain coordinate has been used for calculation, it may be erased from the to-be-processed data. However, depending on the case and on the ratio between slicing thickness and triangle size, certain coordinates have to be used for the calculation of the next layer and must not be erased. To ensure a correct calculation, novel triangles are generated according to Figure 7(a). The triangle (defined by a, b, and c in Figure 7(a)) is divided into three smaller triangles using the intersection points (P1 and P2) with the current section plane.
In the case described here, the coordinates a and b can be erased from the triangle class, and only the triangle defined by P1, P2, and c is used for calculating the intersection points in the next layer.
After calculation of all intersection points, a polygon course is generated automatically and displayed for each layer in the slice tab. The software layout also allows processing more complex bodies, such as the "test your 3D printer! v2" file from Thingiverse (MakerBot Industries, LLC), as illustrated in Figure 7(b) [38].
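The generic intersection computation can be sketched as follows. This is a simplified stand-in for the software's triangle-class logic, not its actual implementation; the degenerate cases (1)-(5), where edges lie exactly on the section plane, are deliberately left out:

```python
def slice_triangle(tri, z0, eps=1e-9):
    """Intersect one surface triangle with the horizontal plane z = z0.

    `tri` is a tuple of three (x, y, z) vertices. Returns the 2D intersection
    points (generic case: the two endpoints of a contour segment). Degenerate
    cases with vertices lying exactly on the plane are not handled here.
    """
    above = [v for v in tri if v[2] > z0 + eps]
    below = [v for v in tri if v[2] < z0 - eps]
    if not above or not below:
        return []                         # triangle entirely on one side
    # the "lone" vertex sits alone on one side; the other two form the pair
    lone = above[0] if len(above) == 1 else below[0]
    pair = below if len(above) == 1 else above
    pts = []
    for v in pair:
        t = (z0 - lone[2]) / (v[2] - lone[2])     # linear interpolation factor
        pts.append((lone[0] + t * (v[0] - lone[0]),
                    lone[1] + t * (v[1] - lone[1])))
    return pts

# one triangle with a lone vertex below the plane z = 1
segment = slice_triangle(((0.0, 0.0, 0.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)), 1.0)
```

Collecting such segments over all cut triangles and chaining them end-to-end yields the polygon course described in the text.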
Assignment of Multiple Properties and g-Code to Subvolumes within Given Complex Geometries.
The imported body is divided into voxels by the rectangular grid implemented in the "Slice" tab. Thus, all voxels are accessible by scrolling through the single layers. These features enable assigning arbitrary properties to every single voxel within any given geometry. The properties assigned to each voxel are used to generate machine-readable code. As g-code is the most common numerical control programming language, it is used for further processing. Specific commands are filed by means of a user-friendly editable .txt file. It can be adjusted depending on the manufacturing technology used. The standard file for extrusion-based additive manufacturing processes is shown in Figure 8(a). It contains the material id (matid), the number of layers necessary to fill one millimeter (fillings/mm), the path distance (hatch), the path arrangement (pattern), the digital output command being used to control the extrusion nozzle (tool), the speed of the tool (speed), and the zero position of the tool used (X-, Y-, and Z-coordinates). The strand thickness can be calculated from the "fillings/mm" column, as these values are used for the travel of the z-position to lay the strands directly on top of each other. Thus, according to Figure 8(a), the strand thickness follows from the information provided by the "fillings/mm" column and the slice thickness. The file shown in Figure 8(a) leads to different material depositions. The patterns with "matid 1" and "matid 2" are extruded from the same nozzle (tool 1) and at the same speed, but with different path distances. For "matid 1," the strand thickness matches the path distance and thus fills the voxel area in top view, creating a dense structure (Figure 8(b), top left). "Matid 2" deposits the 125 µm strands at a path distance of 250 µm and thus exhibits a filling level of 50%. For "matid 3" and "matid 4," another tool is used to deposit larger strands in different patterns.
Figure 9 shows three different structures manufactured from the same cuboid STL file (20 mm × 20 mm × 4 mm). Two extruding systems with nozzles 0.4 mm in diameter (Nordson EFD), filled with differently colored clay, were used to manufacture different structures within the STL file. The left structure was manufactured with both nozzles using the same path distance, leading to a laydown of one material (light-blue colored clay) in the inner zone and the other material (orange colored clay) in the outer part, with both regions featuring a dense path spacing. The structure in the middle was manufactured using solely one nozzle, following dense path spacing in the outer part and a 0.8 mm path spacing in the inner region, leading to 50% porosity. The structure on the right was manufactured with both nozzles using different path distances, generating a structure featuring both different materials and different path spacing and thus porosity. The different material grading, porosity grading, or combination of both within the lattice structures was realized within an STL file that usually solely defines the outer geometry, by making use of two extrusion nozzles.
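A minimal sketch of how one row of such a configuration file could be turned into g-code for a single voxel is shown below. The column names follow the text, but the concrete values, the `G1` move syntax, and the digital-output command are illustrative assumptions, not the software's actual machine interface:

```python
# One row of the editable configuration file (values assumed for illustration)
config = {"matid": 2, "fillings_per_mm": 8, "hatch": 0.25,   # path distance (mm)
          "pattern": "zigzag", "tool": "M106", "speed": 600,
          "x0": 0.0, "y0": 0.0, "z0": 0.0}

def gcode_for_voxel(cfg, vx, vy, vz, size=1.0):
    """Emit zig-zag g-code filling one cubic voxel of edge `size` (mm)."""
    lines = [f"; voxel ({vx},{vy},{vz}) matid {cfg['matid']}", cfg["tool"]]
    layer_h = 1.0 / cfg["fillings_per_mm"]     # strand height from fillings/mm
    x_left = cfg["x0"] + vx * size
    y_front = cfg["y0"] + vy * size
    n_paths = int(size / cfg["hatch"])         # hatch = path distance
    z = cfg["z0"] + vz * size + layer_h
    for i in range(n_paths):                   # alternate direction: zig-zag
        y = y_front + i * cfg["hatch"]
        x_a, x_b = ((x_left, x_left + size) if i % 2 == 0
                    else (x_left + size, x_left))
        lines.append(f"G1 X{x_a:.3f} Y{y:.3f} Z{z:.3f} F{cfg['speed']}")
        lines.append(f"G1 X{x_b:.3f} Y{y:.3f} Z{z:.3f} F{cfg['speed']}")
    return lines

code = gcode_for_voxel(config, 0, 0, 0)
```

Choosing a hatch equal to the strand width would give the dense filling of "matid 1", while a doubled hatch reproduces the 50% filling level described for "matid 2".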
Additive Manufacturing of Complexly Structured Lattice Structures from Surface Models.
The software is designed to assign the information from the editable configuration file to the voxels generated by slicing and gridding of the imported surface-based file. An easy-to-use graphical user interface was developed to display the voxels and the boundary curve(s) of the imported body as well as the basic properties of the material deposition as defined in the editable .txt file. Figure 10(a) shows the GUI of the software during the assignment of materials from the standard file to one layer of the previously discussed sphere. The standard file is displayed in a reduced form to provide a large area for assigning the materials and patterns. The voxel size and overall structure size are not limited by the software. However, the voxel size should be dimensioned according to the tools used. In the case of extrusion-based processes, the strand width and the strand spacing should be considered. An appropriate voxel size for the extrusion nozzles used for manufacturing the structures from Figure 9 may lead to different relative disparities in geometries of different sizes. The comparison between Figures 10(a) and 11(a) shows larger relative deviations for the sphere 9 mm in diameter and good conformity for the life-size vertebra while using the same voxel size. The processing time depends on the number of surface triangles of the processed structure and the hardware used. Loading and processing the vertebra structure, which is defined by 34464 surface triangles, into voxels took 71.6 seconds on a dual-core 2.4 GHz, 4 GB RAM, 256 MB graphics memory system and 14.5 seconds on an eight-core 3.4 GHz, 32 GB RAM, 1 GB graphics memory system, making the software applicable on a wide range of computing systems. Contiguous areas are processed with continuous paths by using the zig-zag algorithm according to Figure 10(b).
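Grouping voxels into the contiguous areas that the zig-zag algorithm then traverses can be sketched as a simple flood fill over one layer's matid grid. This is a stand-in illustration of the idea, not the software's actual implementation:

```python
def contiguous_regions(layer):
    """Group 4-connected voxels with equal matid in one layer into regions."""
    rows, cols = len(layer), len(layer[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or layer[r][c] is None:
                continue
            matid, stack, region = layer[r][c], [(r, c)], []
            seen[r][c] = True
            while stack:                        # iterative flood fill
                y, x = stack.pop()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and layer[ny][nx] == matid):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            regions.append((matid, sorted(region)))
    return regions

layer = [[1, 1, 2],
         [1, 2, 2],
         [None, 2, 1]]      # None = voxel outside the body contour
regions = contiguous_regions(layer)
```

Each returned region can then be filled with one continuous zig-zag path, avoiding unnecessary travel moves between voxels of the same material.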
To show the potential of the newly developed software, the surface model of a human lumbar vertebra, extracted from a CT scan and provided as an STL file by MarioDiniz on Thingiverse [32], as shown in Figure 3, is subdivided into voxels and filled with different materials and patterns. The software GUI in Figure 11(a) shows polylines based on the calculations presented, showing walls, remains from scanning parts of the trabecular bone, and artefacts within the STL file. These lines function as guidance for assigning the materials and structures from the editable configuration file to the part. For manufacturing the part, two extrusion-based nozzles with a diameter of 0.4 mm (Nordson EFD), as in Figure 9, are used. The example part features an orange pattern with a narrow strand spacing of 400 µm for the areas of the cortical bone (compact bone) and a light-blue pattern with a strand spacing of 800 µm, leading to a porosity of about 50%, in the areas of the cancellous bone (spongy bone) of the vertebral body. Figure 11(b) shows the printed body featuring a material grading (different colors) as well as a porosity grading (strand spacing).
Combined Additive Manufacturing of Complex Fiber-Based and Strand-Based Structures for Biomedical Applications.
The applications shown so far relate exclusively to the production of extrusion-based structures. The adaptation of the configuration file is used directly to automatically create g-code with corresponding strand spacing. The software, which also serves as a postprocessor, additionally allows the use of completely different additive manufacturing processes.
The Net Shape Nonwoven Method (NSN) is a unique technology for the additive production of short-fiber-based structures for regenerative medicine [40]. Similar to powder bed printing, this technology is a two-tier process: first, a thin fiber bed is applied, and subsequently a binder is applied to selectively bond the fibers together. Using the customizable configuration file, the developed modular software also allows the production of NSN structures. By appropriate selection of the tools for the fiber application units (actuated by speed-controlled stepper motors) and the path distances, either a full-surface fiber application or a local fiber application can be realized. The width of the fiber track depends on the fiber length used and usually ranges from 0.5 mm to 2 mm. With these fiber lengths, which can be estimated by simulation, suitable pore sizes and porosities for regenerative medicine can be generated [41]. For the second process step, a piezo-controlled adhesive nozzle is actuated. In order to achieve precise contours and geometries, path distances of about 200 µm are used. Due to the flexible control of extrusion nozzles, fiber application units, and adhesive nozzles, completely different additive manufacturing processes can be combined to create novel structures. Figure 12 shows different structures on the basis of simple STL files into which different materials have been inscribed. The newly developed approach allows assigning volumetric structures in three-dimensional surface models. STL files obtained from CT or MRI scans in medicine, 3D scans from reverse engineering, CAD software, or any other sources solely contain information on the surface of the bodies. With existing software, assigning properties within the bodies is not possible, as only triangles on the surface area can be selected.
The hosted editable configuration file allows controlling different extrusion nozzles, fiber application units, and a piezo-driven adhesive nozzle (via the tool column). It supports arbitrary nozzle diameters, leading to different strand thicknesses or different fiber layer heights (via the filling/mm column). All materials are deposited in a z-pattern (via the pattern column), starting in the x direction in the first layer of a voxel and subsequently changing direction into the y direction, which ensures a crossing of the strands and thus structural stability, especially for strand deposition. The standard file also allows setting the speed (in mm/min via the speed column) and adjusting the tool position in the machine (in absolute XYZ coordinates (mm) in the respective columns). The "matid" column shows a colored pattern, with dark and light colors representing the materials as well as the porosity, to simplify the material/pattern choice when assigning the properties from the .txt file to the body.
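As a concrete illustration of this column layout, the following Python sketch parses a tab-separated configuration file into per-tool records. The column names and sample values here are hypothetical stand-ins; the exact layout of the .txt standard file is defined by the software itself.

```python
from io import StringIO
import csv

# Hypothetical sample standard file; real column names/values are set by the user.
SAMPLE_CONFIG = (
    "tool\tfilling_per_mm\tpattern\tspeed\tx\ty\tz\tmatid\n"
    "extruder1\t2.5\tz\t600\t10.0\t0.0\t0.0\t1\n"
    "extruder2\t1.25\tz\t600\t-10.0\t0.0\t0.0\t2\n"
)

def load_config(text):
    """Read tool definitions (one row per tool/material) into a list of dicts."""
    reader = csv.DictReader(StringIO(text), delimiter="\t")
    tools = []
    for row in reader:
        tools.append({
            "tool": row["tool"],                              # tool column
            "filling_per_mm": float(row["filling_per_mm"]),   # filling/mm column
            "pattern": row["pattern"],                        # deposition pattern
            "speed": float(row["speed"]),                     # mm/min
            "offset": (float(row["x"]), float(row["y"]), float(row["z"])),
            "matid": int(row["matid"]),                       # material/pattern id
        })
    return tools

tools = load_config(SAMPLE_CONFIG)
print(tools[0]["tool"], tools[1]["matid"])  # extruder1 2
```

A record of this form carries everything the voxel assignment step needs: which tool to actuate, how densely to fill, and the tool's absolute machine offset.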
After assigning the parameters to the voxels, a g-code can be generated automatically by clicking the "Create machine path" button on the lower right (see Figure 11(a)). Machine paths of adjacent voxels featuring the same "matid" are handed over to the g-code as a continuous line and are processed according to the zig-zag pattern shown in Figure 10(b). The possible use of the software goes beyond a postprocessor for extrusion-based or fiber-based additive manufacturing, as it basically allows assigning any control commands to every single voxel: (i) positioning and travel commands for axes (e.g., XY tables and XYZ tables); (ii) on/off and/or speed commands for motors (e.g., material feed/deposition/compaction/removal and applying of substrates); (iii) setting/resetting of digital or analog outputs (e.g., connected extruders, nozzles, heaters, coolers, fans, and lasers); (iv) transfer of information to other units (e.g., bus systems, direct digital controls, robot controls, displays, and user feedback). Thus, any processing technology or application may be implemented by customizing the spreadsheet file or implementing other (machine-readable) codes according to the users' needs.
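The zig-zag filling of a contiguous same-"matid" area can be sketched as follows. Both `zigzag_path` and the G1-only g-code writer are simplified illustrations under our own assumptions, not the software's actual postprocessor.

```python
def zigzag_path(x0, y0, width, height, spacing):
    """Waypoints (x, y) for a zig-zag fill of a rectangular area, reversing
    the x-direction on each successive pass so the path stays continuous."""
    points = []
    y, left_to_right = y0, True
    while y <= y0 + height + 1e-9:
        xs = (x0, x0 + width) if left_to_right else (x0 + width, x0)
        points.append((xs[0], y))
        points.append((xs[1], y))
        left_to_right = not left_to_right
        y += spacing
    return points

def to_gcode(points, feed=600):
    """Emit one G1 linear move per waypoint at the given feed rate (mm/min)."""
    return "\n".join("G1 X%.3f Y%.3f F%d" % (x, y, feed) for x, y in points)

pts = zigzag_path(0.0, 0.0, 8.0, 2.4, 0.8)  # e.g. 800 µm strand spacing
print(len(pts))  # 8 waypoints: 4 passes of 2 points each
```

Varying the `spacing` argument per region is what produces the porosity grading described for the vertebra example.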
Furthermore, the voxel-based approach allows assigning any information (e.g., materials, material morphologies, colors, porosities, and metadata) to the imported files. The information may be stored and used for other applications or further processing.
Conclusions
The presented method allows slicing and gridding bodies from STL or AMF files into volumetric elements (voxels) of arbitrary size. The underlying software allows assigning different tools and features such as path distances, strand thicknesses, or traveling speeds to each of these voxels. Adjoining voxels with equal properties are combined into subvolumes and may either be manufactured into structures with material grading, porosity grading, or combinations of both within the different regions, or be stored in AMF format and used in compatible software or printing technologies.
Strand-based lattice structures with grading on the material and structural level can be designed and manufactured within a multinozzle additive manufacturing approach and can furthermore be combined with a two-tier process for fiber-based additive manufacturing well suited for applications in regenerative medicine. e combination of both approaches enables, e.g., designing press-fit applications with flexible transition areas on the basis of geometry data from complex defects. Furthermore, large defects affecting different tissue types or tissue morphologies, e.g., osteochondral defects involving bone and cartilage, may be addressed.
Data Availability
The software used to support the findings of this study is described extensively within the article. The described methods can be used to replicate the findings of the study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Financial Brownian particle in the layered order book fluid and Fluctuation-Dissipation relations
We introduce a novel description of the dynamics of the order book of financial markets as that of an effective colloidal Brownian particle embedded in fluid particles. The analysis of comprehensive market data enables us to identify all motions of the fluid particles. Correlations between the motions of the Brownian particle and its surrounding fluid particles reflect specific layering interactions: in the inner-layer, the correlation is strong and with short memory, while in the outer-layer it is weaker and with long memory. By interpreting and estimating the contribution from the outer-layer as a drag resistance, we demonstrate the validity of the fluctuation-dissipation relation (FDR) in this non-material Brownian motion process.
The financial economic literature uses the Wiener process as the standard starting point for modeling and for financial engineering applications [11]. Extending the initial intuition of Bachelier, the random nature of financial price fluctuations is presently mostly understood as resulting from the imbalance of buy and sell orders at each time step [12]. In order to explain non-Gaussian properties of market price fluctuations, extensions in the form of Langevin-type equations with an inertia term have been proposed [13][14][15][16][17][18][19][20].
Essentially all previous models based on the random walk picture or its continuous version (the Wiener process) involve just the price dynamics. Other approaches simulate financial markets with computational economic models with different classes of agents' strategies or using the statistics of buy and sell orders from the viewpoint of statistical physics [21][22][23][24][25][26][27][28].
Here, we introduce a qualitatively novel type of model for financial price fluctuations.
Rather than focusing on the dynamics of a single price for a given market, which requires complicated modifications to the basic random walk model in order to account for the numerous stylised facts, we propose the picture that the observed financial motion is analogous to a genuine colloidal Brownian particle embedded in a fluid of smaller particles, which themselves reflect the structure of the underlying order book (defined as the time-stamped list of requests for buy and sell orders with prices and volumes). The "Financial Brownian particle in order book molecular fluid" (in short FBP) picture provides a novel quantification of the correlations between different layers in the order book, which can be interpreted as the analog of the correlation between a Brownian particle and its surrounding fluid molecules. We present empirical estimations of the correlation functions that confirm the proposed mapping as well as provide non-trivial insights into the correlations with deeper fluid molecular layers within the order book.
We analyze the order book data of the Electronic Broking Services for currency pairs provided by the market-managing company ICAP. This foreign exchange market is continuously open 24 hours per day except over weekends, and its transaction volume of about 4 trillion US dollars per day makes it the largest among all financial markets, with also much larger liquidity than stock markets. We present our results for the US dollar-Japanese Yen market, which is characterized by a large transaction volume. The traders in this market are international financial companies which are connected to ICAP's market server by a special computer network. Any order, either buy or sell, is quantized in units of 1 million US dollars, with its price given at a granularity of 0.001 Yen (called a pip) and recorded with a time-stamp of 1 millisecond. A pair of buy and sell orders meeting at the same price immediately triggers a transaction and determines the latest official market price. These orders disappear from the order book just like a pair annihilation of matter-antimatter. The price and time quantizations enable us to describe the market by particles in discrete space and time, where a particle represents either a buy or sell order of 1 million US dollars. In the following discussion, we assign a superscript "−" (resp. "+") for buy (resp. sell) orders. At a given time, a state of this market is characterized by its order book schematically represented in Fig. 1(a), which contains the set of yet unrealized buy (resp. sell) orders in the lower (resp. higher) side of the discrete price axis. The highest buy (resp. lowest sell) order price, denoted as x − (t) (resp. x + (t)), is called the best bid (resp. ask), and the gap between the best bid and best ask is called the spread. For each buy (resp. sell) order in the order book, we introduce an important measure of depth, γ − (resp. γ + ), defined as the distance of this buy (resp. sell) order from x − (t) (resp. x + (t)) in pip units.
The order book evolves through the spontaneous injection of three types of orders: limit orders, market orders, and cancelations. A limit buy (resp. sell) order is introduced by a trader by specifying the buying (resp. selling) price. If the buying price is lower than the best ask (resp. higher than the best bid) in the order book, the order is accumulated at the specified price in the order book as a new buy (resp. sell) order. If the buying price is equal to or higher than the best ask (resp. lower than the best bid), this order makes a deal with a sell order at the best ask (resp. bid) in the order book, and that pair of orders annihilates.
A market buy (resp. sell) order directly hits against a sell (resp. buy) order at the best ask (resp. bid), causing a deal. A cancelation simply deletes an order, and it can be done only by the trader who created the order. This highly irreversible particle dynamics evokes chemical catalysis and leads to a rich phenomenology [29][30][31][32][33][34][35].
The FBP model that we propose is illustrated in Fig. 1(b). An imaginary colloidal Brownian particle, called a colloid, has its center positioned at the mid-price, {x − (t) + x + (t)}/2, with the core diameter given by the spread, x + (t) − x − (t). The accumulated orders are regarded as the embedding fluid particles with diameter equal to 1 pip. We visualize the core of the colloid by the yellow disk and the interaction range by the yellow-green ring area that overlaps with the particles near the spread (the green and orange disks). We call this interaction range the inner-layer, and the domain outside of this interaction range the outer-layer. The values of the threshold depths for defining the inner-layer, γ − c and γ + c , will be estimated below from the data. With the injection of new orders, the surrounding particles change their configuration and the colloid moves as a result, as shown in Fig. 1(c) for a specific example. The colored arrows indicate typical particle density changes in the layers, which we are going to analyze in detail.
Observing the evolution of the configuration of particles from time t to t + ∆t, we measure the change in the number of "−" (resp. "+") particles as a function of the depth γ − (resp. γ + ), where the depth is measured from x − (t) (resp. x + (t)) at each time. When x − (t) (resp. x + (t)) stays at the same location, the change of particle number at a given depth is simply given by counting the change in the number of "−" (resp. "+") particles. When x − (t) (resp. x + (t)) moves, the density profile as a function of depth shifts accordingly, and the changes of particle numbers at different depths are then simply due to the translation. Note that the depth γ − (resp. γ + ) can take a negative value, for example, in the case when a limit order falls inside the spread. The observed density changes imply that the dynamics is close to symmetric. Intuitively, when the price goes up, more new buy than sell orders are injected in the inner range, and sell orders near the market price tend to be canceled and replaced at higher sell prices, based on the anticipation of larger future returns by traders assuming trend persistence. The opposite direction is explained in the same way.
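The bookkeeping described above can be sketched in code: comparing two order-book snapshots at fixed prices removes the pure-translation contribution of a best-quote move, and the difference is then re-indexed by the depth γ measured from the best quote at time t. The dictionary-based representation below is a toy illustration under our own assumptions, not the authors' analysis code.

```python
def change_vs_depth(book_t, best_t, book_next):
    """Order creation (+) / annihilation (-) versus depth for the buy side.
    Books map price (in pips) -> number of orders; depth = best_t - price,
    so a limit order falling inside the spread appears at negative depth.
    Comparing snapshots at fixed prices means a pure shift of the best
    quote contributes nothing spurious (toy illustration)."""
    prices = set(book_t) | set(book_next)
    return {best_t - p: book_next.get(p, 0) - book_t.get(p, 0)
            for p in sorted(prices)}

book_t = {100: 2, 99: 1}             # best bid at price 100 with 2 orders
book_next = {101: 1, 100: 2, 99: 1}  # a new buy order placed in the spread
delta = change_vs_depth(book_t, 100, book_next)
print(delta)  # the new order shows up at depth -1
```

The negative-depth entry is exactly the "limit order falls inside the spread" case mentioned in the text.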
Let us denote by c − i (t) and a − i (t) the numbers of "−" particles that are created and annihilated, respectively, in the inner-layer at time t, and by c + i (t) and a + i (t) the same quantities for "+" particles; the total change of particle numbers in the inner-layer at time t, f i (t), combines these, with "−" and "+" particles counted with opposite signs as they are conjugate "matter" and "anti-matter". In Fig. 2(b 1 ), the scatter plot of the velocity of the colloid, v(t), observed in the same time window, ∆t = 100, as a function of the sum F i (t) = ∑_{s=0}^{∆t} f i (t + s) demonstrates a strong linear correlation. Fig. 2(c) shows the correlation coefficient between v(t) and F i (t) for different values of ∆t. The correlation increases for larger ∆t, reaching the value 0.7 around ∆t = 100.
These empirical results suggest the basic relation v(t)∆t = L(∆t) F i (t) + η(t) (Eq. (1)). The factor L(∆t) represents the mean step length of the colloid motion as a response to the motion of the surrounding fluid particles; it ranges from L(∆t) ≈ 0.44 pips for ∆t = 100 to L(∆t) ≈ 0.32 pips for ∆t = 4. The last term, η(t), is the error term. Similar correlations between the velocity and market orders have been previously documented [29]. However, we find here significantly higher values due to the more appropriate separation of the negative and positive sides of the layered structure of the order book. We also similarly observed the changes in outer-layer particle numbers for buy orders and sell orders separately, and confirmed the correlations with the velocity as shown in Fig. 2(b 2 ). Fig. 2(d) shows the time-shifted correlations between the velocity and these changes of particle numbers, which confirms that the inner-layer particles correlate strongly with the velocity but with a short memory, while the outer-layer particles' correlation is weaker but decays slowly with a power tail.
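The kind of velocity-flux correlation described here can be illustrated with synthetic data. The generating model below (displacement proportional to a net flux with step length L = 0.44 pips, plus Gaussian noise) is our own toy stand-in, not the market data; the noise level is arbitrary, so the resulting coefficient is only qualitatively comparable to the reported ~0.7.

```python
import math
import random

def corrcoef(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

random.seed(0)
L = 0.44                                        # mean step length in pips
F = [random.gauss(0, 5) for _ in range(2000)]   # synthetic net inner-layer flux
v_dt = [L * f + random.gauss(0, 1) for f in F]  # displacement plus noise
r = corrcoef(v_dt, F)
print(round(r, 2))  # strongly positive for this noise level
```

A scatter plot of `v_dt` against `F` would reproduce the qualitative shape of Fig. 2(b 1 ).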
Next, we present a more detailed description of the fluid particles. Fig. 3(a) shows a schematic diagram of the space-time configuration. We categorize the particles that annihilate in the inner-layer into two classes, a ii and a oi . An a ii particle is created in the inner-layer, stays in the inner-layer, and annihilates in the inner-layer, while an a oi particle is either born in the inner-layer or the outer-layer, visits the outer-layer at least once, and is then annihilated in the inner-layer. By surveying each particle's whole life, we find that 73.5% of particles are created in the inner-layer (denoted as c i ), and 72.3% of particles are annihilated in the inner-layer (denoted as a i ). The share of a ii particles is 61.5% and that of a oi is 10.8%.
Statistics of a ii and a oi are compared in Fig. 4. The time-shifted correlations with v(t) are plotted in Fig. 4(a) for g ii (t) = −a − ii (t) + a + ii (t) (blue line) and for g oi (t) = −a − oi (t) + a + oi (t) (red line). We find that a ii and a oi are oppositely correlated with the velocity. Fig. 4(b) shows the cumulative distributions of the lifetimes of these particles in log-log scale. The distribution of the lifetimes of a ii particles decays exponentially with a mean lifetime of approximately 2.6 ticks, while that of a oi follows a power law with an exponent close to −0.5, which corresponds to the distribution of recurrence time intervals for 1-dimensional random walks.
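The −0.5 exponent quoted for the a oi lifetimes matches the first-return (recurrence) time distribution of a one-dimensional random walk, a classical result that can be checked numerically. The simulation below is a generic illustration of that result, not the order-book data.

```python
import random

def first_return_time(max_steps, rng):
    """Steps until a symmetric 1-D random walk first returns to the origin
    (capped at max_steps)."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return t
    return max_steps

rng = random.Random(1)
times = [first_return_time(10_000, rng) for _ in range(5000)]

# For a power-law tail P(>t) proportional to t^(-1/2), quadrupling t
# should roughly halve the tail probability.
p10 = sum(t > 10 for t in times) / len(times)
p40 = sum(t > 40 for t in times) / len(times)
print(round(p10 / p40, 1))  # close to 2
```

The same tail-ratio check applied to the empirical a oi lifetimes is one way to verify the quoted exponent without a full distribution fit.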
These results justify a more sophisticated FBP picture in which the a ii particles contribute to the driving force, directly pushing or pulling the colloid at the time of annihilation, while the a oi particles act as a drag that impedes the colloidal motion, since they always collide with the front of the colloidal motion. Based on this picture, the velocity in Eq. (1) can be decomposed into a direct driving term v F (t), associated with the a ii particles, and a term v I (t), associated with the a oi particles. As the term v F (t) is nothing but the direct driving force term that reflects the immediate orders of traders, we focus on the term v I (t), which is caused by the long-term response of the fluid particles. The power spectra of v(t)∆t and v I (t)∆t, and their ratio, are plotted in Fig. 4(c). The power spectrum of v(t) is nearly white, with slightly more energy in the high-frequency band, implying that there are zig-zag fluctuations at very short times.
On the other hand, the spectrum of v I (t)∆t clearly decays at high frequency. The ratio of power spectra, |v I (ω)| 2 /|v(ω)| 2 , has a Lorentzian form, implying that the response function is approximated by an exponential function, i.e., that v I (t) is an exponentially weighted average of the past forcing. In the resulting continuum Langevin description, Eq. (2), the external force term G(t) includes v F , η, and their derivatives, which are not simple white noises, and the drag coefficient is estimated as µ = 2.2. The validity of this continuum formulation can be checked by estimating the Knudsen number [36][37][38] of the financial market, defined as the ratio of the mean free path of collisions of the colloid with the a oi particles over the diameter of the colloid. We find that the averaged value of the Knudsen number is approximately 0.02, whose smallness guarantees the validity of the continuum representation of the market price given by Eq. (2).
So far, we have analyzed the whole data set to observe the averaged behaviors, neglecting the well-known fact that markets are not stationary and are characterized by regime shifts: calm periods are punctuated by turbulent periods of high transient volatility, including speculative bubbles and crashes. It is thus more appropriate to revisit our above estimations of observables on shorter time-scales and analyze their possible time variation. A detailed analysis will be described in a separate paper. Here, we show the temporal change of the ratio a oi /a i in Fig. 4(e), with the corresponding market price in Fig. 4(d). In addition to being easy to observe, the ratio a oi /a i constitutes the key parameter related to the strength of the drag force exerted by the fluid particles. One can see that a oi /a i fluctuates significantly, confirming that market conditions are not stationary.
In summary, we have established a fundamental analogy between the motion of a colloidal particle embedded in a fluid and the price dynamics of a financial market in the order book.
By observing the detailed behaviors of the colloid and surrounding particles in the order book, we found that the drag resistance is caused by particles moving from the outer-layer to the inner-layer. The proposed quantitative correspondence provides a novel perspective for the analysis of financial markets. In addition, it should provide a stimulus for physicists from many different fields, since the question of the origin of drag resistance is a fundamental question in physics that still remains to be fully clarified. We also showed the need to
Global Emissions of Perfluorocyclobutane (PFC-318, c-C4F8) Resulting from the Use of Hydrochlorofluorocarbon-22 (HCFC-22) Feedstock to Produce Polytetrafluoroethylene (PTFE) and Related Fluorochemicals
Emissions of the potent greenhouse gas perfluorocyclobutane (c-C4F8, PFC-318, octafluorocyclobutane) into the global atmosphere, inferred from atmospheric measurements, have been increasing sharply since the early 2000s. We find that these inferred emissions are highly correlated with the production of hydrochlorofluorocarbon-22 (HCFC-22, CHClF2) for feedstock (FS) uses, because almost all HCFC-22 FS is pyrolyzed to produce (poly)tetrafluoroethylene ((P)TFE, Teflon) and hexafluoropropylene (HFP), a process in which c-C4F8 is a known by-product, causing a significant fraction of global c-C4F8 emissions. We find a global emission factor of ~0.003 kg c-C4F8 per kg of HCFC-22 FS pyrolyzed. Mitigation of these c-C4F8 emissions, e.g., through process optimization, abatement, or different manufacturing processes, such as electrochemical fluorination, could reduce the climate impact of this industry. While it has been shown that c-C4F8 emissions from developing countries dominate global emissions, more atmospheric measurements and/or detailed process statistics are needed to quantify country- to facility-level c-C4F8 emissions.
al., 2021). Mühle et al. (2019) reported that global atmospheric emissions of c-C 4 F 8 began in the late-1960s, reaching a plateau of ~1.2 Gg yr -1 from the late-1970s to the late-1980s, followed by a decline to a plateau of ~0.8 Gg yr -1 from the early-1990s to the early-2000s, and then increased sharply, reaching ~2.2 Gg yr -1 in 2017. Emissions of c-C 4 F 8 from developed countries are regulated and reported under the Kyoto Protocol of the United Nations Framework Convention on Climate Change (UNFCCC). However, these reports from developed countries to the UNFCCC only account for a small fraction of the global emissions of c-C 4 F 8 inferred from atmospheric measurements (Mühle et al., 2019), similar to the emissions gaps observed for other synthetic GHGs (e.g., Montzka et al., 2018; Mühle et al., 2010; Stanley et al., 2020). This emissions gap results partly from emissions in developing countries, which do not have to be reported to the UNFCCC and are therefore missing, and/or from uncertainties in emissions reported by developed countries. To understand the sources of recent global c-C 4 F 8 emissions, Mühle et al. (2019) used Bayesian inversions of atmospheric c-C 4 F 8 measurements made at sites of the Advanced Global Atmospheric Gases Experiment (AGAGE, Prinn et al., 2018) in East Asia and Europe and from an aircraft campaign over India. For 2016, these limited regional measurements allowed Mühle et al. (2019) to allocate ~56% of global c-C 4 F 8 emissions to specific regions, with significant emissions from Eastern China (~32%), Russia (~12%), and India (~7%). Spatial patterns of these regional c-C 4 F 8 emissions were roughly consistent with facilities that produce polytetrafluoroethylene (PTFE, Teflon) and related fluoropolymers and the necessary precursor monomers tetrafluoroethylene (TFE) and hexafluoropropylene (HFP), which are produced via the pyrolysis of hydrochlorofluorocarbon-22 (HCFC-22, CHClF 2 ).
c-C 4 F 8 , essentially the dimer of TFE, is one of several by-products/intermediates of this process (Chinoy and Sunavala, 1987; Broyer et al., 1988; Gangal and Brothers, 2015; Harnisch, 1999; Ebnesajjad, 2015). Process control and optimization to reduce the formation of c-C 4 F 8 and other by-products are complex, and under unsuitable conditions c-C 4 F 8 by-production could be as high as 14% (Ebnesajjad, 2015). On the other hand, Murphy et al. (1997) demonstrated that co-feeding several percent of c-C 4 F 8 to the HCFC-22 feed could reduce additional c-C 4 F 8 formation to less than 0.5% of the combined TFE and HFP yield, thus increasing the combined TFE and HFP yield to more than 96%. But they also stated that perfect process control may be impractical. In 2018, one of China's largest TFE producers confirmed c-C 4 F 8 by-product formation (Mühle et al., 2019). Unless c-C 4 F 8 is recovered or recycled, excess c-C 4 F 8 may therefore be emitted to the atmosphere, consistent with the observations. Historically, similar c-C 4 F 8 by-product venting occurred in the US and Europe (Mühle et al., 2019), unnecessarily increasing the carbon footprint of this industry.
Note that Ebnesajjad (2015) and, e.g., Mierdel et al. (2019) discuss research into the use of electrochemical fluorination (ECF), which may offer significantly reduced by-product formation rates in addition to energy savings and overall waste reduction.
Closely related to c-C 4 F 8 (as a by-product of HCFC-22 pyrolysis) is hydrofluorocarbon-23 (HFC-23, CHF 3 ), also a strong GHG, which has long been known to be a by-product of the actual production of HCFC-22 from chloroform (CHCl 3 ) that is often vented to the atmosphere, unnecessarily increasing the carbon footprint of this industry, despite technical solutions, regulations, and financial incentives (e.g., Stanley et al., 2020).
Here we show that global emissions of c-C 4 F 8 since 2002 are highly correlated with the amount of HCFC-22 produced for feedstock (FS) uses, because almost all this FS HCFC-22 is pyrolyzed to produce TFE/HFP, a process with c-C 4 F 8 as a known by-product. This supports the hypothesis that recent global c-C 4 F 8 emissions are dominated by c-C 4 F 8 by-product emissions from the production of TFE/HFP, PTFE and related fluoropolymers and fluorochemicals.
Atmospheric observations of c-C4F8 and inverse modeling of global emissions
We have extended the 1970-2017 AGAGE in situ c-C 4 F 8 atmospheric measurement record used by Mühle et al. (2019) and produced updated global emissions through 2020. For this we used measurements of c-C 4 F 8 by "Medusa" gas chromatographic systems with quadrupole mass selective detection (GC/MSD) (Arnold et al., 2012; Miller et al., 2008). In situ data were filtered with the AGAGE statistical method to remove pollution events (Cunnold et al., 2002). Fig. 1 shows the continued increase of pollution-free monthly mean c-C 4 F 8 mole fractions in the global atmosphere. The data were then used in conjunction with the AGAGE 12-box two-dimensional model (Rigby et al., 2013) and a Bayesian inverse method to update global emissions (Table 1 and Fig. 2).
HCFC-22 feedstock (FS) production data
To investigate whether the chemical relationship between HCFC-22 pyrolysis and c-C 4 F 8 by-product formation (as discussed in the introduction) results in a correlation between HCFC-22 feedstock (FS) production and c-C 4 F 8 emissions, we compiled HCFC-22 FS production statistics (Table 1; see also Table 4-1 in the TEAP (2020) report for 2008 to 2018). The TEAP report contains data used for the determination of the funding requirement of the Multilateral Fund (MLF) for the implementation of the MP. It also lists totals for A5 countries, which show small inconsistencies with the UNEP (2021) data, probably due to recent updates.
Results and Discussion
In agreement with Mühle et al. (2019), our updated global inversion results show that c-C 4 F 8 emissions were relatively stable at ~0.8 Gg yr -1 from the early-1990s to the early-2000s. However, in 2002 c-C 4 F 8 emission growth resumed, reaching levels not seen before, with a relatively steady increase to 2.26 Gg yr -1 in 2017 (Table 1 and Fig. 2, black diamonds). Here, we find a stabilization at this emission level from 2017 to 2019, followed by a possible resumed increase in emission growth to 2.32 Gg yr -1 in 2020 (however, the differences between the 2017-2020 emissions are not statistically significant). In comparison, global HCFC-22 production for feedstock (FS) uses has increased relatively steadily since the early 1990s, initially driven by FS production in non-A5 (developed) countries (Fig. 2, red circles). This non-A5 growth slowed down in the early-2000s, and non-A5 HCFC-22 FS production has been relatively stable since then. The global growth in HCFC-22 FS production since 2002 has been driven by the increase in production in A5 (developing) countries (Fig. 2, blue squares), dominated by China (Fig. 2, open orange squares). This is the time frame of the steady increase of inferred global c-C 4 F 8 emissions. We find a strong correlation between HCFC-22 FS production in A5 (developing) countries and inferred global c-C 4 F 8 emissions (R 2 = 0.97, p < 0.01) (Fig. 3, blue squares and fit, 2002-2019). While HCFC-22 FS production itself does not lead to c-C 4 F 8 by-production and emissions (HFC-23 is by-produced in this process and emitted, Stanley et al. (2020)), the fact that 98-99% of global HCFC-22 FS production is used to produce TFE (~87%) and HFP (~13%), to in turn produce PTFE and related fluoropolymers and fluorochemicals, causes the observed strong correlation with HCFC-22 FS production. This would probably not be the case if a significant fraction of HCFC-22 FS production were used for other processes without c-C 4 F 8 by-production and emissions.
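The emission-factor estimate is, in essence, the slope of a linear fit of inferred global c-C 4 F 8 emissions against HCFC-22 FS production. A minimal sketch with synthetic placeholder numbers (not the paper's data, which come from the AGAGE inversion and UNEP/TEAP statistics):

```python
def ols(x, y):
    """Ordinary least squares fit y = a*x + b; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Synthetic illustrative series (Gg/yr); the slope plays the role of the EF.
fs_production = [200, 300, 400, 500, 600]   # HCFC-22 FS production
emissions = [1.0, 1.3, 1.6, 1.9, 2.2]       # inferred c-C4F8 emissions
ef, offset = ols(fs_production, emissions)
print(ef)  # 0.003 kg c-C4F8 per kg HCFC-22 FS for this synthetic series
```

The intercept is equally informative: a significantly non-zero offset would point to emission sources (or subtracted estimates) not explained by the FS-production term, as discussed below for the non-A5 subtraction.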
Note that the HCFC-22 to TFE route (with c-C 4 F 8 by-product) can also be used to produce HCFC-225 isomers and the hydrofluoroolefin HFO-1234yf (CF 3 -CF=CH 2 ) (Sherry et al., 2019), with HFO-1234yf being the preferred replacement for HFC-134a (CF 3 -CFH 2 ) in mobile air conditioning (MAC). Note that the EFs of ~0.003 kg/kg, or ~0.3% (by weight), of c-C 4 F 8 emitted per HCFC-22 FS used are similar to the optimal production conditions explored by Murphy et al. (1997) of less than 0.5% c-C 4 F 8 by-product of the combined TFE and HFP yield (excluding other by-products).
From 1996 to 2001, before the start of any significant production of HCFC-22 for FS uses in A5 countries, c-C 4 F 8 emissions and non-A5 HCFC-22 FS production were relatively stable (Fig. 2). Assuming that all of the HCFC-22 produced for FS uses in non-A5 countries was pyrolyzed to TFE/HFP with c-C 4 F 8 by-product emissions, an EF of 0.0052 ± 0.0004 kg/kg could be calculated, which is larger than the EF for A5 (developing) countries (or the total global EF) in recent years. However, it cannot be excluded that other sources, such as the semiconductor industry, caused emissions during this timeframe (but see the small emissions from the semiconductor-producing countries Japan and South Korea in Mühle et al., 2019) or that EF reductions have occurred since then. Still, if we multiply this EF with the HCFC-22 FS production in non-A5 countries, we can estimate non-A5 country c-C 4 F 8 emissions in recent years and subtract these from total global emissions. From an investigation of the correlation of the remaining c-C 4 F 8 emissions against HCFC-22 FS production in A5 countries, we find the same EF (0.0031 ± 0.0001 kg/kg) as for A5 countries determined earlier, but a negative offset (-0.21 ± 0.05 Gg yr -1 c-C 4 F 8 ). This negative offset indicates that the subtracted estimates of non-A5 c-C 4 F 8 emissions were too high, and thus that an EF of 0.0052 kg/kg (from 20 years ago) may no longer be applicable to non-A5 countries today. Ultimately, atmospheric measurements covering more facilities that pyrolyze HCFC-22 and/or detailed mass balance statistics would be needed to determine EFs for A5 and non-A5 countries, and how EFs may differ from facility to facility. Dividing the eastern Chinese c-C 4 F 8 emissions estimated by Mühle et al. (2019) by the Chinese HCFC-22 FS production (Table 1) results in an EF of 0.0021 ± 0.0003 kg/kg. This is lower than the EF determined for A5 countries (or the total global EF) in recent years, which seems unlikely, as total A5 country HCFC-22 FS production is dominated by China (Fig. 2).
Most probably, total Chinese c-C4F8 emissions are larger than those determined for eastern China, as several Chinese facilities that likely emit c-C4F8 are outside of the inversion domain used in Mühle et al. (2019). More measurements would be needed to answer this question and similar questions for other parts of the world. Previous work found that global c-C4F8 emissions predominantly stem from China and India and that spatial emission patterns were roughly consistent with facilities that produce tetrafluoroethylene (TFE) and/or hexafluoropropylene (HFP) and, from these, polytetrafluoroethylene (PTFE, Teflon) and related fluoropolymers and fluorochemicals. TFE and HFP are produced via the pyrolysis of hydrochlorofluorocarbon-22 (HCFC-22), a process in which c-C4F8 is a known by-product. In this investigation, we find that this chemical relationship between HCFC-22 pyrolysis and the c-C4F8 by-product leads to tight correlations between a) HCFC-22 FS production in A5 (developing) countries and global c-C4F8 emissions and b) total global HCFC-22 FS production and global c-C4F8 emissions (both from 2002 to 2019). These correlations arise as ~98% of the HCFC-22 FS production is used to produce TFE and HFP via HCFC-22 pyrolysis, with c-C4F8 as by-product. Our results support the hypothesis that current global c-C4F8 emissions are mostly due to avoidable by-product venting during the production of TFE/HFP, PTFE and related fluoropolymers and fluorochemicals. Emission factors are estimated to be ~0.003 kg of c-C4F8 emitted per kg of HCFC-22 FS (used to produce TFE and HFP), or ~0.3% (by weight). In 2018, one of the largest TFE producers in China confirmed c-C4F8 by-product formation, which, unless recovered or recycled, may lead to c-C4F8 emissions. Historically, similar c-C4F8 by-product venting occurred in the US and Europe, unnecessarily increasing the carbon footprint of this industry.
Due to the relatively stable HCFC-22 FS production in non-A5 (developed) countries since 2002, it is not possible to determine whether facilities that pyrolyze HCFC-22 to TFE/HFP in non-A5 (developed) and A5 (developing) countries currently emit c-C4F8 at similar rates.
Summary and Conclusions
Atmospheric measurements covering c-C4F8 emissions from more HCFC-22 pyrolyzing facilities in non-A5 and in A5 countries and/or detailed mass balance statistics would be needed to investigate this further and to determine contributions of other countries to global c-C4F8 emissions. Similarly, more atmospheric measurements and/or data are needed. Closely related to emissions of c-C4F8 are emissions of hydrofluorocarbon-23 (HFC-23), also a strong GHG, which has long been a known by-product of the actual production of HCFC-22 from chloroform (CHCl3). Emissions of HFC-23 contribute unnecessarily to the carbon footprint of the HCFC-22 industry despite technical solutions, regulations, and financial incentives. | 3,284.4 | 2021-11-04T00:00:00.000 | ["Environmental Science", "Chemistry"] |
Nitrite build-up effect on nitrous oxide emissions in a laboratory-scale anaerobic/aerobic/anoxic/aerobic sequencing batch reactor
Biological wastewater treatment processes with biological nitrogen removal are potential sources of nitrous oxide (N2O) emissions. It is important to expand knowledge on the controlling factors associated with N2O production, in order to propose emission mitigation strategies. This study therefore sought to identify the parameters that favor nitrite (NO2-) accumulation and its influence on N2O production and emission in an anaerobic/aerobic/anoxic/aerobic sequencing batch reactor with biological nitrogen removal. Even with controlled dissolved oxygen concentrations and oxidation reduction potential, the first aerobic phase promoted only partial nitrification, resulting in NO2- build-up (ranging from 29 to 57%) and consequent N2O generation. The NO2- was not fully consumed in the subsequent anoxic phase, leading to even greater N2O production through partial denitrification. A direct relationship was observed between NO2- accumulation in these phases and N2O production. In the first aerobic phase, the N2O/NO2- ratio varied between 0.5 and 8.5%, while in the anoxic one values ranged between 8.3 and 22.7%. Higher N2O production was therefore noted during the anoxic phase compared to the first aerobic phase. As a result, the highest N2O fluxes occurred in the second aerobic phase, ranging from 706 to 2416 mg N m-2 h-1, as soon as aeration was triggered. Promoting complete nitrification and denitrification in this system was proven to be the key factor to avoid NO2- build-up and, consequently, N2O emissions.
INTRODUCTION
High nitrogen (N) concentrations in effluents may cause eutrophication and deterioration of recipient water bodies. N overloads favor microalgae and water plant growth, which may release toxins into the water (von Sperling, 2005). Although non-toxic, geosmin and 2-methylisoborneol (2-MIB), two products released by cyanobacteria, can influence the organoleptic characteristics of drinking water, representing an obstacle to water treatment (Freitas et al., 2008). In order to prevent eutrophication, wastewater treatment plants (WWTPs) must be improved to ensure that N loads to receiving water bodies are within the limits stipulated by local legislation (Yang et al., 2017).
An economically viable and widely studied alternative for N removal is the application of biological processes involving nitrification and denitrification (von Sperling, 2005). However, the possibility of N2O release exists in both reactions, thus resulting in an anthropogenic source of this gas into the atmosphere (Wrage et al., 2001). In the troposphere, N2O is a chemically stable and long-lived greenhouse gas, with a global warming potential about 265 times that of carbon dioxide (CO2) (IPCC, 2014). Furthermore, in the stratosphere, N2O is the most emitted anthropogenic gas with ozone (O3) depletion potential (Ravishankara et al., 2009). The highest N2O emission rates in WWTPs occur in plants that apply biological processes, especially those that operate nitrification and denitrification in activated sludge (IPCC, 2019).
During nitrification under aerobic conditions, the ammonium ion (NH4+) is converted to hydroxylamine (NH2OH), which is, in turn, oxidized to nitrite (NO2-) with the participation of ammonia-oxidizing bacteria (AOB) under alkaline conditions, while NO2- is oxidized to nitrate (NO3-) by nitrite-oxidizing bacteria (NOB). During denitrification under anoxic conditions, facultative heterotrophic bacteria convert NO3- into molecular N (N2) (Wrage et al., 2001). N2O production is commonly attributed to three pathways: (1) partial nitrification, as a by-product of NH2OH oxidation; (2) nitrifier denitrification, which can occur under oxygen-limiting conditions, as an intermediate product; and (3) heterotrophic denitrification, where N2O is an intermediate product but can be released when the process is incomplete (Duan et al., 2017; Terada et al., 2017). Variations in N2O production and emissions occur according to the type of applied treatment process and its configuration and operational parameters (Law et al., 2012).
N2O generation is usually associated with dissolved oxygen (DO) concentrations, NH4+ and NO2- accumulation, pH and organic carbon availability (Duan et al., 2017; Vasilaki et al., 2019). Pijuan et al. (2014) reported N2O emissions almost ten-fold higher when altering a pilot plant system from continuous operation to a sequencing batch reactor (SBR). The authors attributed the N2O increases to the transient conditions between the anoxic and aerobic stages. Rodriguez-Caballero et al. (2015) also reported higher emissions in a full-scale SBR due to the anoxic/aerobic transition. The authors also pointed out NO2- build-up and the length of the aeration phases as contributors. Thus, SBR may become a potential source of emissions when applying operational conditions that favor N2O generation.
Nitrite build-up effect on nitrous … Rev. Ambient. Água vol. 16 n. 2, e2634 - Taubaté 2021
Knowledge of the parameters that affect N2O production during the nitrification and denitrification stages is necessary to improve the sustainability of the process (Blum et al., 2018). Therefore, this study evaluated and identified the parameters responsible for NO2- build-up and its effects on N2O production and emission in an SBR operated under anaerobic/aerobic/anoxic/aerobic conditions.
SBR operation
The study was carried out in a laboratory-scale anaerobic/aerobic/anoxic/aerobic SBR ( Figure 1A). The different SBR system phases were adjusted and regulated by a programmable logic controller (PLC), favoring higher DO and oxidation reduction potential (ORP) control. The reactor comprises 8.1 L of working volume and treats 4 L during each 8-hour cycle ( Figure 1B). An air compressor pump was used to provide system aeration, with an air flow rate of 120 L h -1 . Peristaltic pumps were used for the feeding and discharge of raw and treated wastewater, respectively. A mixed liquor volume was removed from the reactor during each cycle by a peristaltic pump, to guarantee a solid retention time (SRT) of 30 days. The reactor was inoculated with sludge from a WWTP designed to treat sanitary wastewater from a 2,500 population equivalent (PE) and fed with synthetic wastewater, which was prepared by adapting the formulation used by Holler and Trösh (2001). The synthetic wastewater was composed of casein peptone (500 mg N L -1 ), beef extract (323 mg N L -1 ), dibasic potassium phosphate (35 mg N L -1 ), sodium chloride (24 mg N L -1 ), urea (23 mg N L -1 ), calcium chloride dihydrate (23 mg N L -1 ) and magnesium sulfate heptahydrate (11 mg N L -1 ). SBR stabilization took 2 months. After this period, samples were collected for efficiency and N2O production and emission assessments.
Sampling and analysis
Monitoring took place for six consecutive weeks, with one sampling of one cycle performed per week. Throughout the sampling stage, raw and treated wastewaters were collected for chemical oxygen demand (COD), dissolved organic carbon (DOC) and total nitrogen (TN) analyses. In addition, volatile suspended solids (VSS) were analyzed in the discharged mixed liquor. After each metabolic phase, mixed liquor samples were collected for NO2- and NO3- analysis. Dissolved and emitted N2O sampling were also carried out during this period.
COD and VSS were determined according to APHA et al. (2012). DOC and TN analyses were performed on a TOC-L/TNM-L TOC and TN analyzer (Shimadzu). NO2- and NO3- analyses were performed on a Personal IC ion chromatograph (Metrohm) with conductivity detector, using a polyvinyl alcohol column with quaternary ammonium groups (Metrosep A Supp 5, 150 x 4.0 mm); a solution of sodium carbonate (3.2 mmol L-1) and sodium bicarbonate (1.0 mmol L-1 in 5% acetone) was used as anion carrier, and a solution of tartaric acid (4 mmol L-1) and dipicolinic acid (0.75 mmol L-1) was used as cation carrier. An HI 9828 multiparameter probe (Hanna) was used to monitor reactor DO, ORP, temperature and pH.
A technique similar to the one applied by Brotto et al. (2015) was used for the emitted N2O sampling. Emitted N2O was collected using a modified lab-scale upturned funnel partially submerged in the reactor and a syringe (Figure 1A). The N2O flux (F) was calculated using Equation 1:

F = (Q_upturned funnel x Δ[N2O]) / A_upturned funnel    (1)

where Q_upturned funnel stands for the emerging air flow rate passing through the upturned funnel, Δ[N2O] is the difference between the N2O concentration determined in the reactor and the concentration present in the atmosphere, and A_upturned funnel is the superficial area of the upturned funnel.
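A minimal sketch of the flux calculation described above follows. The units are assumptions for illustration (airflow in m3/h, concentration difference in mg N m-3, area in m2, giving a flux in mg N m-2 h-1); the numeric values are not the study's measurements.

```python
# Sketch of the funnel flux relation described in the text:
# flux = airflow through the funnel * N2O concentration difference / funnel area.
# Units assumed: m3/h * (mg N / m3) / m2  ->  mg N m-2 h-1.
def n2o_flux(q_funnel_m3_h, delta_n2o_mg_m3, area_m2):
    return q_funnel_m3_h * delta_n2o_mg_m3 / area_m2

# Illustrative values only (not the study's measurements):
flux = n2o_flux(q_funnel_m3_h=0.06, delta_n2o_mg_m3=100.0, area_m2=0.005)
print(flux)  # ≈ 1200 mg N m-2 h-1
```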
Dissolved N2O was collected from the mixed liquor using a syringe, and the concentration in the liquid was determined by the headspace gas method (de Mello et al., 2013). Dissolved N2O concentrations (C) were calculated with Equation 2, where K0 is the N2O solubility coefficient (Weiss and Price, 1980), Chs is the N2O concentration stripped from the liquid (final), P is the atmospheric pressure, R is the universal gas constant, T is the liquid temperature (K) and Car is the N2O concentration present in the atmosphere (initial).
RESULTS AND DISCUSSION
The SBR reached average COD, DOC and TN removal efficiencies of 89, 91 and 79%, respectively, similar to those reported by other authors (Jia et al., 2012; Rodriguez-Caballero et al., 2015). Jia et al. (2012) reported removal efficiencies of 91 and 85% for COD and TN, respectively, in an anaerobic/aerobic SBR designed for simultaneous nitrification and denitrification (SND), with a 6-hour cycle. Rodriguez-Caballero et al. (2015) also reported removal efficiencies near 90% for both COD and TN in a full-scale SBR operated with alternating aerobic and anoxic phases in 4.5-hour cycles.
Despite the high TN removal obtained during this study, a high concentration of NO2- at the end of both the aerobic and anoxic phases was noted, with higher values in the aerobic phases (Figure 2). Very low NO3- concentrations (<0.16 mg N L-1) during the cycle were also observed. These results can be an indicator that both nitrification and denitrification were only partial, as reported by other authors (Guo et al., 2009; Stenström et al., 2014; Du et al., 2016). Stenström et al. (2014) also observed NO2- build-up in an SBR applying both nitrification and denitrification, indicating higher NO2- production during nitrification and reduced consumption during denitrification throughout the study. Guo et al. (2009) observed NO2- concentrations over 25 mg N L-1 and NO3- concentrations under 6 mg N L-1 in an aerobic SBR designed for partial nitrification. In the present study, the NO2- accumulation rate reached 96%. Du et al. (2016), for an anoxic SBR designed for partial denitrification, reported that NO2- accumulated due to decreased NO2- reductase activity. It is known that high nitritation rates may result in a decrease of pH, provoking a shift in the nitrite/nitrous acid equilibrium at pH below 5, which may impact denitrification (Todt and Dörsch, 2016). However, the pH values measured in the present study were above 5, which may not have been sufficient to increase nitrous acid production. This reinforces the theory that partial nitrification was responsible for the accumulation of nitrite. Figure 3A presents NO2- production (first aerobic phase), consumption (anoxic phase) and build-up (operational cycle) throughout the study. During the first aerobic phase, NO2- production rates were high and varied from 43.1 to 63.8 mg N h-1. In the next (anoxic) phase, the consumption rates decreased throughout the sampling period, from 40.4 to 20.3 mg N h-1.
The combination of high partial nitrification rates (NO2- generation) during the first aerobic phase with decreased NO2- consumption in the subsequent anoxic phase led to NO2- build-up at the end of each operational cycle. The NO2- build-up rate increased from 29 to 57% at the end of each operational cycle; it expresses the NO2- percentage that was not consumed during the anoxic phase in relation to that produced in the previous (aerobic) phase. Wu et al. (2011) observed NO2- accumulation rates near 90% in an anaerobic/aerobic SBR designed for partial nitrification with lower biomass concentrations, while Stenström et al. (2014) reported that NO2- accumulation can lead to increased N2O production and emission, as observed herein. Figure 3B indicates NO2- production (first aerobic phase) and consumption (anoxic phase) rates and the N2O/NO2- ratio for each phase (first aerobic and anoxic) during the study. The increased NO2- build-up rate coincides with an increased N2O production rate, raising the N2O/NO2- ratio (Figure 3B). During the aerobic phase, N2O production accounted for 0.5 to 8.5% of NO2- production, where the main production mechanism was nitrification. During the anoxic phase, values were substantially higher and corresponded to the evolution of the denitrification process, ranging from 6.3 to 22.7%. Therefore, denitrification was responsible for the higher N2O production in this type of system, making control of the operational conditions of the anoxic phase extremely important to mitigate N2O emissions in the following phases, mainly the aerobic stage. Rodriguez-Caballero et al. (2015) reported NO2- build-up as a key factor in increasing N2O production during nitrifier denitrification. Mampaey et al. (2016) described the anoxic phase as an important N2O generation factor, in addition to high NO2- concentrations.
According to these authors, the anoxic phase was responsible for 70% of N2O production in a SHARON reactor. Therefore, effective operational parameter control, in order to minimize NO2- accumulation and, consequently, N2O supersaturation in the liquid, is necessary for N2O emission mitigation measures (Kampschreur et al., 2009; Vasilaki et al., 2019).
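The build-up metric discussed above (the share of aerobically produced NO2- that the anoxic phase fails to consume) reduces to simple arithmetic on the phase rates. The sketch below uses rate pairings chosen to reproduce the reported 29% and 57% endpoints; the exact week-by-week pairings are assumptions, not reported pairs.

```python
# Sketch of the NO2- build-up rate: the percentage of NO2- produced in the
# first aerobic phase that was NOT consumed in the subsequent anoxic phase.
# Rates are in mg N/h; the example pairings are illustrative.
def no2_buildup_pct(produced_mg_n_h, consumed_mg_n_h):
    return 100.0 * (produced_mg_n_h - consumed_mg_n_h) / produced_mg_n_h

early = no2_buildup_pct(57.0, 40.4)  # ≈ 29%, like the start of the study
late = no2_buildup_pct(47.2, 20.3)   # ≈ 57%, like the end of the study
print(round(early, 1), round(late, 1))
```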
The higher N2O production rate occurring simultaneously with NO2- build-up in the system may be associated with a reduction in the reactor biomass (VSS). Figure 4 displays the NO2- production (first aerobic phase) and consumption (anoxic phase) rates in parallel with the VSS concentrations in the reactor throughout the study period. A biomass concentration reduction of approximately 30% at the end of the sampling period was observed. The loss of biomass may be related to a mechanical problem in the mixer observed during the fifth week of sampling. The mixer malfunction led to sludge flotation during sedimentation, causing biomass losses through the treated wastewater discharge and, consequently, a sharp reduction in SRT. This event may have caused decreased efficiency of both the nitrification and denitrification processes, resulting in N2O and NO2- accumulation. Wu et al. (2011) observed that decreased suspended solid (SS) concentrations in an SBR system favored NO2- accumulation. Noda et al. (2003) reported higher concentrations of dissolved and emitted N2O in anoxic and oxic reactors with lower SRT. Other authors have also associated lower SRT with NO2- build-up and increased N2O production and emissions (Hanaki et al., 1992; Kampschreur et al., 2009; Castellano-Hinojosa et al., 2018). As previously reported, solid losses in SBR alongside decreased SRT are likely to favor increasing NO2- build-up and N2O generation. However, an additional effect was observed regarding the magnitude of the N2O transfer rate from the liquid to the atmosphere during the second aerobic phase. Figure 5 presents the N2O production rates (from the retained, not-emitted portion) of the first aerobic and anoxic phases in parallel to the maximum N2O flux at the beginning of the second aerobic phase throughout the study period. The N2O flux peak occurred as soon as aeration began during the second aerobic phase, with substantially high values ranging from 706 to 2416 mg N m-2 h-1.
These findings are close to those reported by Ribeiro et al. (2017) in a conventional activated sludge WWTP with landfill leachate addition, where a maximum N2O flux of 1890 mg N m-2 h-1 was observed, which was correlated to decreased DO concentrations and partial nitrification. Figure 5. Non-emitted N2O production rate (mg N h-1) during the first aerobic and anoxic phases and maximum flux emitted during the second aerobic phase (mg N m-2 h-1).
In the present study, the peaks from the second aerobic phase represented the amount of N2O produced and retained (not emitted) from the previous phases (first aerobic and anoxic). The same behavior as for N2O/NO2- accumulation in the liquid phase was observed for the maximum N2O flux throughout the study period, with an increase in emitted N2O in parallel with an increased N2O production rate retained from the previous phases (Figure 5). Other studies have reported the same liquid N2O accumulation problem during the anoxic phase and its implications for the next aerobic phase (Gustavsson and La Cour Jansen, 2011; Yang et al., 2017; Pijuan et al., 2014; Mampaey et al., 2016). Thus, alternation between the anoxic and aerobic phases can be a negative point for N2O emission mitigation in wastewater treatment processes applying N removal.
Therefore, operational control in order to favor lower N2O production during the aerobic phase and its rapid consumption in the subsequent anoxic phase is necessary to mitigate emissions in the following phases, especially in the aerated units. Otherwise, in systems with a subsequent aerobic phase, N2O produced and not consumed may be emitted. In systems without this subsequent step, N2O may be emitted during treated wastewater disposal into receiving water bodies. Other operational adjustments, such as NO2- and SRT control, are extremely important to create favorable conditions for nitrification and denitrification processes without liquid N2O accumulation and subsequent emission.
CONCLUSION
This study correlates N2O production and emissions with the operational conditions of an SBR undergoing anaerobic/aerobic/anoxic/aerobic phases. The main conclusions are: • Even with controlled DO and ORP, partial nitrification was observed during the first aerobic phase, resulting in high NO2- production. The NO2- was not totally consumed during the anoxic phase, also indicating partial denitrification.
• NO2- build-up favored N2O production during both the aerobic and anoxic phases of the process. The increases observed in both parameters can be associated with decreased biomass concentrations.
• The anoxic phase was responsible for the highest N2O production rates. The N2O accumulated during this phase was released during the second aerobic phase, causing emission peaks as soon as aeration began.
• An environment able to sustain complete nitrification and denitrification is required, minimizing NO2- accumulation and allowing for rapid N2O consumption, thus minimizing emissions. | 4,069.8 | 2021-03-23T00:00:00.000 | ["Environmental Science", "Biology"] |
Evaluation of the intensity of electromagnetic fields radiated by radar
Abstract A method to calculate the intensity of electromagnetic fields radiated by powerful radar equipment is proposed in this paper. In such radar exposures, the energy may reach very high values of power density during peaks, but relatively low levels of power density averaged over time. Calculations are made for a TA 10MTD radar according to the proposed equations. It is shown that an antenna 5 m in height radiates a power flux density at the height of a man's head more than forty times smaller than the level permitted by Lithuanian law (20.0 μW/cm2).
Introduction
Public health authorities require calculations of electromagnetic field intensity around radar equipment before it is installed. Such evaluation takes place in spite of the manufacturers' theoretical analyses and experimental measurements of the radiation issued by a specific radar. The method of evaluation is very simple and has been used many times. We use the average value of radiated power in calculations of antenna electromagnetic field exposure. Pulse duration is hundreds of times shorter than the pulse repetition period, and thus the average value of power density is hundreds of times lower than the peak value of the radiation. Additionally, a rotating radar antenna radiates the EM field at the measurement point only periodically, within a very short time that strongly depends on the width of the main lobe of the pattern and the scan sector of the antenna. Lithuania has accredited normatives defining permitted levels of electromagnetic radiation. Results of calculations can be compared to these levels and practical conclusions can be made for radar installation. An example of the radiation intensity evaluation of a Thomson radar antenna is given in this paper. It is shown that the calculated radiation is many times smaller than the permitted level and is not dangerous to the population. The real situation will become clear after the radar equipment is in place and experimental tests of EM field intensity have been conducted.
Finding the density of radiated power
The density of the radiated power of the radar is evaluated at frequencies greater than 300 MHz. This density calculation method has often been used [3, 5]. Figure 1 explains the situation.
Waves spread from the radar antenna and reach point B at the level of a man's head in two ways: propagated directly and reflected from the surface of the Earth. The energy may reach very high peak values of power density, but relatively low levels of power density averaged over time. This is because the pulse duration τ of the radar radiation is hundreds of times shorter than the pulse repetition period Tp, and therefore the average value of power density is hundreds of times lower than the peak value. Due to the rotation of the radar antenna, point B is exposed to pulse-modulated microwave radiation periodically, within a very short time corresponding to the width of the main lobe of the antenna in the horizontal plane. It is easy to see that the average value of power, Pav, averaged over the pulse repetition and over the period of antenna rotation, is low:

Pav = Ppeak (τ / Tp) (αº3dB / 360º)    (1)

In such a situation, the best parameter to characterize radar radiation is the average power flux density, Sav, at point B:

Sav = Pav G f²(Θº) η (1 + p)² / (4πR²)    (2)

where G is the antenna gain in the direction of maximum radiation; f(Θº) is the antenna's pattern of directivity in the vertical (elevation) plane (dependence of electric field intensity on angle Θº); η is the efficiency factor of the antenna and wave-guide, less than one; p is the coefficient of electromagnetic wave reflection from the ground; and R is the distance AB between the phase centre of antenna A and observation point B (Fig 1).
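The average power flux density described above can be sketched numerically. The exact combination of factors (gain, vertical pattern, efficiency, ground-reflection factor, 1/(4πR²) spreading) is the one described in the text, but all numeric inputs below are assumptions for illustration, not the TA 10MTD's actual parameters.

```python
import math

# Sketch of the average power flux density at an observation point:
# S_av combines average power, linear gain, the vertical pattern value f(Θ),
# efficiency η, a ground-reflection factor, and spherical 1/(4πR²) spreading.
def s_av_uw_cm2(p_av_w, gain_lin, f_theta, eta, refl_factor, r_m):
    s_w_m2 = p_av_w * gain_lin * f_theta**2 * eta * refl_factor / (4 * math.pi * r_m**2)
    return s_w_m2 * 100.0  # 1 W/m2 = 100 µW/cm2

# Illustrative inputs: P_av = 2.5 W, G = 1000 (30 dB), f(Θ) = 0.3, η = 0.7,
# reflection factor 1.5, R = 115 m (all assumed).
s = s_av_uw_cm2(2.5, 1000.0, 0.3, 0.7, 1.5, 115.0)
print(round(s, 3))  # well below the 20 µW/cm2 permitted level
```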
The efficiency of parabolic antennas is η < 0.7. This parameter is usually included in the experimentally measured pattern of directivity (Fig 2).
Choosing different values of the angle Θº permits one to calculate Sav and to draw a curve of power density dependence on distance. Distances R and L can be found from the geometry of Fig 1:

R = (h1 − h2) / sin Θº    (3)
L = (h1 − h2) / tan Θº    (4)

The sanitary protection zone is defined as the territory where the calculated average power density value exceeds the permitted level. The radius of this zone can be changed and reduced by selection of the antenna installation height h1 and choice of the maximal radiation angle Θºmax in the vertical plane.
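Assuming the Fig. 1 geometry (antenna phase centre at height h1, observation point at head height h2, a ray at depression angle Θ), the slant range and horizontal distance can be computed as follows; this is a sketch of that assumed geometry, not a reproduction of the article's printed equations.

```python
import math

# Assumed Fig. 1 geometry: a ray leaving the antenna phase centre at height h1
# with depression angle theta reaches head height h2 at
#   slant range      R = (h1 - h2) / sin(theta)
#   horizontal range L = (h1 - h2) / tan(theta)
def ray_distances(h1_m, h2_m, theta_deg):
    t = math.radians(theta_deg)
    dh = h1_m - h2_m
    return dh / math.sin(t), dh / math.tan(t)  # (R, L)

R, L = ray_distances(5.0, 2.0, 1.5)  # antenna at 5 m, head at 2 m, Θ = 1.5°
print(round(R, 1), round(L, 1))
```

With these inputs, L comes out near 115 m, consistent with the distance at which the maximum flux density is reported later in the text.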
There is little flexibility here, because increasing h1 drives up the cost of the antenna installation, while raising Θºmax worsens the detection of low-flying aircraft.
Permitted level of electromagnetic radiation
In Lithuania, a hygienic normative for the electromagnetic radiation of stationary electronic systems in living and working areas describes the permitted level of radiation [1]. In the frequency band over 300 MHz, for continuous oscillation, the power flux density is normalized and cannot be greater than 10.0 µW/cm2. The same normative exists for the permitted electromagnetic radiation level of basic mobile telecommunication stations [2]. The European standard for the permitted level of power flux density is also 10.0 µW/cm2 [4]. In the Lithuanian normative, the safe average level of power flux density for pulse radiation is 20.0 µW/cm2, because radiation heats the human body during a short pulse and the body cools during the long pause between pulses [1]. Russia has a similar normative for evaluating the radiation of meteorological radars [5]. In the case of a radiated wavelength of 10 cm ± 15%, the permitted level of average power flux density is accredited at 20.0 µW/cm2 at 2 m above the surface of the ground.
A separate description of permitted levels of electromagnetic radiation is proposed for people's working places. In the 300 MHz - 300 GHz band of frequencies, the permitted power flux density depends on the exposition time [1]. In the case of a five-minute exposition the normative is the greatest: 1000 µW/cm2. The minimal normative, 25.0 µW/cm2, is valid when the duration of exposure is eight hours or more.
Evaluation of radiation level of radar
Power flux density calculations were made for the TA 10MTD Thomson radar which is used in Lithuania.
The primary radar data for the calculation are taken from the TA 10MTD technical documentation: frequency of oscillations in radiated radio pulses f = 2900 MHz (λ = 10.34 cm); peak power of transmitted impulses Ppeak = 600 kW. During the calculations, power flux density values were found at height h2 = 2 m above the surface of the ground. Ppeak = 600 kW, τ = 1 µs, Tp = 1000 µs, and αº3dB = 1.5° were put into (1), and Pav = 2.5 W was found. Only the part of the directivity pattern in the vertical plane with negative values of the angle Θº, from -0.5º to -10º every 0.5º, was used, because only downward radiation was analysed (Fig 2). Twenty values of -Θº and the corresponding values of gain, G(Θº) in dB, were read from the pattern. In equation (2), G·f²(Θº)·η expressed in linear units (times) is used instead of G in dB.
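The duty-cycle averaging behind the Pav = 2.5 W figure can be checked directly: the pulse duty factor τ/Tp and the rotation duty factor (3 dB beamwidth over 360°) reduce the 600 kW peak. The multiplicative form is implied by the given numbers rather than printed in the text.

```python
# Duty-cycle averaging: pulse duty factor (τ/Tp) times rotation duty factor
# (3 dB beamwidth / 360°) applied to the peak power. The multiplicative form
# is inferred from the values quoted in the text.
def average_power_w(p_peak_w, tau_s, t_rep_s, beamwidth_deg):
    return p_peak_w * (tau_s / t_rep_s) * (beamwidth_deg / 360.0)

p_av = average_power_w(600e3, 1e-6, 1000e-6, 1.5)
print(p_av)  # ≈ 2.5 W, matching the P_av found in the text
```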
The concrete values of Θº were recalculated into distance L according to (4) (Fig 1). In equation (2), the multiplier (1 + p)², corresponding to reflection of electromagnetic waves from the ground (Fig 1), was inserted. According to recommendations, (1 + p)² = 1.5 at a frequency of 2900 MHz [5]. It is necessary to note that this value is in reality smaller than 1.5, because the reflected wave does not add to the direct wave essentially in phase.
The results of the calculation of the average power flux density according to (2) as a function of distance are shown in Figure 3. The calculated curves have a maximum due to two factors: on one hand, the power flux density decreases as a result of energy dissipation when receding from the radar according to the 1/R² law; on the other hand, radiation increases with distance because radiation is greater in elevated directions according to the antenna directivity pattern in the vertical plane. The power flux density decreases very quickly after reaching the maximum.
The maximum power flux density, 0.45 µW/cm2 at a distance of 115 m, is many times smaller than the permitted level according to the normative (20.0 µW/cm2). In this case no sanitary protection zone is required, and the electromagnetic radiation of the radar antenna is not dangerous for the population in the whole area around the radar. If the antenna phase centre is higher, at 14 m, the maximum power flux density reaches only 0.06 µW/cm2 and lies farther from the antenna, at a distance of 500 m.
Conclusions
1. The method of evaluating radar antenna microwave radiation uses in its calculations the average value of radiated power, which is many times smaller than the pulse peak power because the duration of the pulses is many times shorter than the long pauses between them. Besides, a rotating antenna with a narrow pattern of directivity in the horizontal plane emits the electromagnetic field at the observation point only for a short time during each period of rotation. 2. At microwave frequencies greater than 300 MHz, the average value of radiated power flux density is used to evaluate the danger for the population and is compared with the permitted density level according to normatives or standards. 3. Lithuania and the European Union have the same normatives for permitted levels of electromagnetic field power flux density: 10.0 µW/cm2 in the 300 MHz - 300 GHz frequency band if radiation is continuous, and 20.0 µW/cm2 in the case of pulse radiation.
4. An evaluation of the radiation emitted by powerful radar TA 10MTD installation in Lithuania was made. If the phase centre of the antenna is 5 m in height, the radiated power flux density is 40 times smaller than the permitted level and is not dangerous to the health of the population. In this case, a sanitary protection zone is not necessary. If the antenna is established higher, the maximum radiation intensity is at greater distance but with many times smaller amplitude. 5. In this investigation, the theoretical evaluation of electromagnetic radiation is done before the installation of the radar. The real situation will be evident after the radar antenna is mounted and experimental measurements of radiated power flux density are taken. Measuring such small levels of radiation can be impeded by interferences. | 2,364.6 | 2008-06-30T00:00:00.000 | [
"Physics",
"Engineering"
] |
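The averaging argument in conclusion 1 can be sketched numerically: peak power is scaled down by the pulse duty cycle and by the fraction of each rotation the narrow beam dwells on the observation point, and the resulting average power is spread over a sphere of radius R. All parameter values below (peak power, duty cycle, beamwidth, gain) are hypothetical, chosen only to illustrate the order-of-magnitude reduction; they are not the actual TA 10MTD characteristics.

```python
import math

def avg_flux_density_uW_cm2(p_peak_w, duty_cycle, beamwidth_deg, gain, r_m):
    """Average power flux density (µW/cm²) at distance r_m from a pulsed,
    rotating radar antenna. Peak power is reduced by the pulse duty cycle
    and by the fraction of each rotation the beam points at the observer."""
    rotation_fraction = beamwidth_deg / 360.0          # beam dwell fraction
    p_avg = p_peak_w * duty_cycle * rotation_fraction  # time-averaged power
    s_w_per_m2 = p_avg * gain / (4.0 * math.pi * r_m ** 2)
    return s_w_per_m2 * 100.0                          # 1 W/m² = 100 µW/cm²

# Hypothetical parameters: 1 MW peak, 0.1% duty cycle, 1° beam, gain 1000.
s = avg_flux_density_uW_cm2(1e6, 1e-3, 1.0, 1000.0, 115.0)
print(round(s, 2))  # a few µW/cm², far below what the peak power alone suggests
```

The duty cycle and the rotation fraction each cut the effective power by roughly three orders of magnitude, which is why the time-averaged flux density can be compared against the 10–20 µW/cm² normative even for megawatt-class transmitters.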
Construction of closed horizontal drainage on irrigated lands and determination of its parameters
The article describes the construction technology of closed horizontal drainage in irrigated areas and presents the results of theoretical studies to determine the depth of drainage, the width of the drainage trench, the diameter of the drainage pipe, the thickness of the filter material, the distance between the drains, and the drainage module. According to the results of these studies, the average drainage depth is 1.5 m, the width of the drainage trench is 0.3 m, the diameter of the drainage pipe is 0.1 m, and the thickness of the filtration material is 0.1 m. The distance between the drains is 150 m with a drainage module of 0.1 l/s, 180 m with a drainage module of 0.12 l/s, and 210 m with a drainage module of 0.14 l/s.
Introduction
Drainage is of great importance in improving the condition (amelioration) of lands. According to scientists' recommendations, 30 percent of drainage on irrigated land should be open and 70 percent should be installed as closed. However, due to the lack of scientifically grounded technology for the construction of closed horizontal drainage, these recommendations are not being followed. The drains built in 1966 using the world's first purpose-designed drainage machine are still in use today.
In our country, 39 thousand km of closed horizontal drainage have been built, of which 70 percent are out of order and the remaining 30 percent operate with very low efficiency. One of the main reasons for this is the rising groundwater level. Imported drainage machines lay drains to a depth of 3.0 m, but at present the groundwater level has risen to a depth of 1.5 m from the earth's surface. Therefore, in places, open drainages are built, and the resulting loss of area for agricultural crops lowers the land-use coefficient. According to preliminary calculations, open drainages currently occupy an area of 250 thousand hectares. Long-term observations show that drainages fail because the groundwater level is almost equal to the water level in the reservoirs into which they discharge.
Methods
There are two solutions to this problem: the first is to choose other, deeper reservoirs into which groundwater can easily flow, but this requires more expense; the second is to improve the design of machines for the construction of drainage and build drainage to a depth of 1.5 m.
The research aims to determine the parameters of closed horizontal drainage in irrigated areas (diameter of the drainage pipe, filtration material and its thickness, installation depth, and distance between drains).
The horizontal drainage structure is shown in Figure 1. It consists of a drainage pipe 3 installed at a certain depth, a filtration material 4 wrapped around it, holes 5 through which water enters the drainage pipe, inspection wells 2 installed at regular intervals (100 or 400 m), and a cover 1 of the inspection well. Construction technology of horizontal drainage: a drainage trench is dug with an appropriate slope, and the drainage pipe and its filtration material are installed in it. In this case, the length of the drainage can be 400...1200 m. The soil from the drainage trench is backfilled and compacted, and inspection wells are installed at certain intervals (100 or 400 m).
The function of inspection wells is to monitor the operation of the drain and to provide access when flushing the drain pipe. The first inspection well should, of course, be installed at the beginning (top) of the drain. If the water inside the drain pipe is discharged into an open drain, an asbestos pipe is connected to the end of the closed drain pipe and diverted to the open drain at a certain distance [4].
Results and Discussion
Typically, drainage pipes of different diameters are installed over large areas to transfer water from a small-diameter pipe to a large-diameter pipe (for example, from a 100 mm pipe to a 150 mm pipe, from a 150 mm pipe to a 200 mm pipe, from a 200 mm pipe to a 250 mm pipe, etc.), and the last pipe collects all the water and discharges it into an open drain [1,2,6].
The volume of water taken from one hectare can be determined using formula (1), where ℎ is the thickness of the water-saturated soil layer, m; the moisture content of the water-saturated soil is 0.28...; and д is the number of days in a year. Formula (2) is obtained by rewriting formula (1). The drainage module can then be expressed through the drain geometry, where т is the diameter of the holes on the surface of the drainage pipe, m; n is the number of holes in one meter of the drainage pipe; and ос is the groundwater velocity, m/h. If there are four holes in each centimeter of the pipe, then the number of holes in a meter of the drainage pipe is n = 4 · 100 = 400 pcs.
Consequently, the water velocity in heavy sandy soils is 75 m/h, and the distance between drain pipes is ℓ = 150 m.
The volume of water for each meter of drainage is determined accordingly. If the area of one hectare is 10,000 m² and the drainage pipe collects water at a distance of 150 m, then the width b of the area can be determined; hence, the distance between the drains is ℓ = 210 m. The volume of water flowing through the drain pipe can be determined using a formula in which d is the diameter of the drainage pipe, m; the water flow rate in the pipe is given in m/s; and h is the lifting height of the drainage pipe, m.
If 22 m³ of water passes through the drainage pipe every hour, then the 960 m³ of water from one hectare flows out in 44 hours, that is, 1.83 days.
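As a quick arithmetic check of these discharge figures (22 m³/h through the pipe, 960 m³ per hectare), assuming a constant flow rate; the small difference from the stated 1.83 days comes from rounding 43.6 h up to 44 h:

```python
# Consistency check of the reported discharge figures (constant flow assumed).
flow_m3_per_h = 22.0        # water passing through the drain pipe each hour
volume_per_ha_m3 = 960.0    # water volume to be removed from one hectare

hours = volume_per_ha_m3 / flow_m3_per_h  # ≈ 43.6 h, rounded to 44 h in the text
days = hours / 24.0                       # ≈ 1.8 days
print(round(hours, 1), round(days, 2))
```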
In this case, the depth of the drainage trench for all drainages was taken equal to H = 1.6 m, and its width was B = 0.3 m.
Water supplied from the soil's surface for irrigation of crops is absorbed into the soil and flushes out salts, and the mixture of water and salt enters the drainage pipes. In this case, a moistened layer ℎн and a water-saturated layer ℎ are formed (Fig. 2).
It can be seen from the figure that the groundwater boundary is a curved line; it lies low near the drainage pipes and high in the middle of the distance between them. Drainage modulus values were determined based on field experience [3].
Conclusions
According to practical results obtained through experiments and theoretical calculations, the average drainage depth is 1.6 m, the width of the drainage trench is 0.3 m, the diameter of the drainage pipe is 0.1 m, and the thickness of the filter material is 0.1 m. The distance between drains is 150 m at a drainage module of 0.10 l/s, 180 m at 0.12 l/s, 210 m at 0.14 l/s, 240 m at 0.16 l/s, 270 m at 0.18 l/s, and 300 m at 0.20 l/s. | 1,737.4 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Agricultural and Food Sciences",
"Engineering"
] |
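The spacing/module pairs reported in the drainage article's conclusion follow a simple proportionality, ℓ = 1500·q (ℓ in m, q in l/s). This linear relationship is inferred from the reported pairs; the constant 1500 is fitted to the data, not stated in the text:

```python
# Drain-spacing / drainage-module pairs reported in the article's conclusion.
reported = {0.10: 150, 0.12: 180, 0.14: 210, 0.16: 240, 0.18: 270, 0.20: 300}

def spacing_m(module_l_per_s):
    """Spacing implied by the reported data: 1500 m per (l/s) of module.
    The constant 1500 is inferred from the pairs above, not given in the text."""
    return 1500.0 * module_l_per_s

for q, ell in reported.items():
    assert abs(spacing_m(q) - ell) < 1e-6
print("all reported pairs match l = 1500*q")
```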
eHealth Literacy: Essential Skills for Consumer Health in a Networked World
Electronic health tools provide little value if the intended users lack the skills to effectively engage them. With nearly half the adult population in the United States and Canada having literacy levels below what is needed to fully engage in an information-rich society, the implications for using information technology to promote health and aid in health care, or for eHealth, are considerable. Engaging with eHealth requires a skill set, or literacy, of its own. The concept of eHealth literacy is introduced and defined as the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem. In this paper, a model of eHealth literacy is introduced, comprising multiple literacy types, including an outline of a set of fundamental skills consumers require to derive direct benefits from eHealth. A profile of each literacy type, with examples of the problems patient-clients might present, is provided along with a resource list to aid health practitioners in supporting literacy improvement with their patient-clients across each domain. Facets of the model are illustrated through a set of clinical cases to demonstrate how health practitioners can address eHealth literacy issues in clinical or public health practice. Potential future applications of the model are discussed.
Access Barriers to eHealth
What if we created tools to promote health and deliver health care that were inaccessible to over half of the population they were intended for? Consumer-directed eHealth resources, from online interventions to informational websites, require the ability to read text, use information technology, and appraise the content of these tools to make health decisions. Yet, even in countries with high rates of absolute access to the Internet, such as the United States and Canada, over 40% of adults have basic (or prose) literacy levels below that which is needed to optimally participate in civil society [1,2]. A multi-country study of information technology use and literacy found that as literacy skill levels rise, the perceived usefulness of computers, the diversity and intensity of Internet use, and the use of computers for task-oriented purposes rise with them, even when factors such as age, income, and education levels are taken into account [3]. If eHealth is to realize its potential for improving the health of the public, the gap between what is provided and what people can access must be acknowledged and remedied.
Greater emphasis on the active and informed consumer in health and health care [4] in recent years has led to the realization that ensuring the public has both access to and adequate comprehension of health information is both a problem [5] and an achievable goal for health services [2,3]. A recent report from the US Institute of Medicine (IOM) entitled Health Literacy: A Prescription to End Confusion looked at the relationship between health and literacy and found that those with limited literacy skills have less knowledge of disease management and health-promoting behaviors, report poorer health status, and are less likely to use preventive services than those with average or above-average literacy skills [6].
Health Literacy
The IOM report focuses largely on health literacy, using the following definition (originally proposed by Ratzan and Parker [7]): "the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions" [7].
This definition underscores the importance of contextual factors that mediate health information and the need to consider health literacy in relation to the medium by which health resources are presented. Within a modern health information environment, this context includes interactive behavior change tools, informational websites, and telephone-assisted services, all of which are being deployed globally to promote health and deliver health care (eg, [8-11]). However, even among North American adolescents, the highest Internet-use population in the world, many teens report that they lack the skills to adequately engage online health resources effectively [12]. There is a gap between the electronic health resources available and consumers' skills for using them. By identifying and understanding this skill set, we can better address the context of eHealth service delivery [13].
As we witness the impact that basic literacy has on health outcomes, questions arise about how literacy affects eHealth-related outcomes and experiences [14]. But unlike literacy in the context of paper-based resources, the concept of literacy and health in electronic environments is much less defined. Consumer eHealth requires basic reading and writing skills, a working knowledge of computers, a basic understanding of science, and an appreciation of the social context that mediates how online health information is produced, transmitted, and received, or what can be called eHealth literacy. A definition and model of eHealth literacy is proposed below that describes the skills required to support full engagement with eHealth resources aimed at supporting population health and patient care.
eHealth Literacy Model
The Lily Model
Eng (2001) defines eHealth as "the use of emerging information and communication technology, especially the Internet, to improve or enable health and health care" [15]; this is one of many published definitions currently in use [16]. Taken in the context of the IOM's definition of health literacy stated above, the concept of eHealth literacy is proposed. Specifically, eHealth literacy is defined as the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem. Unlike other distinct forms of literacy, eHealth literacy combines facets of different literacy skills and applies them to eHealth promotion and care. At its heart are six core skills (or literacies): traditional literacy, health literacy, information literacy, scientific literacy, media literacy, and computer literacy. The relationship of these individual skills to each other is depicted in Figure 1. Using the metaphor of a lily, the petals (literacies) feed the pistil (eHealth literacy), and yet the pistil overlaps the petals, tying them together.
Within the lily model, the six literacies are organized into two central types: analytic (traditional, media, information) and context-specific (computer, scientific, health). The analytic component involves skills that are applicable to a broad range of information sources irrespective of the topic or context (Figure 2), while the context-specific component (Figure 3) relies on more situation-specific skills. For example, analytic skills can be applied as much to shopping or researching a term paper as they can to health. Context-specific skills are just as important; however, their application is more likely to be contextualized within a specific problem domain or circumstance. Thus, computer literacy depends on what type of computer is used, its operating system, and its intended application. Scientific literacy is applied to problems where research-related information is presented, just as health literacy is contextualized to health issues as opposed to shopping for a new television set. Yet both analytic and context-specific skills are required to fully engage with electronic health resources.
eHealth literacy is influenced by a person's presenting health issue, educational background, health status at the time of the eHealth encounter, motivation for seeking the information, and the technologies used. Like other literacies, eHealth literacy is not static; rather, it is a process-oriented skill that evolves over time as new technologies are introduced and the personal, social, and environmental contexts change. Like other literacy types, eHealth literacy is a discursive practice that endeavors to uncover the ways in which meaning is produced and inherently organizes ways of thinking and acting [17,18]. It aims to empower individuals and enable them to fully participate in health decisions informed by eHealth resources.
Traditional Literacy
This concept is most familiar to the public and encompasses basic (or prose) literacy skills such as the ability to read text, understand written passages, and speak and write a language coherently [19]. Technologies such as the World Wide Web are still text dominant, despite the potential use of sound and visual images on websites. Basic reading and writing skills are essential in order to make meaning from text-laden resources. A related issue is language itself. Over 65% of the World Wide Web's content is in English [20], meaning that English speakers are more likely to find an eHealth resource that is understandable and meets their needs.
Information Literacy
The American Library Association suggests that an information-literate person knows "how knowledge is organized, how to find information, and how to use information in such a way that others can learn from them" [21]. Like other literacies, this definition must be considered within the context of the social processes involved in information production, not just its application [19]. An information-literate person knows what potential resources to consult to find information on a specific topic, can develop appropriate search strategies, and can filter results to extract relevant knowledge. If one views the Web as a library, with search tools (eg, Google) and a catalogue of over eight billion resources, it becomes imperative for Web users to know how to develop and execute search strategies as well as comprehend how this knowledge is organized.
Media Literacy
The wide proliferation of available media sources has spawned an entire field of research in the area of media literacy and media studies. Media literacy is a means of critically thinking about media content and is defined as a process to "develop metacognitive reflective strategies by means of study" [22] of media content and context. Media literacy is a skill that enables people to place information in a social and political context and to consider issues such as the marketplace, audience relations, and how media forms in themselves shape the message that gets conveyed. This skill is generally viewed as a combination of cognitive processes and critical thinking skills applied to media and the messages that media deliver [23].
Health Literacy
As discussed earlier, health literacy pertains to the skills required to interact with the health system and engage in appropriate self-care. The American Medical Association considers a health-literate person as having "a constellation of skills, including the ability to perform basic reading and numerical tasks required to function in the health care environment. Patients with adequate health literacy can read, understand, and act on health care information" [24]. Consumers need to understand relevant health terms and place health information into the appropriate context in order to make appropriate health decisions. Without such skills, a person may have difficulty following directions or engaging in appropriate self-care activities as needed.
Computer Literacy
Computer literacy is the ability to use computers to solve problems [25]. Given the relative ubiquity of computers in our society, it is often assumed that people know how to use them. Yet, computer literacy is nearly impossible without quality access to computers and current information technology. For example, it is not helpful to learn PC-based commands on a Mac, to learn Windows 98 if one requires Windows XP, or to be trained on a laptop when a personal digital assistant (PDA) is required for a task. Computer literacy includes the ability to adapt to new technologies and software and involves both absolute and relative access to eHealth resources. To illustrate this, Skinner and colleagues found that while nearly every Canadian teenager has access to the Internet, far fewer have the quality of access or the ability to fully utilize it for health [26,27].
Scientific Literacy
This is broadly conceived as an understanding of the nature, aims, methods, application, limitations, and politics of creating knowledge in a systematic manner [28]. The latter political and sociological aspects of science are a response to earlier conceptions of science as a value-free enterprise, a position that has been vigorously challenged [28-30]. For those who do not have the educational experience of exposure to scientific thought, understanding science-based online health information may present a formidable challenge. Scientific literacy places health research findings in appropriate context, allowing consumers to understand how science is done, the largely incremental process of discovery, and the limitations (and opportunities) that research can present.
The Six Literacy Types
Taken together, these six literacy types combine to form the foundational skills required to fully optimize consumers' experiences with eHealth. A profile of each literacy type, with examples of the problems patient-clients might present, is summarized in Table 1. Also included is a list of resources, many of them Web-based, that can be consulted to help health practitioners support patient-clients in improving their literacy skills across each domain. Although it would not be unexpected to find that older adults and those from nonindustrialized countries report greater difficulties in certain domains, particularly those that are context-specific, it is the authors' experience that few assumptions can be made about which groups or individuals are likely to encounter difficulties. As work with highly Internet-connected populations (like North American adolescents), many of whom we would expect to be skilled users, shows, there is often a lack of skills, opportunity, and environments to use eHealth to its fullest potential [12,26,27].
Potential Resources and Identifying Problems
Analytic: Traditional Literacy and Numeracy. Analytic literacy skills can be generically applied to a number of sources and circumstances. These are foundational skills that are required to participate in daily informational life. Training aids are commonly found in many countries.
Context-Specific: Computer Literacy. Computer training courses are widespread; however, accessibility is an issue for those on fixed incomes. Many libraries offer special programs to teach patrons both computer and search skills for little or no cost. Some countries have job training centers that provide basic computer courses as part of their core mandate.
These six skill types illustrate the challenges that eHealth presents to those with low literacy in any one area. Although one need not have mastery in all these areas to benefit from eHealth resources, it can be argued that without moderate skills across these literacies, effective eHealth engagement will be unlikely. Using a specific health-related issue (smoking prevention and cessation) as an example, Table 2 illustrates how these literacy issues may present within the context of primary care while suggesting possible intervention strategies. Unlike other areas of health care, there is no "best practice" solution to addressing problems of literacy that fits into a single session or neatly packaged brief intervention. Rather, improving literacy is a process that requires coordinated remediation and education, involving partnerships among patient-clients, practitioners, educators, and community health organizations over time. It is as much a process as it is an outcome.
Case Studies and Literacy Type(s) Required
Case: A group practice has decided to provide smoking prevention resources for teens and their parents on its website. The resources are to be approved by a patient advisory committee. The three sites put forward are Phillip Morris USA's smoking prevention material site [40], The Smoking Zine by TeenNet at the University of Toronto [41], and Health Canada's Quit4Life program [42].
Media Literacy: Teens need to know the difference between the perspectives presented on each site to make an informed decision. One site belongs to a tobacco company with a vested interest in selling cigarettes, and it advocates prevention strategies not supported by the best evidence. The other two sites are from a teen-focused research project at a public university and from a government health agency. These three sites together encourage discussion about media issues and allow for exploring with patient-clients the ways in which information on one issue can be presented differently. The Media Awareness Network [37] has resources for working with children and youth to enhance media literacy that can aid in fostering this discussion.
Case: A 60-year-old man with little formal education and no experience using computers presents with concerns about continuing to smoke. He has made many unsuccessful quit attempts and has been told there are Internet resources available that can help him. He is interested in trying something different to help him stop using tobacco.
Traditional Literacy: A basic literacy assessment should be undertaken before recommending use of the Internet as a resource. This may be done by having the patient read a few simple text passages from consumer health materials or the newspaper, or by asking the patient directly if he has difficulties reading. If basic text materials are difficult, the person is likely to require assistance in using the Web or other Internet resources even at a rudimentary level.
Computer Literacy: If the man has limited experience with computers, specific training through a local library, community center, or other community program might be necessary to provide him with the means to use Web-assisted tobacco interventions. This requires that the practitioner arrange and assist the patient in connecting with one of these community resources, or inquire whether there are family members or friends who can assist him in getting online.
Case: A 35-year-old woman presents with an interest in finding information on smoking to share with her teenage daughter. She uses email at work and regularly visits a local website for news, but otherwise does not surf regularly and does not know how to find Internet resources easily.
Information Literacy: A referral to the local library or on-staff librarian (if available) is the simplest strategy. A short tutorial on the use of search engines, search strategies, and health databases can provide the basics of navigating the Internet for health information. Once basic search strategies have been established, the patient may wish to use evidence-supported resources for evaluating consumer health information, available through tools such as the DISCERN Project websites [43,44].
Case: A 24-year-old mother of two small children and current smoker challenges the claim that secondhand smoke is harmful to her children, citing research she found on the Internet.
Science Literacy: This scenario presents a teachable moment to outline some of the issues that bear on science literacy, such as how evidence changes over time and issues of quality. In this case, it may be useful to direct the patient to reference sources outlining contrary views and encourage a dialogue around what makes good science. It is possible the research she has referred to is out of date, contested, or heavily biased (eg, tobacco-industry sponsored).
Case: A 45-year-old patient has been prescribed nicotine replacement therapy (NRT) using an inhaler. The patient is unsure when and under what conditions to use the inhaler and reports behaviors indicating he is not using it as originally prescribed.
Health Literacy: The presenting patient is following the product instructions. It is worth exploring the context around this behavior to see whether it is a matter of fit between the NRT delivery method and the person or an issue of literacy. Patient instructions should be reviewed to ensure that they are written in plain language. Practitioners may also wish to explore whether there are other media tools available from the manufacturer or local health unit that can supplement the written instructions, such as visual aids or videos, to reduce the amount of required reading.
Discussion
Literacy is as much a process as an outcome and requires constant attention and upgrading. The key is to reach a level of fluency at which one can achieve a working knowledge of the particular language (or skill), enough to function at a level conducive to achieving health goals. Knowledge, information, and media forms are context-specific, and context dictates what skills and skill levels are required to access health resources. For example, technical jargon may be appropriate in academic discourse provided it allows for a more precise explanation of certain concepts. However, when directed at nontechnical consumers or those outside of a particular research or practice culture, technical language may need to undergo a translation process in order to convey a message properly [45]. Whereas a scientist may be interested in acetylsalicylic acid, a patient requiring pain relief knows this substance only as Aspirin or ASA.
As the World Wide Web and other technology-based applications become a regular part of the public health and health care environment, viewing these tools in light of the skills required for people to engage them becomes essential if the power of information technology is to be leveraged to promote health and deliver health care effectively. The eHealth literacy model presented here is a first step in understanding what these skills are and how they relate to the use of information technology as a tool for health. The next step is to apply this model to everyday conditions of eHealth use (patient care, preventive medicine and health promotion, population-level health communication campaigns, and aiding health professionals in their work) and evaluate its applicability to consumer health informatics in general. Using this model, evaluation tools can be created and systems designed to ensure that there is a fit between eHealth technologies and the skills of intended users. By considering these fundamental skills, we open opportunities to create more relevant, user-friendly, and effective health resources to promote eHealth for all.
Figure 1. eHealth literacy lily model
Few widespread resources exist to teach people science literacy. The most common approach to learning about science is through formal education; however, many science institutions, such as universities and colleges, hold open lectures and educational events for the public on a regular basis. In Canada, the Royal Institute for the Advancement of Science holds monthly lectures on science topics to educate the public, as does the Royal Society in the UK.
Table 2. Case scenarios: tobacco use and the six literacy types | 4,835.6 | 2006-06-16T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Combining Experience Sampling and Mobile Sensing for Digital Phenotyping With m-Path Sense: Performance Study
Background The experience sampling methodology (ESM) has long been considered as the gold standard for gathering data in everyday life. In contrast, current smartphone technology enables us to acquire data that are much richer, more continuous, and unobtrusive than is possible via ESM. Although data obtained from smartphones, known as mobile sensing, can provide useful information, its stand-alone usefulness is limited when not combined with other sources of information such as data from ESM studies. Currently, there are few mobile apps available that allow researchers to combine the simultaneous collection of ESM and mobile sensing data. Furthermore, such apps focus mostly on passive data collection with only limited functionality for ESM data collection. Objective In this paper, we presented and evaluated the performance of m-Path Sense, a novel, full-fledged, and secure ESM platform with background mobile sensing capabilities. Methods To create an app with both ESM and mobile sensing capabilities, we combined m-Path, a versatile and user-friendly platform for ESM, with the Copenhagen Research Platform Mobile Sensing framework, a reactive cross-platform framework for digital phenotyping. We also developed an R package, named mpathsenser, which extracts raw data to an SQLite database and allows the user to link and inspect data from both sources. We conducted a 3-week pilot study in which we delivered ESM questionnaires while collecting mobile sensing data to evaluate the app’s sampling reliability and perceived user experience. As m-Path is already widely used, the ease of use of the ESM system was not investigated. Results Data from m-Path Sense were submitted by 104 participants, totaling 69.51 GB (430.43 GB after decompression) or approximately 37.50 files or 31.10 MB per participant per day. 
After binning accelerometer and gyroscope data to 1 value per second using summary statistics, the entire SQLite database contained 84,299,462 observations and was 18.30 GB in size. The reliability of sampling frequency in the pilot study was satisfactory for most sensors, based on the absolute number of collected observations. However, the relative coverage rate—the ratio between the actual and expected number of measurements—was below its target value. This could mostly be ascribed to gaps in the data caused by the operating system killing apps running in the background, which is a well-known issue in mobile sensing. Finally, some participants reported mild battery drain, which they did not consider problematic for their overall user experience. Conclusions To better study behavior in everyday life, we developed m-Path Sense, a fusion of m-Path for ESM and Copenhagen Research Platform Mobile Sensing. Although reliable passive data collection with mobile phones remains challenging, it is a promising approach toward digital phenotyping when combined with ESM.
Background
One of the greatest challenges in social and behavioral sciences is to obtain reliable information about what people do, think, and feel during the course of their day-to-day lives. It is critical to learn more about this because it can aid in the prevention, diagnosis, and treatment of mental and behavioral issues. Until recently, the gold standard for collecting data in everyday life has been the experience sampling methodology (ESM)-also known as ecological momentary assessment (EMA)-in which ≥1 daily questionnaires are administered (nowadays via smartphones) to participants to report on their everyday behavior, thoughts, feelings, and context. However, the information that can be obtained through this method is limited owing to the nature of subjective self-report and participant burden. Nevertheless, recent advances in smartphones and other portable digital technologies allow us to collect much richer and more comprehensive information that goes well beyond what is available via ESM.
Prompting participants to complete multiple questionnaires per day, especially over long periods, can quickly become burdensome, resulting in deteriorating data quality or even participant dropout [1][2][3]. This problem may be partially mitigated in a research context because participants are generally motivated by monetary or other incentives, but it is more of a problem in clinical practice because the substantial participant burden makes it difficult to persuade individuals to use ESM on their own. Furthermore, although ESM has proven to be a substantial improvement over traditional questionnaires that only retrospectively inquire about past experiences (whereas ESM generally focuses on the present), the method's reliance on self-report and inherent subjectivity can influence data quality through participants' biases in terms of self-representation, introspective capabilities, and memory [1, 3,4].
However, smartphones can not only administer questionnaires but can also collect all types of other information about the behavior, activity, and context of their users. These data are known as mobile or smartphone sensing data and include data about location, movement, activity, phone use, and app use, among others [5][6][7]. Mobile sensing data can contribute to research on what happens in people's daily lives because it is able to track people's behavior and environment unobtrusively, objectively, and without effort, thus revealing patterns that could not be discovered until now. Mobile sensing and other passive sensing methods have become increasingly important in the era of digital phenotyping [8], which is defined as "moment-by-moment quantification of the individual-level human phenotype in situ using data from personal digital devices" [9]. The use of digital phenotyping and mobile sensing has seen a huge increase, yet it has not been able to fulfill its promise to provide directly meaningful insight into people's thoughts and feelings [10]. Given that ESM and mobile sensing each have advantages and disadvantages, a possible path forward is to complement ESM with mobile sensing to maximize the opportunities of both methods (and minimize their weaknesses).
To do this, a mobile app that is capable of collecting both ESM and mobile sensing data in the background must be used. Although there are some mobile apps that are already available (eg, AWARE [11]; mind Learn, Assess, Manage, and Prevent [12]; Beiwe [13]; Remote Assessment of Disease and Relapse-Base (RADAR-Base) [14]; Sensus [15]; and Effortless Assessment of Risk States [16]), many other apps are no longer maintained or are poorly documented, posing a major barrier for researchers who want to incorporate mobile sensing into their research. In addition, most of the existing mobile sensing apps are focused on passive data collection and pay relatively little attention to complementing the obtained data with ESM, which could be a strong addition for examining many of the phenomena under study. For example, although location via GPS is able to explain some of the variation in depressive symptoms, such symptoms have been linked more strongly to the emotions that people report experiencing from moment to moment (using ESM) [17,18]. The limited predictive power of mobile sensing data is exacerbated in attempts to predict variables that change even more quickly such as momentary mood. In conclusion, although mobile sensing data can provide some information on participants' everyday behavior, thoughts, feelings, and context, its stand-alone usefulness is limited when not combined with other sources of information such as ESM.
In particular, there are several ways in which ESM and sensing data can complement each other [19]. First, mobile sensing can add information to ESM measures such as biological, behavioral, or contextual variables of interest that cannot be reasonably measured by using only ESM. For instance, weather information based on participants' current location can add to the understanding of stress [20] and well-being [21][22][23]. Second, mobile sensing could substitute or corroborate ESM measures by replacing items that can be measured directly with mobile sensing. For example, instead of asking participants where they are, the item could be replaced by mobile sensing that unobtrusively tracks their continuous location via GPS. Although this can also be measured with ESM, it is more subjective as opposed to real-world measurements from the phone itself, while also operating at a much higher sampling frequency. Third, it could improve the precision of ESM measures by allowing for event-contingent sampling or context-aware triggering [19]. An example of this is geofencing, a type of location service that can trigger a survey if the smartphone enters or exits a predetermined perimeter (eg, 50-m radius around their home). Finally, mobile sensing can also play an important role in ecological momentary interventions [24] and just-in-time adaptive interventions [25], where real-time data collection can trigger interventions or investigation of the participant's condition. For example, evaluation of their position via GPS can monitor movements (eg, going out of the house) or certain high-risk locations (eg, liquor stores) and launch a prompt with specific questions or instructions.
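The geofencing idea above can be sketched in a few lines. The following Python sketch is illustrative only and is not code from m-Path Sense; the coordinates and the 50-m radius are arbitrary example values.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two WGS84 coordinates.
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def crosses_geofence(prev_fix, new_fix, center, radius_m=50.0):
    # Trigger a survey when the participant enters or exits the perimeter.
    was_inside = haversine_m(*prev_fix, *center) <= radius_m
    is_inside = haversine_m(*new_fix, *center) <= radius_m
    return was_inside != is_inside
```

A survey would then be launched only on the transition, not on every location update, which keeps event-contingent sampling from firing repeatedly while the participant stays in place.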
Objectives
To facilitate the complementary use of ESM and mobile sensing data for researchers, we created a new platform (by integrating 2 existing platforms) to enable the combined collection and application of ESM and sensing data. In this paper, we present a novel, full-fledged ESM platform with background mobile sensing. This new platform was evaluated based on 2 key criteria: the sampling reliability of mobile sensing and the perceived user experience of the technical implementation (battery drain, bugs, etc). First, we describe how this platform was created, its sensing capabilities, privacy and security considerations, general workflow for end users, and data processing and visualization for researchers. The second part of this paper contains the findings of a pilot study that used the new platform to evaluate the sampling frequency and user experience criteria.
Implementation
To develop an app with both ESM and mobile sensing capabilities, we used m-Path [26] as the starting point, as it is well established in both research and clinical settings. m-Path is a versatile and user-friendly platform for ESM and ecological momentary interventions that has already proven itself as an asset to >15,000 users. Since 2019, >500,000 questionnaires have been completed on the platform. Some of its advantages include its ease of use through the user-friendly web-based dashboard, its wide array of question types, the ability to create applets, and the highly tailorable control flow within questionnaires in which one can even use piped text and real-time computations within and between interactions. As we wanted to incorporate mobile sensing into m-Path, we named the new app m-Path Sense. m-Path is aimed at both clinical practitioners and researchers, whereas m-Path Sense is primarily directed at researchers who want to conduct ESM studies using mobile sensing.
To enhance m-Path with mobile sensing functionality, we added the Copenhagen Research Platform Mobile Sensing (CAMS) framework, a reactive cross-platform framework for digital phenotyping [27]. CAMS is specifically designed to be integrated into other apps, acting as a loosely coupled component of the overall app. As both m-Path and CAMS are programmed in Flutter (Google LLC) [28]-a cross-platform programming framework that compiles to both Android and iPhone Operating System (iOS) apps-there is a unique opportunity to combine the 2 frameworks. The combined app, m-Path Sense, has been made available in the Apple App Store [29] and the Google Play Store [30].
The integration of m-Path and CAMS was accomplished by adding the various CAMS components to m-Path as a plug-in and using them as needed. CAMS is first triggered when the app is launched, or when a beep (ie, a notification for a questionnaire) is pressed after the app was previously closed inadvertently; this interaction activates the CAMS pipeline. CAMS's underlying code (written in Dart) provides a unifying interface for both Android and iOS, which is then converted to platform-specific code (ie, Swift, Java, or Kotlin).
Another crucial step in the integration of m-Path and CAMS was the configuration and validation of the various sampling schemes and study protocols. A sampling schema in CAMS defines a specific configuration of a sensor, such as the frequency at which it should collect data. The study protocol specifies which sensors should be used and how frequently they should be activated. Some sensors provide a continuous stream of data and therefore only need to be started once (or be triggered by an event), whereas others provide data only on request, so the period at which they should be sampled must be specified. Although CAMS provides some standard values, testing different values is a time-consuming yet crucial process because they can have a considerable impact on both the quality and quantity of data and on the participants' smartphones. Moreover, this testing also provided an initial idea of how well CAMS qualitatively performed under different sampling conditions.
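The stream versus one-shot distinction can be captured by a small protocol validator. The following Python sketch is purely illustrative; the sensor names and field names are hypothetical and do not reflect the actual CAMS (Dart) schema.

```python
# Illustrative study protocol: one-shot sensors are polled at a fixed
# period, stream sensors are started once and then emit continuously.
PROTOCOL = {
    "accelerometer": {"mode": "stream", "interval_s": None},
    "location":      {"mode": "one-shot", "interval_s": 60},
    "weather":       {"mode": "one-shot", "interval_s": 3600},
}

def validate_protocol(protocol):
    # One-shot sensors need a sampling period; stream sensors must not
    # specify one, since they are only triggered once.
    for name, cfg in protocol.items():
        if cfg["mode"] == "one-shot" and not cfg["interval_s"]:
            raise ValueError(f"{name}: one-shot sensor needs interval_s")
        if cfg["mode"] == "stream" and cfg["interval_s"] is not None:
            raise ValueError(f"{name}: stream sensor takes no interval_s")
    return True
```

Validating the protocol up front catches configuration mistakes before a study starts, which matters because a wrong sampling period silently degrades either data quality or battery life.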
The final step in the integration was to deal with the permissions (eg, location access) needed for CAMS (and parts of m-Path) to work properly. When participants first launch m-Path Sense, they must grant all permissions via a special screen that informs them about this requirement ( Figure 1). These permissions are displayed to the participants individually. Currently, it is necessary to grant all permissions for the app to function properly, but we intend to customize this process in a future version based on which sensors are included in a particular study. Figure 1 also shows a detailed overview screen (adapted from CARP Mobile Sensing [27]) that lists what types of data are being collected and how frequently (KU: Katholieke Universiteit).
Mobile Sensing Functionality
m-Path Sense is capable of collecting a wide array of mobile sensing data, depending on whether this type of data is available on the device's operating system (OS; ie, Android or iOS). Table 1 lists all the available sensors responsible for capturing their corresponding data. It should be noted that the functionality for collecting call and text logs is not listed in Table 1 but is supported (for Android only). However, as collecting these types of data is against Google's policy, they are not included in the Google Play Store version.
Security and Privacy
First, m-Path Sense is a completely separate app from m-Path to prevent users from inadvertently using m-Path Sense (both ESM and mobile sensing) instead of m-Path (ESM only) without being informed. On registration, they must also provide a code from a practitioner who has indicated in m-Path's web-based dashboard that they allow mobile sensing functionality to use m-Path Sense. After informed consent has been provided and sensing has started, participants can always stop sensing by closing or removing the app. There is purposefully no pause button, as participants might press it accidentally and consequently stop sensing without realizing it. At the same time, sensing data can only be gathered as long as the app is running; therefore, participants can always choose to terminate it or even remove the app entirely if they feel that their privacy is at risk. As long as the sensing component is active, a permanent notification (Android) or a blue dot (iOS) is displayed to remind participants that they are being tracked. By going to the menu and pressing the mobile sensing tab, participants can also see their personal participant ID, the specific sensors that are being run, and the amount of data that has been collected.
Given the highly sensitive nature of collected data, data security and privacy are of utmost importance. These elements were maximally considered in the app development process and handling of data. All data collected with the m-Path Sense app (both ESM and sensor data) are initially stored locally in a protected folder on the participant's smartphone, which can only be accessed through the app. This folder cannot be accessed by other apps. Furthermore, we have implemented a privacy scheme [27] that can render certain extrasensitive data unreadable by using a 1-way cryptographic hash or encrypting it with an asymmetric Curve25519 public key [31]. Managing the storage of the private key is the responsibility of the researcher, such that the m-Path Sense team will never have access to the encrypted data in transit.
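The one-way hashing part of such a privacy scheme can be illustrated with a minimal Python sketch. The salting strategy below is an assumption for the example; the actual scheme in CAMS may differ, and extra-sensitive fields can instead be encrypted with the Curve25519 public key whose private half only the researcher holds.

```python
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    # One-way SHA-256 hash: the raw value (eg, a phone number in a
    # call log) never leaves the device in readable form, yet equal
    # inputs still map to equal tokens, so contacts remain linkable.
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
```

Because the hash is deterministic per salt, a researcher can still count how often the same (unknown) contact appears without ever learning who that contact is.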
For the transfer of data from smartphone to server, asymmetric HTTP Secure encryption is in place. The server used in this study was owned by the university, but we are currently migrating to pCloud [32], a secure Europe-based cloud storage service [33,34]. Researchers then grant the app access to their own pCloud folder, and the app can directly upload data to their folder via HTTP Secure, from which the encrypted data can be downloaded. To enhance data security and to prevent data leakage, highly secured application-layer encryption is applied at all times. Specifically, all answers given to questionnaires are stored on the phone using Advanced Encryption Standard (AES) 256-bit encryption with Public-Key Cryptography Standard (PKCS) 7 padding. The collected mobile sensing data are written to a JSON file until this file has reached a size of 5 MB and subsequently zipped to reduce its size to approximately 1 MB. Both questionnaire and sensor data are transferred to secure servers only if the participant has access to a Wi-Fi network. In the rare event that participants do not have access to a Wi-Fi network for longer than 24 hours, data are uploaded via their mobile data connection; alternatively, participants can upload the data manually at any time. The data stored in this local folder are automatically deleted once they have been sent to the server. Moreover, all local data are deleted once the app is removed from the phone.
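The rotate-and-compress behavior (write JSON until roughly 5 MB, then zip the batch) can be sketched as follows. This Python sketch is an illustration of the mechanism only, not the app's actual Dart implementation; a real implementation would also use a unique file name per batch rather than overwriting the same ZIP.

```python
import json, os, zipfile

MAX_BYTES = 5 * 1024 * 1024  # rotate at ~5 MB, as the app does

def append_measurement(json_path, record, max_bytes=MAX_BYTES):
    # Append one JSON record per line; once the file reaches the size
    # limit, compress it to a ZIP and start a fresh file.
    with open(json_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    if os.path.getsize(json_path) >= max_bytes:
        with zipfile.ZipFile(json_path + ".zip", "w",
                             zipfile.ZIP_DEFLATED) as z:
            z.write(json_path, os.path.basename(json_path))
        os.remove(json_path)
```

Batching uploads this way is what keeps the on-device footprint small (tens of MB even after a day offline) and shrinks each 5-MB batch to roughly 1 MB on the wire.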
Processing and Visualization
In the current implementation, the data from the participants' smartphones are stored as JSON files of up to 5 MB. On completion of each beep, all data (if connected to Wi-Fi) are sent to a secure server. This means that (1) data arrive in batches (and not in real time) and (2) the data are not immediately structured and integrated with all other data. In other words, the data will first have to be extracted from JSON files (where sensor data are also not written in sequence) and then imported into a central database.
To aid in the process of data processing, we have developed and made available an R package on the comprehensive R archive network (CRAN), named mpathsenser [35]. The central function in this package is called import, which reads the JSON data, extracts it, and writes it to an SQLite database. Although other database systems are more powerful and possibly faster, we opted for SQLite because it is widely available, can be easily shared offline (because it is only a single file), and can be automatically installed in R with the RSQLite package [36].
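A minimal Python analogue of such an import step might look like this. The record layout assumed here ({participant, sensor, ts, data}) is hypothetical and does not reflect the actual CAMS JSON schema or mpathsenser's internals.

```python
import glob, json, sqlite3

def import_json(db_path, pattern):
    # Read each JSON file (assumed to hold a list of measurement
    # records) into a single SQLite table for later querying.
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS measurements
                   (participant TEXT, sensor TEXT, ts TEXT, payload TEXT)""")
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as f:
            for rec in json.load(f):
                con.execute("INSERT INTO measurements VALUES (?,?,?,?)",
                            (rec["participant"], rec["sensor"],
                             rec["ts"], json.dumps(rec.get("data", {}))))
    con.commit()
    return con
```

Once everything sits in one indexed table, per-sensor and per-participant queries become cheap, which is exactly why a single-file database is attractive here.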
One of the specific challenges in analyzing mobile sensing data alongside ESM data is that the 2 must be aligned, despite being collected at very different timescales and frequencies [37]. The function link in the mpathsenser package does exactly this. It allows the user to link 2 tables together within a certain time window, for example, 30 minutes before or after each beep. Another issue that the R package assists in overcoming is the fact that mobile sensing data are often large in size, making efficient analysis difficult. Thus, many functions in the R package are written in such a way that they are executed in the SQLite database (and the computations in SQL), and only the result of the computation is returned to R.
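The time-window linking idea can be expressed directly in SQL, similar in spirit to mpathsenser's link function (the table and column names below are illustrative, not the package's actual schema).

```python
import sqlite3

def link(con, window_min=30):
    # Pair each ESM beep with every sensor row whose timestamp falls
    # within +/- window_min minutes of the beep. julianday() returns
    # days, so the difference is converted to minutes.
    q = """SELECT b.beep_id, s.sensor, s.ts
           FROM beeps b JOIN sensor_data s
           ON ABS(julianday(s.ts) - julianday(b.ts)) * 24 * 60 <= ?"""
    return con.execute(q, (window_min,)).fetchall()
```

Running the join inside SQLite, rather than pulling both tables into memory first, mirrors the package's design choice of pushing computation into the database and returning only the result.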
Furthermore, a dashboard built in R using Shiny [38] has been made available, allowing the researcher to import new data with a few clicks and visualize it in various ways to assess data quality. For example, a coverage chart ( Figure 2) can be generated for a user that displays the measurement frequency per sensor, which can be used to inspect the eventual data collection frequency on particular smartphones, check whether the app has stopped working at some point, or identify any underperforming sensors.
Workflow
After installing m-Path Sense, the workflow is as follows. When participants start m-Path Sense for the first time, they are prompted with a screen that informs them about every type of data that is being collected ( Figure 1A). If they want to continue, they can press "I agree" and are subsequently prompted with a series of in-app permission modals where they have to grant access for m-Path Sense to be able to collect sensor data. After that, they have to agree with m-Path's terms and conditions to continue and enter the researcher's ID code (which is usually provided during the briefing session) to identify the person who has access to the collected data. This code also helps to prevent unintentional sign-ups by people who download the app while not participating in any study. The app will start data collection only after these steps are completed. After the study has started, participants can always view which data are being collected ( Figure 1B) and withdraw from a study at any time by removing the app.
Pilot Study
To evaluate the app, we conducted a 3-week pilot study in which we administered ESM questionnaires while mobile sensing data were being collected in the background. In addition to answering substantive questions about the value of digital phenotyping for psychological variables, this pilot study served to evaluate the app based on several criteria. First, we aimed to evaluate the reliability of the sampling and its related ability to stay alive on the participant's phone as this is a crucial challenge with mobile sensing apps. A well-known problem for mobile sensing apps is that OSs have become increasingly strict in pushing apps further to the background and eventually stopping them; thus, a crucial challenge for such apps is to enable sufficient data coverage [39][40][41]. Second, we wanted to evaluate the perceived general user experience of the app, that is, whether participants had any trouble with battery drain, app crashes, or other nontechnical aspects. The ease of use of the ESM system was not evaluated, as m-Path has already been widely used, evaluated, and tailored [26].
Between June 2021 and December 2021, an initial pool of 462 participants was recruited through various Facebook groups and an experiment recruitment system associated with Katholieke Universiteit Leuven. On the basis of a prescreening process, we excluded participants who were (1) not native Dutch speakers, (2) aged <18 years, or (3) […]. All volunteers provided their informed consent after receiving information regarding the study protocol, remuneration, and the obligations and advantages of participation. Participants were advised that their participation was entirely voluntary and that they could exit the study at any time. To preserve the privacy and confidentiality of participants, the data used in this study were pseudonymized. Personal information is kept separate from study data and will not be shared with third parties. Furthermore, as mentioned in the Security and Privacy section, several procedures were implemented to ensure data security. Participants were reimbursed for their time through university course credits or monetary compensation of up to €70 (US $79). The remuneration depended on the rate of compliance with the questionnaire responses: participants received the full remuneration of €70 (US $79) or 10 credits if they responded to at least 75% of the ESM questionnaires, and each 10% reduction in compliance resulted in a reduction of €10 (US $11.29) or 1 credit of compensation.
Ethics Approval
This study was approved by Katholieke Universiteit Leuven Social and Societal Ethics Committee (G-2020-2200-R3[AMD]) and was conducted in compliance with human subject research ethical principles.
Data Output
A first step toward evaluating the data from m-Path Sense is to examine the raw data output in the form of either JSON or ZIP files. It should be noted that the ZIP files also contain JSON files; loose JSON files appear in the data output when they were not properly closed because, for example, the app was killed. To avoid JSON syntax errors, the app begins writing to a new JSON file when it is restarted rather than continuing to write to the previous one. When this occurs, the old JSON file usually has an incorrect file ending, meaning that it is not in valid JSON format. One of the functions of the R package mpathsenser is to fix this automatically. After unzipping the data, 5.55% (4654/83,875) of the files had to be repaired, including some (approximately 50/4654, 1.01%) that were partially corrupted and had to be repaired manually by simply deleting the corrupted parts of the file. It is unknown what causes file corruption, but it is most likely a problem with the participants' phones (because most corrupted files that could not be fixed automatically belonged to a single user; 16/20, 80%) or a bug in Flutter's internal file writing software.
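A simplified sketch of such an automatic repair (closing unterminated arrays and objects at the end of a truncated file) might look like this in Python; mpathsenser's actual implementation may differ, and this sketch deliberately ignores the harder case of a string truncated mid-token.

```python
def fix_truncated_json(text):
    # Track unclosed brackets/braces outside of strings, then append
    # the missing closers in reverse order of opening.
    stack = []
    in_string = escape = False
    for ch in text:
        if in_string:
            if escape:
                escape = False
            elif ch == "\\":
                escape = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "[{":
            stack.append("]" if ch == "[" else "}")
        elif ch in "]}":
            stack.pop()
    # Drop a trailing comma left by a half-written record before closing.
    text = text.rstrip().rstrip(",")
    return text + "".join(reversed(stack))
```

For a file cut off mid-record, this recovers every complete record while discarding only the partial tail, which is why the vast majority of damaged files could be repaired automatically.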
In total, there were 5.51% (4622/83,875) JSON files and 94.49% (79,253/83,875) ZIP files across 104 participants, measuring 69.51 GB in size or 37.50 files and 31.10 MB per day per participant. Interestingly, iOS devices produced considerably more JSON data (140.12 MB) and ZIP data (921.42 MB) per person than Android devices (60.13 MB and 212.23 MB, respectively). This is also reflected in the time it took to fill an entire file (maximum of 5 MB); iOS devices needed a median time of 8.83 minutes before starting a new file, whereas Android devices took 1.80 hours before a new file was needed. The primary reason for this discrepancy is that iPhones produce far more accelerometer and gyroscope measurements than Android phones. Thus, even if a participant does not upload their data to the server for an entire day, only approximately 59.49 MB is stored on iPhones and 17.33 MB is stored on Android phones.
After the data are sent to the server, they are removed from the phone so that the file size is no longer an issue for users. However, for researchers, the file size of the (unpacked) data is still important owing to hardware constraints. The size of these data deviates from the previously stated figures because these were mainly ZIP files with highly compressed packed data. Extracting the ZIP files to JSON and then converting them into another format (for example, in an SQLite database) has consequences for the corresponding size. The total size after extracting the ZIP files (which only leaves JSON files) was 430.43 GB. Differences existed between Android and iOS devices. Android users, for example, had an average of 389 JSON files with an average size of 4.71 MB, whereas iOS users had an average of 1207 JSON files with an average size of 4.91 MB.
As the unpacked data were relatively large (430.43 GB) and parsing each file to read it in an SQLite database would further increase the size, importing all data to an SQLite database would be a time-consuming and computationally expensive task. However, the accelerometer and gyroscope sensors produced many observations per second, accounting for approximately 90% of the data (in terms of size). Consequently, reducing the data from these sensors while retaining relevant information would be conducive to performing analyses more efficiently.
Because 1 accelerometer or gyroscope value per second suffices for most purposes, we reduced the multiple values per second to the Manhattan distance (L1-norm); the Euclidean distance (L2-norm); and the average of each x, y, and z dimension per second. We used an incremental approach, importing accelerometer and gyroscope data in chunks of approximately 60 GB and then shrinking the size by calculating these summary metrics. The total SQLite database size-including relevant indexes-after importing all data was 18.30 GB. Table 3 shows the number of observations.
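A per-second binning step along these lines might look as follows. Note that whether the norms are computed on the per-second mean vector (as here) or averaged over per-sample norms is an assumption of this sketch; the paper does not specify the order of operations.

```python
import math
from collections import defaultdict
from statistics import mean

def bin_per_second(samples):
    # samples: iterable of (unix_ts, x, y, z) tuples at the raw device
    # rate. Returns one row per whole second: per-axis means plus the
    # L1 and L2 norms of the mean vector.
    by_sec = defaultdict(list)
    for ts, x, y, z in samples:
        by_sec[int(ts)].append((x, y, z))
    rows = {}
    for sec, vals in by_sec.items():
        mx = mean(v[0] for v in vals)
        my = mean(v[1] for v in vals)
        mz = mean(v[2] for v in vals)
        rows[sec] = {"x": mx, "y": my, "z": mz,
                     "l1": abs(mx) + abs(my) + abs(mz),
                     "l2": math.sqrt(mx * mx + my * my + mz * mz)}
    return rows
```

Collapsing dozens of raw samples per second into one summary row is what brought roughly 90% of the raw data volume down to a manageable 18.30 GB database.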
Sampling Reliability
The first objective of the pilot study was to assess the reliability of m-Path Sense in terms of the sampling frequency, that is, whether the number of data points was satisfactory. Figure 3 depicts the average number of observations per hour for all participants. As separate sensors have different target sampling frequencies (and thus, different scales in the figure), the color range of each row is determined from the sensor's sampling frequencies, ranging from 0 (red) to the maximum observed sampling frequency for that sensor (blue). In general, we found the number of collected observations for most sensors to be quite satisfactory in terms of providing sufficient data for most types of analyses and typical research questions. For example, the accelerometer sensor provides approximately 780 samples per hour (after binning), which is slightly more than once every 5 seconds. Another example is the location sensor that provides a location update 97.20 times per hour on average (once every 37.04 seconds), with far more updates during the day (once every 31.65 seconds) than at night (once every 1.74 minutes), instead of its targeted sampling frequency of once per minute. In general, sampling appears to be decreasing at night, possibly owing to the OS pushing the app to the background.
Although the results appear good when looking at the absolute number of measurements per hour, the coverage rate-the ratio between the actual number of measurements and the expected number of measurements-may be a better method for assessing sampling reliability. Figure 4 depicts the pilot study's relative coverage rate. Multimedia Appendix 1 provides an overview of the expected number of measurements, that is, the sampling schema. Figure 4 shows that the relative coverage is well below 1, frequently hovering around 0.50. This means that only half of the measurements were collected in comparison with what the app was designed to do. For example, if the location of the participants was supposed to be collected once every minute, it was only collected once every 2 minutes. For most sensors, the targeted sampling frequency was quite high; therefore, even collecting half of the intended data may be sufficient for a given study. However, for a general-purpose mobile sensing app, this effect is generally undesirable.
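The relative coverage rate itself is a simple ratio of observed to targeted counts per sensor; as a sketch (sensor names and target values are illustrative):

```python
def coverage_rate(actual_counts, expected_per_hour):
    # Relative coverage per sensor: observed measurements in an hour
    # divided by the number the sampling schema targets. Sensors with
    # no observations at all default to a coverage of 0.
    return {sensor: actual_counts.get(sensor, 0) / expected
            for sensor, expected in expected_per_hour.items()}
```

Computed per participant and per hour, these ratios are exactly what a heatmap such as Figure 4 aggregates.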
The low relative coverage rate raises the question of why few observations were collected. A related issue is that the data contain a large number of gaps ( Figure 5). In this case, a gap is defined as a period of at least 5 minutes during which no measurements were recorded by any sensor. The colored bars in Figure 5 depict these gaps over time for each participant. During the 21-day pilot study, Android users had a median of 157 gaps lasting 7.55 minutes in their data, whereas iOS users had a median of 165 gaps lasting 47.36 minutes. Thus, although Android and iOS users had roughly the same number of gaps in their data, gaps occurring on iOS devices were much longer, resulting in a far greater loss of data. Naturally, the more gaps there are in the data, the fewer observations can be collected: this amounts to a total data loss of 4.19 days for Android users and 1.95 weeks for iOS devices, both of which are undesirable. Fortunately, nightly gaps accounted for a large portion of this data loss: after removing gaps between 12 AM and 6 AM, the total data loss per participant over the 21-day study period was 23.93 hours for Android devices and 5.02 days for iOS devices. (Regarding the color scale in Figure 3: the lower bound is always 0 and completely red, and the maximum observed sampling frequency for each sensor determines the upper bound. For example, weather has a maximum of 1 and is thus completely blue, whereas location is completely blue only at 166.50, approximately 1 measurement every 22 seconds. The accelerometer and gyroscope measurements were binned to an average value per second.) Furthermore, we also assessed whether there were differences between successive OS updates. To this end, we ran a 2-way ANOVA (α=.05) with the total gap time per participant as the dependent variable and the OS and whether it was a new or old version of that OS as independent variables. 
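Gap detection under this definition reduces to scanning sorted timestamps across all sensors; as a sketch:

```python
def find_gaps(timestamps, min_gap_s=300):
    # A gap is a period of >= 5 minutes (300 s) during which no
    # measurement was recorded by any sensor. timestamps: unix seconds
    # pooled from all sensors of one participant.
    ts = sorted(timestamps)
    gaps = []
    for a, b in zip(ts, ts[1:]):
        if b - a >= min_gap_s:
            gaps.append((a, b))
    return gaps
```

Summing the gap durations per participant yields the total gap time used as the dependent variable in the ANOVA described above.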
The classification was required because there may be only a few participants on some OS versions, resulting in severely unbalanced groups; for example, only 9.6% (10/104) of participants used an OS older than Android 10 or iOS 15 (partial η²=0.03, 95% CI 0-1). Therefore, although there were differences between Android and iOS in terms of the overall number of gaps, the specific OS versions did not show statistically significant differences within each OS.
One of the most likely causes of these data gaps is the OS itself, specifically how it attempts to save energy and resources when the device is not in use (eg, Android's doze mode). There are some guidelines [44] that should help prevent the app from being gradually pushed into the background (and eventually killed), even though both Android and Apple have become increasingly strict in recent years on apps that consume battery in the background. A solution that is currently being implemented to mitigate this issue is to send a signal to the server every 5 minutes to show that it is still alive. When the server does not receive this signal, it can send a notification to the app, which the user must then click on to resume sampling.
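The server side of such a heartbeat mechanism could be as simple as the following sketch: the 5-minute timeout mirrors the signal interval described above, while everything else (names, data layout) is hypothetical.

```python
def stale_participants(last_seen, now, timeout_s=300):
    # last_seen maps participant IDs to the unix time of their most
    # recent heartbeat. Anyone silent for more than the timeout is
    # flagged so a push notification can ask them to reopen the app.
    return [pid for pid, ts in last_seen.items() if now - ts > timeout_s]
```

The key design point is that liveness is judged server-side: the OS can silently kill the app, but it cannot suppress the resulting absence of heartbeats.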
In addition to the app simply being pushed to the background and eventually killed, we investigated several other possible causes for the gaps in the pilot study data. Differences between smartphone brands were one of the first explanations we considered: a particular brand may be underperforming, thus lowering the overall coverage rate. Multimedia Appendix 2 shows the average coverage rate per day for each brand. There are some differences between brands, but they are minor. A second hypothesis was that the low average sampling frequency was caused by a small number of participants who provided very little data. The average relative coverage per participant is presented in Multimedia Appendix 2. This shows that, although there is some variation in the average relative coverage of participants for most sensors, there are no obvious outliers or skewed distributions. Finally, we hypothesized that the large gaps in iOS data were caused by a previously identified problem in the study, namely, iOS abruptly deleting some files owing to a backup issue. If the m-Path Sense folder became very large, it was backed up by iOS, which inexplicably deleted all files. After a few days, we fixed this bug by explicitly stating that these data should not be backed up and that they would be sent to the server more frequently. However, as this backup solution only applied to log-in data, the sensing data could still be compromised. A method to evaluate the impact of this problem is to look for a relationship between the proportion of Wi-Fi versus mobile data use and the total number of gaps. Because the data were sent every 5 minutes when connected to Wi-Fi, the possibility of a large amount of data being lost was low in that case. Thus, participants who use Wi-Fi more frequently will most likely have fewer gaps, and a negative association between total Wi-Fi time and gaps stands to reason.
Although there was a negative relationship between the 2 aspects, it was neither very strong (r=−0.16) nor statistically significant (P=.10) when considering the OS version.
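The Wi-Fi-time vs. gap-count association reported above is a Pearson correlation. A self-contained version is sketched below; the sample values are made up for illustration and do not reproduce the paper's r=−0.16.

```python
import math

# Self-contained Pearson correlation coefficient, used here only to
# illustrate the Wi-Fi-time vs. gap-count analysis described above.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

wifi_hours = [2, 5, 8, 12, 20]     # total Wi-Fi time per participant (made up)
gap_count = [40, 35, 30, 28, 20]   # total gaps per participant (made up)
print(round(pearson_r(wifi_hours, gap_count), 2))  # a negative r
```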
User Experience
We assessed the participants' perceived user experience with m-Path Sense using feedback from the debriefing session and an ESM item assessing app issues ("Has your smartphone and/or app worked normally since the last beep?"). During the debriefing, participants were asked, among other things, whether they had any problems with the app, what bothered them, and what could be improved; this was not a structured interview, because we wanted to let the participants decide what they thought was most important to address. The figures in parentheses below represent the number of participants who mentioned an issue at least once.
First, participants noticed minor battery drain (as also reported by battery tests [27]), but this was not perceived as bothersome (27/104, 25.9%). This is also reflected in the fact that only 1.34% (133/9924) of all beeps received on Android and 15.00% (1365/9096) of all beeps on iOS devices indicated that participants noticed the battery running down faster than usual at that time. Moreover, when asked during the debriefing session whether this was a burden, these participants (27/104, 25.9%) reported that it was manageable because they carried their charger with them.
Second, some participants (18/104, 17.3%) reported that the app sometimes crashed or froze on start-up; however, this was only reported in 2.23% (221/9924) and 0.45% (41/9096) of ESM beeps for Android and iOS users, respectively. Finally, some iOS users (12/104, 11.5%) mentioned a problem at the beginning of the study (affecting iOS only), where some of m-Path Sense's files were deleted by the OS, causing the app to stop working. Participants who experienced this issue were asked to reinstall the app, and the problem was permanently resolved through an update after a few days.
Principal Findings
In this study, we attempted to combine the collection of ESM and mobile sensing data in 1 app, m-Path Sense, and an associated R package and Shiny app, so that these data can be easily brought together and can interact with each other in the future. The platforms combined for this purpose are m-Path [26] and CAMS [27]. During integration, there was a strong emphasis on security and privacy, such as the decision to create a separate version for m-Path Sense (rather than including this as an option in m-Path itself) and encrypting and hashing different types of data.
A pilot study with 104 participants was conducted to assess the evaluation criteria (sufficient reliability and not very invasive for the user). Although the total amount of data collected was adequate for most studies, it was less than the intended sampling frequency, likely caused by the OS's attempts to save energy and resources when the device is not in use (eg, Android's doze mode). This issue is not uncommon in the mobile sensing literature. For example, a study [40] discovered that when collecting geolocation data using a smartphone, 12% of all gaps lasted longer than 60 minutes, even though measuring every 30 minutes was planned. In another study [39], 17.2% of participants had only 2 location measurements per day, whereas 24 were planned, and 17.2% had no measurements at all, with significant differences between Android (better) and iOS. As these gaps may have a direct impact on sampling reliability, it is critical to assess their effect on studies' findings.
As the field of digital phenotyping and mobile sensing research expands and evolves, we can expect increased scrutiny and device OS constraints imposed by corporations such as Apple and Google. Further limitations may include strict guidelines for data collection, such as limitations on the types of sensors that can be accessed and prioritizing transparency to users about what data apps collect. Although this may make it more difficult for researchers to gather the data they need for their studies, it is also possible that these limitations will drive innovation in the field and lead to the development of new, more privacy-sensitive mobile sensing methods. This increased transparency is unlikely to provide a long-term impediment to mobile sensing research, as researchers already strive to be maximally transparent to participants about the data collected in their studies. The future of digital phenotyping and mobile sensing research will almost certainly be influenced by a trade-off between the requirement for precise and extensive data collection and the need to preserve individuals' privacy and prevent unnecessary battery drain.
One of m-Path Sense's shortcomings in comparison with other mobile sensing apps is the lack of a web-based dashboard in which the researcher can inspect and possibly extract all data at a glance. It should be noted that this functionality is already available for ESM data [26,45]. To accomplish this, a new pipeline (possibly with another backend) should be built, through which data are automatically extracted, imported, and stored in a structured format (as the R package does now). The researcher should then be able to generate several interactive plots to check the data via the web-based dashboard, such as a coverage plot (Figure 3).
When conducting mobile sensing research, it is not always necessary to collect and store all types of smartphone data. An important reason for this is to protect the participant's privacy, but it is also because adding sensors consumes extra battery power, which is unnecessary. Therefore, one of our next steps forward will be to configure sensors and their sampling frequencies remotely. A concrete implementation could include allowing researchers to choose sensors from a web-based dashboard (as described previously) and adjust the sampling frequency according to their liking. This could also be done during the study if certain sensors are no longer considered to be relevant or if a sensor's sampling frequency is found to be very high or very low.
A third important functionality for the future is the integration of event-contingent sampling, particularly regarding just-in-time adaptive intervention. CAMS already supports event-contingent sampling to some extent by allowing certain sensors to trigger each other. For instance, location activity could only be activated when the accelerometer activity increases, which saves battery when the participant is not moving. The web-based dashboard should be configured so that researchers can specify whether certain beeps or items should be requested only when a sensor reaches a certain value. A good example of this is in dyadic research, where beeps could only be requested when the participant is within Bluetooth range of their partner.
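The accelerometer-gated location sampling mentioned above can be sketched as a simple threshold rule. The threshold value and all names below are illustrative assumptions, not CAMS's or m-Path Sense's actual API.

```python
# Hypothetical sketch of event-contingent sampling: the battery-hungry
# location sensor is only enabled while recent accelerometer activity
# exceeds a movement threshold, saving battery when the participant is
# not moving.
MOVEMENT_THRESHOLD = 1.2  # mean |acceleration| over the window (made up)

def location_enabled(recent_accel_magnitudes, threshold=MOVEMENT_THRESHOLD):
    """Enable location sampling only when the participant appears to move."""
    if not recent_accel_magnitudes:
        return False
    mean_mag = sum(recent_accel_magnitudes) / len(recent_accel_magnitudes)
    return mean_mag > threshold

print(location_enabled([0.1, 0.2, 0.1]))  # stationary → False
print(location_enabled([1.5, 2.0, 1.8]))  # moving → True
```

The same pattern generalizes to the dyadic example in the text: a beep is requested only when a sensor value (eg, Bluetooth proximity to a partner's device) crosses a configured threshold.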
Limitations
The study has certain limitations that should be considered when interpreting the results. A limitation is that the study was conducted as a pilot study with a small sample size; therefore, not all smartphone brands were adequately covered. This could have an impact on the validity of the findings and their generalizability to a large group. Another limitation is that mobile sensing technology is subject to changes and updates from companies such as Apple and Google; therefore, the results of this study are merely a snapshot of the performance at the time of measurement. At the same time, m-Path Sense is a software package that is continually evolving to stay up to date with these developments. These limitations should be considered when interpreting the findings; however, the study still provides useful information for evaluating the current performance of m-Path Sense.
Conclusions
We combined the strengths of m-Path (a comprehensive ESM platform) and CAMS (mobile sensing) to produce m-Path Sense, a new mobile software app that prioritizes ESM and can be easily expanded to include mobile sensing. By examining its sampling reliability and perceived user experience in a pilot study, we found that the total amount of data gathered is sufficient for most studies, even though it is lower than the intended sampling frequency owing to OS limitations. Minor battery drain was reported by some individuals, but it was not considered to be problematic. m-Path Sense can be a step-up for research into digital phenotyping that calls for a combination of complete ESM and mobile sensing functionality, even though the accessibility of sensors is increasingly being restricted by OSs. Future studies should include a web-based dashboard for inspecting data and switching sensors on and off, event-contingent sampling, and more methods for monitoring and reactivating the app to minimize data gaps.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Application Effect of Computer-Assisted Local Anesthesia in Patient Operation
To avoid the psychological harm caused by pain to patients, this study examined the application effect of computer-assisted local anesthesia in patient surgery. A total of 72 patients with hypertension (35 males and 37 females, aged 53–83 years, mean age 70.8 ± 1.3 years) who were scheduled for tooth extraction in the department of stomatology from January to December 2014 were selected. All patients underwent tooth extraction with ECG monitoring. Patients who were contraindicated for tooth extraction, had a history of mental illness, or had used antianxiety drugs or sedatives within 1 week before surgery were excluded. Patients were randomly divided into two groups according to their ID numbers: an observation group (n = 36) and a control group. In the observation group, a painless oral local anesthesia injection instrument was used for local anesthesia injection, whereas the 36 patients in the control group received local anesthesia by traditional manual injection. The results showed that 86.11% of patients in the observation group had decreased anxiety scores after anesthesia, compared with only 13.88% in the control group. Among patients with decreased anxiety scores, 80.65% in the observation group no longer met criteria for dental anxiety, compared with 28.57% in the control group. Computer-assisted oral local anesthesia can effectively control dental anxiety, relieve the pain and discomfort of local anesthesia injection, and improve patient satisfaction, which is conducive to smooth nursing work.
Introduction
With the improvement of people's living standards, patients have higher requirements for the comfort and safety of oral local anesthesia, especially patients with dental anxiety [1]. Comfortable, safe, and painless oral local anesthesia is the premise of and guarantee for smooth oral treatment, as well as an effective means to relieve patients' fear and prevent dental anxiety from causing patients to avoid oral treatment [2]. Dental anxiety refers to a patient's anxiety, tension, and fear of dental treatment. Tooth extraction is a common operation in alveolar surgery. The pain caused by local anesthesia injection is one of the important causes of dental anxiety, and it can lead to excitation of the sympathetic nervous system, accelerated heart rate, and increased blood pressure. Especially in patients suffering from systemic diseases such as cardiovascular disease, the original disease can be induced or aggravated, leading to serious complications [3]. Therefore, how to reduce the pain and fear of such patients, reduce the occurrence of dental anxiety, and reduce the risk of surgery are problems to be solved at present [4]. Single tooth anesthesia (STA) is a computer-controlled local anesthesia injection instrument, which provides a new method for painless local anesthesia injection. Dental anxiety can lead to an increase in the secretion of endogenous adrenaline, thus increasing the excitability of the sympathetic nerve and manifesting as accelerated heart rate and increased blood pressure [5]. It was observed that the blood pressure and heart rate of the experimental group were stable during the whole anesthesia process, while the blood pressure of the control group increased significantly during the anesthesia injection.
This suggests that STA computer-assisted oral local anesthesia can effectively reduce the risk of tooth extraction in such patients and reduce the occurrence of angina pectoris, myocardial infarction, arrhythmia, ventricular fibrillation, and other serious complications [6]. Pain is the main factor causing dental anxiety. In clinical stomatological practice, avoiding the psychological harm caused by pain has been accepted by more and more stomatological workers. According to a large amount of clinical evidence, traditional local anesthesia injection can cause very obvious pain, thus aggravating patients' anxiety during and after treatment [7]. The anxiety level of hypertensive tooth extraction patients was the highest, owing to the local injection and the anesthetic effect. The usual way to reduce painful stimuli is to inject anesthetic drugs very slowly.
For this purpose, the computer-assisted oral local anesthesia instrument emerged as the times required. Owing to its low injection speed, patients could hardly feel pain during the injection, thus achieving the purpose of reducing anxiety [8]. Hypertensive patients are a special group in oral therapy. Studies have shown that, because such patients fear pain, their preoperative anxiety level is significantly positively correlated with the increase in intraoperative blood pressure. Therefore, hypertensive patients may experience a hypertensive crisis due to anxiety and pain during treatment [9]. For hypertensive patients with dental anxiety, pain relief and anxiety reduction are particularly important. In the treatment of tooth extraction assisted by computer-assisted oral local anesthesia, nurses need to cooperate with doctors to evaluate patients' dental anxiety. For patients with high MDAS scores, preoperative psychological counseling should be actively performed, and the whole process of treatment should be explained, especially the advantages of computer-assisted oral local anesthesia, which can reduce pain and eliminate patients' fear of the instrument [10]. During the injection, the nurse assists the doctor in closely observing the patient's vital signs. Mn A. et al.'s study showed that 56% of elderly patients had different degrees of anxiety about dental treatment, mostly caused by fear of pain. The level of anxiety before anesthesia was highest in patients with cardiovascular disease, owing to the local injection and anesthesia effect, and the level of anxiety before anesthesia was positively correlated with the increase in intraoperative blood pressure. Therefore, it is very important to use a painless local anesthesia injection technique for tooth extraction in cardiovascular patients with dental anxiety [11]. Ji H. L. et al. put forward the manual syringe with the needle tube.
Since then, clinical oral local anesthesia has followed this traditional anesthesia injection method for more than a hundred years. Although the form and materials have been improved, the basic injection method is the same. The pain caused by traditional manual injection mainly comes from the puncture pain caused by the puncture into the tissue and the pressure pain caused by the injection velocity. STA, a computer-controlled anesthesia system, was introduced to stomatology in 2007. The main principle is that the microprocessing chip in the host can automatically and accurately control the changes in injection pressure, flow rate, and other variables, so that the pressure of anesthetic drug injection is lower than the pain threshold of the body, achieving the ideal effect [12]. Sreeja R. et al. believed that a slow and constant "slow flow rate" could reduce the pain of local anesthesia injection of the palatal mucosa significantly more than a "fast flow rate." Second, STA uses a tubular package, which changes the traditional syringe style. The nonthreatening injection handle reduces the patient's anticipated anxiety.
The introduction of anesthetics is controlled by the foot switch at the bottom of the machine, which changes the previous one-handed three-finger operation into a pen-holding two-finger operation, increasing hand stability and flexibility. The rotating injection method avoids deviation of the injection position caused by the deflection force generated by the bevel angle of the needle and improves the accuracy of injection [13]. On the basis of current research, this study examined the application effect of computer-assisted local anesthesia in patient surgery. In this method, 72 patients with hypertension, 35 males and 37 females, aged 53-83 years, with an average of 70.8 ± 1.3 years, were selected for appointment tooth extraction in the department of stomatology from January to December 2014. All patients were booked for tooth extraction with ECG monitoring. Patients who were contraindicated for tooth extraction, had a history of mental illness, or had used antianxiety drugs and sedatives within 1 week before surgery were excluded. Patients were randomly divided into two groups according to their ID: observation group (n = 36) and control group. In the control group, 36 patients were injected with local anesthesia by traditional manual injection. The results showed that 86.11% of patients in the observation group had decreased anxiety scores after anesthesia, while only 13.88% of patients in the control group had decreased anxiety scores. Among patients with decreased anxiety scores, 80.65% in the observation group no longer had dental anxiety, compared with 28.57% in the control group. In the monitoring of heart rate and blood pressure, this study found that the heart rate and blood pressure of the observation group did not change significantly before, during, and after injection, while the blood pressure of the control group increased significantly during local anesthesia injection compared with before and after injection.
By reducing patients' injection pain and anxiety, changes in blood pressure and heart rate can be effectively controlled, so as to reduce the incidence of hypertensive crisis as much as possible.
Clinical Data.
A total of 72 hypertensive patients, 35 males and 37 females, aged 53 to 83 years, with an average of 70.8 ± 1.3 years, were selected from the department of stomatology from January to December 2014. All patients were booked for tooth extraction with ECG monitoring. Patients who were contraindicated for tooth extraction, had a history of mental illness, or had used antianxiety drugs and sedatives within 1 week before surgery were excluded. Patients were randomly divided into two groups according to their ID: observation group (n = 36) and control group. In the control group, 36 patients were injected with local anesthesia by traditional manual injection.
Operation Method.
The observation group and the control group were both treated with four-hand operation, and the anesthetic was articaine-epinephrine injection. Computer-assisted oral local anesthesia was used in the observation group, with the STA (single tooth anesthesia) local anesthesia apparatus produced by Milestone. The control group received traditional manual oral local anesthesia. During the whole tooth extraction process, patients in both groups were continuously monitored for blood pressure, heart rate, respiration, and oxygen saturation, and the doctors, nurses, and patients maintained verbal communication from beginning to end [14].
Nursing Cooperation.
Psychological nursing: as the patients had dental anxiety, it was especially necessary to understand their psychological state during the visit and provide targeted psychological nursing. Timely communication with patients and their families is particularly important, and communication with patients should be warm, sincere, and patient. All treatments and expenses should first obtain the understanding and informed consent of the patient, as well as the patient's trust and cooperation. When introducing computer-assisted or traditional manual oral local anesthesia to patients and their families, physical objects, pictures, and text should be combined as far as possible to let them understand the operation method, the process, and how to cooperate with the medical staff. Relieving patients' anxiety and fear as far as possible is conducive to engaging patients' subjective initiative and achieving good treatment. During the whole operation, the patient was closely observed, with constant attention to the patient's own feelings. Before each operation, patients were informed in advance so that they were psychologically prepared. Individual postoperative guidance was given, together with serious, meticulous follow-up. Nursing before local anesthesia: the patient's chief complaint, medical history, medication, anesthesia history, allergy history, and oral treatment history were recorded in detail, and a thoughtful and reasonable nursing plan was preliminarily formulated. Patients were shown how to fill in the MDAS scale. Routine instruments and drugs for oral local anesthesia were prepared. For the observation group, the STA painless oral local anesthesia instrument had to be prepared and its normal operation ensured. Because the observation group used STA for local anesthesia injection, the working principle and advantages of STA were explained to the patients beforehand to relieve their tension.
Lead the patient to sit in the dental chair, adjust the position and light, and connect the blood pressure and heart rate monitor. Local anesthesia nursing: both the observation group and the control group were coordinated with the four-hand operation method; the blood pressure and heart rate of all patients were monitored and recorded; patients were asked to raise their hands if they felt any discomfort and were encouraged at any time to ensure the successful completion of local anesthesia and treatment, with attention to psychological nursing throughout the whole process. The control group received traditional manual oral local anesthesia. For the observation group, the STA oral painless local anesthesia instrument was connected to the power supply. After the equipment self-test was completed, pilin was loaded into the STA tube according to routine, and the tube was connected to the STA device. After checking the STA exhaust and cartridge capacity, the injection handle was inserted into the handle slot of the equipment for later use, and the injection speed of the equipment was adjusted according to the doctor's injection requirements. After disinfecting the injection site, the doctor passes the injection handle to the nurse [15]. A cotton swab is placed slightly below the injection site prior to insertion to assist the physician with preanesthesia and/or suction of local anesthetic exudates, to reduce discomfort caused by puncture and/or oral inflow. After the tip of the needle is inserted into the mucous membrane, the cotton swab can be withdrawn, while the patient's feelings and requirements are observed and asked about. Postoperative care of local anesthesia: after local anesthesia, the patients were asked about their feelings using a 10-point visual analog scale (VAS), and the pain scores during local anesthesia were self-assessed by the patients and recorded.
MDAS scores and adverse reactions of local anesthesia were investigated, and patients in the observation group were asked whether they were willing to continue to choose computer-assisted oral local anesthesia for oral local anesthesia.
Observations.
Dental anxiety: dental anxiety was assessed with the MDAS scale before and after local anesthesia. The MDAS consists of four questions, with graded options for each question from (a) relaxed and (b) a little uneasy through (d) afraid or anxious to (e) so afraid or anxious that you sometimes sweat or feel unwell, scored 1-5 points in order. Patients with MDAS ≥ 11 were considered dental anxiety patients [16]. Blood pressure and heart rate: an ECG monitor (UT4000 F) was used to measure patients' blood pressure and heart rate before, during, and after local anesthesia injection [17]. Pain degree: the visual analog scale (VAS) was used to evaluate the degree of pain. The VAS scale is a straight line 10 cm long, with "0" and "10" at the two ends representing "no pain" and "the worst pain imaginable," respectively. Subjects mark the line according to their own pain, and the distance from the "no pain" end indicates the degree of pain. Adverse reactions of anesthesia: follow-up after anesthesia was conducted to investigate whether pain, swelling, hematoma, ulcer, tissue necrosis, or other adverse reactions occurred at the injection site [15].
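The MDAS scoring rule described above (four items, each scored 1-5, with a total of 11 or more indicating dental anxiety) can be written as a small function. This is an illustrative sketch following the text's description; the function names are assumptions.

```python
# Sketch of MDAS scoring as described in the text: four items, each
# answered on a 1-5 scale, summed; a total >= 11 classifies the patient
# as having dental anxiety.
DENTAL_ANXIETY_CUTOFF = 11

def mdas_score(item_answers):
    """Sum four MDAS items (each 1-5); return (total, has_dental_anxiety)."""
    assert len(item_answers) == 4 and all(1 <= a <= 5 for a in item_answers)
    total = sum(item_answers)
    return total, total >= DENTAL_ANXIETY_CUTOFF

print(mdas_score([2, 2, 3, 2]))  # → (9, False)
print(mdas_score([3, 3, 3, 4]))  # → (13, True)
```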
Statistical Methods.
SPSS 13.0 statistical software was used for data analysis. The measurement data with normal distribution and homogeneity of variance were expressed as mean ± standard deviation (x̄ ± s). The t-test was used for comparison between groups. Count data were expressed as frequency and percentage (%), and comparisons between groups were performed with the χ² test. P < 0.05 was considered statistically significant.
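For the count-data comparisons, the χ² test on a 2×2 table reduces to a short computation. The paper used SPSS; the self-contained sketch below only illustrates the statistic, with made-up cell counts, and compares against 3.841, the χ² critical value at α = .05 with 1 degree of freedom.

```python
# Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]],
# illustrating the count-data test named in the text (no continuity
# correction; cell counts below are illustrative).
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Decreased vs. not-decreased anxiety scores in two groups of 36.
stat = chi_square_2x2(31, 5, 5, 31)
print(stat > 3.841)  # → True: significant at P < .05 (df = 1)
```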
Results and Analyses
Comparison of the pain degree of local anesthesia injection between the two groups is shown in Figure 1: the VAS score of the observation group was 0-6 points, with an average of 2.17 ± 1.859 points, whereas that of the control group was 1-8 points, with an average of 3.67 ± 1.973 points [16]. The VAS value of the observation group was lower than that of the control group, and the difference was statistically significant (t = 3.321, P = 0.001).
The comparison of dental anxiety between the two groups before and after local anesthesia is given in Table 1. After anesthesia, the anxiety scores of 86.11% (31/36) of patients in the observation group decreased, compared with only 13.88% (5/36) in the control group, and the difference was statistically significant (χ² = 6.72, P < 0.01). In the observation group, the anxiety scores of 80.65% (25/31) of patients decreased to a nondental anxiety state (MDAS < 11), compared with only 28.57% (4/14) in the control group, and the difference was statistically significant (χ² = 6.93, P < 0.01).
Comparison of blood pressure and heart rate between the two groups before, during, and after anesthesia is given in Tables 2 and 3. In the control group, blood pressure during local anesthesia injection increased compared with before and after anesthesia, with a statistical significance (P < 0.05), while heart rate accelerated, but the difference was not statistically significant (P > 0.05) [18].
There were no significant differences in blood pressure and heart rate in the observation group before, during, and after anesthesia (P > 0.05).
In the treatment of tooth extraction assisted by computer-assisted oral local anesthesia, nurses need to cooperate with doctors to evaluate patients' dental anxiety. For patients with a high MDAS score, preoperative psychological counseling should be actively provided, and the whole process of treatment should be explained, in particular the advantages of computer-assisted oral local anesthesia technology in reducing pain and eliminating patients' fear of the instrument.
Conclusions
This study presents a study on the application effect of computer-assisted local anesthesia in patients undergoing surgery. This method selected 72 hypertensive patients, 35 males and 37 females, aged from 53 to 83 years old, with an average of 70.8 ± 1.3 years old, who underwent appointment tooth extraction in the department of stomatology from January to December 2014. All patients were booked for tooth extraction by ECG monitoring. Patients who were contraindicated for tooth extraction, had a history of mental illness, and had used antianxiety drugs and sedatives within 1 week before surgery were excluded. Patients were randomly divided into two groups according to their ID: observation group (n = 36) and control group. In the control group, 36 patients were injected with local anesthesia by traditional manual injection.
The results show that 86.11% of patients in the observation group have decreased anxiety scores after anesthesia, while only 13.88% of patients in the control group have decreased anxiety scores. Among patients with decreased anxiety scores, 80.65% in the observation group became nondental anxiety compared with 28.57% in the control group. In the monitoring of heart rate and blood pressure, this study found that the heart rate and blood pressure of the observation group did not change significantly before, during, and after injection, while the blood pressure of the control group increased significantly during local anesthesia injection compared with before and after injection. By reducing patients' injection pain and anxiety, changes in blood pressure and heart rate can be effectively controlled, so as to reduce the incidence of hypertensive crisis as much as possible. In the future, other applications related to local anesthesia in computer-assisted surgery will continue.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Medicine",
"Computer Science"
] |
Releaf: An Efficient Method for Real-Time Occlusion Handling by Game Theory
Receiving uninterrupted videos from a scene with multiple cameras is a challenging task. One of the issues that significantly affects this task is occlusion. In this paper, we propose an algorithm for occlusion handling in multi-camera systems. The proposed algorithm, called Real-time leader finder (Releaf), leverages mechanism design to assign leader and follower roles to each of the cameras in a multi-camera setup: using the Stackelberg equilibrium, the motion is led by the camera with the least occluded view. The proposed approach is evaluated on our previously open-sourced tendon-driven 3D-printed robotic eye that tracks the face of a human subject. Experimental results demonstrate the superiority of the proposed algorithm over the Q-learning and Deep Q Network (DQN) baselines, achieving improvements of 20% and 18% for horizontal errors and an improvement of 81% for vertical errors, as measured by the root mean squared error metric. Furthermore, Releaf runs in real time and removes the need for training, making it a promising approach for occlusion handling in multi-camera systems.
Introduction
Occlusion stands as a pervasive challenge in computer vision, where the relative depth order within a scene can obstruct, either partially or completely, an object of interest [1]. The repercussions of occlusion are significant, limiting the information extractable from images. Full occlusion denotes the scenario where the camera's detection algorithm loses track of the target entirely, while partial occlusion arises when a segment of the target is obstructed in one of the images (see Figure 1). The consequences of occlusion extend to the potential misalignment of the tracking system, leading to the erroneous tracking of objects or even total loss of the target [2][3][4].
In the context of multi-camera setups, robust occlusion detection and handling are imperative. Prior research has explored various methods, including convex optimization [5], disparity maps with uniqueness and continuity assumptions [6], optical flow divergence and energy of occluded regions [7], constraints on possible paths in 2D matching space [8], target motion dynamics modeling [9], image likelihood thresholds for removing low-likelihood cameras [10], and target state prediction with Gaussian Mixture Probability Hypothesis Density and location measurement with game theory [11] to address occlusion challenges. While these methods exhibit promising results, they predominantly focus on occlusion either through target modeling, which poses challenges for generalization across different targets, or involve computationally expensive algorithms for image analysis. In this paper, we introduce a novel approach to occlusion handling in multi-camera systems, shifting the focus from the target to the viewers (cameras or eyes), rendering our method more generic and practical. Leveraging mechanism design, we initiate a game between the eyes (cameras) as a means to address occlusion dynamically (see Figure 2). Recognizing that the traditional separation of occlusion detection and handling phases can constrain occlusion to specific categories, such as partial occlusion [12,13], we propose an integrated approach. Our mechanism treats the cameras as rational agents, generating game outcomes and categorizing video streams into three categories: full occlusion, partial occlusion, and no occlusion. Each category is assigned a cost, determining game outcomes and updating the system cameras periodically (every three steps in our case).
To evaluate our proposed method, we implemented it on an open-source 3D-printed robotic eye [14]. Experimental results showcase the superiority of our algorithm over the Q-learning and DQN baselines by 20% and 18%, respectively, in terms of horizontal errors. Moreover, we observe an 81% improvement in vertical errors, as measured by the root mean squared error metric. The proposed algorithm, named Releaf, augments our previously introduced leader finder algorithm [15] by incorporating real-time performance, thus enhancing practicality through the elimination of training requirements.
The contributions of this paper include:
• Using game theory, especially mechanism design and the Stackelberg equilibrium, for camera role assignment in multi-camera systems.
• Improved performance over Q-learning and DQN baselines.
• Real-time operation without the need for training, making it efficient for practical applications.
Related Works
Unlike multi-object single-camera tracking, multi-camera people tracking (MCPT) presents greater challenges, especially when attempting to control cameras in a feedback loop via visual input. Consequently, most previous literature has addressed these challenges through offline methods, which can only process pre-recorded video streams and are therefore less practical for real-time applications. Our work specifically tackles the real-time challenges of MCPT, focusing on the problem of occlusion when tracking a single subject with multiple cameras. MCPT systems are particularly suitable for advanced surveillance applications.
Online MCPT methods typically use past frames to predict tracking in the current frame, while offline methods utilize both past and future frames, making them impractical for real-time scenarios where future frames are unavailable. Occlusions can result in assigning multiple IDs to the same subject in multi-subject tracking. To address this issue, ref. [16] proposes cluster self-refinement to periodically cleanse and correct stored appearance information and assigned global IDs, enhancing the utilization of pose estimation models for more accurate location estimation and higher quality appearance information storage. Unlike their periodic enhancement of visual features for multi-subject tracking, our approach focuses on tracking a single subject while ensuring proper visual tracking in both cameras. We handle occlusion using high-level bounding box information, specifically the center, through a decision tree that leverages previous information.
The researchers of ref. [17] employ human pose estimation to tackle occlusion, combining real and synthetic datasets to evaluate their method. Their approach follows the traditional MCPT pipeline of detection, bounding box assignment, and subject identification by a centralized algorithm, combining pose estimation with appearance features to estimate the positions of occluded body parts. Instead of combining pose estimation with appearance features, which can be computationally expensive, our algorithm enhances performance by analyzing the bounding box center with game theory, using the available information more efficiently.
Real-world applications necessitate scaling the tracking scenario to multiple cameras. However, appearance variances and occlusions complicate subject tracking. The usual practice in tracking multiple subjects involves linking segments of moving objects. In this paper, we use the centroid of the moving object blob to formulate single-subject tracking through game theory.
Aligned with the classifications in [18], we address multi-camera tracking (MCT) of a single object using a network of two homogeneous cameras (a robotic eye) focused on the same scene, with completely overlapping fields of view. We prioritize camera motion over subject motion, proposing a global MCT approach that incorporates inter-camera tracking to select the leading camera for tracking.
Various works have been implemented on robotic eye systems. For instance, Shimizu et al. [19] encourage human subjects to smile through a moving eyeball. Hirota et al. [20] present Mascot, a robotic system designed for casual communication with humans in a home environment. It comprises five robotic eyes, which, along with five speech recognition modules and laptop controllers, are connected to a server through the internet. Users communicate with the robot through voice, and the robot responds by moving its eyes. The system is used to express intent and display the importance and degree of certainty of content through eye movements [21].
Implementing algorithms on robotic eye systems is similar to working with multi-view cameras. Previous literature in this area [22,23] has focused on tracking multiple objects, while tracking a single object with multiple cameras and occlusion prevention remains an important research gap. Our proposed algorithm targets single-object tracking by multiple cameras, preventing target loss with a real-time, linear-time complexity algorithm.
Recent works have demonstrated the potential of game theory in addressing occlusion handling challenges in computer vision and robotics. For instance, ref. [24] proposes a voxel-based 3D face recognition system combining game theory with deep learning to increase occlusion handling robustness, while [25] focuses on optimal sensor planning to address full occlusion as the worst-case scenario.
To the best of our knowledge, our algorithm is the first to use mechanism design for occlusion handling. When multiple equilibria exist in a game, mechanism design can alter the game rules to achieve a unique equilibrium. Our proposed algorithm employs mechanism design to specify game characteristics that prevent random occlusions by increasing the cost of occlusion conditions. This incentivizes both players to minimize their costs, resulting in the eye with the unobstructed image becoming the leader.
Occlusion
In this paper, we categorize occlusions into four types as described in [9]: nonocclusion, partial occlusion, full occlusion, and long-term full occlusion. During nonocclusion, all of the features necessary for object tracking are visible to the camera sensor. In partial occlusion, some of these features are obscured, while in full occlusion, all of the features are hidden [9]. Long-term full occlusion is a variant of full occlusion that persists over an extended period.
Our experiments are designed to span all the major occlusion scenarios by including cases of no occlusion, partial occlusion, and full occlusion. These scenarios are selected based on their practical relevance and their ability to comprehensively evaluate the algorithm's performance across the spectrum of potential real-world conditions. While long-term full occlusion is a temporal extension of full occlusion, testing full occlusion inherently allows us to assess the algorithm's capability to handle longer periods of occlusion. This ensures that our experimental design effectively covers the most significant and challenging conditions faced in real-time tracking applications.
Game Theory
The field of game theory in mathematics examines decision making in games played by rational players. A game, according to game theory, is defined by its players, rules, information structure, and objective [26]. Rational players are self-interested agents who prioritize their own interests over those of their opponents. This paper incorporates a branch of noncooperative game theory known as two-player games, in which each player aims to minimize their cost while ignoring the other player [26].
Game theory studies decision making in strategic situations involving multiple players. A Nash equilibrium (NE) is a joint strategy profile that is optimal in the sense that no player can improve their outcome by unilaterally changing their strategy while all other players' strategies remain unchanged. In an N-player game, a strategy profile (γ^(1)*, ..., γ^(N)*) is a Nash equilibrium if, for every player i,

J_i(γ^(i)*, γ^(−i)*) ≤ J_i(γ^(i), γ^(−i)*) for all γ^(i) ∈ Γ^(i),    (1)

where −i refers to all players except player i, γ^(i) represents player i's strategy, Γ^(i) is player i's available strategy space, and J_i(γ^(1), ..., γ^(N)) is player i's outcome. In game theory, the term "strategy" is often used interchangeably with "policy" [26].
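The Nash condition can be checked numerically for a small two-player cost game. The 2 × 2 cost matrices below are illustrative, not taken from the paper; they show that more than one equilibrium may coexist, which is what makes mechanism design useful later on:

```python
import itertools

def is_nash(cost_a, cost_b, i, j):
    """Return True if strategy pair (i, j) is a Nash equilibrium of a
    two-player cost game: neither minimizing player can lower their own
    cost by deviating alone, per the condition in Equation (1)."""
    # Player A deviates over rows while B's column j stays fixed
    if any(cost_a[k][j] < cost_a[i][j] for k in range(len(cost_a))):
        return False
    # Player B deviates over columns while A's row i stays fixed
    if any(cost_b[i][k] < cost_b[i][j] for k in range(len(cost_b[0]))):
        return False
    return True

# Illustrative 2x2 cost matrices (lower is better for both players)
A = [[2, 4],
     [3, 1]]
B = [[1, 3],
     [4, 2]]

equilibria = [(i, j) for i, j in itertools.product(range(2), range(2))
              if is_nash(A, B, i, j)]
print(equilibria)  # this toy game has two equilibria
```

With these matrices both (0, 0) and (1, 1) satisfy the condition, so the game alone does not single out one outcome.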
Games can be represented in either matrix or tree form. The tree form representation is particularly advantageous as it incorporates the notion of time and is more suitable for games involving more than two players. In the tree form representation, players are represented by nodes, possible actions by edges, and outcomes by leaves. The equilibrium in the tree form representation is referred to as the Stackelberg equilibrium. The Stackelberg equilibrium is used in many practical approaches [27,28]. To find the Stackelberg equilibrium in tree form, a procedure called backward induction is used. This involves starting at the bottom-most leaf of the game (the outcome) and asking its node (player) to select the best outcome (see Figure 3). The selected outcome is then transferred to the higher level edge for the next player, until the root player selects her outcome, which is the Stackelberg equilibrium of the game [29]. In game theory, the theory of mechanism design (also known as inverse game theory) focuses on the design of game configurations that satisfy certain objectives [30]. Mechanism design provides a theoretical framework for designing interaction rules that incentivize selfish behavior to result in desirable outcomes [31].
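Backward induction on a two-level leader-follower tree can be sketched as follows; the tree contents are illustrative, not taken from the paper's game:

```python
def backward_induction(tree):
    """Solve a two-level sequential game by backward induction.
    `tree` maps each leader action to a dict of follower actions, each
    leaf holding (leader_cost, follower_cost). Both players minimize:
    the follower best-responds to every leader action, then the leader
    picks the action whose induced leaf has the lowest leader cost."""
    best_action, best_costs = None, None
    for action, follower_options in tree.items():
        # Follower minimizes her own cost given the leader's action
        reply = min(follower_options, key=lambda a: follower_options[a][1])
        costs = follower_options[reply]
        if best_costs is None or costs[0] < best_costs[0]:
            best_action, best_costs = action, costs
    return best_action, best_costs

# Illustrative tree: leader actions 'L'/'R', leaves are (leader, follower) costs
tree = {
    "L": {"U": (3, 2), "D": (5, 1)},
    "R": {"U": (2, 4), "D": (6, 3)},
}
print(backward_induction(tree))  # ('L', (5, 1))
```

Here the follower answers "L" with "D" (cost 1) and "R" with "D" (cost 3), so the leader compares leaf costs 5 and 6 and commits to "L": the Stackelberg outcome.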
Methods
In our experiments, we used OpenCV's pre-trained classifiers for face detection, drawing a bounding box around the face in a video stream of size 480 × 480 broadcast over a robot operating system (ROS) message. This algorithm scans the input video for face localization by analyzing pixel patterns that match human facial features. The detection algorithm includes a confidence measure to determine the success of detection. A face is considered successfully detected at a confidence level of 0.4 or higher.
The proposed algorithm defines different occlusion states based on the confidence level: full occlusion occurs at confidence levels below 0.4, partial occlusion is identified at confidence levels between 0.4 and 0.8, and no occlusion is recognized when the confidence level exceeds 0.8. Thus, the confidence interval is divided into three sections: [0, 0.4] for full occlusion, (0.4, 0.8] for partial occlusion, and (0.8, 1.0] for no occlusion. The confidence level for each eye (σ_r and σ_l) is then compared to these levels via Equation (3). Our objective is to find a strategy, by Stackelberg's backward induction method, that minimizes this cost. All experiments were conducted with a single face in the scene. Scenarios involving multiple faces would require a different approach and formulation than the one used in the Releaf algorithm. In cases where multiple faces are detected, the algorithm follows a predefined heuristic to select and track the first detected face as the leader.
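The three-way confidence split above maps directly to a small function; the sketch below simply encodes the stated thresholds:

```python
def occlusion_state(confidence):
    """Map a face-detection confidence to an occlusion category using the
    thresholds stated in the text: [0, 0.4] full occlusion, (0.4, 0.8]
    partial occlusion, (0.8, 1.0] no occlusion."""
    if confidence <= 0.4:
        return "full"
    if confidence <= 0.8:
        return "partial"
    return "none"

print(occlusion_state(0.3), occlusion_state(0.6), occlusion_state(0.95))
# full partial none
```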
Equation (3) assigns the cost of a game leaf as

cost = 700, if (σ_r ≤ P and σ_l > P) or (σ_l ≤ P and σ_r > P);
cost = 1000, if (σ_r ≤ F and σ_l > F) or (σ_l ≤ F and σ_r > F);
cost = the regular payoff, otherwise.    (3)

The empirical values for P and F (the partial and full occlusion thresholds) in our experiments are 0.8 and 0.0, respectively, and σ_r and σ_l are the confidence levels for the right and left eyes. We fill the game tree using this equation: whenever partial or full occlusion occurs, the player faces a cost of 700 or 1000, respectively. These values are considered high costs as they surpass the highest possible payoff of 480√2 ≈ 678.8 pixels. This increases the cost of occlusion for the player and decreases the possibility of their selection as the leader.
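A minimal Python sketch of this piecewise cost; checking the full-occlusion case before the partial one, and the `dist_cost` placeholder standing in for the regular distance-based payoff, are our assumptions rather than details given in the paper:

```python
def occlusion_cost(sigma_r, sigma_l, P=0.8, F=0.0, dist_cost=0.0):
    """Leaf cost per Equation (3): 1000 when exactly one eye is at or
    below the full-occlusion threshold F, 700 when exactly one eye is at
    or below the partial-occlusion threshold P, otherwise the regular
    payoff (dist_cost, a placeholder here)."""
    # Exactly one eye fully occluded (xor via `!=` on the two booleans)
    if (sigma_r <= F) != (sigma_l <= F):
        return 1000
    # Exactly one eye partially occluded
    if (sigma_r <= P) != (sigma_l <= P):
        return 700
    return dist_cost

print(occlusion_cost(0.0, 0.9), occlusion_cost(0.5, 0.9))  # 1000 700
```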
In sequential games, the leader is the player who begins the game. By representing the game in tree form with each player as the leader, we can compare the Stackelberg equilibrium of each tree, found by backward induction, and determine the leader with the least outcome. This algorithm, called the Leader Finder Algorithm (Leaf Algorithm), was applied in real time, resulting in the Releaf algorithm shown in Algorithm 1. The Releaf algorithm considers two bi-player game trees at each time step, where both players are minimizers. The payoffs are initialized with a high cost, 1000 pixels, so that they are ignored in comparison with each payoff available to the players. An example of the game trees solved by the Releaf algorithm is illustrated in Figure 3. This algorithm finds the leader eye in O(1).
The results of each game tree, where one player assumes the role of the leader, are assessed and contrasted to identify the player (eye/camera) set to guide the movement with the superior outcome. To achieve similar errors in both the vertical and horizontal directions, we changed the dimensions of the video stream from 640 × 480 to 480 × 480, which yielded comparable errors for both directions (Equation (2)). This modification allowed our code to distinguish between strategies based on the sign of the errors, as demonstrated in Figure 4. Referring to a game tree wherein one of the cameras assumes the role of the leader, filled by Equation (3), we identify a single equilibrium for each game tree (lines 3 and 4 in Algorithm 1). The equilibrium points (θ_1 and θ_2), which are similar to the example values shown over the leaves of the tree in Figure 3, have one value for each of the players. This value is used to calculate the relevant cost for the leading player (lines 5 and 6 in Algorithm 1). The costs J_1 and J_2 are then compared to specify the player that can lead the movement with the least cost (lines 7 to 11 in Algorithm 1).
Algorithm 1: Real-time Leader Finder (Releaf) Algorithm
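Lines 3 to 11 of Algorithm 1 amount to solving one tree per candidate leader and comparing the leading players' equilibrium costs J_1 and J_2. A minimal sketch follows, with hypothetical payoffs in which 700-cost leaves mark a partially occluded left eye:

```python
def solve_tree(tree):
    """Backward induction on a two-level tree whose leaves hold
    (leader_cost, follower_cost); returns the leader's equilibrium cost."""
    best = None
    for follower_options in tree.values():
        # Follower best-responds by minimizing her own (second) cost
        reply = min(follower_options.values(), key=lambda c: c[1])
        if best is None or reply[0] < best:
            best = reply[0]
    return best

def releaf_step(tree_right_leads, tree_left_leads):
    """One Releaf update: solve both trees (one per candidate leader)
    and pick the eye whose equilibrium cost J is lower."""
    j_r = solve_tree(tree_right_leads)
    j_l = solve_tree(tree_left_leads)
    return ("right", j_r) if j_r <= j_l else ("left", j_l)

# Hypothetical payoffs: the left eye is partially occluded (700-cost leaves)
right_leads = {"U": {"U": (120, 700), "D": (90, 700)},
               "D": {"U": (150, 700), "D": (110, 700)}}
left_leads  = {"U": {"U": (700, 120), "D": (700, 90)},
               "D": {"U": (700, 150), "D": (700, 110)}}
print(releaf_step(right_leads, left_leads))
```

With these payoffs the unoccluded right eye yields the lower equilibrium cost and is selected as leader, matching the intended behavior of the algorithm.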
Experimental Setup
To validate the proposed algorithm, a test subject moves in front of the eyes while the face detection and tracking algorithm is running. To simulate occlusion scenarios, the test subject either fully or partially covers their face with their hand or crosses the corners of the image (Figure 5). Our experimental setup consists of a robotic eye (Figure 6), a 3D-printed robot comprising two cameras. This open-source robot hosts eyeballs actuated by tendon-driven muscles, as proposed in the work by Osooli et al. [14].
To fulfill the requirement for baseline methods to evaluate our work, we employ data obtained from our experiments with the robot to train two models. We employ Q-learning [32] from reinforcement learning as our initial learning model, and Deep Q Networks (DQN) [33], a technique from deep reinforcement learning, serves as our second baseline.
Our models utilize error lists for various actions, including vertical movement (up and down) and horizontal movement (right and left). In our experiment, wherein the number of frames serves as both the steps and the states, the agent must learn to choose the minimum error for each frame. The agent receives a +1 reward for choosing the minimum error and is penalized with −1 for selecting a higher error. We train the agent for 1000 episodes and repeat each experiment 100 times. The average value of the selected errors across the 100 experiments is considered our baseline.
In Q-learning, we employ a learning rate (α) of 0.8 and a discount factor (γ) of 0.9. Our DQN model comprises two hidden layers with 64 and 32 perceptrons, respectively. The size of the replay memory is set to 10,000 with a batch size of 32, while α is assigned a value of 0.001 and γ is set to 0.9 for the DQN.
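The tabular Q-learning baseline with α = 0.8 and γ = 0.9 follows the standard update rule; the reward scheme (+1 for the minimum error, −1 otherwise) matches the text, while the toy per-frame error lists and the ε-greedy exploration rate below are our assumptions:

```python
import random

def q_update(Q, state, action, reward, next_state, alpha=0.8, gamma=0.9):
    """Standard Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# States are frame indices; actions index candidate errors per frame (toy data)
errors = [[4.0, 2.0], [1.0, 3.0], [5.0, 0.5]]
Q = {s: {a: 0.0 for a in range(2)} for s in range(len(errors))}

random.seed(0)
for _ in range(1000):  # episodes
    for s in range(len(errors)):
        # Epsilon-greedy action selection (epsilon = 0.1, an assumption)
        a = random.randrange(2) if random.random() < 0.1 else max(Q[s], key=Q[s].get)
        r = 1.0 if errors[s][a] == min(errors[s]) else -1.0  # +1 for the minimum error
        q_update(Q, s, a, r, (s + 1) % len(errors))

greedy = [max(Q[s], key=Q[s].get) for s in range(len(errors))]
print(greedy)  # greedy policy should pick the minimum-error action per frame
```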
Results & Discussion
The Releaf algorithm is implemented and evaluated on the robotic eye to assess its efficacy in real-time selection of the leader eye. The results demonstrate that in instances where one eye loses sight of the target due to occlusion, our method automatically switches to the other eye capable of perceiving it (as illustrated in Figure 7). We evaluated the performance of the proposed Releaf algorithm against two baseline methods: Q-learning and Deep Q-Network (DQN). Figure 8 illustrates that Releaf consistently outperforms the baselines in terms of stability, particularly in maintaining a lower error rate. Quantitative analysis using the root mean squared error (RMSE) metric reveals that Releaf achieves a significant improvement, reducing horizontal errors by 20% and 18% compared to Q-learning and DQN, respectively. Furthermore, Releaf demonstrates an 81% reduction in vertical errors when compared to both baselines. It is noteworthy that both baseline methods had fully converged during training on the movement datasets derived from the experimental videos, yet Releaf still exhibited superior performance in both error dimensions. The experimental outcomes employing the proposed Releaf algorithm are summarized in Tables 1 and 2. Over the 90-second experiment duration, the left eye encountered three full occlusions and six partial occlusions, while the right eye experienced five full occlusions and six partial occlusions. In response to occlusion events involving the leading eye, Releaf attempts to switch to the unobstructed eye capable of tracking the target. However, in certain situations, indicated by (*) in the tables, Releaf fails to accurately select the correct leader eye. Referencing the accompanying video (https://youtu.be/u45OlIS9fsA, accessed on 25 August 2024), it is observed that in the first (*) scenario, during the partial occlusion of the left eye (28.80 → 29.06), the right eye is not selected as the leader due to its concurrent partial occlusion (28.80 → 29.33). A similar situation arises for the right eye in the interval 37.33 → 37.86. Additionally, very short intervals of full occlusion following a partial occlusion pose challenges for the algorithm, exemplified by the 84.26 → 84.53 full occlusion interval for the right eye, occurring after the 83.73 → 84.26 partial occlusion interval of the left eye.
These (*) selections, while noticeable, are deemed minor drawbacks. Their occurrence is primarily associated with brief intervals where the game tree lacks sufficient information. Such short intervals are negligible in active vision systems, as the alternative eye swiftly assumes the leading role within a second. Thus, we observe that Releaf manages long-term occlusions (intervals of a second or more) better than short-term occlusions.
The movements considered in our experiments primarily involve horizontal motion in front of the camera, leading up to the point of occlusion. During these movements, the horizontal displacement of the face in the image is significantly greater than the vertical displacement. This larger horizontal movement results in higher errors in horizontal tracking compared to vertical tracking. Consequently, the errors observed during horizontal movements are greater than those during vertical movements, which explains why vertical occlusions result in fewer errors in our experimental setup.
The real-time implementation enabled the robotic eye to handle occlusions automatically while being selective and intelligent about occlusion conditions. The robot's own movements were ignored in the experiment to demonstrate the effect of the human subject's translocation in front of the cameras. The observed results underscore the potential efficacy of the Releaf algorithm in addressing occlusion challenges within multi-camera systems.
Conclusions
This paper presents a mechanism design procedure that mimics human occlusion handling behavior. The proposed method employs the Real-time Leader Finder (Releaf), a game theoretical algorithm proposed by the authors that uses backward induction to find the Stackelberg equilibrium in a game tree. The algorithm coordinates camera movements by assigning leader and follower roles to the cameras. Implementation of the proposed method on a robotic eye demonstrates that it can handle occlusion in a manner similar to the human eye. When one camera faces occlusion, the leader role is assigned to the other camera, which has a less occluded picture of the target. The new leader directs the camera movements, while the follower camera follows the leader's path. Releaf's performance in handling occlusions longer than a second is superior to its handling of short-term occlusions. The proposed method is selective and effective in handling occlusion conditions. Future work could focus on enhancing Releaf by revising the current formulation to incorporate the capability of tracking multiple subjects (faces) simultaneously across cameras.
Figure 1 .
Figure 1. Different types of occlusion in a two-camera setup: The left image illustrates a case of full occlusion, where the blue circle is completely obstructed in the left camera. In contrast, the right image demonstrates partial occlusion in the left camera, wherein the blue circle is partially blocked. The green circles indicate objects that are fully visible or unobstructed in both camera views.
Figure 2 .
Figure 2. Overview of the occlusion handling procedure by our proposed algorithm, Releaf. The camera with the longest uninterrupted view of the target leads target tracking.
Figure 3 .
Figure 3. Tree representation of our proposed game, showing possible actions for each player (Up (U), Down (D), Left (L), Right (R)) and their outcomes. Nodes represent cameras (players), edges represent actions, and leaves represent outcomes. Similar colors indicate simultaneous actions by the players. R-Eye and L-Eye denote the Right and Left Eyes, respectively. The selection process for the leader eye is demonstrated by tracing selected values from the leaves to the root.
Figure 4 .
Figure 4. Illustration of the sign of the horizontal and vertical differences in each quarter of the image, as well as the locations of the detected action, face center (p x , p y ), and image center (c x , c y ).
Figure 5 .
Figure 5. An instance of the face tracking running on the robotic eye, with the tracked face highlighted by a green bounding box. The experiment involved the test subject moving in front of the eyes and crossing the corners of the image at six different locations, with full and partial occlusion occurring in the first and last rows of the figure, respectively.
Figure 6 .
Figure 6. Tendon-driven, 3D-printed robotic model of the human eye (robotic eye). The details of the interior structure of the robotic eye, including its design and functionality, are thoroughly discussed in [14].
Figure 7 .
Figure 7. Illustration of the horizontal and vertical errors of a human subject's face while moving in front of the robotic eye, with frequent obstruction of the vision of one eye. The figure also highlights the switching behavior of the proposed algorithm between the cameras, as indicated by the blue line (leader eye) switching between the green (left eye) and green (right eye) lines. Dotted lines below and on top of each diagram show the partial and full occlusion occurrences, respectively. The occlusions of the left eye are depicted with red dotted lines, while occlusions of the right eye are shown with green dotted lines.
Figure 8 .
Figure 8. Performance comparison of the proposed method and the Q-learning and DQN baselines. The plot illustrates the errors across various scenarios, with the proposed method showcasing superior performance compared to the baseline methods. The 99% confidence interval shadows represent the standard deviation of average values for the baselines. Partial and full occlusion occurrences are denoted by dotted lines below and above each diagram, respectively. The distinction between right and left eye occlusions is highlighted using green and red colors.
Multiplex biomarkers in blood
Advances in the field of blood biomarker discovery will help in identifying Alzheimer's disease in its preclinical stage, allowing treatment to be initiated before irreversible damage occurs. This review discusses some recent past and current approaches being taken by researchers in the field. Individual blood biomarkers have been unsuccessful in defining the disease pathology and progression, and thus diagnosis. This points to the need for a multiplex panel of blood biomarkers as a promising approach with high sensitivity and specificity for early diagnosis. However, it is a great challenge to standardize a worldwide blood biomarker panel due to the innate differences in the populations tested and in the nature of the samples and methods utilised in different studies across the globe. We highlight several issues that result in the lack of reproducibility currently faced by researchers in this field. Several important measures that can be taken to minimize the variability among various centres are summarized towards the end of the review.
Introduction
The pathology of Alzheimer's disease (AD) accumulates decades before the clinical symptoms start to appear. Extracellular amyloid deposits and intracellular neurofibrillary tangles are the classic hallmarks of AD. There are well established genetic markers for early onset AD, but more than 95% of AD patients suffer from the sporadic form. The aetiology of the sporadic form of AD has been understood to be multifactorial and is influenced by various genetic, biochemical and environmental factors. Prediction of future pathological cognitive decline in AD is of critical importance as it would allow for current and future prevention and treatment strategies to be initiated when they are likely most effective, and would also have applications in monitoring of medical and lifestyle interventions. It has been demonstrated earlier that AD biomarkers can detect the disease long before the clinically obvious symptoms appear [1]. A biomarker is objectively measured and evaluated as an indicator of a pathological process or pharmacological response to a therapeutic intervention. The sensitivity, specificity and ease-of-use are the most important factors that ultimately define the diagnostic utility of a biomarker. They are important avenues to disease diagnosis and identifying individuals at risk. Identification of such reliably validated biomarkers has led to the introduction of a diagnostic preclinical phase where the biomarkers are present in asymptomatic individuals [2].
Whilst there have been major advances in neuroimaging, particularly amyloid beta (Aβ) imaging, its use as a routine diagnostic test is cost prohibitive. As such, attention has switched to the periphery and readily accessible biological material for AD biomarker research. Over recent years, cerebrospinal fluid (CSF) has been the major focus of proteomic biomarker discovery studies; however, CSF collection is a highly invasive procedure that is difficult to implement in the clinical routine and in clinical trials. Therefore, a strong interest exists for less invasive diagnostic approaches for AD, such as blood-derived biomarkers. An ideal AD blood biomarker (or panel) should represent the associated pathological and biochemical changes occurring in the brain. AD blood biomarker research is still at an early stage of development and clinical evaluation before it can be integrated into clinical practice as a key diagnostic tool. The measurement and reliability of these blood biomarkers is limited by the physiology of the blood brain barrier. Moreover, the biomarkers closely associated with disease pathology are found in very low concentrations in blood, which is furthermore compromised by the complex biochemical nature of the fluid [3]. A major limitation of blood biomarker studies is the lack of reproducibility of the results. This review discusses the current knowledge on blood biomarkers in AD, focussing on the multiplex approach with discussion on novel strategies for biomarker discovery.
Individual blood biomarkers
The quest for finding biomarkers for AD started with traditional approaches involving a single biomarker, such as Aβ [4-6], but the drawbacks included large inter- and intra-person variability, and results were not consistent with the sporadic form of AD [7,8]. The results have been conflicting as Aβ present in plasma is also derived from peripheral tissues, non-neural systems and blood components, thus constantly allowing dynamic interchange of Aβ between brain and periphery. This might be one of the reasons for the failure of anti-amyloid interventions in AD, so there is a need to determine the significance of various sources of Aβ in plasma. In addition, Aβ binds avidly to various plasma proteins and membranes. Several longitudinal and cross-sectional studies on plasma Aβ40 and Aβ42 show wide variations within and among individuals as well [9,10]. Several other factors also contribute to the levels of Aβ in plasma, such as diet, medication, stress and circadian rhythm [11].
Lately, many candidate biomarkers have been studied individually, such as apolipoprotein E (ApoE), apoJ, α-1 antitrypsin, complement factors, cytokines, apoA-1 and many more [12]. Padovani and colleagues [13] reported altered levels of amyloid precursor protein in AD patients, showing a reduced ratio of higher to lower molecular weight isoforms. The ratio was associated with disease severity and progression with 80 to 90% sensitivity and specificity. Our lab reported levels of plasma apoE in AD in the baseline Australian Imaging Biomarkers Lifestyle (AIBL) cohort, which indicated a strong relationship between apoE levels, AD and apoE4 status, which is known to be the greatest risk factor for AD [14]. Interestingly, lower levels of apoE in AD were also observed irrespective of apoE4 genotype, that is, in non-apoE4 allele carriers. Another study [15] comparing plasma and CSF levels of apoE in AD and control subjects showed dependence of plasma apoE levels on apoE genotype. Further, plasma apoE levels did not correlate with CSF apoE levels, but CSF apoE did correlate with CSF Aβ42 levels. This raises the question of validation and interpretation of peripheral biomarkers, whose production and clearance may be relatively independent in the periphery and in the brain.
In addition to protein biomarkers, evidence on the role of cholesterol and cholesterol metabolism in AD pathology indicates that hypercholesterolemia is closely associated with mild cognitive impairment (MCI) and AD [16,17]. Studies suggest that lipid lowering agents and statins reduce the risk of AD [18,19]. 24S-Hydroxycholesterol, a cholesterol metabolite, reflects brain homeostasis, that is, the balance between the intra- and extra-cerebral pools of cholesterol [20]. Certain studies have shown significant reduction in levels of 24S-hydroxycholesterol in plasma [21] while others revealed inconsistent increases of the same compound in plasma [22,23] with weak correlation to CSF levels [24].
AD has a complex pathology involving several molecular pathways, such as amyloid deposition, tauopathy, oxidative damage, inflammation and metabolic changes. The markers of underlying pathology in all these pathways can serve as markers for AD. A broad range of markers have been studied extensively in correlation with AD disease pathology, conversion and progression. Growing evidence suggests that oxidation plays a crucial role in AD pathogenesis. Markers of oxidative damage are found in AD brain, including protein, lipid and nucleic acid oxidation products [25,26]. Isoprostanes, products of lipid peroxidation, have been associated with AD in many studies [27,28]. Results have been promising with CSF; F2-isoprostanes seem to increase during conversion from MCI to AD [29], closely associated with imaging and memory parameters with good sensitivity and specificity [30]. Results have been inconsistent with regard to levels in plasma, as a few studies have reported increased levels [31,32] while others have reported no significant difference [33,34]. One possibility for the discrepancies may be the presence of vascular risk factors that can alter the levels of F2-isoprostanes [35]. It is now well proven that inflammation also plays a vital role in AD pathology. Astrocytosis, microgliosis, complement activation and upregulation of acute phase proteins are inflammatory responses elicited by amyloid deposition in brain. Measurement of these markers in blood is unclear as these proteins may not cross the blood brain barrier. These markers include C-reactive protein, IL-1β, tumour necrosis factor-α, IL-6, IL-6 receptor complex, α1-antichymotrypsin and transforming growth factor-β, and cytokines such as IL-12, interferon-α, and interferon-β [36]. Despite a plethora of blood biomarker literature in AD, these markers are unlikely to be diagnostically sufficient individually as they lack the required sensitivity and specificity to be potential AD biomarkers.
Multiplex approach
There is a definite need for a holistic approach to standardizing blood biomarkers for AD. It is crucial to understand the relationship between various individual biomarkers and move away from the traditional approach of investigating levels of single candidate biomarkers one at a time. Many studies have formulated panels of biomarkers to distinguish between healthy and AD participants and evaluated broad ranges of proteins in different combinations to yield high sensitivity and specificity [37,38]. There has been considerable development in the discovery of cost-effective plasma protein biomarkers for AD [39]. In a panel of 120 signalling proteins, 18 proteins had 82% specificity in differentiating AD from healthy subjects and predicting the conversion from MCI to AD [40]. Teunissen and colleagues [36] evaluated 29 serum biomarkers that can differentiate AD from healthy participants. These included inflammatory biomarkers such as IL-6 and metabolic biomarkers such as cholesterol metabolites, cysteine and homocysteine. Doecke and colleagues [41] reported on AIBL baseline plasma screening of 151 analytes combined with targeted biomarker and clinical pathology data in a total of 961 participants. An initial plasma biomarker panel consisting of 18 biomarkers was identified that distinguishes individuals with AD from cognitively healthy controls with high sensitivity and specificity. A final signature panel of eight proteins (beta 2 microglobulin, carcinoembryonic antigen, cortisol, epidermal growth factor receptor, IGFBP-2, IL-17, PPY and VCAM-1) was identified that showed increased prediction accuracy when validated in an Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. A similar study [42] reported on the measured levels of 190 plasma proteins in a total of 600 participants. An initial panel of 17 analytes associated with the diagnosis of very mild dementia/MCI or AD was identified.
Their analysis yielded a set of four plasma analytes (ApoE, B-type natriuretic peptide, C-reactive protein, pancreatic polypeptide) that were consistently associated with the diagnosis of very mild dementia/MCI/AD when validated across the ADNI cohort. A comparison among panels of analytes derived from such similar studies reveals very few common blood biomarkers for AD. Despite having similar analytical platforms and common validation cohorts, there are discrepancies in the numbers of plasma biomarkers identified by these studies. The likely reasons for this could be variation in pre-analytical variable selection, which could lead to differential interaction between analytes of interest, differences in innate characteristics of a cohort based on region, and different statistical approaches employed by the different groups.
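The sensitivity and specificity figures quoted throughout these panel studies are computed from a simple confusion matrix when panel predictions are compared against clinical diagnosis. A minimal sketch of that arithmetic follows; the labels and counts below are hypothetical illustrations, not data from any study cited here.

```python
def sensitivity_specificity(predicted, actual):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    for binary labels: 1 = AD, 0 = cognitively healthy control."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical panel classifications for ten participants (not real data):
actual    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(predicted, actual)   # 0.8, 0.8
```

A reported "82% specificity", for instance, means that 82% of the cognitively healthy controls were correctly classified as non-AD by the panel.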
There are different methods for identifying biomarkers in blood (Table 1); hence, it is important to standardize the methods of generation of proteomic data and the entire workflow. In order to standardize a panel of biomarkers for AD diagnosis, consensus on protocols and ultrasensitive analytical methods are required through multicentre studies. Proteins in a sample can be separated using two-dimensional polyacrylamide gel electrophoresis or high performance liquid chromatography [43]; surface chromatography by adsorbing proteins to activated surfaces (surface-enhanced or matrix-assisted laser desorption-ionization protein chip array technology) [44]; and peptide ionization procedures for analysis of proteins from gels or protein chips by mass spectroscopy (MS). Each technology has its own advantages and limitations. For example, researchers use two-dimensional gel electrophoresis-MS for plasma biomarker analysis because of its remarkable resolving power, increased sensitivity and high throughput proteome analysis capabilities [37,45], and although this technology is usually accessible to most researchers, it is laborious and not applicable to small and hydrophobic peptides. In addition there is a limited dynamic range for quantitative measurement. Recent studies have been exploring liquid chromatography-MS because it requires only small amounts of sample and is highly sensitive. Complex quantification analysis and sensitivity to interfering compounds are the drawbacks of this technique. Surface enhanced laser desorption/ionization-time of flight MS is a newly introduced protein identification technique with better resolution and quantification and selective capture of proteins under native conditions, although the post-processing is a complex procedure and reproducibility is still problematic.
Enzyme-linked immunosorbent assay (ELISA) is one of the major proteomic techniques used worldwide for quantification of proteins, but its major disadvantage is the limited availability of specific antibodies.
Challenges associated with standardization and validation of the results
Although an overwhelming volume of research has been done in the field of AD blood biomarkers so far, there is a clear lack of reproducibility of the results obtained across different studies. Firstly, differing methods of collection, transport and storage of samples may be one of the reasons for the observed differences. The AIBL study protocol involves overnight fasting for the participants; the same is not the case, however, for other well characterized cohorts such as the Texas Alzheimer's Research and Care Consortium (TARCC). Long-term storage of samples in liquid nitrogen versus a -80°C freezer has an impact on the levels of certain protein biomarkers. Secondly, variations among assay and interpretation methods could be another factor. Changes in the biomarker panel have been observed when alternative methods are used (for example, MS versus ELISA). Thirdly, the selection criteria of the cohort could be another important factor. The participants recruited in different studies might be at different stages of disease pathology even though the clinical symptoms are still concealed. Standardization of neuropsychological assessments across populations to obtain uniformity in recruited cohorts is lacking.
Recommendations and conclusion
AD is a multifaceted disease and biomarkers need to be viewed in a broader context that can correlate to the underlying neurodegenerative phenomenon. As AD is multifactorial, no single biomarker will be able to explain the progression or pathology of AD, and hence single biomarker approaches have been unsuccessful in predicting the disease pattern. Proteomics has gained the interest of researchers as a promising way to decode the biomarker mystery. However, the close interaction of various fields, such as lipidomics, genomics and proteomics, is required to achieve an optimal AD biomarker panel. This kind of 'multi-omic' interdisciplinary approach will strikingly advance further biomarker discovery.
Further, different blood fractions may be appropriate to study particular sets of biomarkers because of the differences in the distribution of blood-based proteins. The source of the biomarker (plasma versus serum) can have a large impact on the observed concentration of some proteins, including the ones of great interest in AD pathophysiology [46]. Platelets are becoming increasingly popular in blood biomarker research because of their homogeneous and compartmentalized nature. Both plasma and serum are very heterogeneous in nature and have complex and abundant pools of proteins such as albumin and IgG that can potentially interfere with achieving the required sensitivity for the assay.
Researchers tend to use the general term 'AD blood biomarker' for an early AD diagnosis; however, there exists a huge need to have a separate set of signatures to identify different stages of AD, such as pre-clinical, prodromal and clinical. A unique set of blood analytes is required to successfully predict the conversion of preclinical AD participants and also to differentiate controls from MCI progressors and those who do not progress to further cognitive decline. These sets of biomarkers should then be validated against other established clinical correlates such as the t-tau/Aβ42 ratio from CSF and neuroimaging so that they can be integrated into clinical practice. This will help in the speedy and accurate diagnosis of sporadic AD, should be able to detect disease progression, and have an impact on therapeutic intervention, the classification of different stages of AD and the differentiation of AD from other dementias.
The following are further selected recommendations for multiplex biomarker researchers. First, there is a need for extensive longitudinal studies with the aim of studying biomarkers along the course of the disease spectrum. The longitudinal change in biomarkers should be examined as a putative biomarker itself, as has been done with cognitive markers. Second, well defined and characterized AD cohorts need to be established and used for biomarker discovery. Non-AD dementia cohorts should be studied in parallel to determine the overlapping and non-overlapping biomarker profiles between dementia (in general) and AD. Third, variations in biomarker measurements among different labs need to be overcome by establishing a consensus among experts involved in biomarker research (the 'Delphi method'). This will facilitate identification of the challenges associated with standardization of the protocols and disparities in techniques. Fourth, multicentre studies such as ADNI and E-ADNI are needed. These studies should adopt standardized neuropsychological assessments, identical protocols, and uniform methods of analysis and interpretation of data. Fifth, combinations of blood biomarkers, risk factors, imaging, neuropsychological measures and clinical data should be critically evaluated.
The major benefit from a successful multiplex blood biomarker approach in AD would be to provide an inexpensive and minimally invasive diagnostic test capable of monitoring changes over time and responses to clinical interventions.

Table 1. Methods used for blood biomarker identification in AD
Watt et al. [47]: copper immobilized metal affinity capture and SELDI; three candidate biomarkers in blood.
Ray et al. [40]: filter-based, arrayed sandwich-ELISA; chemokines, growth factors, and inflammation markers.
Zhang et al. [48]: multidimensional LC in combination with one- and two-dimensional PAGE, MALDI and ESI-MS, and ELISA multiplex platforms; serum-based biomarkers, inflammatory response mediators and amyloid beta.
Henkel et al. [49]: anion exchange and reverse phase chromatography; 12 high-abundance proteins from plasma.
Choi et al. [50]: two-dimensional PAGE, western blot, and MALDI-MS; fibrinogen gamma chain and alpha1 antitrypsin.
Lopez et al. [51]: affinity chromatography, spin columns and MALDI-MS; pattern of unidentified proteins in serum.
ESI-MS, electrospray ionization-mass spectrometry; LC, liquid chromatography; MALDI, matrix-assisted laser desorption/ionization; MAP, multi-analyte profiling; RBM, rules based medicine; SELDI, surface enhanced laser desorption/ionization.
This article is part of a series on Peripheral Biomarkers, edited by Douglas Galasko. Other articles in this series can be found at http://alzres.com/series/biomarkers

Abbreviations
Aβ, amyloid beta; AD, Alzheimer's disease; AIBL, Australian Imaging Biomarkers Lifestyle; apo, apolipoprotein; CSF, cerebrospinal fluid; ELISA, enzyme-linked immunosorbent assay; IL, interleukin; MCI, mild cognitive impairment; MS, mass spectroscopy.
Competing interests
The authors declare that they have no competing interests.
"Biology"
] |
Ising Model on Twisted Lattice and Holographic RG flow
The partition function of the two-dimensional Ising model is exactly obtained on a lattice with a twisted boundary condition. The continuum limit of the model off the critical temperature is found to give the mass-deformed Ising conformal field theory (CFT) on the torus with the complex structure $\tau$. We find that the renormalization group (RG) flow of the mass parameter can be holographically described in terms of the three-dimensional gravity including a scalar field with a simple nonlinear kinetic function and a quadratic potential.
Introduction
The AdS/CFT correspondence [1,2,3] has provided valuable information on various field theories, especially in the strong coupling region. In particular, the correspondence between classical gravities and large N (gauge) field theories has been the most intensively studied.
Among the many tests of this correspondence, the exactly soluble models in low dimensions, say, the two-dimensional conformal field theories, are expected to shed light. The relation of AdS gravity to the Virasoro algebra of CFT is based on the pioneering work of Brown and Henneaux [4], and the minimal conformal field theories in two dimensions have been discussed in relation to AdS gravity in three dimensions [5,6]. Subsequently, extensive analyses have been done for the W_N minimal conformal field theories, and much evidence has been presented for the relevance of the higher spin field theories together with gravity for these minimal models in the large N limit [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22].
As for recent progress on the stronger statement of the AdS/CFT correspondence, that is, the correspondence between the "quantum" gravity or the string theory and the finite N (gauge) field theory, the finite N effect of the one-dimensional supersymmetric gauge theory with 16 supercharges (the finite N BFSS matrix model [23]) has been directly examined using Monte Carlo simulation [24,25], where the α′-corrections to the Type IIA supergravity [26] are reproduced from the gauge theory. This strongly motivates regarding field theories in some category as candidates for the quantum gravity, at least in the semi-classical sense.
In fact, in Ref. [27], it has been proposed that the two-dimensional minimal models are holographically dual to three-dimensional quantum gravities. In particular, the simplest minimal model, namely the Ising conformal field theory, is conjectured to be dual to the three-dimensional Euclidean pure quantum gravity.
We here briefly review the argument in Ref. [27]. Let us consider the three-dimensional quantum gravity with negative cosmological constant. The "quantum" here means that we integrate over all possible three-dimensional metrics. In this path integral, the boundary of the three-dimensional geometry is fixed to the 2-torus with the complex structure τ, and thus the partition function of the gravity is a function of τ and τ̄. The important fact is that the smooth classical solutions of the three-dimensional pure gravity are restricted to the locally AdS geometries, which are obtained by the SL(2, Z) transformation from the thermal AdS geometry. Thus, if we can use the semi-classical approach, the partition function of the quantum gravity is obtained by summing up the classical contributions of each solution with some quantum corrections. The expansion parameter of the saddle point approximation is the inverse of the quantity c = 3L/(2 G_N), where L is the AdS radius and G_N is the Newton constant. The quantity c equals the central charge of the two-dimensional Virasoro algebra which appears at the boundary of the asymptotically AdS geometry [4]. Therefore the semi-classical approximation is usually expected to work only in the large c region. The most important assumption in Ref. [27] is that the partition function can be evaluated by summing up the classical geometries even in the strong coupling region, c ∼ 1. Once this assumption is accepted, the partition function can be written as

Z_grav(τ, τ̄) = Σ_{γ ∈ SL(2,Z)/Γ_c} Z_vac(γτ, γτ̄),   (1.2)

where Γ_c is a subgroup of SL(2, Z) which does not change the topology of the three-dimensional geometry and Z_vac(τ, τ̄) is the contribution from the thermal AdS geometry.
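The structure of Eq. (1.2), a single vacuum amplitude summed over modular images of the boundary modulus, can be made concrete with a short numeric sketch. The SL(2,Z) elements chosen below are arbitrary illustrative representatives, not the actual coset SL(2,Z)/Γ_c.

```python
# SL(2,Z) acts on the torus modulus tau by gamma(tau) = (a*tau + b)/(c*tau + d),
# with integers satisfying a*d - b*c = 1.  Each term in the sum of Eq. (1.2)
# is the vacuum contribution evaluated at one such modular image of tau.

def act(gamma, tau):
    a, b, c, d = gamma
    assert a * d - b * c == 1          # unit determinant: gamma is in SL(2,Z)
    return (a * tau + b) / (c * tau + d)

T = (1, 1, 0, 1)    # tau -> tau + 1
S = (0, -1, 1, 0)   # tau -> -1/tau

tau = 0.3 + 0.7j
assert abs(act(S, act(S, tau)) - tau) < 1e-12   # S squares to the identity on tau
images = [act(g, tau) for g in (T, S, (1, 0, 1, 1))]
assert all(z.imag > 0 for z in images)          # images stay in the upper half plane
```

Because the imaginary part of τ stays positive under every such transformation, each image labels a legitimate boundary torus, and hence a distinct locally AdS saddle.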
The points are that Z_vac(τ, τ̄) can be calculated explicitly and that the symmetry Γ_c is enhanced at specific values of c < 1, where the summation in (1.2) becomes a finite sum.
Note that these values exactly equal the central charges allowed for the two-dimensional minimal models. In particular, when c = 1/2, the partition function of the gravity (1.2) reproduces that of the two-dimensional c = 1/2 conformal field theory, that is, the Ising conformal field theory. This result suggests the duality between the c = 1/2 minimal model and the three-dimensional quantum pure gravity.
This argument actually gives the correspondence between the c = 1/2 conformal fixed point in the theory space of two-dimensional field theories and the three-dimensional quantum gravity. Then it is natural to expect that there will be a gravity description at least in the vicinity of the conformal fixed point in the theory space of the two-dimensional quantum field theories. In the context of the AdS/CFT correspondence, the renormalization group (RG) flow of a coupling constant in the field theory can be identified with the classical trajectory of the corresponding bulk field in the asymptotically AdS geometry, which is called the holographic RG [28,29,30,31,32,33,34,35,36,37,38] (see also [39]). In this scheme, the radial coordinate of the AdS gravity can be identified with the RG parameter and the AdS boundary corresponds to the conformal fixed point. In the case of the Ising model, although the conformal fixed point corresponds to the "quantum" gravity, we can expect that the central assumption in Ref. [27], namely, that only the classical solutions contribute to the partition function of the quantum gravity, still works at least in the vicinity of the AdS solution even after adding an additional field in the bulk theory. Then we can examine the RG structure around the c = 1/2 conformal fixed point using the traditional technique of the holographic RG.
The purpose of our paper is to work out the exact solution of the Ising model partition function on the lattice corresponding to the torus with the generic complex structure τ, and to study the corresponding holographic RG structure of the continuum theory around the conformal fixed point. Usually the partition function of the Ising model in two dimensions is obtained for rectangular lattices with the periodic boundary condition, and one obtains the Ising conformal field theory on a rectangular torus (τ = i) by taking the continuum limit at the critical temperature [40,41,42]. However, the most general torus has a complex structure with the parameter τ, representing the shape of the torus. In the literature, we have found no explicit solution of the Ising model partition function on the lattice corresponding to the torus with the generic complex structure, although there have been works to obtain the finitized conformal spectrum of the Ising model on manifolds of various topology using the corner-transfer matrix, the Yang-Baxter technique, or the thermodynamic Bethe ansatz [43,44,45,46]. We explicitly compute the partition function of the Ising model on the twisted lattice, that is, a lattice with such a boundary condition that the "space" position is shifted by some amount when one goes around the "time" direction, and show that it ends up with the torus with the complex structure in the continuum limit. By taking the continuum limit at off-critical temperature with an appropriate scaling, we identify the deviation parameter as the mass of the Ising conformal field theory on the torus with the complex structure. We also work out the classical solution of the three-dimensional Einstein gravity with a single scalar field.
We find that a simple nonlinear kinetic function and a simple quadratic potential for the scalar field can capture the RG flow from the Ising field theory with the central charge c = 1/2 at the ultraviolet fixed point towards the c = 0 case at the infrared, by using the technique of the holographic RG via the Hamilton-Jacobi equation of the gravity.

This paper is organized as follows: In the next section, the partition function of the two-dimensional Ising model is obtained on the twisted lattice representing the discretized version of the torus with the complex structure. In section 3, the continuum limit of the partition function is obtained retaining the deviation from the critical temperature, which results in a mass term for a free Majorana fermion. In section 4, the holographic description of the renormalization group flow is worked out for the massive Majorana fermion in terms of the Hamilton-Jacobi equation. Section 5 is devoted to a summary of our results and a discussion.
Some technical details of the computation of the partition function of the Ising model are summarized in Appendix A. Some details of the partition function of the two-dimensional massive Majorana fermion on the torus are given in Appendix B.
Partition function of the 2D Ising model on the twisted lattice
Let us consider the 2D Ising model on a rectangular lattice of size n × m. There is a "spin" s(x) = ±1 at each site x = (x_1, x_2) (x_1 = 1, . . . , n, x_2 = 1, . . . , m) and the Hamiltonian of the system is given by

H = − Σ_x [ J_1 s(x) s(x + ê_1) + J_2 s(x) s(x + ê_2) ],

where J_1 (J_2) and ê_1 (ê_2) are the coupling constant and the unit vector in the "space" ("time") direction, respectively.

Fig. 1: The spin degrees of freedom are on the sites of the n × m lattice. In our case, we take the usual periodic boundary condition for the x_1 direction but we adopt the twisted boundary condition for x_2, namely, we identify (x_1 + p, x_2 + m) with (x_1, x_2).

As for the boundary condition, we impose the periodic boundary
condition in the space direction and the twisted or shifted boundary condition with an integer parameter p ∈ Z in the time direction; namely, the space position is shifted by p when one goes around the time direction (see Fig. 1):

s(x_1 + p, x_2 + m) = s(x_1, x_2).   (2.2)

In the following, we evaluate the partition function under this boundary condition.
Let us first define the matrices of size 2^n in which the Pauli matrices σ^i (i = 1, 2, 3) are placed at the k-th position (k = 1, . . . , n). Using these matrices, the transfer matrix of this system is written in terms of V_a and V_b, where a and b are given by a ≡ βJ_1 and b ≡ βJ_2, and ã denotes the dual coupling of a. Note that the system is in the ordered phase for ã < b and is in the disordered phase for ã > b. We further define the "shift matrix" Σ whose components are explicitly given by

Σ_{k, 2k−1} = Σ_{2^{n−1}+k, 2k} = 1 (k = 1, · · · , 2^{n−1}), with all other components vanishing.

We see that Σ has the required shift property, and we can then write the partition function in the form of Eq.(2.3). In order to evaluate the transfer matrix, we construct the Dirac matrices of Spin(2n), which satisfy the Clifford algebra. As usual, the generators of Spin(2n) in the chiral and anti-chiral representations are defined from these Dirac matrices. Using them, we can also divide V_a, V_b and Σ into the chiral and anti-chiral sectors, denoted by the subscripts ±. Then the partition function Eq.(2.3) can be written in terms of Tr_±, where Tr_± denote the trace over the chiral and anti-chiral sectors of the spin representation of Spin(2n), respectively.
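The transfer-matrix construction above can be mirrored numerically for small lattices as a sanity check: build the row-to-row transfer matrix V as an explicit 2^n × 2^n matrix over spin configurations, build the shift matrix as a permutation of configurations, and compare Tr(Σ^p V^m) against a brute-force sum over all spin configurations. This explicit configuration-basis form is our own minimal sketch, not the paper's chiral Spin(2n) decomposition.

```python
import itertools
import numpy as np

def transfer_Z(n, m, a, b, p=0):
    """Partition function of the n x m Ising model as Z = Tr(Sigma^p V^m),
    with the twisted boundary condition: row m is glued back to row 0
    shifted by p sites.  a = beta*J1 (space), b = beta*J2 (time)."""
    configs = list(itertools.product([1, -1], repeat=n))
    N = len(configs)
    V = np.zeros((N, N))
    for i, s in enumerate(configs):
        # horizontal (space-direction) bonds within row s, periodic in space
        row = sum(s[k] * s[(k + 1) % n] for k in range(n))
        for j, t in enumerate(configs):
            inter = sum(s[k] * t[k] for k in range(n))   # vertical bonds s -> t
            V[j, i] = np.exp(a * row + b * inter)
    # shift matrix: cyclic shift of the spin configuration by one site
    idx = {c: i for i, c in enumerate(configs)}
    S = np.zeros((N, N))
    for i, s in enumerate(configs):
        S[idx[s[-1:] + s[:-1]], i] = 1.0
    return np.trace(np.linalg.matrix_power(S, p) @ np.linalg.matrix_power(V, m))

def brute_Z(n, m, a, b, p=0):
    """Direct sum over all 2^(n*m) spin configurations with the twisted gluing."""
    Z = 0.0
    for flat in itertools.product([1, -1], repeat=n * m):
        rows = [flat[r * n:(r + 1) * n] for r in range(m)]
        E = 0.0
        for r in range(m):
            for k in range(n):
                E += a * rows[r][k] * rows[r][(k + 1) % n]       # space bond
                if r + 1 < m:
                    E += b * rows[r][k] * rows[r + 1][k]         # time bond (bulk)
                else:
                    E += b * rows[r][k] * rows[0][(k + p) % n]   # twisted gluing
        Z += np.exp(E)   # Boltzmann weight exp(-beta*H) with H = -sum J s s'
    return Z

# agreement on a 3 x 2 lattice, both untwisted (p = 0) and twisted (p = 1)
for p in (0, 1):
    zt, zb = transfer_Z(3, 2, 0.3, 0.5, p), brute_Z(3, 2, 0.3, 0.5, p)
    assert abs(zt - zb) < 1e-8 * zb
```

For p = 0 this reduces to the standard periodic transfer-matrix computation; the twist enters only through the permutation factor, mirroring the role of Σ^p in Eq.(2.3).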
In evaluating (2.18), the following fact is useful: suppose that a matrix A ∈ Spin(2n; C) in the fundamental representation is transformed into the "canonical form" by T ∈ O(2n), where Ĵ_µν are the generators of Spin(2n; C) in the fundamental representation and R(θ) is the two-dimensional rotation matrix with the (complex) angle θ. Then Tr_±(A) can be written in closed form. Thus, we first express H_±^m Σ_±^p in the fundamental representation and then transform them into the canonical form using appropriate matrices T_± ∈ O(2n).
The computation along this strategy is straightforward and we summarize it in Appendix A. As a preparation for presenting the result, we define the quantity γ_I (I = 1, · · · , 2n) as the positive solution of the equation

cosh γ_I = cosh 2ã cosh 2b − cos(πI/n) sinh 2ã sinh 2b,   (2.22)

and γ̃_I analogously. Note that the γ̃_I satisfy the reflection relation (2.24). In addition, we will often use γ_0 = γ_{2n} in the following. Combining the above considerations and the results (A24) and (A25), we obtain the partition function of the two-dimensional Ising model in the ordered phase with the twisted boundary condition of Eq.(2.2), which contains the factor (1 + e^{−m γ_0}) P_3, where we have used the reflection property (2.24) and introduced in Eq.(2.28) the factors P_1, · · · , P_4, taking the values

2 cosh(m γ_n / 2) (n odd, p even),  2 sinh(m γ_n / 2) (n odd, p odd),
2 sinh(m γ_n / 2) (n odd, p even),  2 cosh(m γ_n / 2) (n odd, p odd),
2 cosh(m γ_n / 2) (n even, p even), 2 sinh(m γ_n / 2) (n even, p odd),
1 (n odd),

in the respective cases. The result Eq.(2.26) constitutes our new result for the partition function of the two-dimensional Ising model on a twisted lattice that gives the discretized version of the torus with the complex structure

τ = τ_1 + i τ_2 = (p + i m) / n,   (2.29)

whose continuum limit gives the torus with the complex structure τ as described in the next section.
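Eq. (2.22) and the reflection property quoted above can be verified numerically in a few lines; the couplings below are arbitrary illustrative values, not derived from the paper.

```python
import math

def gamma(I, n, at, b):
    """Positive solution gamma_I of Eq. (2.22):
    cosh(gamma_I) = cosh(2*at)*cosh(2*b) - cos(pi*I/n)*sinh(2*at)*sinh(2*b),
    where at stands for the dual coupling written as a-tilde in the text."""
    rhs = (math.cosh(2 * at) * math.cosh(2 * b)
           - math.cos(math.pi * I / n) * math.sinh(2 * at) * math.sinh(2 * b))
    return math.acosh(rhs)   # rhs >= cosh(2*at - 2*b) >= 1, so acosh is defined

n, at, b = 8, 0.35, 0.5      # arbitrary illustrative couplings (ordered phase: at < b)
# reflection property: cos(pi*(2n - I)/n) = cos(pi*I/n) implies gamma_{2n-I} = gamma_I
for I in range(1, 2 * n):
    assert abs(gamma(I, n, at, b) - gamma(2 * n - I, n, at, b)) < 1e-12
assert abs(gamma(0, n, at, b) - gamma(2 * n, n, at, b)) < 1e-12   # gamma_0 = gamma_2n
```

The right-hand side of Eq. (2.22) is bounded below by cosh(2ã − 2b) ≥ 1, so a positive solution γ_I always exists away from criticality.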
Continuum limit of the partition function
We next consider the continuum limit, that is, the limit m, n, p → ∞ with the ratios τ_1, τ_2 in Eq.(2.29) fixed. In taking this limit, we also tune the coupling constant so that the theory properly approaches a continuum theory around the critical point. For simplicity, we consider the case of J_1 = J_2, that is, a = b ≡ K. Under this condition, we define the parameter µ through the relation (3.1). Recalling that the critical temperature K* is given by sinh 2K* = 1, the parameter µ expresses a deviation from the critical temperature for finite n [42]. Note that we take the continuum limit n → ∞ with µ = O(1) fixed. This means that the temperature approaches the critical value as K − K* = O(n^{−1}) in taking the continuum limit.
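Onsager's value of the critical coupling referred to here, and the stated O(1/n) approach to it, can be checked directly; the proportionality between µ and n(K − K*) is left schematic below, since the precise relation (3.1) is not reproduced here.

```python
import math

# Onsager's critical coupling of the square-lattice Ising model (J1 = J2):
# sinh(2 K*) = 1  =>  K* = (1/2) * asinh(1) = (1/2) * ln(1 + sqrt(2))
K_star = 0.5 * math.log(1.0 + math.sqrt(2.0))
assert abs(math.sinh(2.0 * K_star) - 1.0) < 1e-12

# The continuum limit tunes the coupling toward K* as K - K* = O(1/n),
# keeping mu ~ n * (K - K*) finite (up to a convention-dependent constant).
mu = 1.0                                   # illustrative fixed value
for n in (10, 100, 1000):
    K = K_star + mu / n                    # illustrative O(1/n) approach
    assert abs((K - K_star) * n - mu) < 1e-9
```

Numerically K* ≈ 0.4407, so the lattice coupling is tuned ever closer to this value as the lattice is refined, while the scaled deviation survives as the fermion mass.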
In this parametrization, γ_I can be expressed in terms of µ. Since we are interested in the continuum limit, only the region I ≪ n is relevant, and in such a region γ_I can be expanded for n ≫ 1. Substituting this expansion into Eq.(2.28) and taking the limit n → ∞, we obtain the continuum forms of P_1, · · · , P_4, where we have used the reflection property (2.24) of γ̃_I. Combining Eqs.(3.4), (3.5) and (3.6), we see that the partition function in the continuum limit can be expressed in terms of an irrelevant constant C and Z^cont_i ≡ lim_{n→∞} Z_i, which exactly agree* with the result in Eq.(B10) of the continuum field theory of a free massive Majorana fermion, where the labels µ, ν = 0, 1/2 of Z_{µ,ν} specify the boundary condition of the fermion as described in Appendix B. This means that the continuum limit of the two-dimensional Ising model with the boundary condition in Eq.(2.2) is the two-dimensional massive fermion theory on the torus. In particular, the combination p/n + i m/n becomes the complex structure τ = τ_1 + iτ_2 of the continuum torus as shown in Eq.(2.29), and the parameter µ in Eq.(3.1), representing the deviation from the critical temperature, is nothing but the mass parameter of the fermion in the continuum limit.
Before closing this section, it is worth looking at the RG structure of the Ising model. Fixing the parameters τ_1 and τ_2 in Eq.(2.29), which specify the geometry of the torus, the parameter space of the Ising model is spanned by (n, µ), and the procedure of taking the continuum limit is nothing but taking the limit n → ∞ with µ fixed. As we have seen, after taking this limit, the parameter µ is precisely the mass parameter of the two-dimensional massive free fermion, and the limit µ → 0 corresponds to the massless Majorana fermion, that is, the c = 1/2 CFT. Therefore the flow parametrized by µ starting from the conformal fixed point is exactly the mass deformation of the c = 1/2 CFT (Fig. 2).

Fig. 2: The curved lines express the RG flows of an irrelevant operator associated with 1/n. After taking the continuum limit, the parameter µ is identical to the mass of the two-dimensional free fermion.
Holographic RG flow of single scalar field
In this section, we holographically describe the RG flow starting from the conformal fixed point along the parameter µ.
In the spirit of the AdS/CFT correspondence, the source (coupling constant) of an operator in the boundary field theory is identified with a field in the bulk gravity. The value of the coupling constant of the boundary field theory varies towards the infrared (IR) under the RG flow. On the other hand, the classical solution of the bulk gravity provides a trajectory of the bulk field along the radial direction of an asymptotic AdS geometry. This trajectory is regarded as the RG flow of the coupling constant of the boundary field theory away from a CFT at the ultraviolet (UV) fixed point. As mentioned in the introduction, the two-dimensional c = 1/2 CFT is conjectured to be dual to the "quantum" pure gravity under the assumption that the path integral over the metric of 3D space-time is localized to the classical solutions, that is, the BTZ black holes. We here assume that the same is true at least in the neighborhood of the CFT fixed point; the classical solutions are dominant in evaluating the partition function of the quantum gravity even in the presence of additional fields.
In the analysis of the previous section, we found that there are two independent flows from the conformal fixed point, parametrized by µ and 1/n. We expect that the parameter 1/n in the boundary theory corresponds to a certain discretized version of the bulk gravity, which is difficult to work out at present. In the following, we explore the continuum bulk gravity to study the RG flow of the parameter µ, which can be identified as the mass of the fermion of the boundary field theory.
Since the parameter µ couples to the operator Ψ̄Ψ in the boundary field theory, it is plausible to assume that the corresponding field in the gravity is a real scalar field φ, which may be considered the minimum number of degrees of freedom needed to describe the RG flow of the single parameter µ. We thus consider the following action of three-dimensional Euclidean gravity with a real scalar field φ, † where G_N is the Newton constant of three-dimensional gravity, g_µν (µ, ν = 1, 2, 3) is the metric, R is the Ricci scalar, g = det(g_µν), and K(φ) is a function of φ, which describes the nonlinearity of the kinetic term of the scalar field φ. Since the AdS_3 geometry with radius L should be a solution of this system when φ = 0, we demand that V(φ) satisfy the corresponding condition. Since φ should have a nonsingular kinetic term at least for small φ, we further require that K(φ) be regular at φ = 0. Without loss of generality, we can fix K(0) = 1 by choosing the normalization of the field φ (Eq.(4.3)). Our final task is to determine the functions V(φ) and K(φ) by requiring the solutions φ, g_µν to describe the holographic RG flow of the Ising model off the critical temperature. † We here simply omit writing the boundary terms of the gravity action.
4.2. Solution of the gravity corresponding to the mass deformation
Since we are interested in the evolution of the scalar field along the radial direction of an asymptotic AdS geometry, we set the following ansatz for the metric and the scalar field, where x_i (i = 1, 2) denote the two-dimensional transverse directions and r ∈ [0, ∞) is the radial coordinate, which may be regarded as the Euclidean time. We have also assumed that the functions h(r) and φ(r) depend only on r. Note that we have fixed the gauge by setting Eq.(4.4). Since the geometry is asymptotically anti-de Sitter space, e^{2h} must be expandable around r = ∞ as in Eq.(4.5). In this setup, the independent field equations are given by Eqs.(4.6) and (4.7), ‡ where the dot ˙ denotes differentiation with respect to r. Our first task is to obtain h(r) and φ(r) for given fixed functions K(φ) and V(φ) by solving Eqs.(4.6) and (4.7). Using the boundary condition in Eq.(4.5), we can integrate Eq.(4.6) to obtain h(r) in terms of φ(r) and K(φ) as

e^{2h(r)} = (L²/r²) exp(−∫_r^∞ s K(φ(s)) φ̇(s)² ds). (4.8)

The standard way to solve the equations would be to plug this into Eq.(4.7) and solve the resulting single equation for φ(r) for given K(φ) and V(φ). However, we proceed the opposite way: we first fix the behavior of φ(r) from a physical requirement and then determine the relation between K(φ) and V(φ).
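Eq.(4.8) can be sanity-checked numerically: differentiating the closed form gives 2ḣ = −2/r + r K(φ) φ̇², so any profile must satisfy this relation. The sketch below uses an illustrative decaying profile φ(r), an assumption purely for the check and not a solution of the coupled system, and assumes the exponent of (4.8) involves φ̇², as its kinetic-term origin suggests:

```python
import math

L_ads = 1.0
phi0 = 0.3

def phi(r):       # sample scalar profile (illustrative choice, not from the paper)
    return phi0*math.exp(-r)

def phidot(r):
    return -phi0*math.exp(-r)

def K(p):         # canonical kinetic function K(phi) = 1 for this check
    return 1.0

def integral_tail(r, upper=40.0, steps=20000):
    # Simpson's rule for int_r^upper s K(phi(s)) phidot(s)^2 ds
    h = (upper - r)/steps
    total = 0.0
    for i in range(steps + 1):
        s = r + i*h
        f = s*K(phi(s))*phidot(s)**2
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w*f
    return total*h/3.0

def e2h(r):       # closed form of Eq. (4.8)
    return (L_ads**2/r**2)*math.exp(-integral_tail(r))

# Consistency check: d/dr log e^{2h} = -2/r + r K(phi) phidot^2
r0, eps = 1.5, 1e-4
lhs = (math.log(e2h(r0 + eps)) - math.log(e2h(r0 - eps)))/(2*eps)
rhs = -2/r0 + r0*K(phi(r0))*phidot(r0)**2
```

The finite-difference derivative of log e^{2h} reproduces −2/r + rKφ̇², and e^{2h} → L²/r² at large r, the asymptotically AdS behavior required by the boundary condition in Eq.(4.5).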
Since the mass deformation keeps the theory free, the scaling of the mass parameter should be trivial. Then we can assume the behavior in Eq.(4.9), where a is the length scale of the (boundary) field theory and µ_0 is the mass at the reference length scale a_0. We should also recall that, in the context of the holographic RG, the radial coordinate r is identified with the RG parameter, namely the "Euclidean time".
The evolution of a bulk field along this radial direction is identified with the RG flow of the corresponding coupling constant. In our choice of gauge in Eq.(4.4), the radial coordinate r can be identified as the length scale a of the two-dimensional boundary field theory. Then the behavior of the field φ(r) can be regarded as the scaling behavior of the corresponding coupling constant [37,38]. In the present context, we are looking for a solution that corresponds to the mass parameter µ of the free fermion theory, which behaves exactly as Eq.(4.9). To achieve this goal, we fix the solution of the scalar field as a function of the radial coordinate r (the length scale parameter) with a constant φ_0, instead of solving φ(r) for given fixed K(φ) and V(φ). Eq.(4.10) allows us to change variables from r to φ, and substituting into Eq.(4.7), we obtain a relation between K(φ) and V(φ). This is a necessary condition for the bulk scalar field φ to become the holographic dual to the mass parameter of the free fermion.
‡ The field equation of the scalar matter φ can be derived from those of the metric.
Holographic RG flow in terms of Hamilton-Jacobi equation
Even after imposing the exact scaling behavior of the mass parameter in Eqs.(4.9) and (4.10), we still have the freedom to choose one function of φ in the action, either K(φ) or V(φ), in order to represent the holographic RG flow of our system. In fact, with the requirement on the scalar field behavior φ(r) in Eq.(4.10), the bulk geometry is uniquely determined by Eq.(4.8) for any choice of either K(φ) or V(φ), which are related by Eq.(4.12) as a consequence of the field equations.
In order to fix it, let us further consider the holographic RG structure of this system based on the Hamilton-Jacobi equation of the bulk gravity [37,38], Eq.(4.13), where G_{ij;kl} is defined by G_{ij;kl} ≡ g_{ik}g_{jl} − g_{ij}g_{kl}, and R is the two-dimensional scalar curvature constructed from g_{ij}(x). The classical action S_cl = S_cl[g_{ij}(x), φ(x)] is a functional of the boundary values of the metric g_{ij}(x) (i, j = 1, 2) and the scalar field φ(x) on the two-dimensional surface at a specific value of the radial coordinate, say r = r_0, and is obtained by substituting the classical solution into the bulk action in Eq.(4.1).
The momentum constraint of the bulk gravity ensures that the classical action is invariant under diffeomorphisms of the two-dimensional boundary. We can then expand the classical action S_cl in powers of derivatives § in the transverse directions, as in Eq.(4.14), where · · · includes terms with derivatives of φ and the two-dimensional curvature tensors, which vanish when φ is independent of x_i and the transverse geometry is flat. The Hamilton-Jacobi equation in Eq.(4.13) can also be expanded in powers of derivatives, and at the leading order of this expansion it gives the relation among W(φ), K(φ) and V(φ) in Eq.(4.15) [37,38]. By combining this relation with Eq.(4.12), we can determine W(φ) in terms of K(φ) as in Eq.(4.16). Also, repeating the argument given in Refs. [37,38], we can obtain the β-function of φ, Eq.(4.17), and the holographic c-function, Eq.(4.18), which is a monotonically decreasing function when the coefficient of the kinetic term K(φ) is positive definite. Note that the c-function (4.18) is defined as the coefficient of the scalar curvature appearing in evaluating the expectation value of the energy-momentum tensor.
This kind of c-function has also been proposed using the functional renormalization group equation [48].
The gauge/gravity correspondence asserts that the classical action of the bulk gravity is regarded as the (regularized) free energy of the dual boundary field theory. In our case, we have considered the three-dimensional gravity with a single bulk scalar field φ as a candidate for the dual description of the massive two-dimensional free fermion. Therefore one may naively expect that e^{−S_cl} should be equal to the partition function of the two-dimensional massive free fermion in Eq.(3.7), when φ is independent of the transverse coordinates x_i and the transverse geometry is flat. However, we should recall that the Ising field theory corresponds to the three-dimensional "quantum" gravity [5,6]. The three-dimensional gravity with appropriate boundary conditions possesses the conformal symmetry with the central charge 1/2 at the boundary [4]. In Ref. [27], as mentioned in the Introduction, the three-dimensional pure gravity is considered. It has been shown that the integration over all the possible three-dimensional metrics with fixed boundary conditions is localized to the classical three-dimensional geometries (BTZ black holes), and that the partition function of the quantum gravity turns out to be that of the Ising field theory by relying on the boundary conformal symmetry. Our analysis in this paper is based on the same assumption that this quasi-semiclassical approach still makes sense even after adding the mass term to the boundary field theory. Therefore, the partition function of the massive fermion as a sum of functions of the moduli parameter τ in Eq.(3.7) should be obtained after summing up all the possible classical geometries.
§ In Refs. [37,38], the authors divide the classical action into the local and the non-local parts as S[φ(x), g_ij(x)] = S_loc[φ(x), g_ij(x)] + Γ[φ(x), g_ij(x)], and show that the non-local part Γ satisfies the RG equation of the boundary field theory. In this paper, we further expand Γ[φ(x), g_ij(x)] in powers of derivatives.
However, it is actually hard to carry out this procedure explicitly, since we look at only one solution of the classical field equations, and the conformal symmetry is broken by introducing nonvanishing values of the scalar field φ to represent the mass term of the boundary fermion.
However, it is still possible to determine the scalar potential V(φ) of the bulk gravity as follows. We observe that the boundary condition of the fermion becomes irrelevant and the partition function becomes τ-independent when the geometry of the boundary becomes R². In this case, the partition function becomes extremely simple, and we can expect that every solution gives the same contribution to the partition function even if there are several solutions in the bulk with the same boundary condition. When the geometry is R², the free energy of the massive fermion is given by Eq.(4.19), where µ is the mass parameter. In order for the free energy in Eq.(4.19) to be well-defined, we need to regularize both UV and IR divergences. Although the result depends on details of the regularization, the regularized free energy in general takes the form

F_reg = a + bµ² + cµ² log µ, (4.20)

where a, b and c are some constants.
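The structure of Eq.(4.20) can be illustrated with a hard-cutoff regularization of a one-loop vacuum integral, a schematic stand-in for the fermion free energy (the cutoff value, the overall normalization, and the simple integrand are assumptions for illustration only): the exact cutoff result deviates from a + bµ² + cµ² log µ only at order µ⁴/Λ².

```python
import math

LAM = 10.0   # UV cutoff (illustrative value)

def F_reg(mu):
    # One-loop vacuum energy density on R^2 with a hard UV cutoff:
    # F(mu) = (1/4pi)[(LAM^2+mu^2) log(LAM^2+mu^2) - mu^2 log mu^2 - LAM^2]
    # (normalization is schematic; only the mu-dependence matters here)
    if mu == 0.0:
        return (LAM**2*math.log(LAM**2) - LAM**2)/(4*math.pi)
    return ((LAM**2 + mu**2)*math.log(LAM**2 + mu**2)
            - mu**2*math.log(mu**2) - LAM**2)/(4*math.pi)

# Coefficients of the expansion F_reg = a + b mu^2 + c mu^2 log mu + O(mu^4):
a = F_reg(0.0)
b = (math.log(LAM**2) + 1)/(4*math.pi)
c = -1/(2*math.pi)

def residual(mu):
    # Deviation from the (4.20)-type form; scales like mu^4/LAM^2
    return F_reg(mu) - (a + b*mu**2 + c*mu**2*math.log(mu))
```

The residual scales like µ⁴, confirming that a, b and c capture all of the nonanalytic small-µ structure, with the µ² log µ term being regularization independent up to the constant c.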
As mentioned above, we regard the scalar field φ at the boundary as the mass parameter of the boundary fermion and we identify the regularized free energy with the classical action S cl in Eq.(4.14) at the boundary which is obtained from the bulk three-dimensional gravity.
In addition, W(φ) in Eq.(4.14) is proportional to the free energy because φ does not have x_i-dependence. Therefore W(φ) can be expressed as W(φ) = A + Bφ² + Cφ² log φ, where A, B and C are some constants. Using Eqs.(4.16) and (4.15), we can determine K(φ) and V(φ) as follows.
Suppose C ≠ 0; then W(φ) is dominated by the last term Cφ² log φ at small values of φ. This implies that the RG flow at φ → 0 does not connect continuously to the AdS solution in Eq.(4.16) when C ≠ 0. Since we identify the evolution of φ from r = 0 to r = ∞ along the radial direction (the Euclidean "time evolution") as the RG flow from the UV fixed point to the IR, this behavior is not acceptable as the RG flow of the mass parameter. Therefore, we should adopt a regularization scheme in the holographic renormalization group by imposing the condition C = 0. Note that, for example, dimensional regularization realizes this condition. With this choice of W(φ), we obtain K(φ) from Eq.(4.16) and the normalization condition Eq.(4.3). Incidentally, we obtain the β-function (4.17) as (4.25) and the c-function (4.18) as (4.26). The β-function (4.25) is consistent with the notion that the bulk scalar field φ corresponds to the mass parameter of the boundary field theory, and the c-function (4.26) is a monotonically decreasing function of φ ∈ [0, ∞) from 1/2 to 0, as expected. Substituting this result into Eq.(4.12), we obtain V(φ), which decreases monotonically, with V → −∞ as φ → ∞ (see Fig. 3). This is also consistent with the expectation that the mass perturbation deforms the c = 1/2 Ising CFT to flow to a system with fewer degrees of freedom, as dictated by the c-theorem. Namely, it is likely to flow to nothing (c = 0) in the IR.
Summary and Discussion
In this article, we have obtained the partition function of the Ising model on a Euclidean two-dimensional lattice with the twisted boundary condition representing a torus with complex structure τ in a discretized version. We have taken an appropriate scaling limit to obtain a continuum limit for the torus with the complex structure τ, retaining a deviation from the critical temperature. The resulting continuum partition function agrees with that of the mass-deformed Ising CFT, namely the continuum field theory of the massive Majorana fermion on the torus with the complex structure τ. We have also discussed the RG flow of the Ising model off the critical temperature in terms of the three-dimensional AdS gravity. Let us comment on the relation of our results to those in Ref. [47], where a similar setup has been used to look for solutions interpolating between two AdS geometries corresponding to two different conformal field theories. They assumed the canonical kinetic term K(φ) = 1 for the scalar field φ, which is achieved by a field redefinition from our case. They required that the metric component G_tt have a double zero at the horizon r_0 (IR), at which the metric becomes another AdS geometry in addition to the AdS geometry at r = ∞ (UV). Near r → ∞ at UV, their solution exhibits a chirally asymmetric Virasoro algebra for the excitation spectra. In principle, one should be able to describe these data, which are worth studying. However, it is likely that one needs to overcome the problems associated with strong coupling, or quantum effects in gravity, in order to describe these data quantitatively in the presence of matter fields. Recent applications of AdS/CFT to condensed matter physics provide many interesting insights into possible phase structures of strongly coupled systems. In particular, introducing periodicity into the system to mimic lattice structures seems to be a crucial feature of these applications [51,52,53,54,55].
Since many realistic models of statistical physics or condensed matter physics are built on some discretized system, it is desirable, in discussing the AdS/CFT correspondence, to obtain discretized boundary models from some discretized model for the bulk. We hope that continuum space-time would emerge by taking the continuum limit of the discretized system in the bulk, and that gravity would be realized as a sort of cooperative phenomenon. This ambitious objective is of course still quite hard to achieve. How to obtain a discretized version of (quantum) gravity realizing a given discretized model on the boundary is the key question, which is worth pursuing. With this philosophy in mind, and interpreting our result optimistically, we may expect that the usual two-dimensional Ising model would be a candidate for the discrete boundary model related to a discrete version of quantum gravity in the above sense, and our result may be a first step in this direction, since the deviation parameter µ from the critical temperature is actually included in the original discretized Ising model. However, our discussion has still remained within the continuum theory. In this sense, it would be a great step if one could find a gravity description of the RG flow parametrized by 1/n in Fig. 2. Such an attempt is interesting in its own right and, furthermore, can provide a concrete starting point for incorporating lattice structures into the AdS/CFT correspondence, which may play a vital role in condensed matter physics applications.
A. Transformation of H_± and Σ_±
Let us express the generators of Spin(2n) in the fundamental representation as follows. Let us consider H_± and Σ_± defined in (2.16) in the fundamental representation, which are matrices of size 2n; we write them as Ĥ_± and Σ̂_±, respectively, in the following. They are written in terms of 2 × 2 matrices x and y, with x ≡ (cosh 2ã cosh 2b, i sinh 2ã cosh 2b; · · ·), and the overall factor α_± in the definition of Σ̂_± is given by an expression that is necessary in order to reproduce (2.8) in the spin representation.
Our goal in this appendix is to transform the matrices Ĥ_± and Σ̂_± explicitly into this canonical form. Then we can easily estimate Tr_± H_±^m Σ_±^p in the spin representation by using (2.21).
We first introduce matrices Ω_± ∈ SO(2n) whose (i, j) blocks are given below, respectively, where R(θ) is defined in (2.20). The similarity transformations of H_± and Σ_± by Ω_± become (A6) and (A7), where M_I and N_I (I = 1, · · · , 2n) are matrices with

A_I = cosh 2ã cosh 2b − cos(πI/n) sinh 2ã sinh 2b,
B_I = sinh 2ã cosh 2b − cos(πI/n) cosh 2ã sinh 2b.

Note that I runs over odd (even) numbers for the matrices with the index + (−). It is easy to see that M_I and N_I satisfy the relations below. Since A_I, B_I and C_I satisfy A_I² − B_I² − C_I² = 1, we can uniquely determine the parameters γ_I > 0, θ_I ∈ [0, π/2] and ǫ_I = ±1 by

A_I ≡ cosh γ_I, B_I ≡ ǫ_I sinh γ_I cos θ_I, C_I ≡ ± sinh γ_I sin θ_I, (A12)

where the sign in the definition of C_I takes + for 1 ≤ I ≤ n and − for n + 1 ≤ I ≤ 2n.
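The parametrization (A12) can be verified numerically. The expression for C_I is not reproduced in the text; the choice C_I = sin(πI/n) sinh 2b used below is an assumption, made because it satisfies the hyperbolic constraint A_I² − B_I² − C_I² = 1 identically for the quoted A_I and B_I:

```python
import math

def ABC(I, n, a_t, b):
    c = math.cos(math.pi*I/n)
    A = math.cosh(2*a_t)*math.cosh(2*b) - c*math.sinh(2*a_t)*math.sinh(2*b)
    B = math.sinh(2*a_t)*math.cosh(2*b) - c*math.cosh(2*a_t)*math.sinh(2*b)
    # C_I is not written out in the text; this form is an assumption that
    # satisfies A_I^2 - B_I^2 - C_I^2 = 1 identically for the A_I, B_I above
    C = math.sin(math.pi*I/n)*math.sinh(2*b)
    return A, B, C

def gamma_theta(I, n, a_t, b):
    # Invert the parametrization (A12):
    # A = cosh(g), B = eps*sinh(g)*cos(th), C = +/- sinh(g)*sin(th)
    A, B, C = ABC(I, n, a_t, b)
    g = math.acosh(A)
    th = math.atan2(abs(C), abs(B))
    return g, th

A, B, C = ABC(3, 8, 0.7, 0.4)
g, th = gamma_theta(3, 8, 0.7, 0.4)
```

Recovering γ_I and θ_I from (A_I, B_I, C_I) is unique precisely because of the constraint, mirroring the uniqueness statement in the text.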
Note that the γ_I appearing in (A12) is the same as the one defined in (2.22). We also note that N_0 = N_n = 0, and that M_0 and M_n are given by expressions in which the sign appearing in M_0 takes + in the disordered phase (ã > b) and − in the ordered phase (ã < b).
We can rearrange the matrices (A6) and (A7) by permuting their elements properly. To this end, we introduce the matrices which generate the transpositions, . . .
(A22)
We finally consider the combinations which transform H_±^m Σ_±^p in the fundamental representation into the canonical forms, respectively, where we have used (A13). We can easily see that det T_± = 1. These results motivate us to introduce (2.23). Note that, when we consider the continuum limit, we should approach the critical temperature from the ordered phase. Thus the sign appearing in R(±imγ_0) is chosen as +.
B. Partition function of 2D massive fermion on the torus
Let us consider a 2-torus with periods ω_1, ω_2 ∈ C and a free Majorana fermion with mass M on it, where Ψ is a two-component spinor, Ψ = (ψ, ψ̄)^T, and D is the Dirac matrix. In the following, we evaluate the partition function, where Pf D denotes the Pfaffian of the Dirac operator D.
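The Pfaffian entering the fermionic partition function obeys (Pf D)² = det D for any antisymmetric matrix, which is why the Gaussian Grassmann integral produces Pf D rather than det D. A minimal finite-dimensional illustration of this identity (a generic 4 × 4 antisymmetric matrix with arbitrary test entries, nothing specific to the Dirac operator on the torus):

```python
def det(M):
    # Determinant by Laplace expansion along the first row (fine for small M)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += ((-1)**j)*M[0][j]*det(minor)
    return total

def pf4(M):
    # Pfaffian of a 4x4 antisymmetric matrix:
    # Pf = a12*a34 - a13*a24 + a14*a23
    return M[0][1]*M[2][3] - M[0][2]*M[1][3] + M[0][3]*M[1][2]

# A sample antisymmetric matrix (entries are arbitrary test values)
a, b, c, d, e, f = 1.0, 2.0, -0.5, 3.0, 0.25, -1.5
M = [[0, a, b, c],
     [-a, 0, d, e],
     [-b, -d, 0, f],
     [-c, -e, -f, 0]]
```

For the infinite-dimensional Dirac operator the same squaring identity underlies the relation between the Majorana partition function and the square root of the Dirac determinant.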
Streaming Potential with Ideally Polarizable Electron-Conducting Substrates
With nonconducting substrates, streaming potential in sufficiently broad (vs Debye screening length) capillaries is well known to be a linear function of applied pressure (and coordinate along the capillary). This study for the first time explores streaming potential with ideally polarizable electron-conducting substrates and shows it to be a nonlinear function of both coordinate and applied pressure. Experimental manifestations can be primarily expected for streaming potentials arising along thin porous electron-conducting films experiencing solvent evaporation from the film side surface. Model predictions are in good qualitative agreement with literature experimental data.
■ INTRODUCTION
Foundations of the classical theory of streaming potential were laid down more than a century ago. 1 The initial and subsequent models have considered non-electron-conducting substrates. 2 Several studies considered ion-conducting, namely porous, substrates (see, for example, refs 3−5), but the physics in this case is essentially different due to the relatively low conductivity of such substrates and the lack of ideal polarizability of interfaces between them and electrolyte solutions. Recently, a new interesting context for the electrokinetic phenomena of streaming potential (and streaming current) has arisen in capillarity-driven energy harvesting from evaporation with (nano)porous materials (see, for example, refs 6−9; the state of the art of this emerging field has very recently been critically reviewed in ref 10). In this case, hydrostatic pressure drops can be very high, being ultimately controlled by capillary pressures in nanopores. At the same time, several relevant experimental studies used electron-/hole-conducting nanoporous substrates. 8,9,11 Below, we will see that the combination of large hydrostatic pressure drops with solid-substrate electron conductance can make streaming potential essentially different from the classical case.
Some transport phenomena (membrane potential, electrical conductance, and pressure-driven salt rejection) in electrolyte-filled nanopores with electron-conducting walls have recently been explored by Ryzhkov et al. 12−17 These phenomena are nontrivial only when there is a noticeable overlap of the diffuse parts of electric double layers (EDLs). Such systems afford only numerical analysis. Besides, the principal emphasis was placed on the impact of an external bias, while the role of redistribution of electron charges in floating (ungrounded) systems was less explored. Electrokinetic phenomena were not considered.
In this study, for the first time, we account for the electron/hole conductance of the matrices of porous materials experiencing flow-induced streaming potential. To obtain simple analytical results, we consider the limiting case of sufficiently broad capillaries without any appreciable overlap of the diffuse parts of EDLs and without surface conductance phenomena, and we neglect the existence of the so-called Stern layer. 18 The existence of the latter may be important in more concentrated solutions and close to strongly charged surfaces. Below, we will see that strong surface charges (chemical plus induced) may well arise close to the channel exit under strongly nonlinear conditions. Therefore, accounting for the Stern layer is an essential next step that will be made in future studies.
We also discuss scenarios of possible experimental manifestations and demonstrate that they can be expected for rather large hydrostatic pressure differences and in sufficiently dilute electrolyte solutions. In combination with the requirement of negligible EDL overlap (implying a relatively large pore size), this may be difficult to achieve in pressure-driven processes. However, we will see that the situation can be different in systems where large hydrostatic pressure gradients are induced by capillarity in water evaporation from hydrophilic nanopores.
■ THEORY
Streaming currents arise as a result of the advective movement of electrically charged liquid close to "charged" solid/liquid interfaces in electrolyte solutions. Strictly speaking, the total electric charge of the interface region is zero; however, a charge is "bound" to the surface while its "counter-charge" can move with and/or relative to the liquid. Advective movement of electrolyte solution through a capillary with "charged" walls gives rise to a convective current. In streaming potential mode, the external circuit is open, so the net electric current must be zero in any capillary cross section. Streaming potential is the voltage arising to compensate exactly the convective streaming current by an electromigration current in the opposite direction. The local density of convective current is equal to the product of the local electric charge density and the fluid velocity. Expressing the space-charge distribution via the electrostatic potential by the Poisson equation, using the Stokes equation with the standard boundary condition of no slip on the capillary wall, and taking into account the zero-current condition, for sufficiently broad (compared to the Debye screening length) capillaries, one can obtain the celebrated Smoluchowski formula (eq 1), 2 where φ is the electrostatic potential in the central part of the capillary (far away from its walls), ζ is the potential drop within the diffuse parts of EDLs occurring at the surface (the so-called ζ-potential), εε_0 is the fluid dielectric constant, η is the fluid viscosity, and g is the (bulk) electrical conductivity of the electrolyte solution. The potentials and the coordinate system are schematically shown in Figure 1. With electron-conducting substrates, the electrostatic potential of the conductor surface must be the same all the way along the capillary.
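For the classical dielectric-wall limit, the Smoluchowski relation can be evaluated with typical magnitudes; the parameter values below are illustrative assumptions, not data from the paper:

```python
EPS0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 80.0            # relative permittivity of water
eta = 0.89e-3           # viscosity of water, Pa*s
g = 1.4e-2              # conductivity of roughly 1 mM KCl, S/m (approximate)
zeta = -0.05            # zeta-potential, V (illustrative)

def streaming_potential(dP):
    # Classical Smoluchowski result for dielectric walls:
    # delta(phi) = eps_r*EPS0*zeta*dP / (eta*g)
    return eps_r*EPS0*zeta*dP/(eta*g)

dphi = streaming_potential(1.0e5)   # response to a 1 bar pressure difference
```

In this classical case the response is strictly linear in ΔP, which is the baseline against which the nonlinear electron-conducting case developed below is compared.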
Let us denote its constant value by Φ. By substituting eq 2 into eq 1, we obtain eq 3. In this simple analysis, we neglect the influence of electrokinetic phenomena on the volume flow (this is justified in sufficiently broad capillaries). Therefore, the hydrostatic pressure is independent of the electrostatic potential and its profile is linear. Accordingly, eq 3 can be easily integrated along the capillary to yield eq 4, where L is the channel length and ΔP is the hydrostatic-pressure difference along the capillary. From eqs 4 and 5, we obtain eq 6. The hydrostatic pressure profile is linear, with ξ ≡ x/L the dimensionless coordinate along the channel. Taking this into account, from eq 4, we obtain eq 9. In contrast to the classical case of dielectric substrates, the electrostatic potential profile is nonlinear. The extent of nonlinearity is controlled by the parameter A.
This analysis assumes that there are some fixed charges on the capillary walls (sometimes referred to as "chemical charge") arising due to preferential ion adsorption or dissociation of ionogenic groups. Therefore, there is a nonzero ζ-potential at zero volume flow. Under flow conditions, an additional electrostatic potential arises outside the EDLs and at the capillary walls owing to the appearance of net electric charges at the capillary edges. Physically, the constancy of the surface electrostatic potential is ensured by the appearance of polarization electron/hole charges at the capillary surface. Together with the initially present "chemical" charges, these polarization charges give rise to a position-dependent ζ-potential that can be found from the condition of constancy of the full surface electrostatic potential (eq 2) and the distribution of electrostatic potential outside the EDLs (eq 9). We consider the conductor ungrounded. Therefore, the total induced electron/hole charge must be zero. There is a well-known relationship between surface-charge density and equilibrium electrostatic potential at a charged surface, 2 so the surface-charge density is proportional to the hyperbolic sine of the ζ-potential. For simplicity, let us initially assume that the "chemical" charge remains unchanged under flow conditions (constant-charge approximation). Taking this into account, together with the fact that the total surface charge under flow conditions must be equal to the "chemical" charge, we obtain eq 12, where the (coordinate-independent) density of "chemical" surface charge is given by the Gouy−Chapman expression (8εε_0RTc_0)^{1/2} sinh(Fζ_0/(2RT)), and ζ_0 is the ζ-potential at zero flow. Eq 12 can be rewritten by substituting eq 10 for the distribution of the ζ-potential, which yields eq 14. The integral in the right-hand side of eq 14 can be taken to yield eq 15, where Shi is the hyperbolic sine integral. Equation 15 is a transcendental equation for the determination of (Φ − φ(0)) as a function of the parameter A, which is proportional to the hydrostatic pressure drop along the channel. Φ and φ(0) enter eq 15 only in the combination (Φ − φ(0)), so they cannot be determined separately. However, the pressure dependence of streaming potential (eq 6) and the distribution of ζ-potential (eq 10) depend only on this combination. When the parameter A is small, by developing eq 15 in a Taylor series in A, we can see that Φ − φ(0) ≈ ζ_0. By substituting this into eq 9 and expanding the exponential function in a series for small A, we recover the same behavior as in the classical case of dielectric substrates. Above, we have considered the simplest case of a constant "chemical" charge density independent of the ζ-potential. Given that this charge is a result of preferential ion adsorption or dissociation of ionogenic groups, it typically depends on the concentration of some ions at the surface. This, in turn, is affected by electrostatic attraction/repulsion. Therefore, generally, the density of "chemical" surface charge should be considered a function of the ζ-potential, σ(ζ). In the case of electron-conducting substrates, this potential is controlled not only by the "chemical" charge but also by the electron/hole polarization charges. However, whatever the mechanism of "chemical"-charge formation, the total polarization charge must be zero for ungrounded conductors. Therefore, the right-hand side of eq 12 should still be equal to the total "chemical" charge. With charge regulation, the latter becomes dependent on the ζ-potential, which changes with coordinate according to eq 10.
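The transcendental condition of eq 15 can be solved numerically. The sketch below is a reconstruction, not a transcription: it assumes the dimensionless ζ-profile ζ(ξ) = z e^{Aξ} with z = Φ − φ(0) (eq 10), a charge−potential relation σ ∝ sinh ζ with ζ measured in units of 2RT/F, and hence the constant-charge condition (Shi(z e^A) − Shi(z))/A = sinh ζ₀, which reproduces the appearance of the hyperbolic sine integral Shi:

```python
import math

def shi(x, steps=2000):
    # Hyperbolic sine integral Shi(x) = int_0^x sinh(t)/t dt (Simpson's rule)
    if x == 0.0:
        return 0.0
    h = x/steps
    total = 0.0
    for i in range(steps + 1):
        t = i*h
        f = 1.0 if t == 0.0 else math.sinh(t)/t
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w*f
    return total*h/3.0

def solve_z(A, zeta0):
    # Solve (Shi(z e^A) - Shi(z))/A = sinh(zeta0) for z = Phi - phi(0)
    # by bisection; the left-hand side is monotonically increasing in z.
    target = math.sinh(zeta0)
    lo, hi = 1e-9, zeta0 + 1.0
    for _ in range(80):
        mid = 0.5*(lo + hi)
        val = (shi(mid*math.exp(A)) - shi(mid))/A
        if val < target:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

For small A the solution returns z ≈ ζ₀, the classical limit quoted in the text, and z decreases toward zero as A grows, matching the vanishing of the net surface charge near the capillary "entrance" discussed in the Results section.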
Hence, on the left-hand side of eq 12, we should average the "chemical" charge density over the capillary length, to obtain eq 17. For the distribution of the ζ-potential, we can still use eq 10, so that eq 18 follows, where we have taken the integral from the right-hand side of eq 17. As before, (Φ − φ(0)) can be found by solving eq 18 as a transcendental equation.
Within the scope of the popular charge regulation model, 19 the surface charge is described by the so-called Langmuir−Stern isotherm, in which Z_p is the charge (in proton-charge units) of potential-determining ions and σ_0 is the maximum surface-charge density corresponding to full dissociation. The constant K is proportional to the bulk concentration of potential-determining ions and, thus, is a function of solution pH, for example, in the case of weakly acidic groups. The term with the exponent in the denominator reflects the fact that the surface concentration of ions is different from their bulk concentration due to electrostatic repulsion/attraction. Thus, for instance, an increase in the negative surface-charge density with increasing pH is accompanied by the intensification of electrostatic attraction of H+ ions, which somewhat reduces the degree of dissociation.
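A sketch of such a regulated charge in dimensionless form (the standard weak-acid Langmuir−Stern expression is assumed here, since the paper's own formula is not reproduced in the text above; ζ is in units of RT/F):

```python
import math

def sigma_reg(zeta, K, sigma0=1.0, Zp=1):
    # Hypothetical Langmuir-Stern isotherm (assumed standard weak-acid form):
    # the dissociated fraction is suppressed when the negatively charged
    # surface attracts potential-determining H+ ions.
    # zeta is dimensionless (units of RT/F); K is proportional to the bulk
    # concentration of potential-determining ions.
    return -sigma0/(1.0 + K*math.exp(-Zp*zeta))
```

For K → 0 the groups are fully dissociated (σ → −σ₀), while at negative ζ the attracted H⁺ ions suppress dissociation; this feedback is what pushes the onset of nonlinearity to somewhat larger pressure differences in the regulated case (Figure 4).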
Especially large values of dimensionless pressure differences can be expected in capillarity-driven electrokinetic phenomena, in particular, in systems with side evaporation from thin (nano)porous films (see Figure 4 for the schematic). A simple model for the distribution of hydrostatic pressure in such systems has recently been developed in ref 10 using the Darcy law for the description of viscous flow along the film and assuming a constant evaporation rate (controlled by the external mass transfer) from the fully wet part of the film. Under these assumptions, one obtains a linearly decreasing hydrostatic-pressure gradient along the film (in contrast to the constant pressure gradient occurring in the pressure-driven mode), because ever more liquid is lost to evaporation while moving along the film. The corresponding expression is eq 20, where P is the hydrostatic pressure, x is the coordinate along the film, h is the film thickness, L is its length, χ is its hydraulic permeability, and q_e is the linear evaporation rate (m/s). The evaporation rate is assumed to be constant along the film (we disregard the dependence of saturated-vapor pressure on the menisci curvature). After integration along the film (taking into account that the pressure at the immersed end equals atmospheric (zero relative) pressure), we obtain eq 21. As discussed in ref 10, the tangential hydraulic flow is driven by the gradient of the (negative) capillary pressure arising beneath the curved menisci at the external film surface. While moving along the film away from the immersed end, ever larger negative pressures are required to drive the viscous flow along the ever longer film segment. This negative-pressure buildup occurs due to a gradually increasing menisci curvature, which keeps growing until it reaches the maximum corresponding to the pore size. Once this state is reached, the menisci start to recede into the pores.
Thus, the maximum pressure difference along the film is equal to the maximum negative capillary pressure. The length of the fully wet zone can be found by substituting the negative maximum capillary pressure, −P_cm, into eq 21. For the gradient of streaming potential, eq 1 is still applicable, though the hydrostatic-pressure gradient is not constant anymore but is given by eq 20, from which we obtain, after integration, an expression in terms of ξ ≡ x/L, the dimensionless coordinate along the porous film. Taking into account, as previously, that the total induced electron/hole charge is zero and using eq 11, in the approximation of constant "chemical" charge, we obtain the result, where Σ is the surface tension, θ is the contact angle, and r_p is the pore radius.
■ RESULTS AND DISCUSSION
Taking into account that the integral hyperbolic sine is a strongly increasing function of its argument, eq 15 shows that when parameter A increases, (Φ − φ(0)) → 0. Physically, this means that the polarization charges distribute in such a way that the net surface-charge density (fixed plus induced charges) at the capillary "entrance" tends to zero, whereas it "peaks" exponentially ever stronger (with increasing pressure difference) close to the "exit" (see eq 10). This is illustrated in Figure 2. Figure 3 confirms the good applicability of eq 31. Figure 4 shows a comparison of the pressure dependence of streaming potential calculated for the case of charge regulation using eqs 6 and 18 with the case of constant charge (eqs 6 and 15). The maximum surface-charge density in the case of charge regulation, σ_0, is assumed to correspond to the same "zero-flow" dimensionless ζ-potential as in the case of constant charge.
As we can see, charge regulation can make the nonlinearity occur at somewhat larger dimensionless hydrostatic pressure differences, but qualitatively the behavior remains the same.
Scenarios of Experimental Verification. Pressure-Driven Mode. Above, we have seen that the extent of the nonlinearity (which distinguishes SP with electron-conducting substrates from the classical case) is directly proportional to the hydrostatic pressure drop and inversely proportional to the solution conductivity. Using the model of identical straight parallel cylindrical capillaries, the pressure drop can be expressed as follows, where J_v is the volume flux (m/s), r_p is the pore radius, and γ is the porosity (for tortuous pores, it also includes a tortuosity factor). Accordingly (see eq 7), we obtain eq 33 for parameter A. Figure 2 shows that the nonlinearity becomes noticeable when A ≥ 2-3. This parameter gets larger, in particular, in solutions of lower electric conductivity. For our simple model to be applicable, the capillaries have to be sufficiently broad compared to the thickness of diffuse parts of EDLs. The latter is known to increase with decreasing electrolyte concentration (solution conductivity) inversely proportionally to its square root. 20 Therefore, to maintain the impact of diffuse parts of EDLs at an acceptably low level, a decrease in concentration should be accompanied by an increase in the capillary radius. The latter implies less pressure drop at a given volume flux. As we can see from eq 33, parameter A is inversely proportional to the square of the capillary radius. Therefore, at a given volume flux, reducing electrolyte concentration (and proportionally increasing the capillary radius) would leave parameter A (and the extent of nonlinearity) unchanged. At the same time, this would lead to an increase in Reynolds number and (in sufficiently broad capillaries) may result in deviations from the laminar flow pattern. 21 Another way to increase the "effective pressure drop" is using thicker diaphragms with relatively small pores.
Thus, for instance, assuming the thickness (capillary length) of L = 1 cm, the capillary radius of r_p = 0.5 μm, the active porosity of γ = 0.1, a 1 mM NaCl solution, and a "reasonable" linear filtration rate of 0.3 mm/s, we obtain A ≈ 7. In 1 mM solutions of (1:1) electrolytes, the EDL thickness is about 10 nm, which is around 50 times less than the assumed capillary radius (hence, no EDL overlap and surface conductance). Therefore, to achieve this flow rate in such a diaphragm, a pressure difference of about 1 MPa has to be applied. Sintered metals with average pore sizes down to single micrometers are commercially available. 22 Nevertheless, exploration of pronounced nonlinearity requires other recipes.
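The numbers in this estimate can be checked directly. The pressure-drop expression itself is elided in the extracted text; for the parallel-capillary model it is the Hagen-Poiseuille law (tortuosity neglected, water viscosity of about 1 mPa·s assumed):

```python
import math

mu = 1.0e-3       # water viscosity, Pa*s (assumed)
L = 1.0e-2        # diaphragm thickness, m
rp = 0.5e-6       # capillary radius, m
gamma = 0.1       # active porosity
Jv = 0.3e-3       # linear filtration rate, m/s

# Hagen-Poiseuille pressure drop for straight parallel capillaries;
# the mean velocity inside a capillary is Jv/gamma:
dP = 8.0 * mu * L * (Jv / gamma) / rp**2
print(f"pressure drop = {dP/1e6:.2f} MPa")   # ~1 MPa, as stated

# Debye screening length in a 1 mM (1:1) aqueous electrolyte:
debye = 0.304e-9 / math.sqrt(1e-3)           # ~10 nm
print(f"r_p / debye = {rp/debye:.0f}")       # ~50, as stated
```

Both quoted figures (about 1 MPa and an EDL roughly 50 times thinner than the capillary radius) come out of this arithmetic.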
Evaporation-Driven Mode. In hydrophilic nanopores, capillary pressures can be very high (>10 MPa). In this subsection, we will demonstrate that this can lead to very large pressure differences along thin nanoporous films under evaporation conditions. These, in turn, can give rise to large "dimensionless pressures". In some studies, thin nanoporous films were assembled from electron-conducting nanoparticles (for example, carbon black). 7−9 In a typical configuration (see Figure 5), a thin (supported or stand-alone) film of a nanoporous material is immersed with one extremity in an electrolyte solution. Assuming as previously a 1 mM aqueous NaCl solution, the pore radius of 0.5 μm, and perfect wetting according to eq 29 (θ = 0), for the dimensionless parameter B occurring at the maximum wet length, we obtain B_m ≈ 4. In contrast to parameter A (see eq 33) controlling the nonlinearity in the pressure-driven mode, parameter B is inversely proportional to the first power of the pore radius. Therefore, reducing electrolyte concentration (and conductivity) while increasing the pore size to keep the ratio of pore radius and screening length constant causes an increase in parameter B_m inversely proportional to the square root of concentration. Thus, with a 0.01 mM NaCl solution (and 5 μm pore radius), B_m ≈ 40. Incidentally, experimental studies used very dilute electrolyte solutions although the pore size was essentially smaller than 10 μm. Figure 5 shows examples of the distribution of the electrostatic-potential derivative along a porous film with evaporation. This distribution is in good qualitative agreement with experimental data obtained in ref 8 for nanoporous films made from carbon black nanoparticles (see Figure 2 of ref 8) (Figure 6).
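The stated scaling can be verified arithmetically. With the pore radius co-scaled as c^(-1/2) to keep the ratio of pore radius to screening length fixed, B_m grows as c^(-1/2); the full expression for B is not reproduced in the extracted text, so this checks only the quoted scaling relation:

```python
import math

def Bm_rescaled(Bm_ref, c_ref, c):
    """B_m scaling inversely with the square root of concentration,
    as stated in the text for the evaporation-driven mode."""
    return Bm_ref * math.sqrt(c_ref / c)

# 1 mM, 0.5 um pores: B_m ~ 4  ->  0.01 mM, 5 um pores: B_m ~ 40
print(Bm_rescaled(4.0, 1.0e-3, 1.0e-5))
```

A hundredfold dilution multiplies B_m by ten, reproducing the jump from B_m ≈ 4 to B_m ≈ 40 quoted in the text.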
■ CONCLUSIONS
In the limiting case of sufficiently broad capillaries ("Smoluchowski limit"), streaming potential has long been considered to be a linear function of applied pressure. However, as we have demonstrated in this study, this generally applies only to nonconducting substrates. If substrates are electron-conducting (though still ideally polarizable, no electrode reactions), the linear behavior occurs only at sufficiently low dimensionless pressure differences directly proportional to hydrostatic pressure difference and inversely proportional to solution conductivity. At larger dimensionless pressure differences, the dependence becomes pronouncedly sublinear while the dependence on the coordinate along the flow direction is superlinear (exponential). The extent of nonlinearity also depends on the mechanism of surface-charge formation, charge regulation giving rise to a somewhat less pronounced nonlinearity. Experimental detection of predicted trends calls for the use of rather large applied pressures in systems with relatively large pores in dilute solutions. Alternatively, clear manifestations can be expected in devices where large hydrostatic-pressure differences are induced due to capillarity in water evaporation from nanoporous materials. Experimental data already published for such systems are in good qualitative agreement with the model predictions.
"Physics",
"Materials Science"
] |
A Logistic Trigonometric Generalized Class of Distributions: Characteristics, Applications, and Simulations
We propose a trigonometric generalizer/generator of distributions utilizing the quantile function of the modified standard Cauchy distribution and construct a new logistic-based G-class employing the cotangent function. Significant mathematical characteristics and special models are derived. New mathematical transformations and extended models are also proposed. A two-parameter model, the logistic cotangent Weibull (LCW), is developed and discussed in detail. The beauty and importance of the proposed model are that its hazard rate exhibits all monotone and non-monotone shapes while the density exhibits unimodal and bimodal (symmetrical, right-skewed, and decreasing) shapes. For parametric estimation, the maximum likelihood approach is used, and simulation analysis is performed to ensure that the estimates are asymptotic. The importance of the proposed trigonometric generalizer, G-class, and model is proved via two applications focused on survival and failure datasets, whose results attest to a distinctly better fit, wider flexibility, and greater capability than existing and well-known competing models. The authors believe that the suggested class and models will appeal to a broader audience of professionals working in reliability analysis, actuarial and financial sciences, and lifetime data and analysis.
Introduction and Motivations
Generalizing a classical distribution is an old practice in distribution theory. The earliest work regarding generalizing a distribution was conducted by Pearson [1] using differential equations. Since 1985, the families (or classes) of distributions have been derived adopting the following famous methodologies: differential equation method, transformation techniques, compounding methodology, skewed distributions generation, parametric induction, quantile based approach, transformed (T)-transformer (X) mechanism, exponentiated T-X system, T-R\{Y\} approach, etc. It is possible to more readily manage data that is highly skewed with each new model that is developed because of its enhanced heavy tails, tractable core functions, and simplified simulation technique. The literature review explores the following four critical points, which form the basis for this study. (i) In the statistical literature, a slew of new families and models based on algebraic generalizers/distribution generators W[G(x)] have been introduced, in contrast to the neglected trigonometric equations (trigonometric generalizers). (ii) The interest in modelling directional and proportional data led applied researchers to develop and employ trigonometry function-based models capable of handling these datasets more smoothly and economically. (iii) The use of algebraic and trigonometric functions in mixture generalizers is still to be researched and investigated. (iv) Using the stated transformation, any current G-class or model may be simply reversed in a subsequent version. The major motives are considered to be influenced and inspired by the generated results in terms of accuracy, adaptability, and goodness of fit (gof), which are included below: (i) To introduce a generator of distributions based on the cotangent function (that is, a combination of algebraic and trigonometric functions and generators concurrently).
(ii) To introduce a new G-class called a new logistic-G class of distributions (LCG for short) in a trigonometric scenario. (iii) There are several advantages to the suggested class, including its simplification, lack of non-identifiability, and lack of over-parametrization. (iv) Because of the injection of the cotangent function, the new CDF can increase flexibility, giving rise to new efficient and flexible models. (v) Non-generalized and non-exponentiated models, according to the literature, give insufficient gof.
The famous generalizers and corresponding G-families are presented in Table 1.
Moreover, the new generators and G-classes are introduced regarding several purposes. Among them, the recent and remarkable include the following. (A) To present a new family utilizing the additive model structure (see [6]). (B) A new generator W[G(x)] is defined with a quantile function (see [7]). (C) To achieve better gof and more flexibility than existing classical models (see [8]). (D) The type I half logistic Burr XG family has been constructed by Algarni et al. [9]. (E) The bivariate Weibull-G family based on a copula function using odd classes has been introduced by El-Sherpieny et al. [10]. (F) A significant number of distribution families were proposed using parameter induction (inserting one or more new parameter(s) into the baseline); for example, a new one was put forward by Cordeiro et al. [11]. (G) Adopting the T-X family methodology only (see [12]). (H) Presenting a flexible family which deals with both monotone and non-monotone hazard rate functions (see [13]). (I) Using a flexible family to introduce the flexible generalized Pareto distribution (see [14]). Whole-real-line (−∞, ∞) distributions naturally come up when random variables should vary over the infinite real interval, and several distributions such as the logistic, normal, Laplace, t, Chen, and Gumbel distributions are supported on this interval. The logistic distribution has been used to describe the distribution of income and wealth in a fairly basic way. It has numerous uses in statistical analysis and has a form that resembles the normal distribution.
A previous study [15] introduced the logistic distribution, whose main functions (cdf and pdf) in a new format are given below. Alzaatreh et al. [16] contributed the transformed (T)-transformer (X) family of distributions (for short, T-X family), which expanded the vision about generators of distributions (W[G(x)]). In this article, via the T-X method, a new logistic-G class of statistical distributions is proposed adopting a cotangent-based trigonometric generator. This paper is outlined as follows. Section 1 is about introduction and motivations, while in Section 2, the cotangent generator and LCG class are developed. In Section 3, some special models are presented, while in Section 4, the characteristics of the LCG class are deduced. In Section 5, a new model, LCW, along with significant properties is discussed, and simulation work is performed in Section 6. The importance of the new class and model is confirmed by two real-life applications using failure and survival datasets in Section 7. Finally, in Section 8, the conclusions are presented.
Development of Cotangent Generator
In this part, let us assume that X is a random variable (r.v.) that follows the famous standard Cauchy distribution with location parameter μ = 0 and scale parameter σ = 1; then, its cdf and qf, respectively, are F(x) = 1/2 + (1/π) arctan(x) and Q(u) = tan(π(u − 1/2)) = −cot(πu). Replacing u by G(x), W[G(x)] = −cot(πG(x)), a new trigonometric generator based on the cotangent function is achieved, having support (−∞, ∞).
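The generator is exactly the standard Cauchy quantile: with cdf F(x) = 1/2 + arctan(x)/π, the quantile is Q(u) = tan(π(u − 1/2)), and the identity tan(π(u − 1/2)) = −cot(πu) gives W[G(x)] = −cot(πG(x)). A quick numerical check:

```python
import math

def cauchy_cdf(x):
    return 0.5 + math.atan(x) / math.pi

def cauchy_qf(u):
    return math.tan(math.pi * (u - 0.5))

for u in (0.1, 0.25, 0.5, 0.75, 0.9):
    w = -1.0 / math.tan(math.pi * u)                  # the generator -cot(pi*u)
    assert abs(cauchy_qf(u) - w) < 1e-9               # same function of u
    assert abs(cauchy_cdf(cauchy_qf(u)) - u) < 1e-12  # qf inverts the cdf
print("generator matches the standard Cauchy quantile")
```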
Genesis of LCG Class. By assuming that W[G(x)] = −cot(πG(x)) is differentiable and monotonically non-decreasing (the cotangent function is differentiable and monotonic on its domain), the main functions of the LCG class in T-X format can be written as follows. From (4), the cdf of the new class is obtained and presented as (7), and the pdf corresponding to (7) reduces to (8). Equivalently, from (4), the cdf of the new class may be expressed as (9), and the pdf corresponding to (9) reduces to (10).
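The exact parametrization of the logistic cdf used in eq (7) is not recoverable from the extracted text. Assuming the standard logistic R(t) = 1/(1 + e^(−λt)) composed with W[G(x)] = −cot(πG(x)), the T-X construction gives a sketch that already shows the required cdf behaviour (monotone, tending to 0 as G → 0 and to 1 as G → 1):

```python
import math

def lcg_cdf_of_G(G, lam=1.0):
    """LCG cdf expressed through the baseline cdf value G in (0, 1):
    F = R(W[G]) with an assumed standard logistic R and W[G] = -cot(pi*G).
    Illustrative sketch only; eq (7) itself is elided in the text."""
    return 1.0 / (1.0 + math.exp(lam / math.tan(math.pi * G)))

vals = [lcg_cdf_of_G(g) for g in (0.05, 0.25, 0.5, 0.75, 0.95)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # monotone in G
assert abs(lcg_cdf_of_G(0.5) - 0.5) < 1e-12         # baseline median preserved
```

Composing with any baseline cdf G(x) (exponential, Lindley, gamma, Dagum, Weibull) then yields the special models discussed next.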
Special Models
Some special models of LCG class with their main functions and corresponding graphs are presented subsequently.
The Logistic Cot Exponential (LCE) Distribution.
Assuming that the variable X follows an exponential distribution, we may express the central functions for the LCE distribution in the following form:
The Logistic Cot Lindley (LCLi) Distribution.
Assuming X is a Lindley random variable, we can write the new one-parameter LCLi model that has the following cdf, pdf, and hazard function. Figure 1 demonstrates the graph plots of the LCE, and Figure 2 demonstrates the graph plots of the LCLi.
3.3. The Logistic Cot Gamma (LCGa) Distribution. Let X be a gamma random variable; then, the LCGa model has the following main functions:
Range of T | Generator W[G(x)] | Models of the T-X family | Inventor(s)
(−∞, ∞) | log(G(x)/Ḡ(x)) | Log odd logistic family | Torabi and Montazeri [2]
(−∞, ∞) | | Logistic-X family | Tahir et al. [3]
(−∞, ∞) | | Logistic-G family | Mansoor et al. [5]
The plots of the LCGa are shown as graphs in Figure 3.
The Logistic Cot Dagum (LCD) Distribution.
By assuming that X is a Dagum random variable, the LCD distribution has the following primary functions. The graphs in Figure 4 show the plots of the LCD.
Mathematical Properties of LCG Class
Here, important properties of the new class are presented.
The Inverse Function for Both pdf and cdf (Quantile Function).
We present an additional property of X, which is the qf, given below. Differentiating, we can write the quantile density function Q′(u) as follows:
Useful Reliability Functions.
In this part of the paper, we concentrate our efforts on introducing the most important reliability functions. First we define the survival function (sf) S(x); after that we write the equation of the hazard rate function (hrf) h(x) and the reversed hazard rate function r(x); the cumulative hazard rate function (chrf) H(x) and Mills' ratio m(x) are, respectively, given below.
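These standard definitions can be written down generically for any absolutely continuous model with density f and cdf F; nothing here is specific to the LCG class:

```python
import math

def reliability_suite(f, F, x):
    """Survival, hazard, reversed hazard, cumulative hazard, Mills' ratio."""
    S = 1.0 - F(x)              # sf
    return {
        "S": S,                 # survival function
        "h": f(x) / S,          # hazard rate function
        "r": f(x) / F(x),       # reversed hazard rate function
        "H": -math.log(S),      # cumulative hazard rate function
        "m": S / f(x),          # Mills' ratio
    }

# Sanity check with the exponential distribution (constant hazard lam):
lam = 2.0
out = reliability_suite(lambda x: lam * math.exp(-lam * x),
                        lambda x: 1.0 - math.exp(-lam * x), 1.3)
assert abs(out["h"] - lam) < 1e-12
assert abs(out["H"] - lam * 1.3) < 1e-12
```

Plugging the LCG cdf and pdf into `reliability_suite` produces the sf, hrf, reversed hrf, chrf, and Mills' ratio discussed in this section.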
The Hazard Function Analytical Formulas.
In order to find the critical points of the hrf h(x), we must obtain the roots of (20). As we can see, this equation admits many roots. Using any numerical software together with (18) and (20), we can investigate local maximums and minimums, as well as inflexion points.
Linear Representation.
The cdf of the LCG class presented in (7) can be written as follows. We can demonstrate the cdf (2.6) in this form after applying this series and the exponential series e^x = Σ_{j=0}^∞ x^j/j!, respectively.
Now, for the term (cot(πG(x)))^j, we use the power series whose coefficients are obtained with the aid of the MATHEMATICA software; see Tahir [17]. Hence, we know that H_m(x) is considered as the exponentiated distribution function with power parameter (m). With the expansion of (8), the following formula may be obtained from the previously mentioned idea of exponentiated distributions, such that h^(m)(x) can be noted as the exponentiated density having a power parameter (m). From (27), the first one can be expressed as follows. The nth moment of X can be written in a second form, as may be deduced from (30), in terms of the G qf as in the following equations. These integrals can be calculated numerically.
Weighted Moments.
In this section, we introduce the equation of the weighted moments. So, we can write the (r, s)th probability weighted moment (PWM) of X as given below. Then, putting the pdf of the LCG class (given below) in (32), after applying the binomial expansion and exponential series, we get the result below. Regarding [cot(πG(x))]^j, we can use the power series expansion, where
Generating Function.
This section is devoted to introducing the moment generating function (mgf), which can be written as follows:
4.9. Order Statistics. In this section, we focus our attention on one of the most important properties, the order statistics. We can easily write the form of the ith order statistic density function as follows. It should come as no surprise that the density of the LCG order statistics is a linear combination of exp-G densities; this is extremely evident, as revealed by (39), which is the main result to be demonstrated.
Entropy Measures.
The Shannon entropy is defined as η_X = −E[log f_X(X)], and for the LCG class, it may be formulated as in (42). Proof. Alzaatreh et al. [16] deduced the Shannon entropy of the T-X family. Since here W[G(x)] = −cot(πG(x)), adopting the same methodology, we get η_X as in (43), where μ_T is the mean of the r.v. T. Using (43), we can easily prove the Shannon entropy of the LCG class given in (42), where T follows the logistic distribution. The Rényi entropy is given next. We get the following result after performing the expansion with the aid of power series. After incorporating the result, the Rényi entropy reduces accordingly. Now, we can write the final equation of the Rényi entropy as follows:
Journal of Mathematics
where
LCW Distribution
In this section, a two-parameter special model logistic cot Weibull (LCW) with its properties is presented.
Methodology.
Taking G(x) as the cdf of the Weibull distribution and g(x) as the corresponding density, the two-parameter LCW follows; its cdf, corresponding pdf, and associated hazard function, as formulated below, respectively, are given. The graphs represented in Figure 5 demonstrate the plots of the LCW.
Quantile Function.
This section is devoted to demonstrating the quantile function equation of the LCW, which is given below, where
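The paper's exact quantile expression is elided in the extracted text. Assuming a standard logistic (λ = 1) composed with −cot(πG) and a Weibull baseline G(x) = 1 − e^(−λx^α), the inversion can be sketched and verified by a round trip:

```python
import math

def lcw_cdf(x, alpha, lam):
    """Assumed LCW cdf sketch: Weibull baseline G composed into the
    logistic/cotangent construction (illustrative, not the paper's eq)."""
    G = 1.0 - math.exp(-lam * x**alpha)
    return 1.0 / (1.0 + math.exp(1.0 / math.tan(math.pi * G)))

def lcw_qf(u, alpha, lam):
    """Inversion: cot(pi*G) = log((1-u)/u), with arccot taken on (0, pi)
    as pi/2 - arctan, then the Weibull quantile applied to G."""
    G = 0.5 - math.atan(math.log((1.0 - u) / u)) / math.pi
    return (-math.log(1.0 - G) / lam) ** (1.0 / alpha)

# round trip: cdf(qf(u)) == u
for u in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs(lcw_cdf(lcw_qf(u, 2.0, 1.5), 2.0, 1.5) - u) < 1e-7
```

The same inverse-transform construction also provides the random-sample generator used for simulation later in the paper.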
Residual and Reverse Residual Life.
The residual life has several uses in probability, statistics, and risk assessment. Suppose that X represents a unit's lifespan, with X ≥ 0 with probability 1; then, the r.v. X_t = (t − X | X ≤ t), for a fixed t > 0, is known as the time since failure. The residual lifetime of the LCW r.v. X is denoted by R_t(x) and is defined below. Additionally, the reversed hazard rate function R_t(x) is written as
Stochastic Ordering. Stochastic ordering is a tool for analysing the structural properties of intricate stochastic systems. There are several forms of stochastic orderings that can be used to sort random variables according to their distinguished properties. Suppose S and K are independent random variables with cdfs F(S) and F(K), respectively; then, S is said to be smaller than K iff it satisfies the following. The LCW distribution (λ, α) is ordered according to the strongest "likelihood ratio" ordering, as demonstrated in the following theorem, and the versatility of the two-parameter LCW distribution (λ, α) is demonstrated. Let S follow LCW(λ_1, α_1) and K follow LCW(λ_2, α_2). Then, the likelihood ratio is as given below. If λ_1 = λ_2 = λ and α_1 ≥ α_2, then (d/ds)[log(f_S(s)/f_K(s))] < 0, and hence S ≤_lr K, S ≤_hr K, S ≤_mrl K, and S ≤_st K.
Stress-Strength Reliability.
In this section, we introduce and define one of the most important properties of any distribution, the reliability function R; this function may be represented as
Suppose both X and K are independent LCW random variables with parameters (α_1, λ_1) and (α_2, λ_2) and fixed scale parameter σ. Then, after applying the binomial and exponential series expansions and substituting u_1 = π_1 λ_1 α_1 x^(α_1 − 1) e^(−λ_1 x^(α_1)) and u_2 = π_2 λ_2 α_2 x^(α_2 − 1) e^(−λ_2 x^(α_2)), the above equation reduces to (56). Solving such a complicated integral analytically is very hard, but with the aid of advanced mathematical software we can easily find its value (Table 2).
Submodels of LCW.
5.8. Estimation. One of the most famous estimators is the maximum likelihood estimator. We can easily obtain the log-likelihood function for the parameters of the distribution under consideration by taking the logarithm of the likelihood function for Θ = (α, λ)^⊤. As a result, we may formulate the log-likelihood function as follows:
(59)
Consider the following formulae to be the components of the score vector U(Θ), in other words the derivatives of the log-likelihood with respect to the two parameters. The MLEs may be derived by setting these equations to zero and solving them simultaneously (see, for instance, [18]).
Results Deduced from the Simulation Work
In this phase of the study, we employed Monte Carlo simulation to assess the distribution's effectiveness across the estimation procedure. The MLEs of the model parameters are summarized in Tables 3 and 4. As the sample size n increases, in general, the biases, MSEs, L.bounds, and U.bounds decrease, while the CPs of the confidence intervals are quite close to the 95% nominal levels, which indicates that the MLEs perform well for estimating the parameters of the LCW distribution; we also use the conducted results to find the upper and lower bounds for the parameter estimates. Tables 3 and 4 contain a summary of all simulation outcomes.
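The Monte Carlo logic can be sketched end-to-end for an assumed parametrization (standard logistic composed with −cot(πG) over a Weibull baseline; an assumption, since the paper's exact equations are elided in the extracted text): simulate by inverse transform, then check that the log-likelihood at the true parameters beats clearly wrong ones.

```python
import math, random

def lcw_qf(u, alpha, lam):
    # inverse-transform sampler for the assumed LCW form
    G = 0.5 - math.atan(math.log((1.0 - u) / u)) / math.pi
    return (-math.log(1.0 - G) / lam) ** (1.0 / alpha)

def lcw_logpdf(x, alpha, lam):
    # density of the assumed cdf F = 1/(1 + exp(cot(pi*G))), by chain rule:
    # f = pi * g(x) * (1 + c**2) * exp(c) / (1 + exp(c))**2,  c = cot(pi*G)
    G = 1.0 - math.exp(-lam * x**alpha)
    c = 1.0 / math.tan(math.pi * G)
    log_g = math.log(lam * alpha) + (alpha - 1.0) * math.log(x) - lam * x**alpha
    return (math.log(math.pi) + math.log1p(c * c) + c
            - 2.0 * math.log1p(math.exp(c)) + log_g)

def nll(data, alpha, lam):
    return -sum(lcw_logpdf(x, alpha, lam) for x in data)

random.seed(7)
alpha0, lam0 = 2.0, 1.5
data = [lcw_qf(min(max(random.random(), 1e-12), 1 - 1e-12), alpha0, lam0)
        for _ in range(2000)]

# The negative log-likelihood at the truth is lower than at distant points:
assert nll(data, alpha0, lam0) < nll(data, 1.0, lam0)
assert nll(data, alpha0, lam0) < nll(data, alpha0, 0.5)
```

A full study would minimize `nll` numerically for many replications and tabulate biases and MSEs against n, as in Tables 3 and 4.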
Applications and Data Analysis
We present two applications to actual datasets in order to demonstrate the utility of the suggested distribution (LCW). The goodness-of-fit criteria prove that it can be used in place of famous two-, three-, and four-parameter models and many others [19]. The dataset can be found easily in [19]. We avoid adding the data to the paper as they can be easily accessed, and we provide some statistics on the data to make the reader comfortable in reading the paper. The summary statistics for this dataset are as follows: n = 50, median = ... We can easily recognize that Figure 6 represents the graphical representation of the histogram, TTT plot, box plot, and kernel density for the failure time data. Figure 7 represents the comparative cdf and pdf of LCW and other models using the failure time data. Table 5 provides the MLEs of the parameters, while Table 6 provides the values of AIC, CAIC, BIC, HQIC, A*, W*, K-S, and P values for each model. On the basis of the statistics given in these tables, the best-fitting model is LCW, which has the potential to fit right-skewed data with increasing failure rate.
Concluding Remarks
A novel logistic-G family of distributions is developed, which employs trigonometric and algebraic generalizers based on cotangent functions. This class has been shown to be more adaptable and useful in a variety of practical applications, particularly survival, dependability, and failure modelling. Furthermore, a two-parameter model (LCW) with various density shapes, as well as different hazard rate shapes, is developed. This work also derives and presents many statistical and mathematical properties of the proposed family. In parametric estimation, the maximum likelihood method is used, and a Monte Carlo simulation analysis is used to determine whether or not the estimates are suitable. To ascertain which distribution is most suitable for modelling the real datasets, we employ a number of goodness-of-fit measures that decide which one is superior among all its competitors. We show that, even with a higher number of parameters, the suggested distribution consistently delivers superior fits than other existing and competing Weibull models. We believe that the suggested class and related models will find wider applicability in sectors such as dependability and survival studies, hydrology, geology, and others.
Future Work
In upcoming work, we will apply the proposed distribution and the new family of distributions to censored sampling schemes. We will try different kinds of censoring schemes, such as type-I and type-II censored samples, and we will generate randomly censored samples from the new distribution. We can extend our work by applying the proposed model to accelerated life tests of different types, such as constant, partially constant, and perhaps progressive stress accelerated life tests. Finally, we will apply different optimality criteria to the censored samples generated from the proposed model.
Data Availability
The data that were utilised to support the conclusions of this research may be found inside the paper itself.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Mathematics"
] |
Conference: Morphological, Natural, Analog and Other Unconventional Forms of Computing for Cognition and Intelligence, Berkeley 2019
The leading theme of the 2019 Summit of the International Society for the Study of Information, held 2–6 June 2019 at The University of California at Berkeley, was the question "Where is the I in AI and the meaning of Information?" The question addresses one of the central issues not only for scientific research and philosophical reflection, but also for technological, economic, and social practice. The Conference "Morphological, Natural, Analog, and Other Unconventional Forms of Computing for Cognition and Intelligence" (MORCOM 2019) was focused on this theme from the perspective of unconventional forms of computing. The present paper, written by the organizers of the conference, reports the objectives of MORCOM 2019 and provides an overview of the contributions.
Thus, intelligence and meaning, sources of hot controversies in the past are reincarnated now in the questions about AI. The questions regarding the modern information processing technology in performing tasks traditionally considered as exclusively human, or even considered as defining human beings, such as thinking, intelligence, consciousness, or goal-oriented agency, are essentially the same questions as those asked by natural philosophers through the ages. We have now more powerful intellectual and technological tools in searching for answers, but the existing pervasive and convenient tool-kit brings also a danger of following the old habits of thinking.
It is necessary to reconsider and reexamine even most fundamental concepts, such as computing or cognition and intelligence. As artificial intelligence is being constructed with the human intelligence as the first goal to achieve, the questions related to the nature of human intelligence have central importance. Constructing "intelligent artefacts" and comparing their features with human intelligence is a recursive process where learning goes two ways, from nature to artefact and back, as Rozenberg and Kari argue in "The Many Facets of Natural Computing" [1].
"AI Makes Philosophy Honest" argued Daniel Dennett in his talk at the I-CAP 2006 International Computers and Philosophy Conference in Laval, France. Instead of pure speculative thinking, AI provides us with a laboratory for testing hypotheses about intelligence and cognition.
There are examples of novel studies, for instance, of morphological computing and embodied cognition, that succeed in escaping the inertia of thinking habits and question conventional theoretical and practical models.
The Conference MORCOM 2019 was not the first gathering of researchers and philosophers devoted to the discussion of the role of unconventional forms of computing in understanding natural and modelling artificial intelligence. Unconventional computing was proposed as a solution for overcoming the limitations which, in the conventional form of computation, were sources of objections to the intelligence of AI systems. There was a similar event on this subject at the IS4SI Summit 2017 in Gothenburg, the Symposium on Morphological Computing and Cognitive Agency [2]. It was followed by an event in 2018, the Morphological and Embodied Computing Symposium on theory and applications [3], attended by philosophers, theoreticians, and practitioners from several related fields. Each of the past gatherings was a step forward toward better understanding of the alternative ways of computing in the context of their potential contributions to the understanding of natural cognition and intelligence as well as to the design of cognitive intelligent artefacts.
MORCOM 2019 brought together perspectives on morphological, physical, natural, analog and embodied cognitive computation and other forms of unconventional conceptualization of computing, cognition and intelligence. We, as organizers, encouraged open and constructive debate on the perceived differences in the various perspectives on constructivist and computationalist accounts of the dynamics of information in its natural and artefactual realizations. Contributed presentations gave very diverse perspectives on the relevant subjects.
It is difficult to report or summarize multiple, more or less formal, discussions which followed the presentations and also were carried out at subsequent informal events. In the following, a very brief account of the subjects of presented works can give a basic idea about the MORCOM Conference and its contributions.
Conference "Morphological, Natural, Analog, and Other Unconventional Forms of Computing for Cognition and Intelligence" (MORCOM 2019): Contributions
Todd Hylton's keynote lecture "Thermodynamic Computing: An Intellectual and Technological Frontier" of MORCOM 2019 and of the entire Summit 2019 addressed one of the crucial differences between the structure of the brain and the structure of artificial neural networks. While traditional embodiment of artificial neural networks has static and passive architecture where the network learning is entirely controlled by the iterated external input, we know that the neural network in the human brain is a self-organized system in which this self-organization is an active foundation of learning.
Hylton's approach is based on a self-organized artificial neural network utilizing self-organization of minuscule conductor particles floating in viscous insulating liquid under the influence of an external electric field. It is critical for this process that the external electric field is not determining the structure developed by the particles, but it is rather a stimulating agent for self-organization. This approach is a revolutionary development towards future technologies utilizing self-organized systems for intelligent technologies, based on mechanisms similar to those in living beings.
There were two submissions to MORCOM 2019 in the broadly understood direction of the study of embodiment of unconventional computing which, unfortunately, could not be presented in their entire extent at the conference due to logistic obstacles preventing attendance of the contributors at the summit. Yet the works, summarized in extended and informative abstracts, generated interest and discussion among the participants. One of them, of the authorship of Ricardo Q. Figueroa, Genaro J. Martinez, Andrew Adamatzky and Luz N. Oliva-Moreno, entitled "Robots Simulating Turing Machines" addressed one of the classical subjects of computer science considering machines simulating machines, but in the context of unconventional computing and with the focus on embodiment.
The other work contributing to the conference with a subject for discussion rather than a report of results, entitled "Propagation of Patterns in Non-linear Media as a Paradigm of Unconventional Computers", was authored by Genaro J. Martinez, Andrew Adamatzky, Ricardo Q. Figueroa and Dmitr A. Zaitsev. Cellular automata are classic models for designing unconventional computing in several ways. In their work, the authors adopt an analogy between precipitating chemical, physical or biological media and semi-totalistic binary two-dimensional cellular automata. In this analogy, patterns originating from different sources of perturbation propagate in a precipitating chemical, physical or biological medium and compete for space. They sub-divide the medium into regions unique for an initial configuration of disturbances. This sub-division can be expressed in terms of computation. The work of Martinez et al. demonstrates how to implement basic logical and arithmetical operations by patterns propagating in a geometrically constrained cellular automaton medium, thereby demonstrating its computability. This opens the way to design and implement computation in a very wide range of physical and chemical media.
The Skype presentation by Hector Zenil "Towards Demystifying Shannon Entropy, Lossless Compression, and Approaches to Statistical Machine Learning" presented a critical analysis of the current approaches of machine and deep learning based on traditional statistics and information theory, showing that they fail to capture fundamental properties of our world and are ill-equipped to deal with high-level functions such as inference, abstraction, and understanding.
In contrast, Zenil explored recent attempts to combine symbolic and sub-symbolic/differentiable computation in a form of unconventional hybrid computation that is more powerful and may eventually display and grasp these higher-level elements of human intelligence. In particular, he introduced the field of Algorithmic Information Dynamics and that of Algorithmic Machine Intelligence based on the theories of computability and algorithmic probability, and demonstrated how these approaches promise to shed light on the weaknesses of current AI (especially Deep Learning, a view with which Yoshua Bengio also agrees) and how to attempt to circumvent some of their limitations.
Vincent C. Müller, in his Skype presentation "Morphological Computation and the Discussion about Whether Computation Involves Meaningful Symbols" referred to the discussion about morphological computation and about whether computation involves meaningful symbols, rather than merely syntactic operations. He described his position that computation, as it is traditionally understood, is essentially syntactic algorithmic processing (as the Church-Turing thesis claimed) done by humans with machines. However, there are other, unconventional and very fruitful and plausible notions of computing, for instance, those engaging morphology. Thus, Müller stated that the question about the possibility of computing involving meaning requires scrutiny. What kind of question is this? Do we expect a discovery to find out the truth, or can we slice the world in several plausible ways? Is this essentially the same question as about the philosophical positions of realism and anti-realism applied to computing?
Rao Mikkilineni presented his work with Mark Burgin and Eugene Eberbach "Processing Information in the Clouds" which expressed the view that the way to Higher-Order AI, i.e., to the design of actual intelligent artefacts, leads in the direction of cloud computing as a means to overcome limitations of localization of computational process in the system embodying computation.
In the presentation of his work with Mark Burgin "Structural Machines as Unconventional Knowledge Processors" Rao Mikkilineni invoked the concepts of knowledge and knowledge processing to describe a new form of unconventional computing understood as knowledge processing, distinguished from information processing. Mikkilineni and Burgin used an analogy to demonstrate their view of knowledge as a concept of higher complexity than information: knowledge contains information as matter contains energy. Since knowledge may require very complex structures which are not necessarily linear, while conventional computing machines and automata typically process information sequentially, the processing of knowledge requires higher forms of computing.
Rao Mikkilineni presented his individual work on a similar subject "Information Processing, Information Networking, Cognitive Apparatuses and Sentient Software Systems" in which he presented a new information processing architecture that enables "digital genes" and "digital neurons" with cognizing agent architecture to design and implement sentient, resilient and intelligent systems in the digital world. His approach was motivated by the recognition of the fact that computing processes, message communication networks and cognitive apparatuses are the building blocks of living sentient beings. Genes and neural networks provide complementary information processing models that enable execution of mechanisms dealing with "life" using physical, chemical and biological processes. Cognizing agent architecture (mind) provides the orchestration of body and the brain to manage the "life" processes to deal with fluctuations and maintain survival and sustenance.
Mikkilineni's presentations belonged to the more theoretical direction of the MORCOM Conference. Another highly theoretical work "Processing Information by Symmetric Inductive Machines" presented by Mark Burgin, was devoted to analysis of the model of computation introduced in 2013 by Marcin J. Schroeder as Symmetric Turing Machines or S-machines. In this model, one-way action of the head (processor) on the tape (memory) is replaced by interaction between two essentially equivalent components of the machine.
Automata that perform transformations with their programs, such as reflexive Turing machines, were explored by Burgin in 1992. It was proved that these machines have the same computing power as Turing machines but could be much more efficient.
Using a technique similar to the one employed in the past, it is possible to prove that functioning of a symmetric Turing machine can be simulated by a conventional Turing machine with two tapes and two heads. It means that symmetric Turing machines have the same computing power as Turing machines. At the same time, it is also possible to prove that symmetric Turing machines can be much more efficient than Turing machines.
To achieve higher computing power, Burgin introduced and initiated study of inductive symmetric machines, which further develop the structure and possibilities of inductive Turing machines allowing to model natural computations in various situations.
In the discussion after the presentation, Schroeder pointed out that the equivalence of the symmetric Turing machine with two heads and two tapes assumes that the dynamics of interaction is described by a computable function, which his original model did not assume. Thus, consideration of the symmetric Turing machine with non-computable dynamics of interaction re-opens the question about its computational power. Schroeder admitted that, at present, his example of such a model is rather artificial, but it demonstrates a possible direction of further research alternative to Burgin's inductive symmetric machine.
Lorenzo Magnani, in his presentation "Disseminated Computation, Cognitive Domestication of New Ignorant Substrates, and Overcomputationalization" presented "eco-cognitive computationalism" understood as a study of computation in context, following some of the main tenets advanced by the recent cognitive science views on embodied, situated, and distributed cognition.
In this eco-cognitive perspective, Magnani analyzed the recent attention in computer science devoted to the importance of cognitive domestication of new substrates, such as in the case of morphological computation. This new perspective shows how the computational domestication of ignorant substrates can originate new unconventional cognitive embodiments, which expand the processes of computationalization already occurring in our societies.
Magnani also introduced and discussed the concept of overcomputationalism, as intertwined with the traditional concepts of pancognitivism, paniformationalism, and pancomputationalism, seeing them in a more naturalized intellectual disposition, appropriate to the aim of bypassing ontological or metaphysical overstatements.
What he called overcomputationalization refers to the presence of too many entities and artefacts that carry computational tasks and powers. Overcomputationalization (1) often promotes a host of possibly unresolvable disorganizational consequences, and (2) tends to favor philosophical reflections that depict an oversimplified vision of the world. Moreover, it tends to generate too many cognitive constraints and limitations, which lead to a weakening of human creative (abductive) cognitive activities, as he illustrated in the last chapter of his recent book The Abductive Structure of Scientific Creativity (2017). In addition, because of the excess of redundant cognitive/informational features attributed to entities (features often exogenous to their original functions), it tends to prevent human intellectual freedom from benefiting from the cognitive simplification that is characteristic of the absence of informational overloads.
A presentation by Gordana Dodig-Crnkovic "Morphological, Natural, Analog and Other Unconventional Forms of Computing for Cognition and Intelligence" provided a comprehensive review of the subject of MORCOM 2019; starting from the frequently asked fundamental questions: What is the relationship between Cognition and Intelligence? How does cognitive computing relate to AI? What is the difference between Natural, Analog and Morphological computing?
Dodig-Crnkovic demonstrated the need to establish an appropriate conceptual framework in order to attempt providing answers, as at the moment there is a huge variety in the use of those terms, which causes confusion. For this purpose, in her presentation she elaborated on the taxonomy of computing originally developed in collaboration with Mark Burgin, and extended it with some recent work on cognition as information processing.
The thesis of Dodig-Crnkovic is that nonconventional (morphological) computation provides a good basis for both new and effective ways of computation that are energetically much more favorable than the ones we currently use, and for modelling of information processing in living organisms, thus intelligence and cognition involving meaning. It is equally well suited for modelling of natural as well as artificial intelligence.
Marcin J. Schroeder's presentation, "Intelligent Computing: Oxymoron?" was intended as the direct answer to the leading question of the Summit. Schroeder admitted that the denial of intelligence and capacity to understand meaning to computing machines requires prior definitions of the two concepts infamous for escaping all attempts of conceptualization. Thus, in the absence of such a commonly accepted conceptual framework, every claim attributing or denying intelligence and capacity to understand meaning to computing machines is more an invitation to further discussion than a final answer to the question. On the other hand, there are some characteristics of intelligence and meaning which are not sufficient, but necessary conditions for computing machines to be qualified as intelligent and as capable of understanding meaning.
For instance, it is difficult to expect that any entity (human or artificial) lacking the ability to reduce or eliminate complexity can be considered intelligent. Yet it can be demonstrated that a computing machine in the present model of computation is devoid of this capacity. This, of course, does not mean that computing systems with alternative forms of computation cannot be intelligent.
In conclusion, the Conference on Morphological, Natural, Analog and Other Unconventional Forms of Computing for Cognition and Intelligence at IS4SI summit in Berkeley 2019, provided plenty of stimulating insights into the field of unconventional and morphological computing pointing towards the answers to the central question of the Summit: agency and meaning of information in artefactually intelligent systems.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest.
Modeling the Viscosity of Anhydrous and Hydrous Volcanic Melts
The viscosity of volcanic melts is a dominant factor in controlling the fluid dynamics of magmas and thereby eruption style. It can vary by several orders of magnitude, depending on temperature, chemical composition, and water content. The experimentally accessible temperature range is restricted by melt crystallization and gas exsolution. Therefore, modeling viscosity as a function of temperature and water content is central to physical volcanology. We present a model that describes these dependencies by combining a physically motivated equation for the temperature dependence of viscosity and a glass transition temperature (Tg) model for the effects of water. The equation uses the viscosity at infinite temperature η∞, Tg, and the steepness factor m as fitting parameters. We investigate the effect of leaving η∞ free as a parameter and of fixing its value, by fitting anhydrous viscosity data of 45 volcanic melts using the temperature-dependent model. Both approaches describe experimental data well. Using a constant η∞ therefore provides a viable route for extrapolating viscosity from data restricted to small temperature intervals. Our model describes hydrous data over a wide compositional range of terrestrial magmas (26 data sets) with comparable or better quality than literature fits. With η∞ constrained, we finally apply our model to viscosities derived by differential scanning calorimetry and find, by comparing to viscometry-based data and models, that this approach can be used to reliably describe the dependence of viscosity on temperature and water content. This has important implications for modeling the effects of nanostructure formation on viscosity.
A combination of concentric cylinder and falling sphere viscometry is employed to measure melt viscosity above the liquidus temperature in the low-viscosity (L) regime, while micropenetration and parallel plate techniques probe the high-viscosity (H) regime near the glass transition. Because the intermediate viscosity range is not accessible on the timescale of measurements, interpolation between the H and L regimes is required. This is especially critical when the L and H intervals of experiments are reduced as a result of nanostructure formation, primarily nanocrystals and melt demixing, which can lead to a significant increase in viscosity (Di Genova, Brooker, et al., 2020; Di Genova, Kolzenburg, et al., 2017; Di Genova, Zandona, & Deubener, 2020; Liebske et al., 2003). These restrictions on the H range accessible to micropenetration and parallel plate experiments can lead to the virtual absence of data near Tg (Al-Mukadam et al., 2020; Chevrel et al., 2013; Dingwell et al., 2004).
Here, we present a new fitting approach for the viscosity η of volcanic melts motivated by physically based equations that describe the temperature dependence of viscosity (Mauro et al., 2009) and the water dependence of Tg (Schneider et al., 1997). This represents one of the first attempts to combine physically based equations in order to provide a single formulation for the viscosity of volcanic melts as a function of temperature and water content over a large chemical space, with a set of 1,603 data points containing both multicomponent dry and hydrous systems, as indicated in the total alkali-silica (TAS) diagram (Le Bas et al., 1986, Figure 1). To characterize the behavior of anhydrous melts in a systematic way, we order them according to the chemical parameter SM (Equation 2), the sum of the structure-modifying oxides with x in mol%, which is a proxy of the degree of structural polymerization (Giordano & Dingwell, 2003a). For compositions that only report total iron, we distribute it equally between FeO and Fe2O3, applying an adjustment factor of 1.11 (reflecting the higher molar weight of Fe2O3) in terms of wt% before conversion.
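The iron redistribution described above can be sketched in a few lines. A minimal Python example, assuming total iron is reported as FeO in wt%; the function name and interface are ours, not from the paper:

```python
def split_total_iron(feo_total_wt: float) -> tuple[float, float]:
    """Split total iron, reported as FeO (wt%), equally between FeO and
    Fe2O3 on a molar basis, as described in the text.

    The factor 1.11 is the molar-mass ratio M(Fe2O3) / (2 * M(FeO)),
    accounting for the extra oxygen gained when FeO is recast as Fe2O3.
    """
    feo_wt = 0.5 * feo_total_wt            # half of the iron stays as FeO
    fe2o3_wt = 0.5 * feo_total_wt * 1.11   # other half recast as Fe2O3
    return feo_wt, fe2o3_wt
```

For example, a melt with 10 wt% total FeO would be recast as 5 wt% FeO plus roughly 5.6 wt% Fe2O3 before conversion to mol%.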
10.1029/2021GC009918
First, we investigate the fit of η for anhydrous samples using a model developed by Mauro et al. (2009) for technical glasses. We discuss the connection between the Arrhenian behavior of volcanic melts and the degree of structural polymerization (SM), as well as the hypothesis of a common viscosity value at infinite temperature (η∞) for glass-forming melts. We do so by significantly expanding previous chemical and experimental data sets of melt viscosity (Russell et al., 2003). For a given silicate melt, the addition of H2O can reduce the viscosity in the H regime by several orders of magnitude (e.g., Richet et al., 1996). We ignore the pressure effect on melt viscosity at fixed water content at shallow conditions typical of volcanic systems (Giordano et al., 2008; Hui & Zhang, 2007; Persikov, 1998; Zhang et al., 2003), but our model implicitly accounts for the pressure effect by varying the water content. Many studies (e.g., Dingwell et al., 1998b; Giordano et al., 2009; Misiti et al., 2011; Robert et al., 2015; Vetere et al., 2006; Whittington et al., 2009) have modeled the influence of H2O on η with various differing empirical expressions. We apply a single formulation for the water dependence of η and compare our results to published models from the literature. We show that our physically based viscosity equation can perform comparably to or better than empirical formulations in the literature. Furthermore, we compare results of our fit with predictions of general chemical models (Duan, 2014; Giordano et al., 2008; Hui & Zhang, 2007). Finally, we apply our model to describe η of hydrous volcanic melts based on differential scanning calorimetry (DSC) measurements, which minimizes or avoids the nanocrystallization that can occur during standard viscosity measurements around Tg (Di Genova, Zandona, & Deubener, 2020).
We implement our model with a constant A = log10 η∞ and show that the combination of our fitting approach with DSC data allows both the accurate prediction of high-temperature viscosity and the quantification of the effect nanocrystal formation has on melt viscosity around Tg.
Viscosity Models for Anhydrous Systems
The most popular parametrization to describe the viscosity of volcanic melts is the empirical VFT equation, named after Vogel (Vogel, 1921), Fulcher (Fulcher, 1925), and Tammann (Tammann & Hesse, 1926). It has been used to fit isochemical data (e.g., Richet et al., 1996; Whittington et al., 2001) and takes the form

log10 η(T) = A + B / (T - C),   (3)

where A = log10 η∞. The temperature C is often identified with the Kauzmann temperature (T_K) (Angell, 1997), at which the liquid and crystalline entropies are equal. At T_K, the VFT equation in combination with the Adam-Gibbs equation (Adam & Gibbs, 1965) yields a configurational entropy (S_c) of zero (Mauro et al., 2009; Scherer, 1992), although S_c = 0 is only possible at absolute zero temperature (Avramov & Milchev, 1988; Mauro et al., 2009). On the other hand, Gibbs and DiMarzio (1958) have derived a possible thermodynamic equilibrium glass transition. Overall, the physical meaning of the VFT fitting parameters continues to be a subject of discussion (e.g., Hecksher et al., 2008; Schmelzer et al., 2018; Stillinger, 1988). Finally, the VFT equation is also known to break down at low T (Laughlin & Uhlmann, 1972; Mauro et al., 2009; Scherer, 1992).
Therefore, a physically based parametrization of η for glass-forming melts remains an interesting subject of research. For example, the viscosity description given in the model for glass-forming liquids by Adam and Gibbs (1965) (AG) has a physical foundation. It assumes the cooperative rearrangement of independent regions within the liquid and that the potential energy of the system can be expressed by its own partition function. This leads to

log10 η(T) = A_AG + B_AG / (T S_c(T)),   (4)

where A_AG = log10 η∞ and B_AG is an effective activation barrier (Adam & Gibbs, 1965; Richet, 1984). Applying the AG model requires constraining S_c, typically through calorimetric measurements of the configurational heat capacity (Di Genova, Romano, Giordano, & Alletti, 2014; Richet, 1987; Robert et al., 2014; Sehlke & Whittington, 2016; Stebbins et al., 1984; Toplis, 1998; Webb, 2008).
One can avoid fitting S_c and measuring the configurational heat capacity C_P,conf by using the MYEGA model by Mauro et al. (2009). It describes S_c in the AG expression (Equation 4) using constraint theory and an energy landscape analysis, and takes the form

log10 η(T) = A + (K/T) exp(C/T),   (5)

where A, K and C are fitting parameters, with A = log10 η∞ as above. An alternative, physically insightful parametrization of Equation 5, suggested by Mauro et al. (2009), can be obtained by inserting the definition of Tg (Equation 1) and making use of the steepness index m (fragility), which quantifies the deviation of η from Arrhenian behavior at Tg (Angell, 1995):

m = d(log10 η) / d(Tg/T) evaluated at T = Tg.   (6)

Reformulating Equation 5 with respect to these parameters yields:

log10 η(T) = A + (12 - A) (Tg/T) exp[(m/(12 - A) - 1)(Tg/T - 1)],   (7)

where the constant 12 reflects the definition of Tg as the temperature at which η = 10^12 Pa s (Equation 1). An analogous reformulation can be performed for the VFT model (Equation 3):

log10 η(T) = A + (12 - A)^2 / [m (T/Tg - 1) + (12 - A)].   (8)

A comparison between the performance of the MYEGA (Equation 7) and VFT models (Equation 8), using anhydrous simple and multicomponent oxide systems (that is, technical glasses) and molecular liquids covering a wide range of m from 20 to 115, revealed that the MYEGA equation provides a superior fit for η in all systems (Mauro et al., 2009). Moreover, using 568 different technical silicate liquids with widely varying compositions and data in the range of 10^1 to 10^6 Pa s, Mauro et al. (2009) also showed that the MYEGA model better predicted the temperature of the 10^11 Pa s isokom. Finally, unlike the VFT parametrization, the MYEGA equation offers a realistic extrapolation of S_c to both the high- and low-T limits, with consequences for the estimate of A and the description of the low-T scaling of η (Mauro et al., 2009).
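The MYEGA (Equation 7) and VFT (Equation 8) forms in terms of A, Tg, and m can be evaluated directly. A minimal Python sketch under the standard convention that η(Tg) = 10^12 Pa s; function names and the example parameter values are ours:

```python
import math

def log_eta_myega(T: float, A: float, Tg: float, m: float) -> float:
    """MYEGA viscosity (Equation 7): log10 eta in Pa s, with A = log10 eta_inf,
    Tg the glass transition temperature, and m the fragility."""
    x = Tg / T
    return A + (12.0 - A) * x * math.exp((m / (12.0 - A) - 1.0) * (x - 1.0))

def log_eta_vft(T: float, A: float, Tg: float, m: float) -> float:
    """VFT viscosity in the same (A, Tg, m) parametrization (Equation 8)."""
    return A + (12.0 - A) ** 2 / (m * (T / Tg - 1.0) + (12.0 - A))
```

By construction, both forms return log10 η = 12 at T = Tg and approach A as T grows large; for T below Tg the predicted viscosity rises steeply, the more so the larger m.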
Modeling the Effect of Water on the Viscosity of Silicate Melts
The presence of water in volcanic melts adds complexity to fitting η, as even a small amount of H2O generally leads to a strong decrease of η. While the H and L regimes are usually accessible for anhydrous melts and provide strong constraints on the parametrization over a large η range, the lack of L data for hydrous compositions challenges the quality of the fit. This can lead, for example, to an unphysical crossover of η at different H2O contents when viscosity is extrapolated to the L domain (Figure S1). To avoid this, empirical adjustments of the fitting parameters have been employed (e.g., Giordano et al., 2008, 2009; Romine & Whittington, 2015; Vetere et al., 2013; Whittington et al., 2009). While the resulting fits usually provide a good description, there is no systematic approach and no physical interpretation of the parameters involved, which results in a plethora of different models based on VFT.
Here we expand the MYEGA parametrization (Equation 7) in a physically motivated way to fit anhydrous and hydrous η data for a given volcanic melt with varying H2O content. We assume A to be independent of H2O content (i.e., fixed by the anhydrous measurements), which reduces the water-dependent parameters to m and Tg. We base our description on a Tg model by Schneider et al. (1997), who implemented a power concentration expansion of the Gordon-Taylor equation (Gordon & Taylor, 1952). The parameter m_d is the melt fragility of the anhydrous sample.
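For orientation, the plain two-component Gordon-Taylor mixing rule underlying the Schneider et al. (1997) expansion can be sketched as follows. Note that this is the basic form only, not the paper's Equations 9-12, which are not reproduced in the text; the function name and example values are ours:

```python
def tg_gordon_taylor(w: float, tg_dry: float, tg_water: float, k: float) -> float:
    """Plain Gordon-Taylor mixing rule for the glass transition temperature
    of a melt with water mass fraction w (0..1).

    tg_dry and tg_water are the end-member Tg values (anhydrous melt and
    pure water component), and k is an empirical mixing constant.
    """
    return ((1.0 - w) * tg_dry + k * w * tg_water) / ((1.0 - w) + k * w)
```

The rule recovers the anhydrous Tg at w = 0 and interpolates smoothly toward the water end-member as w increases, which is why even small water contents depress Tg strongly when k is large.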
Fitting the Viscosity of Hydrous Silicate Melts
To fit a set of viscosity data including anhydrous and hydrous measurements of one specific melt composition, we follow these steps:
1. We fit the anhydrous data using the MYEGA model (Equation 7). These data sets often include H and L measurements constraining the values of A, Tg,d and m_d well.
2. We insert Equations 9 and 12 into the MYEGA equation (Equation 7) and fit the resulting model to the remaining hydrous data. This constrains the parameters b, c and d.
To evaluate the quality of the fit, we employ the root-mean-square error (RMSE),

RMSE = sqrt( (1/N) Σ_i (log10 η_i,measured - log10 η_i,modeled)^2 ).
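The RMSE criterion used to evaluate the fits is straightforward to compute. A minimal sketch, assuming measured and modeled log10 viscosities are supplied as plain lists; the function name is ours:

```python
import math

def rmse_log_eta(measured: list[float], modeled: list[float]) -> float:
    """Root-mean-square error between measured and modeled log10 viscosities."""
    n = len(measured)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(measured, modeled)) / n)
```

Because the error is taken on log10 η, an RMSE of 0.1 corresponds to a typical misfit of about a factor of 1.26 in viscosity.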
Viscosity Database
We use 50 viscosity data sets (1,603 data points) from the literature for fitting (Tables 1-3), displayed in a TAS diagram (Figure 1). The data sets span a large compositional space, with SiO2 content ranging from 44 wt% to 79 wt% and total alkali content ranging from 0 wt% to 17 wt% (mol% reported in Tables 1-3). Virtually all types of magma erupted on Earth are represented.
Of the 50 data sets, 45 include viscosity measurements in the H and L regions for anhydrous melts (Table 1); 26 sets additionally contain data for hydrous compositions (marked by * in Table 1 and listed in Table 2). All anhydrous data are used in Section 4 to explore the parameters A, m and Tg; in Section 5, we explore the quality of our model for the 26 H2O-bearing liquids (Table 2). Table 3 lists five data sets. One only includes anhydrous measurements, while for the remaining four both anhydrous and hydrous measurements are available. The η values for melts in Table 3 are derived from DSC measurements using the approach reviewed in Stabile et al. (2021) (for further discussion see Section 6). They complement viscometry measurements on glasses from eruptions already included in our database (Tables 1 and 2). With DSC, η is determined in the H range only. In Section 6, we use DSC-derived η to illustrate that, for high-quality data, a reliable and predictive extrapolation of the MYEGA model from the H to the L range is possible, assuming a fixed value for A.
MYEGA Fit
We use the MYEGA (Equation 7) and VFT models (Equation 8) to fit data from 45 different anhydrous silicate melts, all of which include measurements in the H and L range (MYEGA: Figure 2, VFT: Figure S3). Including H and L measurements provides a good constraint on the fits, as two of the parameters used in the MYEGA model, Tg,d and m_d, are quantities defined at high η (Equations 1 and 6); A, on the other hand, is a low-η quantity for T approaching infinity. Data and fits are grouped according to increasing SM values (Equation 2) in Figure 2. The often employed structural NBO/T parameter (Mysen, 1988) was not used, as it correlates positively with the chemical parameter SM (Figure S2), and Giordano and Dingwell (2003a) have shown that SM is a valid empirical parameter to infer the degree of structural polymerization of the melt. Moreover, SM is easier to calculate and is therefore used here. Figure 2a shows measurements of samples with SM < 10, which are the most polymerized melts with x_SiO2 of about 80 mol% and above. Their interval of measurements ranges from 10^2 to 10^13 Pa s, with 785 °C < T < 1650 °C. For these melts, the 1/T dependence of log η is quasi-linear, that is, they exhibit an Arrhenian behavior. Figure 2b displays η for liquids with 10 ≤ SM < 20. For these less polymerized melts, the η and T ranges are 10^1 to 10^14 Pa s and 585 °C to 1710 °C, respectively. Some melts (e.g., Rhy14, Pho3) display Arrhenian behavior, while others (e.g., Rhy12, And3) exhibit a weak, but significant, departure from linearity, that is, behave in a non-Arrhenian fashion. Figure 2c shows data and fits for relatively depolymerized melts with 20 ≤ SM < 30. Viscosity measurements range from 10^1 to 10^14 Pa s, and from 615 °C to 1570 °C. The majority of these melts exhibit a pronounced non-Arrhenian behavior of η, with the exception of the shoshonite sample (Sho), for which the L range appears poorly constrained (see discussion on A below).
Finally, Figure 2d shows η for the most depolymerized melts with SM ≥ 30, with viscosities of 10^0 to 10^14 Pa s and temperatures of 635 °C to 1560 °C. Our results thus agree with the expected scenario that Arrhenian liquids are characterized by a polymerized melt structure due to their high content of network-forming cations (low SM), while liquids with larger values of SM exhibit non-Arrhenian behavior (e.g., Angell, 1995; Mysen, 1988; Ni et al., 2015).
Figure 1: Compositions of the investigated melts in the TAS diagram (Tables 1-3). Data sets are color coded according to dry-only data (orange squares), those including dry and hydrous data (blue) and differential scanning calorimetry (DSC)-derived viscosities (red triangles). Open blue circles denote samples that are used to illustrate the combined fits of the MYEGA (Equation 7) and H2O model (Equations 9-12) in Section 5.
Geochemistry, Geophysics, Geosystems
LANGHAMMER ET AL.
SiO2-rich counterparts. Rhy14 is a peralkaline rhyolite (pantellerite), characterized by an excess of alkali and alkaline earth cations over Al2O3, which induces a dramatic depolymerization of the melt structure within rhyolite chemistry (Di Genova et al., 2013; Dingwell et al., 1998a), leading to relatively low η (Figure 2b). As expected from Figure 2, melt fragility (m) correlates positively with SM (Figure 3c). In particular, we find that the strongest melt (m = 20.4) is Rhy3 with SM = 7.6, while the most fragile melt is Di (m = 61.1) with SM = 56.2 (Table 1).
Finally, the parameter A increases significantly with SM, from -9.6 for Rhy3 to -1.9 for Tep (Figure 3a, Table 1). We find the largest variation of A for SM < 10, and a relatively constant value of A of about -3 for SM > 20. The low values of A for the polymerized melts with SM < 10 are likely caused by the limited η range accessible for measurements in the laboratory. For example, the viscosity of the polymerized melt Rhy3 (SM = 7.6, A = -9.6), which follows an Arrhenian behavior (Figure 2a), was measured in the range 3.24 ≤ log10 η ≤ 11.15. It is not possible to extend measurements to significantly lower η values for such polymerized melts, as T becomes too high for the measuring system, causing volatilization of alkalis from the melt. Therefore, A is not well constrained by this measurement interval. The sample Sho deviates from the expected behavior, with SM = 29.3 and A = -9.4. This is a very low value of A compared to melts with similar SM. For Sho, only three data points exist in the L range, with the lowest measured viscosity at log10 η = 1.27 (Vetere et al., 2007). This restricted L range may not permit an accurate determination of the T dependence in the L region and thus a reliable estimate of A.
The Viscosity at Infinite Temperature
A common assumption is that the viscosities of glass-forming melts converge to a constant value of A as T approaches infinity (Angell et al., 2000), an assumption that can be integrated into the fitting by fixing the parameter A (Section 2.1). Maxwell's equation η = G∞ τ provides an order-of-magnitude estimate, where G∞ is the shear modulus at infinite frequency and τ is the relaxation time. For silicate melts at infinite T, they are estimated as G∞ ≈ 10^10 Pa (Dingwell & Webb, 1989) and τ ≈ 10^-14 s (Angell, 1997; Börjesson et al., 1987; Fujimori & Oguni, 1995).
Table 2 caption (fragment): the first block reports the composition (H2O content in mol%) and contains information on the data; the third block, information on references. In the second block, the fitting parameters for the constrained hydrous MYEGA model (Equation 7 with A = -2.9, and Equations 9 and 10) and the RMSE are given. "Comment" indicates the sample name in the respective publication. Measurements noted to have crystallized, lost water, and so on in the respective reference are excluded from fitting. Bas, Basalt; Di, Diopside; Lat, Latite; Rhy, Rhyolite; RMSE, root-mean-square error; Tra, Trachyte.
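Plugging the quoted estimates into the Maxwell relation gives the order of magnitude of η∞ directly. A quick numerical check; the variable names are ours:

```python
import math

# Order-of-magnitude estimate of the viscosity at infinite temperature via
# the Maxwell relation eta = G_inf * tau, using the values quoted in the text.
G_inf = 1e10   # Pa, shear modulus at infinite frequency (Dingwell & Webb, 1989)
tau = 1e-14    # s, relaxation time at infinite T (Angell, 1997, and others)

eta_inf = G_inf * tau             # Pa s
A_estimate = math.log10(eta_inf)  # the parameter A = log10 eta_inf
print(A_estimate)                 # approximately -4
```

This yields η∞ of about 10^-4 Pa s, that is, A ≈ -4, the same order as the fitted A values discussed in this section.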
Geochemistry, Geophysics, Geosystems
The VFT (Equation 3) and AG (Equation 4) models have been used in the literature to explore the range of A values for volcanic melts. Russell et al. (2003) obtained an average A = −4.3 ± 0.7 (VFT) and A = −3.2 ± 0.7 (AG) for a compilation of 20 silicate melts. Subsequent work includes Giordano et al. (2008). Mauro et al. (2009) showed that the MYEGA model results in a larger value of A than VFT. We also observe a larger A for MYEGA than for VFT, with A_MYEGA = −4.3 ± 1.9 and A_VFT = −5.1 ± 1.5, respectively. The difference between them is consistent with the results of Zheng et al. (2011). The trend to low values of A that we observe stems largely from the 11 Arrhenian data sets with SM < 10, for which the quasi-linear extrapolation of η to high T yields very low values of A (Figure 3). When the eleven A values for melts with SM < 10 are excluded from averaging, A_VFT = −4.6 ± 1.2, in agreement with the value found by Giordano et al. (2008) and close to that of Russell et al. (2003). Nine of the 11 melts in Table 1 with SM < 10 were not used in these two studies, but we assume they would have a similar influence on the values of A. A significant, but smaller, difference in A remains compared to the technical data set of Zheng et al. (2011; Figure S3).

(Figure 3 caption excerpt: abbreviations and references for the different data sets can be found in Table 1; * denotes samples for which hydrous measurements are also reported (Table 2). Symbols are assigned as follows: X for rhyolites, empty circles for HPG8, triangles to the left for trachytes, squares for dacites, pentagons for phonolites, empty crosses for andesites, empty X for latites, diamonds for basaltic andesites, stars for tephriphonolites, octagons for shoshonite, hexagons for basalts, upward triangles for phono-tephrites, triangles to the right for tephrites, tripods for foidite, crosses for diopside.)

Low values of A also correlate with low values of the steepness factor m (Figure 3), highlighting a difference between the current data set and that of Zheng et al. (2011).
In their database, all melts have m ≥ 25.9. If we restrict the averaging of A to melts with such m values, we obtain A_MYEGA = −3.2 ± 1.0, in excellent agreement with Zheng et al. (2011). This underlines the observation that the measurable T interval for highly polymerized melts (low SM/low m) is often too narrow to constrain A.
Fitting With a Constant Value of A
In order to explore differences in the MYEGA fitting parameters when A is fixed or left as a free parameter, we refit the anhydrous data sets (Table 1) using A = −2.9 (Zheng et al., 2011). This may also be important for cases where only a small number of measurements over a limited H range are available, including the DSC measurements that we address in Section 6. The RMSE values reported in Table 1 show an expected increase due to the reduction in fitting parameters, but overall the fitting quality is still high.
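The constrained refit can be illustrated with the standard MYEGA functional form (Equation 7) evaluated at a fixed A = −2.9; a minimal sketch, with illustrative T_g and m values rather than entries from Table 1:

```python
import math

def myega_log_eta(T, Tg, m, A=-2.9):
    """log10 viscosity (Pa s) from the MYEGA model, with eta(Tg) = 10^12 Pa s.

    T and Tg in kelvin; m is the fragility (steepness) index; A = log10(eta_inf).
    """
    x = Tg / T
    return A + (12.0 - A) * x * math.exp((m / (12.0 - A) - 1.0) * (x - 1.0))

Tg, m = 1000.0, 35.0          # illustrative values, not from Table 1
print(myega_log_eta(Tg, Tg, m))    # ~12 by construction (eta(Tg) = 10^12 Pa s)
print(myega_log_eta(1.0e7, Tg, m)) # approaches A = -2.9 as T -> infinity
```

The fixed-A fit then adjusts only T_g and m, which is why the curvature near T_g (and hence m) must absorb any Arrhenian character of the data.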
Values for T_g (Table 1 and Figure 3b) are very similar to the fits with free A, since T_g is generally well constrained by measurements in its vicinity. The m values for fixed A = −2.9 in the interval SM < 10 are systematically larger. This is readily rationalized by reversing the argument given in Section 4.2 that an Arrhenian behavior of η leads to small A: with A = −2.9 constrained, the fit is forced to become more non-Arrhenian, increasing the curvature near T_g. For 10 < SM < 20, the majority of m values associated with fixed A are larger, but the deviation is less pronounced. In the interval SM > 20, deviations are generally small and not systematic. A notable difference is Sho, for which L data are scarce as discussed in Section 4.1, with m = 35.39 for A = −2.9, compared to m = 25.47 for a fitted A = −9.44. General trends discussed for the MYEGA fit with variable A are preserved for fixed A = −2.9, and become more systematic: T_g decreases with SM, and the fragility m increases with SM. Fixing A leads to a narrower distribution of m and indicates a quasi-linear correlation with SM.
Hydrous Silicate Melts
After fitting the anhydrous viscosity data using the MYEGA model (Figure 2), we explore the H2O-dependent model of Equations 9–12 for the 26 samples with hydrous data (Table 2). As examples, we show in Figure 4 two compositions that are also highlighted in Figures 1 and 3: a basaltic andesite (BasAnd2) (Robert et al., 2013) and a phonolite (Pho1) (Giordano et al., 2009). Our model describes the measurements for BasAnd2 by Robert et al. (2013) significantly better than the literature model (Figure 4a), with the exception of the two L falling-sphere data, which is most clearly visible for 12 mol% H2O. In addition, our model shows a tendency toward larger curvature in log η versus 1/T (stronger non-Arrhenian behavior, larger m). For Pho1 (Figure 4d), the data are well described by both our fit and the model used in Giordano et al. (2009), with the exception of the highest H2O content (14.39 mol%), which neither of the models matches. Pho1 has a high alkali content (Figure 1 and Table 2). The steepness parameter m deviates between our model and the literature fits (Figure 4) for the non-Arrhenian melt BasAnd2, which is already apparent in the fits themselves. The m reported by Robert et al. (2013) is slightly higher than the value calculated here, and their m shows a steeper decrease with H2O, resulting in an increasing deviation between the two models. For Pho1, our model formulation leads to lower values of m with H2O compared to the fit by Giordano et al. (2009). The initial decrease is more pronounced than for BasAnd2. This behavior reflects that BasAnd2 has a lower degree of polymerization, with SM = 29.3 and x_SiO2 = 56.9 mol% (Table 2), an effect that is not clearly visible in the models from the literature.
In some cases, illustrated by Pho1 for our model (Figure 4f) but also apparent in some trends from the literature, m extrapolates to negative values at high H2O content, which constitutes unphysical behavior. Such behavior should serve as a warning against extrapolating models of melt viscosity far beyond the H2O contents actually measured in the experiments used for fitting. Figure 5 shows a comparison of our fit calculation, with RMSE = 0.17, against the measured viscosities, as well as the predictions of three general chemical viscosity models for these compositions (Duan, 2014; Giordano et al., 2008; Hui & Zhang, 2007). The model by Duan (2014) is the only viscosity model that accounts for the pressure effect on melt viscosity, which we fixed to 1 bar. This model also requires the partitioning of the total iron into FeO and Fe2O3. Here, for the melts for which iron partitioning was not provided, we assigned 1/2 of the total iron (always given as FeOtot) as FeO and 1.11/2 as Fe2O3. The RMSE across all calculations is 1.95, while the models by Giordano et al. (2008) and Hui and Zhang (2007) have RMSE values of 0.74 and 0.69, respectively. Table 2 documents the RMSE values for all three general chemical models and the literature models for the individual compositions. Compared to the latter, our model performs with comparable or better quality (Figure S5). However, previously published models differ in their formulations of the H2O dependence, while we use the same model for all melts (Equations 9–12). In the Supporting Information we provide an Excel file to calculate viscosities for the melts referenced here.
Parameters c and d in Equation 9 obtained for six samples (Rhy8, Dac2, Tra3, Pho4, Pho5, Pho6) show strong deviations from the other values (on the order of 19 for c and 24 for d, Table 2). This leads to unphysical extrapolations of T_g and, via Equation 12, of m, that is, to an increase of T_g with H2O content (Figure S4). Nevertheless, our model accurately reproduces the measured data, with RMSE = 0.09−0.35 for these six compositions. The anomalous behavior of T_g and m with H2O appears to result from minimizing the residuals during the fit process. The unphysical extrapolation behavior serves as a reminder not to extrapolate our model, like any other model, far beyond the experimental H2O range.
Using DSC for Modeling Melt Viscosity
During viscometry experiments in the H regime, volcanic melts can be subject to nanostructural modification (i.e., crystallization and demixing) (Di Genova, Zandona, & Deubener, 2020), and DSC measurements provide an alternative route to obtain η data (e.g., Stabile et al., 2021). DSC measurements require a few mg of glass, which is exposed to temperatures around T_g for a few minutes only (Di Genova, Zandona, & Deubener, 2020; Stabile et al., 2021; Zheng et al., 2019). This is in stark contrast to experiments using micropenetration and parallel-plate techniques, which require large, double-polished samples (ideally with a thickness of 3 mm) and expose the melt to temperatures around T_g for significantly longer periods of time (Douglas et al., 1965), which can lead to severe chemical and textural changes in anhydrous and hydrous samples (Bouhifd et al., 2004; Di Genova, Zandona, & Deubener, 2020; Liebske et al., 2003; Richet et al., 1996). However, only temperatures around T_g can be probed using DSC, leaving the L range unexplored and complicating fitting. In Sections 4.2 and 4.3, we explored the role of A for the model and found that using A = −2.9 (Zheng et al., 2011), which constrains the high-T behavior, provides a systematic and good description of melt viscosity in the L range. Using A = −2.9 in the MYEGA fit and applying our description of the H2O dependence to DSC-derived η can therefore provide an alternative route to attain high-quality and reliable predictions.
Diopside: A Test Case
We test this approach on DSC-based data for a diopside melt (Di), an Fe-free system that is a good proxy for volcanic melts, is not prone to crystallization around T_g, and for which a large number of viscometry data exist. T_onset marks the sudden drop in heat flow measured in DSC, and T_peak corresponds to the (endothermic) minimum of the heat-flow undershoot of the glass transformation interval. T_onset and T_peak were measured at five heating rates, leading to 10 data points. We use the approach of Scherer (1984), where K is the chemically independent parallel shift factor and q_c,h the heating rate in K s^−1 for T_onset/T_peak (Di Genova, Zandona, & Deubener, 2020).
Here we fit both the DSC-based η values, that is, 10 data points with 10^9 ≤ η ≤ 10^12 Pa s (Figure 6), and the viscometric measurements compiled by Al-Mukadam et al. (2020), using the MYEGA expression (Equation 7) and assuming A = −2.9 (Zheng et al., 2011). Our fit and that by Al-Mukadam et al. (2020), which leaves A free, to the viscometry data show good agreement overall. The deviation at high T stems from the differing values of A. The MYEGA model based on DSC-derived viscosities (at H) predicts the L viscometry data well. Our approach shows that a predictive extrapolation from the H regime over more than 10 orders of magnitude is reliably possible, spanning the entire range relevant to volcanic eruptions.
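The idea of fitting with A fixed can be sketched numerically: generate synthetic viscosity points in the DSC-accessible window (roughly 10^9 to 10^12 Pa s) from known parameters and recover T_g and m by a coarse grid search. The data and parameter values below are made up for illustration; they are not the Di measurements, and a real fit would use a proper least-squares routine.

```python
import math

A = -2.9  # fixed log10 eta_inf (Zheng et al., 2011)

def log_eta(T, Tg, m):
    """Constrained MYEGA: log10 viscosity with eta(Tg) = 10^12 Pa s."""
    x = Tg / T
    return A + (12.0 - A) * x * math.exp((m / (12.0 - A) - 1.0) * (x - 1.0))

# Synthetic "DSC-like" data from known parameters (hypothetical Tg = 1000 K, m = 40),
# standing in for onset/peak-derived points near the glass transition.
true_Tg, true_m = 1000.0, 40.0
data = [(T, log_eta(T, true_Tg, true_m)) for T in (990.0, 1000.0, 1010.0, 1020.0, 1035.0)]

def rmse(Tg, m):
    return math.sqrt(sum((log_eta(T, Tg, m) - y) ** 2 for T, y in data) / len(data))

# Coarse grid search over (Tg, m).
best = min(((rmse(Tg, m), Tg, m)
            for Tg in range(950, 1051)
            for m in range(20, 61)), key=lambda r: r[0])
print(best)  # -> (0.0, 1000, 40): noiseless data, exact recovery
```

With A constrained, only two parameters remain, which is what makes the extrapolation from the narrow DSC window to the L range well posed.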
Predicting Viscosities Using DSC
After testing this fitting approach on Di, we move to natural melts with fewer DSC data points and more complex oxide chemistry, which can lead to nanocrystallization even in the DSC experiments (Di Genova, Zandona, & Deubener, 2020). We compare the results from the fit to DSC-derived η data with models that are based on viscometry measurements on melts of the same eruptions (Table 3). Our results for this set of examples indicate that hydrous DSC-derived η can be used to calibrate the model developed here (Equation 7 with A = −2.9 and Equations 9–12). Viscosity values at different H2O concentrations can not only be described well, but also accurately predicted (Figure 7). The resulting η at eruptive T are well behaved with H2O for all DSC-derived models. However, to fully validate this approach and to comprehensively explain the deviations between viscometry- and DSC-derived models, more DSC and viscometry measurements carried out on samples of equivalent compositions are necessary. As we have pointed out explicitly for Bas1, the formation of nanostructures appears to affect not only viscometry measurements, but also DSC experiments, albeit to a much smaller extent. Careful analysis of samples after the experiments, for example by Raman spectroscopy or TEM, is necessary to check for the formation of nanostructures (Di Genova, Zandona, & Deubener, 2020).
Conclusions
We present a new approach to fit the temperature and water dependence of viscosity for volcanic melts. It is based on a combination of the physically motivated MYEGA model (Mauro et al., 2009) (Equation 7) for an isochemical fit to anhydrous data and a two-component model (Schneider et al., 1997) to describe the influence of water. In the MYEGA model, the fitting parameters are the viscosity at infinite T (A = log10 η∞), the glass transition temperature T_g, and the steepness factor m. In the two-component model, we formulate a dependence of T_g only between the endmembers of the anhydrous melt composition and that of water (Equations 9 and 10). For the dependence of m on water content, we derive an analytical expression. For the anhydrous data sets (Table 1), we show that the MYEGA model describes the data comparably to, or better than, the more commonly used VFT fit. We further explore the performance of the MYEGA model by assuming a global constant value of A = −2.9 (Zheng et al., 2011); naturally, the misfit to the data increases, but the fits remain good overall. We also find that highly polymerized Arrhenian melts tend to yield smaller values of A due to the experimental inaccessibility of higher-T measurements for these types of melts. For 26 data sets with both anhydrous and hydrous measurements, we apply the MYEGA model in combination with the H2O-dependent description of T_g. We find that our model performs with comparable or better quality than various differing literature models (Table 2).

(Figure caption excerpt: H2O dependence of differential scanning calorimetry (DSC)- and viscometry-derived models at eruptive T: 945 °C for Tra3, 1225 °C for Bas1, 900 °C for Lat, and 750 °C for Rhy14. Water content is given in mol%. The black ticks are set at 1 wt% intervals.)

An Excel file to calculate viscosities of all melts considered here using our model is provided as Supporting Information.
We further investigate and fit viscosities derived from DSC, an attractive experimental approach that avoids or reduces nanocrystallization and demixing of samples during the measurements compared to viscometric methods. The lack of low-viscosity data, due to DSC only probing temperatures around T_g, is compensated by using a constrained A = −2.9. For a small set of five examples (Table 3), we illustrate that such a fit extrapolates well to high T when compared to viscometry measurements. We apply the H2O-dependent model with A = −2.9 to hydrous DSC-derived viscosities and find the model to show good fitting and predictive capabilities. Investigating these models at eruptive T also shows well-behaved functions; viscosities monotonically decrease with H2O content. This underlines the viability of determining η with DSC.
Since nanostructures have been shown to significantly influence the viscosity of volcanic melts (Di Genova, Brooker, et al., 2020; Di Genova, Kolzenburg, et al., 2017; Di Genova, Zandona, & Deubener, 2020), understanding and quantifying their impact on magma transport is an important task in physical volcanology. The characterization of samples exposed to DSC and viscometry measurements by Raman spectroscopy and transmission electron microscopy gives insight into the structural and textural impact of nanostructures. In combination with fitting the DSC-derived viscosities with A = −2.9 as well as the viscometric measurements, this opens up the possibility to quantify the impact of nanostructure formation on the viscosity of volcanic melts. This in turn may improve our understanding of the eruptive dynamics of volcanoes.
Data Availability Statement
Data can be found in the cited references (Tables 1 and 3). An Excel file to compute viscosities with our model using the fitting parameters of Table 2 is supplied as Supporting Information.
Direct Data-based Decision Making under Uncertainty
In a typical one-period model of decision making under uncertainty, unknown consequences are modeled as random variables. However, accurately estimating the probability distributions of the involved random variables from historical data is rarely possible. As a result, decisions made may be suboptimal or even unacceptable in the future. Also, an agent may not view data observed at different time moments as equally valuable.
Introduction
A typical process of decision making under uncertainty is as follows:

data → uncertainty modeling → risk preference modeling → choice/decision (1)

Let X be a set of available (feasible) actions. Scheme (1) can be formally stated as: (i) modeling unknown consequences of every action X ∈ X as a random variable (r.v.) R(X), (ii) establishing a numerical representation U : R → R for the agent's preference relation, defined on a space R of all r.v.'s, and (iii) finding the best action by maximizing U with respect to X ∈ X:

max_{X∈X} U(R(X)). (2)

What an agent has readily available is only historical/experimental data and his/her preferences towards risk and reward. The rest is statistical inference from the data about the corresponding uncertain outcomes, based on various assumptions that largely depend on the nature of the data. For example, measurements of the length of some object can be reliably assumed to be realizations of independent and identically distributed (i.i.d.) r.v.'s; the timing of those measurements can be safely ignored. By the central limit theorem (CLT), the average of a large number of such measurements is approximately normal. Financial data, however, defy such standard assumptions:

(a) The distributions of rates of return of financial assets are typically non-symmetric, with left tails being much heavier than right tails [50].
(b) Increments of actual price processes are not stationary, and consequently, Lévy processes cannot be calibrated with real data [36].

(c) "Periods of lower returns are systematically followed by compensating periods of higher returns" [51] (the "mean reversion" phenomenon), evidence that price increments are not independent.
In fact, the above issues with stochastic processes can be "fixed" by time-series models. For example, autoregressive models AR(p) assume that an asset's rate of return depends on the p previous ones, moving-average models MA(q) involve the last q values of a stochastic error, autoregressive moving-average models ARMA(p, q) generalize AR(p) and MA(q), whereas ARIMA models generalize ARMA(p, q) and are suitable to describe a wide range of non-stationary processes [8]. However, any time-series model is merely another inference from the historical data, and its parameters are subject to estimation errors.
The discrepancy between a real-life phenomenon and its model is called model error. In contrast to approximation error, which can be resolved by simply increasing the sample size, model error implies that an increase in observations of asset rates does not directly translate into accuracy/precision in the estimation of the probability distributions of the rates. There are various existing approaches that address model uncertainty. For example, bootstrapping [6] generates different scenarios for the variable of interest from a given time series, robust optimization [4] assumes that the probabilities in question belong to certain intervals, whereas the dual characterization of risk and deviation measures [2, 44] relies on risk envelopes, which can be viewed as sets of distortions of an underlying probability measure, see [32]. Notably, Pflug et al. [42] showed that the naive 1/n investment strategy could be optimal in portfolio selection when model uncertainty is high. Savage [49] suggested to study decisions as functions from some state space Ω to a set of outcomes Y ⊂ R, which are now known as Savage acts. This approach involves no probability measure on Ω, a critical feature that gave rise to various Savage-act versions of the expected utility theory (EUT) [11, 29]. For example, Gilboa and Schmeidler [21] proposed to study preference relations over acts, i.e., "functions from states of nature into finite-support distributions over a set of deterministic outcomes." In this case, the agent ends up with the same optimization problem (2), where R is a functional from X to the set A of all acts, and U : A → R is a numerical representation of Gilboa & Schmeidler's preference relation. Of course, the list of existing approaches goes far beyond these examples; see, e.g., [3, 10, 13, 55] for alternative approaches and [38, 20] for recent surveys.
However, accurately modeling the outcomes of real-life actions in the context of any of these theories is difficult. For example, modeling financial portfolio returns in terms of Gilboa–Schmeidler acts [21] includes forecasting a set of finite-support distributions, and could therefore, in fact, be harder than modeling in terms of r.v.'s. The main problem with uncertainty modeling is that, contemplating a choice among several alternatives, an agent ponders which alternative he/she would benefit from most in the future, while the only available information is often the data representing their historical performance in the past.
In view of the failure of common statistical assumptions in application to the stock market [50, 36, 51], and in view of the sensitivity of optimal decisions (portfolios) to errors in the estimation of probability distributions of financial assets [28, 30], this work aims to identify intertemporal principles for comparing historical time series of asset rates of return and to develop an axiomatic framework for rational decision making in portfolio theory on the space of historical time series. For example, an agent may postulate that if A always outperformed B in the past, then A ≻ B, even though better past performance does not guarantee better future performance.
The idea of making decisions based directly on historical data is not new,2 but it has received relatively little attention in the economic and financial literature. Gilboa and Schmeidler [22, 23] introduced a case-based decision theory, which makes decisions based on past experience in similar situations.3 In a financial market setting, this theory would identify the moment in the past when the market behavior was most similar to the current one and would prescribe investing all money into the financial asset that had the highest rate of return in that "similar" situation. However, it is not clear what "similarity measure" to use, and the resulting investment strategy may contradict the diversification principle. There are other objections to the use of direct data-based decision making in portfolio selection: (i) Information such as recent market trends and news about particular companies may provide valuable insights for selecting a financial portfolio. (ii) The future may have little in common with the past, for instance, due to unique events such as BREXIT. (iii) New financial assets lack historical data, but it is unlikely that agents would view, say, a new bank and a startup IT company similarly.
However, incorporating news and other non-quantitative information, e.g. a recent hire of a highly regarded CEO, into a mathematical model requires human participation and is, therefore, expensive and slow. In contrast, calibrating stochastic models based only on historical data can be fully automated and performed in milliseconds, which is particularly valuable for high-frequency trading. Thus, if the choice of an optimal portfolio is based on some uncertainty modeling, which in turn uses historical data only, then the uncertainty modeling stage could be omitted, and decisions could be made based on the data directly.
The contribution and organization of this work are as follows. Section 2 introduces the notion of a time profile and discusses numerical representation of time series. Section 3 introduces intertemporal principles of rational choice. Section 4 reinterprets the mean-variance and maxmin utility analyses in the context of direct data-based decision making. Section 5 concludes the work. Appendix A contains proofs of the key results in Section 3, and Appendix B provides an axiomatic foundation for a data-based analogue of the EUT.
Time Profiles and Numerical Representation of Time Series
Let T = {s_1, . . ., s_T} be a finite set of discrete time moments s_1 < · · · < s_T in the past, and let x_1, . . ., x_T be the corresponding rates of return of some financial asset. Since x_1, . . ., x_T encode a time structure and are not realizations of i.i.d. r.v.'s, the agent would be unlikely to view x_1, . . ., x_T as equally valuable data and may assign them corresponding weights q_1, . . ., q_T of historical data "depreciation," to be collectively referred to as a time profile Q. For example, the agent may postulate that the fraction q_t/q_{t+1} is a constant q ∈ (0, 1] independent of t, which implies that

q_t = q_T q^{T−t}, t = 1, . . ., T. (3)

Alternatively, q_1, . . ., q_T can be chosen proportional to the (normalized) autocorrelation profile of the asset: if for some time periods (usually far in the past) the autocorrelation vanishes, then those past values play little role in predicting the asset's behavior. Suppose, for example, the FTSE 100 index is such an asset. Figure 1 depicts the sample autocorrelation function (ACF) of daily prices of the index from 1-April-2015 to 1-April-2016 with the lag up to 80, taken from [1]. For a lag longer than 80 days, the autocorrelation is negligible, so that T = 80, and the weights q_1, . . ., q_80 are then proportional to the ACF in Figure 1 and satisfy ∑_{t=1}^T q_t = 1. Behavioral evidence supporting the notion of time profile includes, but is not limited to, the following:

(a) The effect of fading memory and emotions [18, Part 6]: an individual is much more likely to rely on and act upon recent experience rather than that which occurred far in the past.

(b) A reversion of the behavioral time discounting principle stating that "money available at the present time is worth more than the same amount in the future" [18, Part 3].

(c) Only 21% of agents agree that historical data should be equally weighted [15].
Technical arguments in favor of time profiles include:

(a) In time-series analysis, (4) is known as the weighted moving average, in which more weight is often given to the most recent data. In particular, (3) gives the weights used in exponential smoothing [9].

(b) The ACF decreases with time and almost vanishes after 80 days (see Figure 1).

(c) In mean-variance portfolio selection, optimal portfolios with time profiles based on the geometric progression (3) with various q outperform the optimal portfolio in which asset rates of return are modeled by an ARIMA time-series model, to be discussed in Example 9 (Figure 5).
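The geometric time profile (3), with a constant ratio q_t/q_{t+1} = q and weights normalized to sum to one, can be sketched as:

```python
def geometric_profile(T, q):
    """Time profile with q_t proportional to q**(T - t), q in (0, 1], summing to 1.

    q = 1 recovers equal weighting; smaller q discounts older data more heavily.
    """
    raw = [q ** (T - t) for t in range(1, T + 1)]
    s = sum(raw)
    return [w / s for w in raw]

Q = geometric_profile(5, 0.8)
print(Q)          # nondecreasing weights; the most recent moment t = T gets the largest
print(sum(Q))     # 1.0 up to rounding
```

The ACF-based profile from the FTSE 100 example would be built the same way, with the raw weights replaced by the sample autocorrelation values before normalization.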
For a time series X = (x_1, . . ., x_T) and a time profile Q = (q_1, . . ., q_T), the weighted average and the mean-square deviation of x_1, . . ., x_T are defined by

E_Q[X] = ∑_{t=1}^T q_t x_t   and   σ_Q(X) = (∑_{t=1}^T q_t (x_t − E_Q[X])^2)^{1/2},

respectively. Here, E_Q[X] and σ_Q(X) are not assumed to be estimates of the future expected value and standard deviation; they are just the weighted average and standard deviation of the time series X. For two time series X = (x_1, . . ., x_T) and Y = (y_1, . . ., y_T) and a time profile Q = (q_1, . . ., q_T), the covariance is defined by

cov_Q(X, Y) = ∑_{t=1}^T q_t (x_t − E_Q[X])(y_t − E_Q[Y]).

In fact, the agent may contemplate a whole set Q of various time profiles. First, the agent may postulate that the q_t are nonnegative, impose the normalization ∑_{t=1}^T q_t = 1, and define the set of time profiles to be

Q = Q_max = {(q_1, . . ., q_T) ∈ R^T | q_1 ≥ 0, . . ., q_T ≥ 0, ∑_{t=1}^T q_t = 1}. (7)

Next, the agent may assume that q_1 ≤ . . . ≤ q_T for every Q ∈ Q: more recent data is more valuable. A maximal "time averse" subset Q ⊂ Q_max is given by

Q = {(q_1, . . ., q_T) ∈ R^T | 0 ≤ q_1 ≤ . . . ≤ q_T, ∑_{t=1}^T q_t = 1}.
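The weighted average, mean-square deviation, and covariance defined above can be sketched directly (a minimal illustration; the profile and returns are made-up numbers):

```python
import math

def E(Q, X):
    """Weighted average E_Q[X] = sum_t q_t x_t."""
    return sum(q * x for q, x in zip(Q, X))

def sigma(Q, X):
    """Weighted mean-square deviation sigma_Q(X)."""
    m = E(Q, X)
    return math.sqrt(sum(q * (x - m) ** 2 for q, x in zip(Q, X)))

def cov(Q, X, Y):
    """Weighted covariance of two time series under the same profile Q."""
    mx, my = E(Q, X), E(Q, Y)
    return sum(q * (x - mx) * (y - my) for q, (x, y) in zip(Q, zip(X, Y)))

Q = [0.1, 0.2, 0.3, 0.4]          # more weight on recent observations
X = [0.01, -0.02, 0.03, 0.02]
print(E(Q, X))
print(sigma(Q, X))
print(cov(Q, X, X) - sigma(Q, X) ** 2)   # ~0, since cov_Q(X, X) = sigma_Q(X)^2
```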
With a chosen time profile set Q, the agent may define the utility of the time series X = (x_1, . . ., x_T) by

U(X) = min_{Q∈Q} E_Q[X] = min_{Q∈Q} ∑_{t=1}^T q_t x_t, (11)

and then can use (11) for comparing different time series. In fact, (11) is a data-based analogue of Gilboa & Schmeidler's maxmin model [21].
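As an illustration of (11), consider the maximal time-averse set of nondecreasing profiles. Since the objective is linear in Q, the minimum is attained at an extreme point of this set, and the extreme points are uniform weights on a suffix of the most recent observations (a standard fact about the monotone simplex, used here as an assumption), so the utility reduces to the worst average over the last k observations:

```python
def maxmin_utility(X):
    """U(X) = min over time-averse profiles (0 <= q_1 <= ... <= q_T, sum 1)
    of the weighted average <Q, X>.

    Assumed here: the extreme points of this profile set put uniform weight
    1/k on the last k moments, so U(X) is the smallest suffix average.
    """
    T = len(X)
    return min(sum(X[T - k:]) / k for k in range(1, T + 1))

X = [0.05, -0.01, 0.02, -0.03]
print(maxmin_utility(X))   # the worst suffix average (here the last observation)
```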
Example 2 (data-based version of the drawdown measure) Let X = (x_1, . . ., x_T) be a historical time series of the rate of return of some financial asset, and let x̄_t = ∑_{j=1}^t x_j be the uncompounded cumulative rate of return over the period [1, t]. The drawdown of X can be defined by ξ_t = max_{1≤k≤t} x̄_k − x̄_t [14, 56]. Then the maximum drawdown max_{1≤t≤T} ξ_t can be represented in the form of (11) for an appropriate choice of the time profile set. Also, the average of the k largest drawdowns can be defined as

DD_α(X) = (1/k) ∑_{t=1}^k ξ_(t),

where (ξ_(1), . . ., ξ_(T)) is a permutation of (ξ_1, . . ., ξ_T) such that ξ_(1) ≥ . . . ≥ ξ_(T) [14, 56]. The functional U(X) = −DD_α(X) is a particular case of (11). Remarkably, the time-series definition (12) of drawdown, which is a dynamic measure, is as simple as the time-series analogue of a one-period risk measure, e.g. CVaR.
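The quantities in Example 2 can be sketched as follows (the return series is made up for illustration):

```python
def drawdowns(X):
    """xi_t = max_{k<=t} xbar_k - xbar_t, with xbar the cumulative sums of X."""
    cum, peak, xi = 0.0, float("-inf"), []
    for x in X:
        cum += x
        peak = max(peak, cum)
        xi.append(peak - cum)
    return xi

def max_drawdown(X):
    return max(drawdowns(X))

def avg_k_largest_drawdowns(X, k):
    """Average of the k largest drawdowns (a discrete CDaR-style measure)."""
    return sum(sorted(drawdowns(X), reverse=True)[:k]) / k

X = [0.02, -0.03, 0.01, -0.02, 0.04]
print(drawdowns(X))
print(max_drawdown(X))                # ~0.04 for this series
print(avg_k_largest_drawdowns(X, 2)) # ~0.035
```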
Thus, while representation (11) is known, the time profile Q here has the meaning of historical data "depreciation."See Appendix B for a nonlinear generalization of (11).
Let Q* = (q*_1, . . ., q*_T) ∈ Q_max with at least three of q*_1, . . ., q*_T being non-zero. The utility of the time series X = (x_1, . . ., x_T) can also be measured by the mean-standard deviation functional

U(X) = V(E_{Q*}[X], σ_{Q*}(X)), (13)

with a continuous function V strictly increasing in the first argument and strictly decreasing in the second one.
Example 3 (data-based version of the mean-standard deviation utility)
The mean-standard deviation utility is defined by V(m, σ) = m − λσ with a specified "level of risk aversion" λ > 0 [25, Example 6]. With this V, (13) takes the form

U(X) = E_{Q*}[X] − λ σ_{Q*}(X). (14)

Note that (14) is a particular case of (11) [46, Example 1].

Example 4 (data-based version of the mean-variance analysis) With a specified threshold µ on E_{Q*}[X], a data-based analogue of the mean-variance analysis corresponds to the utility functional (15). In contrast to the existing decision theories, the proposed direct data-based approach does not try to make any statistical inference from the historical data, but rather incorporates the agent's perception of the historical data into the decision process through the time profiles, e.g. as in (11) and (13)–(15), and the goal of this work is to identify intertemporal principles of rational choice for constructing time profile sets. An axiomatic framework for intertemporal principles is laid out in §3, and then the direct data-based decision making approach is demonstrated in portfolio optimization with (11) (§4.1) and in mean-variance portfolio selection (§4.2).
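Example 3's functional (14) in code form, reusing the weighted statistics of §2 (the profile, returns, and λ below are illustrative numbers):

```python
import math

def mean_std_utility(Q, X, lam):
    """U(X) = E_Q[X] - lam * sigma_Q(X): data-based mean-standard deviation utility."""
    m = sum(q * x for q, x in zip(Q, X))
    s = math.sqrt(sum(q * (x - m) ** 2 for q, x in zip(Q, X)))
    return m - lam * s

Q = [0.2, 0.3, 0.5]
X = [0.01, 0.02, 0.015]
print(mean_std_utility(Q, X, 1.0))
print(mean_std_utility(Q, X, 0.0))   # lam = 0 recovers the plain weighted average
```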
Intertemporal Principles of Rational Choice
This section discusses intertemporal principles of rational choice for constructing the time profile sets introduced in §2. For any asset A, let x(t), t ∈ T, denote its historical excess rate of return over the risk-free rate (16).
Let X be a portfolio consisting of risky assets A_1, . . ., A_n with portfolio weights (α_1, . . ., α_n) ∈ R^n (short selling is allowed) and of a risk-free asset A_0 with the weight α_0 = 1 − ∑_{i=1}^n α_i, where x_i(t) is defined by (16) for asset A_i, and where I and J are the sets of indices i such that x_i(t) = k and x_i(t) ≠ k, respectively.
Thus, the portfolio X corresponds to a function x : T → F, where F = {a + bk | a ∈ R, b ≥ 0} is a real vector space 5 with addition and multiplication by a constant defined in the natural (componentwise) way. The set X of all possible portfolios is identified with the set F^T of all vectors (x_1, . . ., x_T), where x_t = x(s_t), t = 1, . . ., T, or, equivalently, X ⊂ R^{2T} is the set of vectors (a_1, b_1, a_2, b_2, . . ., a_T, b_T) with b_t ≥ 0, t = 1, . . ., T, where x_t = a_t + b_t k, t = 1, . . ., T.
Let ⪰ be a preference relation on X: X ≻ Y if X is strictly preferred to Y, X ∼ Y if the agent is indifferent between X and Y, and X ⪰ Y if either X ≻ Y or X ∼ Y. One of the fundamental principles of rational choice is that ⪰ forms a complete weak order on X.
Axiom 1 (complete weak order) ⪰ is complete and transitive:
(i) X ⪰ Y or Y ⪰ X for all X, Y ∈ X (completeness).
(ii) X ⪰ Y and Y ⪰ Z imply that X ⪰ Z for all X, Y, Z ∈ X (transitivity).
Axiom 1(i) asserts that decision making is based solely on historical data and no other information is available to the agent (an important and non-trivial assumption). Under a mild technical assumption,6 Axiom 1 implies that ⪰ admits a numerical representation U : X → R such that X ⪰ Y ⇐⇒ U(X) ≥ U(Y); see Theorem 2.6 in [19].
The set X = F^T is a metric space with a distance ρ(X, Y) defined componentwise for X = (x_1, …, x_T) and Y = (y_1, …, y_T). We can then define open and closed subsets of X with respect to the topology induced by this metric.
Axiom 2 states that ⪰ does not change when X and Y are slightly perturbed. With axiom 2, Theorem 2.15 in [19] implies that a numerical representation U for ⪰ can be chosen as a continuous function on X.
It is well known that rational agents diversify their portfolios rather than "keep all eggs in one basket." A numerical representation U of ⪰ that follows this principle is a quasi-concave function: U(αX + (1 − α)Y) ≥ min{U(X), U(Y)}. See [12] for a recent study of quasi-concave utility functions. A trivial sufficient condition for axiom 3 to hold is the existence of a concave numerical representation U of ⪰, i.e., U(αX + (1 − α)Y) ≥ αU(X) + (1 − α)U(Y).

Often investment decisions are made in two steps: (i) decide on the portion α of the capital to be invested into risky assets (and keep the remaining money in a savings account), and (ii) select a risky portfolio for the portion α. The next axiom states that the choice of the risky portfolio in step (ii) does not depend on α.
Proposition 1 Let X⁺ = {X | X ⪰ X_0}, where X_0 denotes investment into the risk-free asset only. ⪰ satisfies axioms 1-4 on X⁺ if and only if it has a continuous numerical representation U on X⁺ satisfying (i) concavity and (ii) positive homogeneity: U(αX) = αU(X) for every X ∈ X⁺ and α ≥ 0.
Proof See Appendix A. □

For the agent who considers only portfolios strictly preferable to the risk-free investment X_0, the restriction to the set X⁺ in Proposition 1 is inessential. Suppose U satisfies (i) and (ii) on the whole set X. Then it can be represented by (17), where X = (x_1, …, x_T) and Q is a set of vectors Q = (q_1, q̄_1, q_2, q̄_2, …, q_T, q̄_T), which can be chosen convex, closed, and bounded. The Q ∈ Q at which the minimum in (17) is attained will be called identifiers of X.
If Q = {Q} is a singleton, (17) simplifies to U(X) = ⟨Q, X⟩. In general, q_t (or q̄_t) is interpreted as the weight/importance that the agent assigns to the historical data (or the absence of data) at time t ∈ T. The agent may consider a set Q of possible weights and may select Q ∈ Q such that ⟨Q, X⟩ is the worst possible: this is an interpretation of (17). Portfolio optimization with (17) is then a maxmin problem over the set of all feasible portfolios, which motivates calling (17) the maxmin utility theory (MMUT).
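The worst-case evaluation in (17) can be sketched for a finite set of time profiles as follows; this is a simplified sketch that ignores the missing-data components q̄_t, and all names are illustrative:

```python
import numpy as np

def maxmin_utility(x, profiles):
    """U(X) = min over Q in the profile set of <Q, X>: the history X is
    evaluated under the least favourable time profile (finite-set sketch)."""
    x = np.asarray(x, dtype=float)
    return min(float(np.dot(np.asarray(q, dtype=float), x)) for q in profiles)

# Two candidate profiles; the agent assumes the worse of the two evaluations.
profiles = [[0.2, 0.8], [0.5, 0.5]]
print(maxmin_utility([0.04, -0.01], profiles))
```

Because U is a minimum of linear functions of X, it is concave and positively homogeneous, in line with Proposition 1.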
Some agents prefer investing into portfolios which performed well in the past, while avoiding assets with poor or unknown performance, e.g. those which have just appeared on the market. This intuition is formalized in the following axiom.
then X ⪰ Y. In other words, if the proportion of money invested into assets with no historical data in X is less than that in Y, and if collectively the assets with known data in X always outperformed those in Y, then X ⪰ Y.
If no data is missing and the agent uses direct historical simulation for forecasting, axiom 5 is equivalent to the monotonicity axiom for r.v.'s.
However, the financial interpretations of axiom 5 and (18) are completely different. With simple historical simulation, the agent interprets the inequalities x_1 ≥ y_1, …, x_T ≥ y_T as evidence that a portfolio associated with X = (x_1, …, x_T) will outperform the one associated with Y = (y_1, …, y_T) in the future with probability 1 and, thus, strongly prefers X to Y. This again shows that historical data can be easily misinterpreted (mis-modeled), which could result in poor decisions. In contrast, axiom 5 does not imply that the portfolio associated with X will outperform the one associated with Y in the future; it merely states historical facts.
In fact, when forecasting uses methods other than direct historical simulation, axiom 5 differs from (18), as the following example demonstrates.
Example 5 In a "Gaussian world," the future excess rate of return of a portfolio with past excess rates of return X = (x_1, …, x_T) can be modeled by a normally distributed r.v. R(X) with mean µ_X and variance σ²_X estimated from X. Then (18) simplifies to a condition which neither implies axiom 5 nor follows from it.
Axiom 5 assumes that X outperformed Y at every single time moment in the past. Some agents, however, may consider cumulative past performance.
Axiom 6 (time aversion)
then X ⪰ Y.
Proposition 2 Let (17) be a numerical representation of ⪰ on X. The following statements are equivalent.
Proof See Appendix A. □

Many models of the financial market assume that observations of an asset's rates of return at different times are independent. Black & Scholes [5] priced options based on the assumption that stock prices follow a Brownian motion, whereas Sato [48] replaced the Brownian motion by a Lévy process, which also assumes price increments (and consequently returns) to be independent. In such models, the discrete historical data x_1, …, x_T can be considered as realizations of independent and identically distributed r.v.'s. In this case, the order of the data is not important, which corresponds to the following no-time-structure principle: X ∼ Y whenever (y_1, …, y_T) is a permutation of (x_1, …, x_T), (22) which implies that X = (a_1 + b_1 k, …, a_T + b_T k) with a_1 ≤ … ≤ a_T (positive trend) and Y = (a_T + b_T k, …, a_1 + b_1 k) with the reverse order of data (negative trend) are equally preferable; this does not seem to be rational behavior.
An axiomatic framework for data-based expected utility theory (36) is presented in Appendix B.
When there is no missing data or when the agent deliberately excludes assets with missing/incomplete data, every portfolio X can be identified with its historical time series (x 1 , . . ., x T ) ∈ R T .In this case, axioms on a preference relation are introduced and studied on X = R T .
• Axiom 1 states that ⪰ defines a complete and transitive weak order on R^T.
• Axiom 2, formulated in the usual topology of R^T, states that the sets {Y ∈ X | Y ⪰ X} and {Y ∈ X | X ⪰ Y} are closed in R^T, and implies the existence of a continuous function U : R^T → R such that X ⪰ Y ⇔ U(X) ≥ U(Y).
• Axiom 3 states that U is a quasi-concave function on R^T.
• Axioms 3 and 4 imply that U is concave and positive homogeneous, hence U admits the form (11).
• Axiom 5 states that if X = (x_1, …, x_T) and Y = (y_1, …, y_T) are such that x_1 ≥ y_1, …, x_T ≥ y_T, then X ⪰ Y. It implies that the q_t in (11) are non-negative.
• Axiom 6 states that X = (x_1, …, x_T) ⪰ Y = (y_1, …, y_T) provided that ∑_{t=τ}^T x_t ≥ ∑_{t=τ}^T y_t, τ = 1, …, T (portfolios with better average recent performance are preferable). It implies that q_1 ≤ … ≤ q_T for every Q ∈ Q in (11).

With which of the introduced axioms is the mean-standard deviation functional (13) consistent? Proposition 3 (13) is a numerical representation U of ⪰ if and only if ⪰ satisfies axioms 1 and 2 and two additional axioms, the first of which is (a) X + C ≻ X for all C > 0. The next propositions address consistency of the mean-standard deviation utility (14) with axiom 5 (monotonicity).
4 Data-based Portfolio Optimization
Maxmin Portfolio Optimization
In the direct data-based decision approach, the historical excess rates of return of the risk-free asset and of n risky assets during the last T time periods are given by X_0 = (0, …, 0), X_1 = (x_11, …, x_1T), …, X_n = (x_n1, …, x_nT), respectively. With no short sales and with U being the utility (11), a portfolio optimization problem is formulated as (24).

Example 6 (single time profile) For a singleton Q = {Q*}, problem (24) simplifies accordingly. In this case, u* = max_i u_i, and an optimal strategy is v_{i*} = 1, v_i = 0 for i ≠ i*, where i* = arg max_i u_i, i.e. investing the whole capital in the "best" asset: no diversification.
Example 7 ("ultimate" risk aversion) For the maximal set Q = Q_max, problem (24) is equivalent to the problem of finding a mixed-strategy Nash equilibrium in a two-player zero-sum game [53] with an (n + 1) × T payoff matrix X having elements x_it. In this case, u* is equal to the value of the game, and the optimal investment strategy v* ∈ V is a solution to a linear program. If Q is an arbitrary closed convex subset of Q_max, the von Neumann minimax theorem [54] implies that u* = max_{v ∈ V} min_{Q ∈ Q} v⊤XQ = min_{Q ∈ Q} max_{v ∈ V} v⊤XQ, and u* can be found from (25).

Example 8 For Q being the maximal "time averse" set (8), or for Q belonging to the one-parameter family (9), problem (25) is a linear program.
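Assuming SciPy is available, the zero-sum-game linear program of Example 7 can be sketched as follows; the encoding of the constraints and all names are illustrative assumptions, not the paper's notation:

```python
import numpy as np
from scipy.optimize import linprog

def maxmin_portfolio(X):
    """"Ultimate" risk aversion: choose weights v >= 0 with sum(v) = 1
    maximizing the worst historical return min_t sum_i v_i x_it.
    X is an (n assets) x (T periods) array of historical excess returns.
    This is the LP form of a mixed strategy in a zero-sum game (a sketch)."""
    n, T = X.shape
    # Decision variables z = (v_1, ..., v_n, u); maximize u <=> minimize -u.
    c = np.r_[np.zeros(n), -1.0]
    # u - sum_i v_i x_it <= 0 for every period t.
    A_ub = np.hstack([-X.T, np.ones((T, 1))])
    b_ub = np.zeros(T)
    # Portfolio weights sum to one; u is unconstrained by the equality.
    A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)]   # v >= 0, u free
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:n], -res.fun                  # weights, value of the game
```

For the "matching pennies" payoff matrix [[1, −1], [−1, 1]] this returns the equal-weight portfolio and a game value of zero, as the minimax theorem predicts.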
The set Q in (11) can also be found by the inverse portfolio approach introduced in [26,27] in terms of r.v.'s. The idea is that the agent recovers Q from the time series X* = (x*_1, …, x*_T) of the rate of return of a portfolio that he/she is relatively satisfied with; such a portfolio should solve (24) with U given by (11). Proposition 7 in [26] implies that the maximal possible (most robust) such Q is given explicitly, provided that the following no-perfect-history assumption holds: there is no time series (x_1, …, x_T) of the rate of return of a feasible portfolio such that x_1 ≥ 0, …, x_T ≥ 0 with at least one inequality being strict. For example, for T ≫ n, there is unlikely to be a portfolio that outperforms the risk-free asset for every time period in the past. Thus, T can be chosen sufficiently large to guarantee that the no-perfect-history assumption holds; otherwise the historical time series can be perceived to be too short for making a reliable decision.
Mean-Variance Portfolio Selection
Suppose there is a risk-free asset with constant rate of return r_0, and there are n risky assets. In the typical approach (1), the rates of return of the risky assets are modeled as r.v.'s r_1, …, r_n on some probability space. With v_i being the fraction of the capital invested into asset i, the rate of return of a portfolio is then ∑_{i=0}^n v_i r_i. The mean-variance portfolio selection problem [39] with desired premium ∆ > 0 over r_0 continues to be a cornerstone of modern portfolio theory: the one-fund theorem, the two-fund theorem, the capital asset pricing model (CAPM), the Sharpe ratio and asset beta all stem from (26), see [34]. This is one of the reasons to consider its data-based analogue (27), where X_i = (x_i1, …, x_iT) ∈ R^T is the time series of the rate of return for risky asset i ∈ {1, …, n}, and the mean, deviation and covariance are defined by (4), (5) and (6), respectively, with a time profile Q = (q_1, …, q_T) ∈ Q_max. Note that (27) is equivalent to max_{v_0, v_1, …, v_n} U(∑_{i=0}^n v_i X_i) subject to ∑_{i=0}^n v_i = 1 with U defined by (15).
The optimal portfolio weights v*_0 and v* are given in closed form, where e = (1, …, 1) is the n-dimensional unit vector, and where X and Λ_Q are matrices with entries x_it, i = 1, …, n, t = 1, …, T, and cov_Q(X_i, X_j), i, j = 1, …, n, respectively. If e⊤v* = 1, which implies that v*_0 = 0 (no investment into the risk-free asset), then the optimal portfolio is called a master fund of positive type (market portfolio) [45] with the weights v_M [57, (8.2.4)] and the time series of the rate of return X_M = v_M⊤X. The optimality conditions for the master fund can be restated as the capital asset pricing model (CAPM) [46].

Problem (26) requires knowing the expected values and the variance-covariance matrix of asset rates of return. If the historical rates of return of each asset are assumed to be realizations of i.i.d. r.v.'s, then (26) is a particular case of (27) with a uniform time profile, and this is what is solved in practice. However, (26) does not distinguish portfolios with different trends but having the same histogram of historical rates over the same period of time. Also, it is well known that (26) is inconsistent with the ordinary monotonicity axiom (18) (see footnote 9). However, in light of the historical data "depreciation," this could be a result of misinterpretation of historical data as a "forecast" for future rates of return. In fact, what is violated is axiom 5 (monotonicity). Indeed, the agent may well believe that assets with good historical performance are now overpriced and may prefer a portfolio with acceptable past performance and least historical volatility.
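The time-profile analogues of the mean and covariance entering (27) and the matrix Λ_Q can be sketched numerically; the helper name and the normalization are illustrative assumptions:

```python
import numpy as np

def profile_mean_cov(X, q):
    """Time-profile mean and covariance for the data-based problem (27):
    E_Q[X_i] = sum_t q_t x_it and
    cov_Q(X_i, X_j) = sum_t q_t (x_it - E_Q[X_i]) (x_jt - E_Q[X_j]).
    X is an n x T array of historical returns, q a time profile."""
    X = np.asarray(X, dtype=float)
    q = np.asarray(q, dtype=float)
    q = q / q.sum()
    m = X @ q                      # profile means, one per asset
    Xc = X - m[:, None]            # centered histories
    cov = (Xc * q) @ Xc.T          # entries cov_Q(X_i, X_j) of Lambda_Q
    return m, cov

# Exponential profile proportional to base**(T - t): recent data weigh more.
T, base = 250, 0.99
q_exp = base ** np.arange(T - 1, -1, -1)
```

With a uniform profile this reduces to the usual sample mean and population covariance matrix; an exponential profile discounts older observations, implementing the historical-data "depreciation" discussed above.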
Example 10 Figure 6 depicts the mean-variance efficient frontier for the Markowitz portfolio problem (27) with the time weights chosen according to the ACF of the FTSE 100 index in Figure 1. We select the n = 70 most actively traded assets from the FTSE 100 index and use the assets' daily rates of return from 1-April-2015 to 1-April-2016. It is assumed that r_0 = 0.01%.

Footnote 9: The mean-variance approach may violate the "ordinary" monotonicity axiom: if R_X is the rate of return of a solution of (26), then there may exist another feasible portfolio with the rate of return R_Y such that P[R_Y ≥ R_X] = 1 and P[R_Y > R_X] > 0, see [25, Example 5]. The best monotone approximation of a mean-variance functional is obtained in [35]. Alternatively, one may obtain a monotone preference relation if the standard deviation in (26) is replaced by a general deviation measure [44,45,46].
Footnote 10: Namely, Apple Inc. (AAPL), Amazon.com Inc (AMZN), Bank of America Corp (BAC), Twenty-First Century Fox Inc Class B (FOX), IBM Common Stock (IBM), The Coca-Cola Co (KO), McDonald's Corporation (MCD), Microsoft Corporation (MSFT), Nike Inc (NKE), and Visa Inc (V).
Footnote 11: Negative portfolio weights correspond to short selling.
Footnote 12: The comparison of trading activity is made based on average trading volume.

Remark 1 The weights Q = (q_1, …, q_T) can be viewed as parameters, and the sensitivity of the optimal value V(Q) = σ_Q(X*(Q)) in (27) with respect to changes in q_1, …, q_T can be assessed.
Conclusions
In typical one-period decision making under uncertainty, the outcomes of feasible actions are modeled as r.v.'s. As a result, optimal decisions depend on the accuracy of estimation of the corresponding probability distributions. Agents who believe that probability distributions of asset rates cannot be reliably estimated are unlikely to use any random-variable-based decision theory. They will also find it hard to apply Gilboa & Schmeidler's case-based decision theory [22,23], which requires a similarity measure for market behavior over different time periods. As an alternative, this work has formulated "intertemporal" principles/axioms for a preference relation on the space of historical time series to facilitate making a rational choice in portfolio selection. It does not suggest dismissing existing decision theories which include uncertainty modeling. Instead, it shows how to adapt them to deal with historical time series. This adaptation, however, is not "mechanical": some of the proposed axioms, e.g. "time aversion," have no direct analogue in the existing theories. Example 5 demonstrates that the same axiom (in this case, monotonicity) may lead to completely different decisions when applied to r.v.'s and to time series. Thus, instead of making statistical inference from the historical data, an agent may incorporate his/her perception of the data through time profiles and make a decision based on the data directly. Figure 5 (Example 9) shows that in mean-variance portfolio selection, the optimal portfolios with the exponential time profiles with various q outperform, over the out-of-sample period (1 May 2016 to 1 May 2017), the optimal portfolio in which asset rates of return are modeled by the ARIMA model. However, no matter what advantage the direct data-based decision making approach demonstrates over approaches with uncertainty modeling on a particular dataset, those agents who believe that asset prices (rates of return)
can be reliably predicted by merely statistical means could hardly be discouraged: they will continue either relying on some "trusted" statistical model or searching for an ideal one. This work aims to provide an alternative decision making approach for those who do not have such a belief.
While the focus of this work is on direct data-based analogues of the Gilboa & Schmeidler maxmin model, the mean-variance approach, and the EUT, other existing decision theories can be reinterpreted and analyzed in a similar fashion.

Proof of Proposition 1

… is strictly monotone. With (31), f cannot be decreasing, hence it is strictly increasing. Thus, f has a strictly increasing inverse function f⁻¹, and U(X) := f⁻¹(U′(X)) is another numerical representation of ⪰. Then U(αX*) = f⁻¹(U′(αX*)) = α for all α > 0. Then X ∼ U(X)X* for all X ∈ X⁺, and, by axiom 4, αX ∼ αU(X)X*. Hence, U(αX) = U(αU(X)X*) = αU(X), and (ii) follows.
Proof of Proposition 2
For the (iii) ⇒ (i) part, select any X, Y satisfying (19) and choose any Q ∈ Q. Let δ_1 = q_1 ≥ 0 and δ_t = q_t − q_{t−1} ≥ 0, t = 2, …, T. Then ∑_{t=1}^T q_t a_t ≥ ∑_{t=1}^T q_t c_t, where the inequality follows from (19). By a similar argument, ∑_{t=1}^T q̄_t b_t ≥ ∑_{t=1}^T q̄_t d_t, so that U(X) ≥ U(Y), and, consequently, X ⪰ Y.
(i) ⇒ (ii) is straightforward. For (ii) ⇒ (iii), let Q^Y = (q^Y_1, q̄^Y_1, …, q^Y_T, q̄^Y_T) ∈ Q be an identifier of any Y = (a_1 + b_1 k, …, a_T + b_T k) ∈ X, and let q^Y_i > q^Y_j for some i < j. Then, for X defined by x(t) = y(t) − δI_{T_1,T_2}(t) with T_1 = {i}, T_2 = {j}, this contradicts (20). Consequently, 0 ≤ q^Y_1 ≤ … ≤ q^Y_T for every Y ∈ X. Similarly, (21) yields 0 ≤ q̄^Y_1 ≤ … ≤ q̄^Y_T. Let Q′ ⊆ Q be the closure of the convex hull of all Q ∈ Q which are identifiers of some Y ∈ X, and let U′ be given by (17) with Q′. Then U(Y) = ⟨Q^Y, Y⟩ ≥ U′(Y) for every Y ∈ X, so that Q ⊆ Q′, which yields Q′ = Q. Consequently, Q is the closure of the convex hull of some vectors satisfying (ii), and thus this condition holds for every Q ∈ Q.
(c) ⇒ (a): Fix j such that q*_j = z, and let X_x, x ∈ R, be a one-parameter family in R^T such that x_j = x and E_{Q*}[X_x²] = q*_j x_j² + ∑_{t ≠ j} q*_t x_t² = x²z + z²/(1 − z). Consequently, since U(X_x) is non-decreasing in x by (c), (23) follows.
B Appendix:
Data-Based Expected Utility Theory (EUT) A data-based analogue of the independence axiom for r.v.'s is stated as follows.
Axiom 7 (independence) Let A and B be any disjoint sets such that A ∪ B = T. For any X, Y ∈ X, denote by X_A ⊕ Y_B a function z : T → F such that z(t) = x(t) for t ∈ A and z(t) = y(t) for t ∈ B. Then X_A ⊕ Z_B ⪰ Y_A ⊕ Z_B implies that X_A ⊕ W_B ⪰ Y_A ⊕ W_B for any X, Y, Z, W ∈ X.

In the resulting representation (36), u_t(x) is a numerical equivalent of the appeal, for the agent, of the historical rate of return x of a given portfolio at time t. If u_t is differentiable, u′_t(x) measures the sensitivity of the investor's utility to changes in data at time t, and (37) represents the principle that "recent observations are more important than past ones." If u_t(x) = q_t x, t = 1, …, T, then (36) and (37) simplify to (11) with a singleton Q = {Q} with q_1 ≤ … ≤ q_T.
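The additive data-based expected-utility form (36) can be sketched directly; the function name and the linear example are illustrative assumptions:

```python
def data_based_eut(x, utilities):
    """Data-based EUT representation: U(X) = sum_t u_t(x_t), where u_t
    encodes the appeal of the historical return observed at time t."""
    return sum(u(v) for u, v in zip(utilities, x))

# Linear appeal functions u_t(x) = q_t * x with increasing weights reduce
# the representation to <Q, X> with q_1 <= ... <= q_T, as in (11).
weights = [0.25, 0.5, 1.0]
utilities = [lambda v, q=q: q * v for q in weights]
value = data_based_eut([0.02, -0.01, 0.03], utilities)
```

Note the `q=q` default argument, which binds each weight at definition time so every appeal function keeps its own q_t.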
Figure 5: In-sample and out-of-sample price evolution of $1 invested at the beginning of the corresponding periods into the master fund in (27) with q = 1, 0.99, 0.97, and 0.95, and in (26) with the ARIMA model, respectively. | 9,189.6 | 2017-11-21T00:00:00.000 | [
"Computer Science"
] |
Modification of transfer-matrix method for electromagnetic waves in layered superconductor in presence of dc magnetic field
In the present paper, we modify the transfer-matrix method to study the dissipation-free transition of electromagnetic waves of the terahertz range through a plate of layered superconductor embedded in a dielectric environment in the presence of an external direct current (dc) magnetic field. In this work, we consider TM-polarized electromagnetic waves. The setup is arranged in such a way that the dielectric and superconducting layers in the plate are perpendicular to its interface, and the external magnetic field is directed along the plate and parallel to the layers. We consider the case of a weak external dc field at which magnetic vortices do not penetrate the plate. Due to the nonlinearity of the Josephson plasma formed in the layered superconductor, the dc magnetic field penetrates non-uniformly into the plate and affects the electromagnetic wave. Hence, the magnitude of the external dc magnetic field can be used as a variable parameter to tune various phenomena associated with the propagation of electromagnetic waves in layered superconductors. In the presence of the external homogeneous dc magnetic field, linear electromagnetic waves in the layered superconductor turn out to be non-exponential. Therefore, we cannot directly apply the transfer-matrix method, in which the amplitudes of the corresponding exponents are compared. However, in the present paper, it is shown that for a sufficiently thick plate, the matrices describing the wave transfer through the plate can be introduced. The analytical expressions for these matrices are derived explicitly in terms of special Legendre functions. The obtained transfer-matrices can be used for the further study of the wave transfer through the layered superconductor in the presence of an external dc magnetic field.
Introduction
Layered superconductors are periodic structures that consist of thin alternating superconducting and insulating layers, realized in natural crystals of some high-temperature superconductors. Therefore, various non-trivial electromagnetic phenomena are predicted for layered superconductors [3,4,5]. Also, these materials are of particular interest due to the possibility of flexibly tuning their electromagnetic properties by an external direct current (dc) magnetic field [6,7]. Additional interest is related to the operating frequencies of the Josephson plasma waves, which lie in the terahertz (THz) range. At present, there is still a gap in controllable and high-power THz devices, which are, meanwhile, considered promising for many areas, from basic science to medicine and homeland security [8,9].
To study the transfer of electromagnetic waves it is convenient to use the transfer-matrix method (see, e.g., book [10]). In the absence of dc magnetic field, the electromagnetic properties of the layered superconductor can be described by the effective permittivity tensor [11], and, therefore, the transfer-matrix method can be directly applied (see, e.g., the recent paper [4]). However, in the presence of an external dc magnetic field, the problem becomes more complicated because the electromagnetic field inside the plate is described not by harmonic (exponential) functions, but by special Legendre functions [6].
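For contrast with the Legendre-function case treated in this paper, the standard transfer-matrix bookkeeping for exponential (plane-wave) fields looks as follows; this is a generic textbook sketch for normal incidence, not the matrices derived here, and all conventions and names are illustrative:

```python
import numpy as np

def propagation_matrix(k, d):
    """Free propagation over a layer of thickness d with wavenumber k:
    diagonal phase factors for the forward and backward wave amplitudes."""
    return np.array([[np.exp(1j * k * d), 0.0],
                     [0.0, np.exp(-1j * k * d)]])

def interface_matrix(z1, z2):
    """Interface between media with wave impedances z1 and z2
    (normal incidence), written via reflection and transmission factors."""
    r = (z2 - z1) / (z2 + z1)
    t = 2.0 * z2 / (z1 + z2)
    return np.array([[1.0, r], [r, 1.0]]) / t

# Total transfer through one slab: interface in, propagate, interface out.
M = (interface_matrix(1.0, 2.0)
     @ propagation_matrix(1.5, 0.4)
     @ interface_matrix(2.0, 1.0))
```

In the presence of the dc field the fields inside the plate are not exponential, so the inner propagation factor must be replaced by matrices built from the Legendre-function solutions, which is what the present paper constructs.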
In this paper, we modify the transfer-matrix method for the electromagnetic wave propagation through a plate of layered superconductor in the presence of an external dc magnetic field and calculate the corresponding transfer-matrices.
Problem Formulation
We study the dissipation-free propagation of an electromagnetic wave through a system that consists of a layered superconductor plate of thickness s placed in a dielectric environment, as shown in Fig. 1. The dielectric and superconducting layers are perpendicular to the interface. The coordinate system is chosen in such a way that the x-axis is directed perpendicular to the plate and the z-axis is orthogonal to the superconducting layers.
The external dc magnetic field H_0 is supposed to be directed parallel to the plate and to the layers, i.e. along the y-axis. We consider a TM-polarized wave. In the chosen coordinate system, its electric E(x, y, z, t) and magnetic H(x, y, z, t) components can be written in a plane-wave form, where ω is the wave frequency and k_z is the z-projection of the wave vector.
It is worth noting that the plate is supposed to be sufficiently thick, s ≫ λ_c, where λ_c is the London penetration depth along the layers of the superconductor. Under this assumption, the dc magnetic field deep inside the layered superconductor is absent. Also, we assume that the magnitude of the external magnetic field is less than the critical value at which magnetic vortices penetrate the plate.
Main Equations for the Electromagnetic Field
The expressions for the electromagnetic field components in the dielectric medium can be obtained from the system of Maxwell's equations. At the left and right interfaces, respectively, the magnetic components are given by (4), where k_d is the x-projection of the wave vector of the incident wave. The corresponding electric field components are given by (5). The field inside the plate obeys the coupled sine-Gordon equations [3]; for wavelengths that are greater than the thickness d (i.e. in the continuous limit), it can be represented in the form given by Eqs. (11) and (12). First, we construct the solution for the right and left parts of the plate independently. The interaction between magnetic vortices from the opposite sides can be neglected due to assumption (2), so we neglect the first or the second component with cosh in the expression (12). Then the solution of Eq. (11) can be found in terms of associated Legendre functions [13]. We present the solution in the form of a superposition (13) for the right half of the plate and (14) for the left one. The specific form of the functions f allows us to interpret them as non-exponential running waves inside the layered superconductor. Indeed, for large arguments there is an asymptotic expression [13] for the Legendre functions, and the approximation (15) can be applied in the central region of the plate.
If the external dc field tends to zero, the expressions (17) turn out to be harmonic, and the wave transfer through the investigated system could be described by the matrices of passing through the boundaries and of free propagation in the medium [4], in a similar way to the dielectric case. Otherwise, the magnetic field H_y(x) can be considered as a plane-wave superposition only in the center of the plate.
Transfer matrices
The transfer matrix T that corresponds to the wave transfer through the plate connects the amplitudes of the outgoing and incoming waves for the magnetic field H_y(x): according to (4), the vector of outgoing amplitudes equals T applied to the vector of incoming amplitudes.
Since the field in the layered superconductor can be described by exponential functions only in the center, we can present the matrix T as a product of matrices corresponding to the left and right halves of the plate. The boundary conditions are the matching of the tangential components of the electromagnetic field. In accordance with the expressions for the electromagnetic field (4), (5), and (17), these conditions can be rewritten in matrix form; therefore, the symmetry of the problem is not broken.
Conclusions
In this theoretical work, we have modified the transfer-matrix method for TM-polarized electromagnetic waves propagating through a plate of layered superconductor, taking into account the interaction of the Josephson plasma with an external dc magnetic field. It was shown that although the electromagnetic field inside the plate cannot be described by harmonic (exponential) functions, far from the boundaries it can be considered as a superposition of running and reflected waves. Then, for a sufficiently thick plate, the transfer-matrices can be obtained analytically in terms of special Legendre functions. The obtained matrices can be used in further studies related to the transfer of electromagnetic waves through layered superconductors. | 2,115.4 | 2019-12-26T00:00:00.000 | [
"Physics"
] |
Modelling the Tumour Microenvironment, but What Exactly Do We Mean by “Model”?
Simple Summary The word "model" can be used with different meanings in different contexts, like a model student, clay models or a model railway. In some cases, the context can clarify exactly what is meant by "model", but sometimes several meanings of model can be present in one area. For instance, with reference to cancer research, there can be ambiguity about what is meant by model. This paper reviews the use of the word model as related to cancer research and within the specific area of the microenvironment that surrounds a cancer tumour. The review grouped different definitions of model into four categories (model organisms, in vitro models, mathematical models and computational models) and explored what is meant in each case, mentioning the advantages and disadvantages of the different models. Next, a quantitative investigation of the scientific publications listed in the database of the United States National Library of Medicine was performed by counting the frequencies of use of these terms, as well as of the components of the microenvironments and the organs modelled with these techniques. Abstract The Oxford English Dictionary includes 17 definitions for the word "model" as a noun and another 11 as a verb. Therefore, context is necessary to understand the meaning of the word model. For instance, "model railways" refer to replicas of railways and trains at a smaller scale and a "model student" refers to an exemplary individual. In some cases, a specific context, like cancer research, may not be sufficient to provide one specific meaning for model. Even if the context is narrowed, specifically, to research related to the tumour microenvironment, "model" can be understood in a wide variety of ways, from an animal model to a mathematical expression.
This paper presents a review of different “models” of the tumour microenvironment, as grouped by different definitions of the word into four categories: model organisms, in vitro models, mathematical models and computational models. Then, the frequencies of different meanings of the word “model” related to the tumour microenvironment are measured from numbers of entries in the MEDLINE database of the United States National Library of Medicine at the National Institutes of Health. The frequencies of the main components of the microenvironment and the organ-related cancers modelled are also assessed quantitatively with specific keywords. Whilst animal models, particularly xenografts and mouse models, are the most commonly used “models”, the number of these entries has been slowly decreasing. Mathematical models, as well as prognostic and risk models, follow in frequency, and these have been growing in use.
Introduction
It is now widely accepted that cancer research cannot solely rely on the study of individual cancer cells or a tumour in isolation [1] but rather on the collection of many different cells and their interactions in what is known as the tumour microenvironment [2]. This research must consider the complex relationships of cancerous cells with healthy cells, immune cells, vasculature, the extracellular matrix, molecules, and other elements that surround and interact with the tumour.

2. Different Concepts of Model

2.1. Model: "An Animal or Plant to Which Another Bears a Mimetic Resemblance"

Perhaps the most widely used concept of "model" is that related to a model organism: a nonhuman species used for performing experiments that can reveal some understanding of a biological phenomenon [19]. From simple organisms, like the bacterium Escherichia coli [20] or yeasts like Saccharomyces cerevisiae [21], to zebrafish [22], rodents [23] or drosophila [24], model organisms have been extensively used to elucidate anything from aging [25] to Zika [26]. Part of the success of model organisms has been the fact that the operating principles of some cellular processes, like the cell cycle or signalling pathways, are similar in humans and other species that branched out from earlier common ancestors [27]. Rodents have taken a predominant place as a model organism in cancer and other conditions due to several factors: ease of maintenance and transport, high fertility rates, relatively low costs and ease of genetic modification [19]. Specific mouse models can now be used to study perimenopausal depression [28], tuberculosis [29] and myocardial infarction [30], and the genetically engineered mouse is considered by some to be the preferred organism used in cancer studies [31,32]. Cancer can be induced in these models through the administration of a carcinogen [33,34], the diet [35,36] or the transplantation of tissue or cells from patients or cell lines into the model, i.e., xenografts [37,38].
Alternatively, in transgenic animals that have been genetically modified, cancer can occur spontaneously [23,39]. As this type of model is a whole living organism, it is expected that they intrinsically "capture the intricacies of the tumor immune response and microenvironment" [40]. This on its own is one of the most important advantages of model organisms, which do not need the design of an environment to model the tumour microenvironment. The organism itself provides the microenvironment, from which aspects like therapeutic implications or side effects can be observed [41]. However, there are important shortcomings, as the host organism is a different species than the donor, and there may be a species mismatch between the tumour and the host microenvironments [32,42]. The reliability of the translation from animal models to human diseases, therefore, remains controversial [43,44]. The model then bears a resemblance to the microenvironment of human cancer, but it is not exactly the same.
The tumour microenvironment of a model can be observed through histopathology [45][46][47] and immunohistochemistry [45,48,49], in which tissue is extracted, thinly sliced, and stained with different techniques highlighting important components of the tumour microenvironment, such as macrophages and lymphocytes. An important limitation of histopathology is that there is only one time point of observation. When techniques such as dorsal skin fold window chambers [50] are used, the development of a tumour and its microenvironment can be directly observed through intravital imaging techniques [32,51], which allow repeated observation, and the possible effect of treatments [52,53], over a period of time. Alternatively, tissue can be observed using magnetic resonance imaging [54,55] or positron emission tomography [56,57], which are less invasive but have much lower resolution than microscopical techniques.

Another popular concept of model related to cancer is that of "in vitro" or "in glass" experiments. These models refer to investigations performed with cells, organisms or parts of organisms in Petri dishes or similar equipment and have been used for a long time in cancer-related experiments, such as cell growth [58] and the screening of antitumour substances [59]. These experiments imply artificial conditions and a significant simplification of the microenvironment of a tumour. Conversely, these models offer a number of advantages over in vivo experiments with model organisms, not least the avoidance of animal testing. Advantages of in vitro experiments include lower costs and higher throughput, and they can be considered more amenable to mechanistic analysis [40]. Also, despite the considerable simplification of the environment, these models can have higher human relevance, since cancer cells derived from primary patient material can be used directly [60,61].
In vitro models have been considered to suffer less from the problem of whether results obtained in one species remain valid in another [62]. On the other hand, in vitro models are limited as compared with animal models in the complexity they can offer. There is no physiological response, and it is more difficult to observe side effects.
A simple setting to mimic the tumour microenvironment is to co-culture cancer cells with cells of the tumour microenvironment, like myofibroblasts [63], cancer-associated fibroblasts [64], endothelial cells [65] or stromal cell types and/or the extracellular matrix [66]. These co-cultures can then be used to perform a wide variety of experiments related to cell proliferation [67], migration [68,69], invasion [70] or treatment and drug combinations [71,72]. Despite the simplicity of these experiments, the inherent 2D nature of the cultures is a major limitation, as the interactions between cells and the environment do not resemble the 3D nature of a tumour and its microenvironment [73,74]. Accordingly, 3D in vitro models of the tumour microenvironment have evolved significantly, for instance, in breast cancer [75] and now include multicellular aggregates, like spheroids [76,77] or organoids [78,79], which are maintained in different settings, such as purified extracellular matrix gels, hanging drop cultures, and 3D gels or 3D scaffolds [80] of meshes or sponges, which offer a greater number of conditions, such as porosity, biodegradability, chemical composition, transparency, etc. [74]. A further complexity can be introduced to in vitro models by allowing external interaction, thus simulating metabolic processes [81], or providing complex geometries, such as branching structures, that mimic the vasculature of a tumour [82]. These models are known by different names: 3D bioprinted, microfluidic, tumour-on-a-chip or organ-on-a-chip [83][84][85][86][87][88]. One of the major advantages of these models over animal models is the ease of observation, as the cultures themselves can be examined directly with microscopes or other imaging equipment.
Model: A Simplified or Idealised Description or Conception of a Particular System, Situation, or Process, Often in Mathematical Terms
A mathematical model can be understood as the simplification and abstraction of a complex phenomenon and its subsequent description in mathematical equations. A model should tackle one or more biological or clinical hypotheses and analyse experimental data alongside the formulation of a mathematical description, i.e., the model itself, which then undergoes a cycle of refinements until it can be validated [89,90].
A classic example of a mathematical model is the Malthusian growth model [91], which assumes that a population of initial size P0 grows exponentially in time t at a growth rate r, following the equation P(t) = P0·e^(rt). This model is similar to the cancer initiation model proposed by Armitage and Doll [92] describing the incidence rate I of a cancer at age t as I(t) = k·t^n, where k is a constant, and n is the number of stages (or mutations) that must be passed for a cell to become malignant. These two models are descriptive models, i.e., they describe the broad characteristics of a phenomenon or can be used to predict or prognosticate a future state. When the description refers to the time of occurrence of an event being modelled, the process is sometimes called a survival analysis [93]. If the model takes into account one factor (time) but ignores other factors (such as ethnic group, age or lifestyle), the model is considered univariate [94]. Multivariate statistical models [95], on the other hand, consider several variables at the same time, for instance, the correlation between the overall survival of patients with non-small-cell lung cancer and the concentrations of amino acids and metabolites measured from blood samples [96].
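Both growth laws are simple enough to evaluate directly. The following Python sketch (the function names are illustrative, not taken from the cited works) computes the Malthusian population P(t) = P0·e^(rt) and the Armitage-Doll incidence I(t) = k·t^n:

```python
import math

def malthus(P0, r, t):
    """Malthusian growth: P(t) = P0 * exp(r*t)."""
    return P0 * math.exp(r * t)

def armitage_doll(k, n, t):
    """Armitage-Doll incidence: I(t) = k * t**n, with n the number of
    stages (mutations) a cell must pass to become malignant."""
    return k * t ** n

# A population of 100 growing at rate r = 0.5 doubles after ln(2)/r time units:
print(malthus(100, 0.5, math.log(2) / 0.5))  # -> 200.0
```

The doubling time ln(2)/r follows directly from setting P(t) = 2·P0 in the first equation.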
Alternative to descriptive models are those considered mechanistic or conceptual [97], which attempt to explain the processes that drive phenomena [98] and from which it is possible to derive biologically important characteristics of a tumour, for instance, that distal recurrence of glioblastoma depends on a hypoxic microenvironment and the migration and proliferation rates of tumour cells [99].
Models that provide the same results every time are considered deterministic, and those which include a certain randomness in the process are considered stochastic [97]. Stochastic models of the tumour microenvironment [100][101][102] are more common than deterministic ones [103] by an approximate ratio of 10 to 1, which is probably a reflection of the many factors related to cancer, like somatic evolution, which are not deterministic [104].
The scale, or point of view, of a model provides different resolutions at which the model operates: at an organ scale, they are considered macroscale models [105] and, at the cell level, they are considered microscale models [106]. Intermediate scales are sometimes referred to as mesoscale models [107] and are related to mesoscale imaging [108], which aims to link the information of cells, organs and tissues. Some authors consider that there is a gap at the mesoscale, for instance, to relate interactions of cells that are far away from each other [109]. The term mesoscale itself originated in meteorology as an intermediate scale between large- and small-scale systems. The nature of the tumour microenvironment can be studied at different scales at the same time; thus, many models are considered "multiscale" [110][111][112][113][114][115], as they consider phenomena from molecules to cells to the tissue level [116,117], how the extracellular matrix is altered [118,119], or avascular tumour growth coupled with a cell model [120]. It is important to consider that any model should be able to reproduce data that have been observed through experiments [121] and, as such, models at different scales require validation at different scales as well [122]. Some authors stress the importance of incorporating cellular models into whole-organ models [123]. This can be an advantage of mathematical models over in vitro models, and it is one that in vivo models provide intrinsically.
An interesting perspective to formulate models is to consider the cell as a basic unit, i.e., a virtual cell [124,125], with a set of rules for behaviour. The unit is sometimes called an "agent", with rules to proliferate, reproduce or transform depending on interactions with its external microenvironment [111] and probabilistic rules [126]. Different types of cells (tumour, immune or dendritic) constitute different agents [127]. Since these approaches build a study up from single cells, they are considered "bottom-up" [128]. "Top-down" approaches, on the other hand, zoom out and focus on whole organs or consider cells as a group or population. The behaviour is considered as a mean of all the cells and not as individuals [122]. It is possible, of course, to start not at the top or the bottom, but rather somewhere in between with "middle-out" models [129][130][131]. A middle-out model is useful in cases where there are rich levels of biological data that can be used as a starting point from which to reach up and down [123], or when the phenomena to be modelled are themselves in the mesoscale, like microcirculation [132].
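As a minimal sketch of this bottom-up, agent-based idea (an illustration only, not the implementation of any model cited above), the toy simulation below places cells on a square grid; at each step an agent may die, persist, or divide into a free neighbouring site, so its behaviour depends probabilistically on its local microenvironment:

```python
import random

def neighbours(x, y):
    """The four grid sites adjacent to (x, y)."""
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def step(occupied, p_divide=0.5, p_die=0.05, rng=random):
    """One update of a toy agent-based model: each agent dies with
    probability p_die, otherwise divides into a randomly chosen free
    neighbouring site with probability p_divide (if any site is free)."""
    cells = set(occupied)
    for cell in occupied:
        if rng.random() < p_die:
            cells.discard(cell)          # the agent is removed
            continue
        free = [n for n in neighbours(*cell) if n not in cells]
        if free and rng.random() < p_divide:
            cells.add(rng.choice(free))  # a daughter occupies a free site
    return cells

rng = random.Random(1)
tumour = {(0, 0)}                        # a single founder cell
for _ in range(15):
    tumour = step(tumour, rng=rng)
print(len(tumour))
```

Because division requires a free neighbouring site, growth slows as the cluster fills in, a crude stand-in for environmental constraints such as space or nutrients.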
In an alternative approach, these cells, whether cancerous or healthy, can be considered as species that strive for survival, treating cancer as a problem of ecology and evolution [133][134][135] and considering subpopulations within a single cancer [136]. The ecological and evolutionary perspectives can themselves be intrinsically related to cancer, as has been proposed by several authors [137][138][139]. An example of this approach is the branching process [140,141] in which, as time passes, a cell may divide, die or mutate at certain rates. After a number of cycles, mutations may accumulate in the population of cells. From a simple formulation like this one, it is possible then to significantly increase complexity by adding different types of cells, i.e., cells of the immune system [142]. As such, models have now been proposed for migration [143], tumour growth [144], invasion [145], angiogenesis [146,147], treatment and recurrence [148], cancer cell intravasation [149], fluid transport in vascularised tumours [150], macrophage infiltration [151], response to radiotherapy [152] and optimisation of chemotherapy [153]. For reviews of mathematical modelling of cancer, the reader is referred to [89,90,98].
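The branching-process idea can be sketched in a few lines; the rates below and the rule that each daughter may acquire one extra mutation are illustrative assumptions, not parameters from the cited works:

```python
import random

def branching_generation(cells, p_divide=0.5, p_die=0.3, p_mutate=0.1, rng=random):
    """One generation of a toy branching process: each cell (represented by
    its count of accumulated mutations) divides, dies, or survives unchanged;
    each daughter may acquire an additional mutation."""
    offspring = []
    for mutations in cells:
        u = rng.random()
        if u < p_die:
            continue                                   # the lineage ends
        if u < p_die + p_divide:
            for _ in range(2):                         # two daughter cells
                extra = 1 if rng.random() < p_mutate else 0
                offspring.append(mutations + extra)
        else:
            offspring.append(mutations)                # cell persists
    return offspring

rng = random.Random(0)
cells = [0] * 5                                        # five founders, no mutations
for _ in range(10):
    cells = branching_generation(cells, rng=rng)
print(len(cells), max(cells, default=0))
```

Adding further cell types, e.g., immune cells with their own rates, is then a matter of tracking a second population with its own rules, as described in the text.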
As many of the previously mentioned approaches require computer simulations, these models are sometimes called in silico models or computational models. Some mathematical models are purely mathematical, like that of Armitage and Doll, which does not require simulations or computations but merely applies an equation. However, many mathematical models apply numerical methods and are intrinsically computational [154]. Some authors [155] distinguish mathematical models when they use a continuous model using mathematical equations from computational models, which are discrete and based on a series of steps or instructions. Still, in many cases, distinctions between mathematical and computational are not considered, and some authors use the terms "mathematical model" and "computational model" interchangeably [156], and others consider a model itself to be both mathematical and computational [97,[157][158][159][160][161]. For more information about mathematical and computational models of the tumour microenvironment and cancer, the reader is referred to [97,112,122,162,163].
Mathematical and computational models offer numerous advantages: no need for animals or tissues, lower costs and the rapidity with which simulations can be generated. However, the limitations are numerous, not least the inherent simplicity of any mathematical model as compared with a living organism, a complex disease like cancer and a complex setting like the tumour microenvironment. Despite the close relationship between the mathematical and computational approaches, there are different methodologies that are fundamentally computational. In these cases, computational methods are applied to process, analyse and extract information from datasets. As opposed to a "model" that describes the growth of a tumour, these methods can, for instance, count something [164] or measure colour [165]. What is modelled is not the cells or the cancer itself, but rather derived features, like the shape of a cell or a vessel [72], the movement of cells or fluorescent intensity. There does not need to exist an underlying mathematical abstraction of cancer or a biological process in these methodologies, but the information extracted relates to conditions of the cancer, like the cellularity [166].
Computational methods that belong to areas of computer vision, image processing, machine learning and, more recently, deep learning can be applied. Features related to important characteristics, like the number of nuclei [167] or microvessel density [168], can be extracted. Naturally, these computational methods can extract features or quantities that can then be used to inform mathematical models. For instance, to estimate vascular permeability [53] in tumours, the fluorescence intensity can be acquired, and then, through image-processing techniques, the vasculature can be segmented, the intensity inside and outside the vessels calculated, and these quantities fed to the Patlak model [169] to model blood extravasation. The effect of vascular disrupting agents on tumours can be assessed using the velocity of red blood cells travelling inside a tumour, and a model of movement can be applied to measure the velocity of the cells [170]. The spatial heterogeneity in a tumour microenvironment [171] can be assessed by identifying and mapping cells from histological samples, and ecological models can then be applied to the information extracted.
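The feature-extraction step described above, segmenting a structure and then comparing intensities inside and outside it, can be illustrated with a deliberately minimal threshold-based sketch; real pipelines use far more robust segmentation before feeding the resulting quantities to models such as the Patlak model:

```python
def segment_and_measure(image, threshold):
    """Toy feature extraction: threshold a fluorescence image to segment
    'vessel' pixels, then return the mean intensity inside and outside
    the segmented region."""
    inside, outside = [], []
    for row in image:
        for pixel in row:
            (inside if pixel >= threshold else outside).append(pixel)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(inside), mean(outside)

# A synthetic 3x4 image with a bright "vessel" in the second column:
image = [[10, 200, 12,  9],
         [11, 220, 10,  8],
         [ 9, 210, 13, 10]]
print(segment_and_measure(image, threshold=100))  # -> (210.0, ~10.2)
```

The two means (or their ratio over time) are exactly the kind of derived quantity that can then parameterise a mathematical model of extravasation.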
To complicate matters, another quite different computational type of model has been gaining popularity. Namely, those are models associated with the areas of artificial intelligence, artificial neural networks and deep learning. These models are inspired by neurobiology and the simplification of a neurone as a unit with many input signals, which are weighted, i.e., multiplied by individual values, and then combined (i.e., summed) to produce a single or multiple output value. This model is known as the McCulloch-Pitts model of a neuron [172,173]. Many neurones, sometimes also called nodes or units, with this and many other functions, are then combined into layers with a specific structure, sometimes called an architecture. With time and increase in computer power, these models of artificial neural networks increase in complexity, adding more and more layers with millions of neurones to their architectures and, thus, gaining the name "deep". One key difference is that, unlike other mathematical or computational models in which fine-tuning of the parameters is performed manually by a person (hand-crafted), these have huge numbers of parameters that self-tune when presented with a large amount of training data, i.e., raw data, like an image coupled with class labels that indicate what is where. This process through which the parameters of the architecture adapt is called "learning", and the area in general is known as machine learning and, in particular, deep learning for larger architectures. Thus, a specific model can be equally used to analyse images of cats and dogs or images of the tumour microenvironment depending on the training data that are provided. Sometimes the arrangement of the basic blocks or structure is called architecture and, once it is specifically trained for a task, it is called a model, but as in other cases, architecture and model are used interchangeably. 
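The weighted-sum-plus-activation unit described above can be written down directly; this sketch uses a simple step activation (an assumption for illustration) and shows how one fixed choice of weights and bias makes a single neurone compute logical AND:

```python
def neuron(inputs, weights, bias=0.0, activation=lambda s: 1.0 if s >= 0 else 0.0):
    """A single artificial neurone in the spirit of the McCulloch-Pitts model:
    inputs are weighted (multiplied by individual values), summed with a bias,
    and passed through an activation function to produce one output."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(s)

# With weights (1, 1) and bias -1.5, the neurone fires only when both inputs are 1:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron((a, b), (1.0, 1.0), bias=-1.5))
```

In a trained network, the weights and biases are not chosen by hand as here but adjusted automatically from the training data, which is the "learning" described in the text.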
These models are normally known by short acronyms, like CNN (for convolutional neural network) or VGG [174] (after the Visual Geometry Group at Oxford University), sometimes followed by numbers associated with the number of layers of the architecture, like VGG16, as well as AlexNet [175] (after the name of the designer of the architecture, Alex Krizhevsky), U-Net [176] (after the shape of the architecture, like a letter U), or GoogLeNet [177] (after the affiliation of some of the authors where the architecture was introduced). For introductory reviews to deep learning, the reader is referred to [178] and, for neural networks and deep learning for biologists, to [179]. For more specific reviews on deep learning applied to cancer and histopathology, the reader is referred to [180][181][182][183][184][185]. The following paragraph illustrates with a few examples how deep-learning models are applied.
The differences between a breast stromal microenvironment and benign biopsies in haematoxylin and eosin (H&E) slides were distinguished using a VGG model [186]. The model was then used with a different dataset to detect a higher amount of tumour-associated stroma in ductal carcinoma in situ for grade 3 compared with grade 1. Cancer grading was calculated from prostate cancer H&E slides with a combination of several CNNs that performed detection and classification and, for tissue, with a posterior slide-level analysis, which provided a Gleason grade [187]. Patient survival was predicted from colorectal histology slides [188] by applying a VGG19 model for the classification of the slides into a series of classes (adipose, background, debris, lymphocytes, mucus, smooth, etc.) from which a combination of values was used to create a "deep stromal score" with considerable prognostic power, especially for advanced cancer stages. In another study [189], patient survival was predicted from a score (tumour-associated stroma-infiltrating lymphocytes (TASIL) score), which was calculated from spatial co-occurrence statistics (stroma-stroma, stroma-lymphocyte, etc.) that were extracted using a DenseNet model [190] to segment each class in head and neck squamous cell carcinoma H&E slides.
Quantitative Evaluation of the Presence of Different Models in MEDLINE
To assess the distribution of the different definitions of the word model as related to the tumour microenvironment, a quantitative and unbiased analysis was performed. The analysis mined the MEDLINE database of the United States National Library of Medicine at the National Institutes of Health. Mining was performed using the PubMed search engine through a series of queries with combinations of keywords and basic terms as previously described [191] with custom scripts written in Matlab® (The MathWorks™, Natick, MA, USA) and available at https://github.com/reyesaldasoro/TumourMicroenvironmentModels, accessed on 27 June 2023. The basic terms were the search URL of PubMed (https://www.ncbi.nlm.nih.gov/pubmed/?term=, accessed on 27 June 2023) and tumour microenvironment in British and American spellings (("tumor microenvironment") OR ("tumour microenvironment")), and "cancer microenvironment" was also included with an OR. Dates were restricted to 2000-2023 (2000:2023[dp]). The keywords were manually curated based on the previously described definitions of the word model and are shown in Table 1. The concatenation was performed sequentially with one keyword at a time. The following caveats should be considered when observing the results. A single entry could be retrieved more than once, e.g., "Imaging interactions between macrophages and tumour cells that are involved in metastasis in vivo and in vitro" was counted for both in vivo and in vitro. Similarly, the same type of model could be referred to with two different keywords, like mouse and mice. The entries were mined if the keyword appeared in the PubMed record, which included title, abstract and MeSH terms. That is, if the keywords only appeared in the main text of a paper, it was not retrieved. Furthermore, it is very important to note that the term tumour/tumor could include benign tumours and, thus, the results were not restricted to cancer.
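The query construction can be sketched as follows; the original scripts were written in Matlab, so this Python fragment is only an illustrative reconstruction of how one keyword at a time is concatenated with the fixed microenvironment and date terms (it builds the query string without actually contacting PubMed):

```python
# Fixed basic terms, as described in the text.
BASE = "https://www.ncbi.nlm.nih.gov/pubmed/?term="
TME = ('(("tumor microenvironment") OR ("tumour microenvironment")'
       ' OR ("cancer microenvironment"))')
DATES = "(2000:2023[dp])"

def build_query(keyword):
    """Concatenate one keyword with the microenvironment and date terms."""
    return BASE + f'("{keyword}") AND {TME} AND {DATES}'

print(build_query("xenograft"))
```

Looping this function over the curated keyword list and counting the entries returned for each query reproduces the mining strategy described above.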
The total number of entries in PubMed for each of the keywords is shown in Figure 1 as a bar chart. Colours are used to group each keyword according to the definitions of model. In Figure 2, the entries are aggregated into four groups and are shown per year as ribbons with the same colours as in Figure 1.
Figure 1. Numbers of entries indexed in PubMed for each keyword, with date restrictions (2000:2023[dp]) and restrictions corresponding to tumour microenvironment ((("tumor microenvironment") OR ("tumour microenvironment")) OR ("cancer microenvironment")). Colours are allocated for organism (red), mathematical (blue), in vitro (green) and computational (brown) models for visualisation purposes. The legend in the top right indicates the aggregates per group.
The first observation is that the most frequent entries for the tumour microenvironment were those related to animal models, far more than the in vitro models. Since the scale of the vertical axis was logarithmic, xenograft and mouse were an order of magnitude above most other keywords. These were followed by the mathematical keywords of prognostic and risk. Despite the simplicity of in vitro models and the perceived lack of human relevance of animal models, these latter ones dominated the research on the tumour microenvironment. However, the temporal trends shown in Figure 2 show that there was a slight decrease in the number of entries related to animal models in the last 3-4 years. Furthermore, whilst the term mathematical model appeared much more recently than the organism or in vitro models, the growth was faster and overtook both, especially in the past 5 years. The term computational model appeared later but also showed an increasing trend, although not as high as for mathematical model. It will be interesting to observe these trends in future years.
Next, to identify important components of the microenvironment and their frequencies of appearance in PubMed, 39 keywords related to the microenvironment (e.g., T cells, endothelial cells, B cells, invasion, metastasis, inflammation, cytokine, pathways, etc.) were added to the queries (e.g., ("cytokine") AND (("tumor microenvironment") OR ("tumour microenvironment") OR ("cancer microenvironment")) AND (model) AND (2000:2023[dp])). Figure 3 shows the frequencies of appearance of the keywords in decreasing order, starting with metabolism, therapeutic and survival and decreasing towards pre-metastatic niche, extracellular vesicle, anti-vascular and macroenvironment. Again, in addition to the caveats previously mentioned, it should be taken into consideration that this figure indicates only how frequently the terms appeared in the query. For instance, the frequency of the term neutrophils was one order of magnitude lower than the term macrophages. Still, the most common term, and possibly the related research question, was to investigate the metabolism of the tumour microenvironment. To investigate the trends of these terms with time, a relative count of the keywords per year was performed. The number of entries of each keyword was divided by the total number of entries of all the keywords per year. As could be expected, the uses of certain keywords increased, others decreased and some remained relatively constant. Figure 4 shows selected keywords illustrating these trends, e.g., whilst T-cells increased (Figure 4a), angiogenesis decreased (Figure 4c) and extracellular matrix remained stable (Figure 4d). Some keywords, like B-cells, transcriptomics or chemoresistance, also grew, but the numbers of entries were much smaller than others, so these were shown separately (Figure 4b).
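The per-year normalisation described above (each keyword's count divided by the total of all keywords in that year) can be sketched as follows; the toy counts are invented for illustration:

```python
def relative_counts(counts_by_keyword):
    """Normalise keyword counts per year: each keyword's count for a year is
    divided by the total entries of all keywords in that year.
    Input: {keyword: {year: count}}; output: same shape with fractions."""
    years = {y for c in counts_by_keyword.values() for y in c}
    totals = {y: sum(c.get(y, 0) for c in counts_by_keyword.values()) for y in years}
    return {k: {y: c.get(y, 0) / totals[y] for y in years if totals[y]}
            for k, c in counts_by_keyword.items()}

counts = {"T cells":      {2020: 30, 2021: 60},
          "angiogenesis": {2020: 70, 2021: 40}}
print(relative_counts(counts))  # T cells rise from 0.3 to 0.6
```

Normalising in this way separates a genuine shift in emphasis between keywords from the overall growth of the literature.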
Cancers 2023, 15, x FOR PEER REVIEW
Figure 3. Numbers of entries indexed in PubMed for individual queries. In this case, the query included a keyword, e.g., angiogenesis, and the word "model". It should be noted that the vertical axis is logarithmic.
Organ-specific keywords were used to investigate the most frequently modelled microenvironments, and the results are shown in Figure 5. Breast and lung were the most common terms, followed by liver, melanoma and bone, with very similar numbers of entries. These numbers partially correlated with the incidence and mortality rates of cancer. Worldwide, the most common cancers by incidence are breast, lung, colorectal, prostate and stomach and, by mortality, are lung, colorectal, liver, stomach and breast [192]. Proportionally, melanoma, bone and brain were more common in research entries in PubMed than their corresponding incidence and mortality rates. At the bottom of the list were bowel, leukaemia, pituitary, testicular and uterus. It is interesting that some related terms, like colorectal/colon, could have similar numbers of entries whilst others, like uterine/uterus, could be orders of magnitude different.
A combination of keywords for the models and the organs as pairs (e.g., "(xenograft) AND (brain)" added to the query) is displayed in Figure 6, and a magnified view of the terms with the most results is shown in Figure 7. These figures show that the most common combination was xenograft with breast with 479 entries, followed closely by mouse model-breast (398), mouse model-lung (392) and xenograft-lung (388). The most frequent entries when using prognostic or risk models were lung, liver, breast, cervical and colorectal. For in vitro, the most frequent entries were breast, brain, lung and bone.
Conclusions
Whilst the most common setting to investigate the tumour microenvironment is model organisms, recent years have shown a slight decrease in the number of entries in PubMed. In vitro models also showed growth with a slowdown in the last 2 years of the analysis here presented. On the other hand, the number of entries using mathematical models grew steadily and are now as common as the number of entries for in vivo models. The use of computational models also grew, especially agent-based models and convolutional neural networks. It will be interesting to see how these trends continue in the near future.
The basic idea behind the models here described is that these constitute a simplified, idealised and more accessible representation of something more complex and hard to observe, in this case the tumour microenvironment. Whilst it should always be well understood that no model is a perfect representation of reality, a good model should capture some essential characteristics of the microenvironment and permit successful experimentation from which observations can be translated to patient treatment or care. It should always be considered that not everyone understands models in the same way; thus, it is important to make an effort to use these terms in ways that avoid confusion, if possible. For instance, when talking about deep learning, the term "architecture" could be used instead of model. Adding the word "organism" in cases of animal models could also help, e.g., "the mouse has become the favorite mammalian model organism". Similarly, in mathematical cases, the specification of a risk model or a mechanistic model, and not just a model, would improve clarity. Biologically and clinically, there are still many unanswered questions related to the tumour microenvironment and all its components. Interdisciplinary research related to the microenvironment is growing, and as such, a single study may include, say, in vitro models that are then processed with deep-learning models or histopathology slides that are analysed with machine-learning models that then feed a prognostic model of survival. Therefore, a clear understanding of what is meant each time that the word "model" appears in a paper is necessary, and researchers from all sides of the spectrum should bear in mind that not everyone understands the same meaning from "model".
Conclusions
Whilst the most common setting to investigate the tumour microenvironment is model organisms, recent years have shown a slight decrease in the number of entries in PubMed. In vitro models also showed growth with a slowdown in the last 2 years of the analysis here presented. On the other hand, the number of entries using mathematical models grew steadily and are now as common as the number of entries for in vivo models. The use of computational models also grew, especially agent-based models and convolutional neural networks. It will be interesting to see how these trends continue in the near future.
The basic idea behind the models here described is that these constitute a simplified, idealised and more accessible representation of something more complex and hard to observe, in this case the tumour microenvironment. Whilst it should always be well understood that no model is a perfect representation of reality, a good model should capture some essential characteristics of the microenvironment and permit successful experimentation from which observations can be translated to patient treatment or care. It should always be considered that not everyone understands models in the same way; thus, it is important to make an effort to use these terms in ways that avoid confusion, if possible. For instance, when talking about deep learning, the term "architecture" could be used instead of model. Adding the word "organism" in cases of animal models could also help, e.g., "the mouse has become the favorite mammalian model organism". Similarly, in mathematical cases, specifying a risk model or a mechanistic model, and not just a model, would improve clarity. Biologically and clinically, there are still many unanswered questions related to the tumour microenvironment and all its components. Interdisciplinary research related to the microenvironment is growing, and as such, a single study may include, say, in vitro models that are then processed with deep-learning models, or histopathology slides that are analysed with machine-learning models that then feed a prognostic model of survival. Therefore, a clear understanding of what is meant each time that the word "model" appears in a paper is necessary, and researchers from all sides of the spectrum should bear in mind that not everyone understands the same meaning from "model".
Funding: This research received no external funding.
Conflicts of Interest:
The author declares no conflict of interest.
Proton structure functions at small x
Proton structure functions are measured in electron-proton collisions through inelastic scattering of virtual photons with virtuality Q on protons; x denotes the momentum fraction carried by the struck parton. Proton structure functions are currently described with excellent accuracy in terms of scale-dependent parton distribution functions, defined through collinear factorization and DGLAP evolution in Q. With decreasing x, however, parton densities increase and are ultimately expected to saturate. In this regime DGLAP evolution will finally break down and non-linear evolution equations in x are expected to take over. In the first part of the talk we present recent results on an implementation of physical DGLAP evolution. Unlike the conventional description in terms of parton distribution functions, it describes directly the Q dependence of the measured structure functions and is therefore insensitive to factorization scheme and scale ambiguities. It thus provides a more stringent test of DGLAP evolution and eases the manifestation of (non-linear) small-x effects. It requires, however, a precise measurement of both structure functions F2 and FL, which will only be possible at future facilities, such as an Electron Ion Collider. In the second part we present a recent analysis of the small-x region of the combined HERA data on the structure function F2. We demonstrate that (linear) next-to-leading order BFKL evolution describes the effective Pomeron intercept, determined from the combined HERA data, once a resummation of collinearly enhanced terms is included and the renormalization scale is fixed using the BLM optimal scale setting procedure. We also provide a detailed description of the Q and x dependence of the full structure function F2 in the small-x region, as measured at HERA. Predictions for the structure function FL are found to be in agreement with the existing HERA data.
Introduction
The description of the proton in terms of its elementary constituents, quarks and gluons, remains one of the big unsolved problems of nuclear and elementary particle physics. At the typical energy scale of the proton, which is of the order of Λ_QCD ≈ 200 MeV, Quantum Chromodynamics (QCD), the Quantum Field Theory description of strong interactions, is strongly coupled, and quarks and gluons are subject to confinement. It is however possible to obtain very valuable information about the structure of the proton from collision processes of protons with leptonic projectiles, such as the electron. Due to the point-like structure of the electron and a very good theoretical understanding of electromagnetic interactions, the electron provides the perfect probe to explore the nucleon. To leading order in Quantum Electrodynamics (QED), scattering of the electron and the proton takes place through the exchange of a virtual photon with virtuality q^2 = -Q^2, see Fig. 1. If the photon virtuality is large, the proton is destroyed during the scattering and the process is generally referred to as Deep Inelastic Scattering (DIS). The cross-section for neutral-current DIS on unpolarized nucleons can be written in terms of two Lorentz-invariant structure functions F_2 and F_L in the following way:

d^2σ/(dx dQ^2) = (4π α_e.m.^2 / x Q^4) [ (1 - y + y^2/2) F_2(x, Q^2) - (y^2/2) F_L(x, Q^2) ].

Here y = (q·p)/(k·p) denotes the inelasticity with 0 < y < 1, see also Fig. 1. The structure functions themselves depend on only two Lorentz invariants, the photon virtuality Q^2 and Bjorken x = Q^2/(2p·q). Within the parton model, to be discussed below, x denotes the momentum fraction of the parton hit by the virtual photon.
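As a direct numerical illustration, the DIS cross-section formula above is easy to evaluate once F_2 and F_L are given. The kinematic values and structure-function inputs in the sketch below are arbitrary sample numbers, not fit results from this work:

```python
import math

def dis_xsec(x, q2, y, f2, fl, alpha_em=1.0 / 137.035999):
    """Differential neutral-current DIS cross section d^2(sigma)/(dx dQ^2),
    built from the two structure functions F2 and FL:
    (4 pi alpha^2 / x Q^4) * [(1 - y + y^2/2) F2 - (y^2/2) FL]."""
    y_plus = 1.0 - y + 0.5 * y * y
    prefac = 4.0 * math.pi * alpha_em**2 / (x * q2**2)
    return prefac * (y_plus * f2 - 0.5 * y * y * fl)

# sample point: x = 1e-3, Q^2 = 10 GeV^2, y = 0.5, with assumed F2, FL values
sigma = dis_xsec(1e-3, 10.0, 0.5, f2=1.2, fl=0.3)
```

Note that a non-zero F_L always reduces the cross section, and that its contribution is weighted by y^2/2, so F_L is only accessible at large inelasticity.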
If the photon virtuality Q^2 is significantly larger than the non-perturbative energy scale of the proton, asymptotic freedom provides for such processes a small strong coupling constant α_s(Q^2) ≪ 1, and a description within perturbation theory becomes possible. The conventional theoretical framework for such DIS processes is based on the collinear factorization theorem [2]. At leading order, the essential physics is captured by the parton model [3] of the proton. Within this model, the highly virtual photon interacts not with the entire proton, with characteristic size ∼ 1/Λ_QCD, but with a single, essentially point-like, parton, i.e., a quark or gluon, with effective size 1/Q. Interference effects with spectator quarks or gluons are, on the other hand, suppressed by powers of Q^2. To arrive at the complete cross-section, the "partonic" interaction of virtual photon and quark needs to be convoluted with parton distribution functions (PDFs), f_i(x, Q^2), i = q, q̄, g, which encode the probability to find a parton with a certain proton momentum fraction inside the proton. Higher order corrections to such partonic cross-sections, calculated within QCD perturbation theory, then reveal a new kind of singularity, apart from the conventional ultra-violet singularity, which is removed through renormalization of the QCD Lagrangian. This new singularity is of infra-red type and can be associated with configurations where an additionally emitted parton is collinear to the proton momentum. In physical terms, this initial state singularity reflects interference between the perturbatively calculable partonic interactions at the hard scale Q and spectator quarks and gluons, characterized by the hadronic scale Λ_QCD.
Collinear factorization then provides a systematic framework to remove such singularities from the perturbative hard cross-section, resulting in finite Wilson coefficients [4][5][6][7]; the singularities are absorbed into parton distribution functions, which encode the long-distance, non-perturbative physics. For a recent review see, e.g., [8].
To make the separation between long- and short-distance physics manifest, one needs to introduce some arbitrary factorization scale µ_f, apart from the scale µ_r appearing in the renormalization of the strong coupling α_s. The independence of physical observables such as F_2,L of µ_f can be used to derive powerful renormalization group equations (RGEs) governing the scale dependence of PDFs in each order of perturbation theory, known as the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations. The corresponding kernels are the anomalous dimensions or splitting functions associated with collinear two-parton configurations [9][10][11]. DGLAP evolution has been impressively confirmed by experiment, in particular through the very accurate DIS data on the structure function F_2 from the DESY-HERA experiment: parton distribution functions, fitted at an initial scale and evolved with the DGLAP equations, have been shown to describe F_2 data over orders of magnitude, both in x and Q^2 [13]. Despite the impressive success of the DGLAP evolution equations, theoretical considerations suggest that at some point in phase space this description is bound to break down. With decreasing x, logarithms ln 1/x increase and are capable of compensating the smallness of the strong coupling, α_s ln 1/x ∼ 1, leading to a break-down of the naive perturbative expansion, see Fig. 3. The necessary resummation of enhanced terms (α_s ln 1/x)^n is then achieved by the Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equation [14]. One of the main predictions of BFKL evolution is a power-like rise of the gluon density. If continued to ultra-small x, the 1/Q^2 expansion, on which collinear factorization is based, will eventually break down. A description of proton structure functions in this region of phase space is provided by the Balitsky-Kovchegov (BK) [15] and JIMWLK [16] evolution equations, which provide a non-linear extension of BFKL evolution, resumming corrections due to high gluon densities to all orders.
While theoretical arguments suggest the relevance of such corrections already at current collider energies, current data provide no clear evidence for deviations from linear DGLAP evolution, which would be a signature for the onset of a non-linear kinematic regime dominated by high, or saturated, gluon densities. Definite evidence for such a regime of QCD therefore requires new experiments, such as a future high-luminosity electron-ion collider; both the EIC [1] and the LHeC [17] projects, whose physics cases are currently being studied, plan to measure the structure functions F_2 and F_L and their scaling violations very precisely at small x, both in electron-proton and in electron-heavy ion collisions.
From the theory side this requires the development of suitable tools which allow one to pin down possible deviations from DGLAP evolution. In particular it is necessary to reduce the large freedom in fitting initial conditions of parton distribution functions. With non-linear saturation effects most likely to manifest themselves at small values of Q^2, the large number of free parameters used for the description of initial conditions in PDF fits does not allow one to exclude the possibility that saturation effects, while present in reality, are currently hidden in the initial conditions of DGLAP evolution.
In the following we present two approaches which have the potential to restrict this large freedom at low scales. Section 2 is dedicated to the concept of physical evolution kernels, which allows one to reduce the number of independent PDFs in DIS fits and eliminates scale- and scheme-dependence in their definition. Section 3 contains results of a recent BFKL fit of the combined HERA data. While both theoretically and experimentally less explored than collinear factorization, BFKL evolution has the potential to reveal the emergence of non-linear effects more easily than DGLAP evolution: unlike DGLAP evolution, BFKL drives the system into the saturated regime, making a detection of high density effects more likely. For details we refer to [18][19][20].
Physical evolution kernels for DIS observables
Since collinear factorization can be carried out in infinitely many different ways, one is left with an additional choice of the factorization scheme, for which one usually adopts the MS-bar prescription. Likewise, the RGE governing the running of α_s with µ_r can be deduced by taking the derivative of F_2,L with respect to µ_r. Upon properly combining PDFs and Wilson coefficients in the same factorization scheme, any residual dependence on µ_f is suppressed by an additional power of α_s, i.e., it is formally one order higher in the perturbative expansion but not necessarily numerically small. Alternatively, it is possible to formulate QCD scale evolution equations directly for physical observables without resorting to auxiliary, convention-dependent quantities such as PDFs. This circumvents the introduction of a factorization scheme and µ_f and, hence, any dependence of the results on their actual choice. The concept of physical anomalous dimensions is not at all a new idea and was proposed quite some time ago [4,21,22], but its practical aspects have never been studied in detail. The framework is best suited for theoretical analyses based on DIS data, with the scale µ_r in the strong coupling being the only theoretical ambiguity. In addition, F_2,L or their scaling violations can be parametrized much more economically than a full set of quark and gluon PDFs, which greatly simplifies any fitting procedure and phenomenological analysis. The determination of α_s from fits to DIS structure functions is the most obvious application, as theoretical scheme and scale uncertainties are reduced to a minimum.
Here we largely focus on the practical implementation of physical anomalous dimensions in analyses of DIS data up to next-to-leading order (NLO) accuracy. We shall study in detail potential differences with results obtained in the conventional framework based on scale-dependent quark and gluon densities, which could be caused by the way the perturbative series is truncated at any given order.
Theoretical Framework
The gist of the factorization scheme-invariant framework amounts to combining any two DIS observables {F_A, F_B} and determining their corresponding 2×2 matrix of physical anomalous dimensions instead of the scale-dependent quark singlet, Σ ≡ Σ_q (q + q̄), and gluon distributions appearing in the standard, coupled singlet DGLAP evolution equations. Instead of using measurements of F_2 and F_L (actually their flavor singlet parts), one can also utilize their variation with scale for any given value of x, i.e., dF_2,L(x, Q^2)/d ln Q^2, as an observable. The required sets of physical anomalous dimensions for both {F_2, F_L} and {F_2, dF_2/d ln Q^2} have been derived in [22] up to NLO accuracy. The additionally needed evolution equations for the non-singlet portions of the structure functions F_2,L are simpler and not matrix valued. As we shall see below, the required physical anomalous dimensions comprise the inverse of coefficient and splitting functions and are most conveniently expressed in Mellin n moment space. The Mellin transformation of a function φ given in Bjorken x space, such as PDFs or splitting functions, is defined as

φ(n) = ∫_0^1 dx x^(n-1) φ(x),   (2)

where n is complex valued. As an added benefit, convolutions in x space turn into ordinary products upon applying (2), which, in turn, allows for an analytic solution of QCD scale evolution equations for PDFs. The corresponding inverse Mellin transformation is straightforwardly performed numerically along a suitable contour in n space, see, e.g., Ref. [23] for details. The necessary analytic continuations to non-integer n moments are given in [24,25], and an extensive list of Mellin transforms is tabulated in [26]. We will work in Mellin space throughout this review. Assuming factorization, moments of DIS structure functions F_I at a scale Q can be expressed as

F_I(n, Q^2) = Σ_k e_k^2 C_I,k(n, a_s(µ_r), µ_r/Q, µ_f/Q) f_k(n, µ_f^2, Q_0^2),   (3)

where the sum runs over all contributing n_f active quark flavors, with electric charge squared e_q^2, and the gluon g, each represented by a PDF f_k.
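The Mellin transform and its convolution-to-product property can be checked numerically. The following is a minimal sketch (real moments only, simple quadrature); the test functions φ(x) = 1 - x and ln(1/x), whose moments are 1/(n(n+1)) and 1/n^2 respectively, are chosen here purely for illustration:

```python
import math

def mellin(phi, n, steps=4000):
    """Numerical Mellin moment  phi(n) = int_0^1 dx x^(n-1) phi(x)  for real n > 1,
    using composite Simpson's rule on [0, 1]."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * x ** (n - 1.0) * phi(x)
    return total * h / 3.0

# phi(x) = 1 - x has moments 1/(n(n+1)).  The Mellin convolution of
# f(x) = 1 with itself is int_x^1 dz/z = ln(1/x); its moments 1/n^2
# are indeed the product of the two factor moments (1/n) * (1/n).
```

This product structure is exactly what makes the matrix-valued evolution equations below solvable analytically in moment space.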
For e_g^2, the averaged quark charge factor ē^2 = (1/n_f) Σ_q e_q^2 has to be used. µ_r and µ_f specify the renormalization and factorization scale, respectively. The scale Q_0 defines the starting scale for the PDF evolution, where a set of nonperturbative input distributions needs to be specified. For simplicity we identify in the following the renormalization scale with the factorization scale, i.e., µ_r = µ_f ≡ µ. The coefficient functions C_I,k are calculable in pQCD [4][5][6][7] and exhibit the following series in a_s ≡ α_s/4π:

C_I,k(n, a_s) = Σ_{m ≥ m_0} a_s^m c_I,k^(m)(n),   (4)

where m_0 depends on the first non-vanishing order in a_s in the expansion for the observable under consideration, e.g., m_0 = 0 for F_2 and m_0 = 1 for F_L.
The scale dependence of the PDFs is governed by the DGLAP evolution equations,

d f_k(n, µ^2) / d ln µ^2 = Σ_l P_kl(n, a_s(µ^2)) f_l(n, µ^2),   (5)

where the l → k splitting functions have a similar expansion [9][10][11] as the coefficient functions in Eq. (4):

P_kl(n, a_s) = Σ_{m ≥ 0} a_s^(m+1) P_kl^(m)(n).   (6)

The P_kl(n) relate to the corresponding anomalous dimensions through γ_kl(n) = -2P_kl(n) in the normalization conventions we adopt, where we use the leading order (LO) and NLO expressions for γ_kl(n) given in App. B of the first reference in [10]. We note that the same normalization is used in the publicly available Pegasus evolution code [23]. In practice one distinguishes a 2×2 matrix-valued DGLAP equation evolving the flavor singlet vector comprising Σ(n, µ^2/Q_0^2) and g(n, µ^2/Q_0^2), and a set of n_f - 1 RGEs for the relevant non-singlet quark flavor combinations. The scale-dependent strong coupling itself obeys another RGE governed by the QCD beta function,

d a_s / d ln µ^2 = -a_s^2 (β_0 + β_1 a_s + ...),   (7)

with β_0 = 11 - 2n_f/3 and β_1 = 102 - 38n_f/3 up to NLO accuracy. To compare below with the results for the physical anomalous dimensions in Ref. [22] we also introduce the evolution variable

t = (2/β_0) ln [ a_s(Q_0^2) / a_s(Q^2) ].   (8)

Instead of studying F_I(n, Q^2) in (3) in terms of scale-dependent PDFs, which are obtained from solving the singlet and non-singlet DGLAP equations (5) in a fit to data [13], one can also derive evolution equations directly in terms of the observables F_I(n, Q^2). To this end, we consider a pair of DIS observables F_A and F_B, to be specified below, whose scale dependence is governed by a coupled matrix-valued evolution equation.
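The running-coupling RGE with β_0 = 11 - 2n_f/3 and β_1 = 102 - 38n_f/3 is straightforward to integrate numerically. The sketch below assumes the input value α_s(Q_0 = √2 GeV) = 0.35 used later in the numerical studies, and also evaluates the evolution variable t; it is an illustration of the stated equations, not code from the original analysis:

```python
import math

def beta0(nf):
    return 11.0 - 2.0 * nf / 3.0

def beta1(nf):
    return 102.0 - 38.0 * nf / 3.0

def run_as(a0, q0sq, qsq, nf=3, two_loop=True, steps=2000):
    """Solve d a_s / d ln(mu^2) = -a_s^2 (beta0 + beta1 a_s), a_s = alpha_s/(4 pi),
    from mu^2 = q0sq to mu^2 = qsq with fixed-step RK4 in ln(mu^2)."""
    b0 = beta0(nf)
    b1 = beta1(nf) if two_loop else 0.0
    def rhs(a):
        return -a * a * (b0 + b1 * a)
    h = (math.log(qsq) - math.log(q0sq)) / steps
    a = a0
    for _ in range(steps):
        k1 = rhs(a)
        k2 = rhs(a + 0.5 * h * k1)
        k3 = rhs(a + 0.5 * h * k2)
        k4 = rhs(a + h * k3)
        a += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return a

def t_var(a0, a, nf=3):
    """Evolution variable t = (2/beta0) ln(a_s(Q0^2)/a_s(Q^2)) of Eq. (8)."""
    return 2.0 / beta0(nf) * math.log(a0 / a)
```

In the one-loop limit the numerical solution can be checked against the exact relation 1/a_s(Q^2) = 1/a_s(Q_0^2) + β_0 ln(Q^2/Q_0^2).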
The required physical anomalous dimensions in Eqs. (9) and (10) obey a similar perturbative expansion in a_s as in (6). The singlet kernels in (9) are constructed by substituting the expression (3) for F_A,B in terms of PDFs into the left-hand side of Eq. (9) and taking the derivatives. Note that we have normalized the quark singlet part of F_A,B with the same averaged charge factor ē^2 which appears in the gluonic sector. Upon making use of the RGEs for PDFs and the strong coupling in Eqs. (5) and (7), respectively, one arrives at the kernels (12), where we have introduced 2×2 matrices C and P for the relevant singlet coefficient and splitting functions, respectively. An analogous, albeit much simpler, expression holds for the NS kernel K^(NS) in (10). As has been demonstrated in [22], the kernels (12) are independent of the chosen factorization scheme and scale but do depend on µ_r and the details of the renormalization procedure. We also note that the inverse C^(-1) in (12), appearing upon re-expressing all PDFs by F_A,B, can be straightforwardly computed only in Mellin moment space.
2.2. Example I: F_2 and F_L
Let us first consider the evolution of the pair of observables {F_2, F_L}. A precise determination of F_L in a broad kinematic regime is a key objective at both an EIC [1] and the LHeC [17].
Since the perturbative series for F_L only starts at O(a_s), one may want to account for this offset by considering the evolution of either {F_2, F_L} or, factoring out the leading power of a_s, {F_2, F_L/a_s}. Both sets of kernels K_AB show a rather different behavior with n, as we shall illustrate below, but without having any impact on the convergence properties of the inverse Mellin transform needed to obtain x dependent structure functions. The kernels K_AB at LO and NLO accuracy for {F_2, F_L} can be found in [22]; note that the evolution in [22] is expressed in terms of t. Using (8), d/da_s = -2/(a_s β_0) d/dt, and (7) to compute the extra terms proportional to β(a_s), we fully agree with their results, cf. Eq. (45) in Ref. [22]. For the NLO kernels we refer to [18]. In Fig. 4 we illustrate the n dependence of the LO and NLO singlet kernels K_AB for the pair {F_2, F_L}, assuming α_s = 0.2 and n_f = 3; a global factor of α_s/4π has been factored out of the perturbative expansion. As can be seen, NLO corrections are sizable for all singlet kernels, in particular when compared with the perturbative expansion of the singlet splitting functions P_kl(n) in (6); see Figs. 1 and 2 in Ref. [11]. This is, however, not too surprising, given that the known large higher order QCD corrections to the Wilson coefficients C_L,g and C_L,q [27] are absorbed into the physical anomalous dimensions K_AB for the evolution of the DIS structure functions F_2 and F_L. The impact of contributions from the NLO coefficients C_L,g and C_L,q on the results obtained for K_AB is illustrated by the dash-dotted lines in Fig. 4. Another source of large corrections are the terms proportional to β_0 in the NLO corrections, as can be inferred from the dotted lines. In Sec. 2.4 we will demonstrate how the differences between the LO and NLO kernels become apparent in the scale evolution of F_2,L(x, Q^2).
This behavior can be traced back to the large-n asymptotics of the ingredients: the splitting functions grow at most logarithmically at large n (e.g., P^(0)_gq ∼ 1/n, while the diagonal entries grow only like ln n), whereas the kernels involving F_L inherit a stronger n dependence from the inverse of the coefficient functions. The NLO kernel K_L2 exhibits an even stronger rise with n. In the same way one obtains, for instance, that the corresponding kernel for the pair {F_2, F_L/a_s} only grows like ln n, see Eq. (14).
Despite this peculiar n dependence and the differences between the singlet kernels shown in Fig. 5, both sets of observables, {F_2, F_L} and {F_2, F_L/a_s}, can be used interchangeably in an analysis at LO and NLO accuracy. Results for the QCD scale evolution are identical, and one does not encounter any numerical instabilities related to the inverse Mellin transform, which we perform along a contour as described in Ref. [23]. In fact, it is easy to see that the eigenvalues which appear when solving the matrix valued evolution equation (9) are identical for both sets of kernels and also agree with the corresponding eigenvalues for the matrix of singlet anomalous dimensions P_kl; see also the discussions in Sec. 2.4.
2.3. Example II: F_2 and dF_2/dt
Of future phenomenological interest could also be the pair of observables {F_2, dF_2/dt}, in particular in the absence of precise data for F_L. Determining experimentally the t or Q^2 slope of F_2 is, of course, also challenging. Defining F_D ≡ dF_2/dt, we obtain physical evolution kernels at LO to be used in Eq. (9); for the NLO kernels see [18]. The kernels K_AB in (16) exhibit more moderate higher order corrections, mainly through terms proportional to β_0,1, than those listed in Sec. 2.2. This shall become apparent in the next Section when we discuss results for the scale dependence of both {F_2, dF_2/dt} and {F_2, F_L}.
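The statement that the kernels K_AB and the splitting-function matrix P_kl share the same eigenvalues can be made plausible numerically: to the extent that the kernel is a similarity transform C P C^(-1) of the splitting-function matrix, its spectrum is unchanged, since similar matrices have identical eigenvalues. The 2×2 matrices below are hypothetical numerical stand-ins, not the actual splitting functions or coefficient functions:

```python
def matmul2(A, B):
    """Product of two 2x2 matrices (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def eigvals2(M):
    """Eigenvalues lambda_+/- of a real 2x2 matrix from its characteristic polynomial."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = (tr * tr / 4.0 - det) ** 0.5
    return tr / 2.0 + disc, tr / 2.0 - disc

# hypothetical stand-ins for the singlet splitting matrix P and coefficient matrix C
P = [[-2.0, 1.5], [0.8, -3.0]]
C = [[1.0, 0.2], [0.1, 0.5]]
K = matmul2(matmul2(C, P), inv2(C))   # similarity transform: same spectrum as P
```

The derivative pieces of the full kernel (12) modify the matrix itself but, as stated above, not the eigenvalues relevant for the solution of (9).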
Numerical Studies
In this Section we apply the methodology based on physical anomalous dimensions as outlined above and compare with the results obtained in the conventional framework of scale-dependent quark and gluon densities and coefficient functions. Due to the lack of precise enough data for F_L or dF_2/dt, we adopt for all our numerical studies the realistic "toy" initial conditions of Ref. [23] (the input distributions xu_v(x, Q_0^2), etc., are listed there) for the standard DGLAP evolution of PDFs at a scale Q_0 = √2 GeV. The value of the strong coupling α_s at Q_0 is taken to be 0.35. For our purposes we can ignore the complications due to heavy flavor thresholds and set n_f = 3 throughout. We use this set of PDFs to compute the flavor singlet parts of F_2, F_L, and dF_2/dt at the input scale Q_0 using Eq. (11). For studies of DIS in the small x region, say x ≲ 10^-3, in which we are mainly interested, the flavor singlet parts are expected to dominate over NS contributions and, hence, shall be a good proxy for the full DIS structure functions. Results at scales Q > Q_0 are obtained either by solving the RGEs for PDFs or by evolving the input structure functions directly, adopting Eq. (9). For the solution in terms of PDFs we adopt from now on the standard choice µ = Q.
For completeness and to facilitate the discussions below, let us quickly review the solution of matrix-valued RGEs such as Eqs. (5) and (9). While one can truncate the QCD beta function and the anomalous dimensions consistently at any given order in a_s, there exists no unique solution beyond LO accuracy. The matrix-valued nature of (9) only allows for iterations around the LO solution, which at order a_s^k can differ in various ways by formally higher-order terms of O(a_s^(l>k)). To this end, we employ the standard truncated solution in Mellin moment space, which can be found, for instance, in Ref. [24], see also [23]: Γ_i(n, Q^2) = L_i(a_s, a_0, n) Γ_i(n, Q_0^2), with the evolution operator L_i defined up to NLO accuracy in Eqs. (18)-(20). Here, a_0 = a_s(Q_0), Γ_P = (Σ, g), and Γ_K = (F_A, F_B), i.e., the index i = P refers to the coupled RGE for the quark singlet and gluon and i = K to the RGE for the pair {F_A, F_B} of DIS structure functions in (9). For i = K the operator is built from the kernels K^(0) and K^(1), with a corresponding definition for i = P in terms of the 2×2 matrices of singlet splitting functions P^(0) and P^(1). λ_± denote the eigenvalues given in Eq. (15) and e_± the projection operators onto the corresponding eigenspaces; see Refs. [23,24]. As has been mentioned already at the end of Sec. 2.2, the eigenvalues λ_±(n) are identical when computed for the kernels K_AB and P_kl. This in turn implies that, as long as, say, F_2 and F_L are calculated at µ = Q_0 with LO accuracy, their scale evolution based on physical anomalous dimensions reproduces exactly the conventional results obtained with the help of scale-dependent PDFs. Figure 6 shows our results for the scale dependence of the DIS structure functions F_2 and F_L. The input functions at Q_0 = √2 GeV are shown as dotted lines. While LO results are identical, starting from NLO accuracy the comparison between the two methods of scale evolution becomes more subtle, and results seemingly differ significantly, as can be inferred from the middle panels of Fig. 6.
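The LO building blocks of the truncated solution, the eigenvalues λ_± and the projectors e_± onto their eigenspaces, can be sketched for a generic real 2×2 matrix. The matrix used below is a toy stand-in, not the actual P^(0) or K^(0), and the exponent normalization of the LO operator is an assumption meant to mimic the conventions of Refs. [23,24], not a quotation of them:

```python
def eigen_projectors(P):
    """Eigenvalues lam_+/- and projectors e_+/- of a real 2x2 matrix P with
    distinct real eigenvalues, so that P = lam_p*e_p + lam_m*e_m, e_p + e_m = 1,
    and e_s are idempotent."""
    (a, b), (c, d) = P
    tr, det = a + d, a * d - b * c
    disc = (tr * tr / 4.0 - det) ** 0.5
    lam_p, lam_m = tr / 2.0 + disc, tr / 2.0 - disc
    def proj(lam, other):
        # e_lam = (P - other * 1) / (lam - other)
        den = lam - other
        return [[(a - other) / den, b / den], [c / den, (d - other) / den]]
    return lam_p, lam_m, proj(lam_p, lam_m), proj(lam_m, lam_p)

def lo_operator(P0, a_s, a0, b0):
    """LO evolution operator  L^(0) = sum_s e_s * (a_s/a0)**(-2*lam_s/b0)
    acting on the Mellin-space vector (F_A, F_B) or (Sigma, g);
    exponent normalization assumed, see [23,24] for the exact conventions."""
    lam_p, lam_m, ep, em = eigen_projectors(P0)
    wp = (a_s / a0) ** (-2.0 * lam_p / b0)
    wm = (a_s / a0) ** (-2.0 * lam_m / b0)
    return [[wp * ep[i][j] + wm * em[i][j] for j in range(2)] for i in range(2)]
```

The projector decomposition is what reduces the matrix-valued RGE to two scalar power laws, one per eigenvalue, and at a_s = a_0 the operator reduces to the identity, as it must.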
The origin of the differences between F 2,L (x, Q 2 ) computed based on Wilson coefficients and scale-dependent PDFs and physical anomalous dimensions can be readily understood from terms which are formally beyond NLO accuracy. For instance, upon inserting the NLO Wilson coefficients (4) and the truncated NLO solution (18)- (20) into Eq. (11), F 2 at O(a s ) contains spurious terms of both O(a 0 a s ) and O(a 2 s ). Since F L starts one order higher in a s , similar terms are less important here. On the other hand, when we evolve F 2,L with the help of physical anomalous dimensions we first compute, due to the lack of data, the input at a 0 based on Eq. (11), which then enters the RGE solution (18)- (20). Again, this leads to terms beyond NLO. In case of F 2 they are now of the order O(a 0 a s ) and O(a 2 0 ), i.e., even more relevant than in case of PDFs since a 0 > a s .
To test if the entire difference between the two evolution methods shown in Fig. 6 is caused by these formally higher order contributions, one can easily remove all O(a_s^2), O(a_0 a_s), and O(a_0^2) contributions from our results. Indeed, the scale evolution based on physical anomalous dimensions and the calculation of F_2,L from PDFs then yield exactly the same results also at NLO accuracy. We note that this way of computing properly truncated physical observables from scale-dependent PDFs beyond LO accuracy was put forward some time ago in Refs. [28,29] but was not pursued any further in practical calculations.
Another interesting aspect to notice from Fig. 6 is the size of the NLO corrections illustrated in the lower panels, in particular for F_2 in the small x region. For this comparison, LO results refer to the same input structure functions F_2,L as used to obtain the NLO results but now evolved at LO accuracy, i.e., by truncating the evolution operator in Eqs. (18)-(20) at L^(0)_K. At first sight the large corrections appear to be surprising, given that global PDF fits in general lead to acceptable fits of DIS data even at LO accuracy [13]. However, this is usually achieved by exploiting the freedom to have different sets of PDFs at LO and, say, NLO accuracy. The framework based on physical anomalous dimensions does not provide this option, as the input for the scale evolution is, in principle, fully determined by experimental data, and only the value of the strong coupling can be adjusted at any given order. In this sense it provides a much more stringent test of the underlying framework and perhaps a better sensitivity to, for instance, the possible presence of non-linear effects in the scale evolution in the kinematic regime dominated by small x gluons.
Fig. 7: Same as Fig. 6 but now for the pair of observables F_2 and F_D ≡ dF_2/dt.
In Fig. 7 we show the corresponding results for the scale dependence of the DIS structure function F 2 and its slope F D = dF 2 /dt. Again, any differences between the scale evolution performed with physical anomalous dimensions and based on PDFs are caused by formally higher order terms O(a 2 s ), O(a 0 a s ), and O(a 2 0 ), which can be removed with the same recipe as above. As for {F 2 , F L }, NLO corrections are sizable in the small x region due to numerically large contributions to K DD from the QCD beta function.
Summary
We have presented a phenomenological study of the QCD scale evolution of deep-inelastic structure functions within the framework of physical anomalous dimensions. The method is free of ambiguities from choosing a specific factorization scheme and scale, as it does not require the introduction of parton distribution functions. Explicit results for the physical evolution kernels needed to evolve the structure function F_2, its Q^2 slope, and F_L have been presented up to next-to-leading order accuracy.
It was shown that any differences with results obtained in the conventional framework of scale-dependent quark and gluon densities can be attributed to the truncation of the perturbative series at a given order in the strong coupling. At next-to-leading order accuracy the numerical impact of these formally higher order terms is far from negligible but, if desired, such contributions can be systematically removed. A particular strength of performing the QCD scale evolution based on physical anomalous dimensions rather than auxiliary quantities such as parton densities is that the required initial conditions are completely fixed by data and cannot be tuned freely in each order of perturbation theory. Apart from a possible adjustment of the strong coupling, this leads to easily testable predictions for the scale dependence of structure functions and also clearly exposes the relevance of higher order QCD corrections in describing deep-inelastic scattering data. Next-to-leading order corrections have been demonstrated to be numerically sizable, which is not too surprising given that the physical evolution kernels absorb all known large higher order QCD corrections to the hard scattering Wilson coefficients.
Once high precision deep-inelastic scattering data from future electron-ion colliders become available, an interesting application of our results will be to unambiguously quantify the size and relevance of non-linear saturation effects caused by an abundance of gluons with small momentum fractions. To this end, one needs to observe deviations from the scale evolution governed by the physical anomalous dimensions discussed in this work. The method of physical anomalous dimensions can also be used for a theoretically clean extraction of the strong coupling and is readily generalized to other processes such as polarized deep-inelastic scattering or inclusive one-hadron production.
3. F 2 and F L at small x using collinearly-improved BFKL resummation

3.1. Structure functions within the BFKL framework

At small x and center-of-mass energy s = Q 2 /x, we can apply high-energy factorization and write the structure functions F I , I = 2, L, as a convolution of the photon and proton impact factors with the gluon Green function, with q ≡ q 2 ⊥ . Φ P is the non-perturbative proton impact factor, which we model with a simple ansatz in which we introduce two free parameters (δ and Q 0 ) and a normalization C. Φ I is the impact factor associated with the photon, which we treat at leading order (LO); in its explicit expression Ω 2 = (11 + 12ν 2 )/8, Ω L = ν 2 + 1/4, and the strong coupling α s is fixed at the renormalization scale µ 2 . In the present work we will also use the kinematically improved impact factors proposed in [30,31], which include part of the higher order corrections by considering exact gluon kinematics. Their implementation requires replacing the functions c I (ν) by c̃ I (γ, ω); in the latter, ψ(γ) is the logarithmic derivative of the Euler Gamma function, ξ = 1 − 2γ + ω, and ω is the Mellin variable conjugate to x in the definition of the gluon Green function F, see Eq. (28) below. The main difference between these impact factors is that the LO ones roughly double the value of their kinematically improved counterparts in the region with small |ν|, while being very similar for |ν| ≥ 1.
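As a point of reference — this is the generic k_T-factorization structure of this class of expressions, not the precise formula of this work — the structure functions can be written as

```latex
F_I(x,Q^2) \;\propto\; \int \frac{d^2\mathbf{k}}{\pi \mathbf{k}^2}
\int \frac{d^2\mathbf{q}}{\pi \mathbf{q}^2}\,
\Phi_I(\mathbf{k},Q)\,\Phi_P(\mathbf{q})\,
\mathcal{F}(x,\mathbf{k},\mathbf{q}), \qquad I = 2, L,
```

with Φ I the photon impact factor, Φ P the proton impact factor, and F the gluon Green function.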
The gluon Green function can be written in Mellin form, with ᾱ s = α s N c /π. The collinearly improved BFKL kernel introduced in Eq. (28) is an operator consisting of a diagonal (scale-invariant) piece χ(γ), plus a term χ RC (γ) proportional to β 0 which contains the running coupling corrections of the NLO kernel [32]. The precise form of the NLO kernel χ 1 can be found in [19,33]. The resummation of collinear logarithms of order ᾱ s 3 and beyond is realized by an additional term [19,34,35]. Our final expression for the structure functions, Eq. (33), contains the functions M 2 and M L , which can be found in [19]. For the kinematically improved version of F I we replace c I (ν) by c̃ I (1/2 + iν, χ(1/2 + iν)). In Eq. (33) the scale of the running coupling has been set to µ 2 = QQ 0 . Building on the work of [36], we found in [19] that in order to obtain a good description of the Q 2 dependence of the effective intercept λ of F 2 for x < 10 −2 , it is very useful to operate with non-Abelian physical renormalization schemes using the Brodsky-Lepage-Mackenzie (BLM) optimal scale setting [37] within the momentum space (MOM) physical renormalization scheme [38]. For technical details on our precise implementation we refer the reader to [19] (see also [39] for a review on the subject and [40] for a related work). More qualitatively, in these schemes the pieces of the NLO BFKL kernel proportional to β 0 are absorbed in a new definition of the running coupling in order to remove the infrared renormalon ambiguity. Once this is done, the residual scheme dependence in this framework is very small. In order to describe the data at small Q 2 , we also found it convenient [19] to introduce an analytic parametrization of the running coupling in the infrared proposed in [41].
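For orientation, the leading-order scale-invariant BFKL eigenvalue — a standard result quoted here for reference, not reproduced from this work — reads

```latex
\chi_0(\gamma) \;=\; 2\psi(1) \;-\; \psi(\gamma) \;-\; \psi(1-\gamma),
```

with ψ the digamma function; the collinear improvement of [19,34,35] modifies its singular behavior near γ = 0 and γ = 1.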
Comparison to DIS experimental data
In the following we compare our results with the experimental data for F 2 and F L . Let us first compare the result obtained in [19] for the logarithmic derivative λ = d log F 2 /d log(1/x) using Eq. (33) with a LO photon impact factor against our new calculation using the kinematically improved one. In Fig. 8 we present our results with the values of our best fits for both types of impact factors and compare them with the H1-ZEUS combined data [12] for x < 10 −2 . The values of the parameters defining the proton impact factor in (23) and the position of the (regularized) Landau pole for the strong coupling (we use n f = 4) are δ = 8.4, Q 0 = 0.28 GeV, Λ = 0.21 GeV for the LO case and δ = 6.5, Q 0 = 0.28 GeV, Λ = 0.21 GeV for the kinematically improved one (note that the normalization C does not contribute to this quantity).

Figure 8. Fit to λ for F 2 with the LO photon impact factor (solid line) and the kinematically improved one (dashed line). The data set has been extracted from [12].
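The effective intercept λ shown in Fig. 8 is the logarithmic slope d log F 2 /d log(1/x). As a minimal illustration of how such a slope is extracted from tabulated values (toy numbers, not the H1-ZEUS data), one can use a finite difference:

```python
import math

def effective_intercept(x1, f1, x2, f2):
    """Finite-difference estimate of lambda = d log F2 / d log(1/x)
    from two measurements (x1, F2(x1)) and (x2, F2(x2)) at fixed Q^2."""
    return (math.log(f2) - math.log(f1)) / (math.log(1.0 / x2) - math.log(1.0 / x1))

# Toy input: a pure power law F2 ~ x^(-0.3), for which the estimator
# must recover lambda = 0.3 regardless of the chosen x values.
lam = effective_intercept(1e-3, 1e-3 ** -0.3, 1e-4, 1e-4 ** -0.3)
print(round(lam, 6))  # → 0.3
```

In practice λ depends on Q² through the fitted kernel; the finite difference above is simply the discrete analogue of the derivative plotted in the figure.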
The LO impact factor generates lower values than the kinematically improved one in the high Q 2 region and slightly higher ones when Q 2 ≲ 2 GeV 2 . It is interesting to see how the approach presented here allows for a good description of the data in a very wide range of Q 2 , not only for high values, where the experimental uncertainties are larger, but also in the non-perturbative regions, thanks to our treatment of the running of the coupling. Encouraged by these positive results, we now turn to investigate more differential distributions. We select data with fixed values of x and compare the Q 2 dependence of our theoretical predictions with them, now fixing the normalization to C = 1.50 for the LO impact factor and C = 2.39 for the kinematically improved one. Our results are presented in Fig. 9.

Figure 9. Study of the dependence of F 2 (x, Q 2 ) on Q 2 using the LO photon impact factor (solid lines) and the kinematically improved one (dashed lines). Q 2 runs from 1.2 to 200 GeV 2 .

The equivalent comparison to data, this time fixing Q 2 and looking into the evolution in the x variable, is shown in Fig. 10. We observe that our predictions give a very accurate description of the data for both types of impact factors. Let us remark that the values of the parameters in this fit are in line with the theoretical expectations for the proton impact factor, since Q 0 is very similar to the confinement scale and the value of δ sets the maximal contribution from the impact factor in that region. This is reasonable given that the proton has a large transverse size.

Figure 10. Study of the dependence of F 2 (x, Q 2 ) on x using the LO photon impact factor (solid lines) and the kinematically improved one (dashed lines). Q 2 runs from 1.2 to 120 GeV 2 .
The longitudinal structure function is an interesting observable which is very sensitive to the gluon content of the proton. We now present our predictions for F L using the best values for the parameters previously obtained in the fit of F 2 . We will see that the agreement with the data is very good. First, Q 2 is fixed and the x dependence is investigated in Fig. 11. The experimental data have been taken from [43]. To present the Q 2 dependence it is convenient to calculate, for each bin in Q 2 , the average value of x, see Fig. 12. In some sense this is a plot similar to the one previously presented for λ in the F 2 analysis, and we can see that the effect of using different types of impact factors is to generate a global shift in the normalization. Again we note that we obtain an accurate description of the transition from high to low Q 2 , which was one of the main targets of our work.

Figure 11. Fit to F L with the LO photon impact factor (solid lines) and the improved one (dashed lines). The experimental data are taken from [43].
Predictions for future colliders
While our predictions for the structure functions are in agreement with the data from the HERA collider experiments H1 and ZEUS, these observables are too inclusive to provide unambiguous evidence for BFKL evolution (for other recent studies in this context see [42]). Fits of comparable quality can be obtained by both DGLAP evolution and saturation models, see e.g. [43,44]. In order to distinguish among different parton evolution pictures, new collider experiments are needed, such as the proposed Electron-Ion Collider (EIC) at BNL/JLab (USA) [1] and the Large Hadron Electron Collider (LHeC) at CERN (Switzerland) [45], which will be able to measure both F 2 and F L at unprecedentedly small values of Bjorken x. In Fig. 13 we present two studies with our predictions for F 2 and F L down to values of x = 10 −6 .

Figure 12. The proton structure function F L as a function of Q 2 . The average x values for each Q 2 of the H1 data (black) are given in Figure 13 of [43]. ZEUS data are taken from [44]. The solid line represents our calculation with the LO photon impact factor and the dashed line the one using the kinematically improved one.

Figure 13. Predictions for F 2 (left) and F L (right) for the LHeC. On the left plot, the curve with Q 2 = 10 GeV 2 can be compared with Figure 4.13 of [45]. Simulated measurements for F L in the kinematic range plotted here (right) can be found in Figure 3.7 of the same reference.
Conclusions
We have presented an application of the BFKL resummation program to the description of the x and Q 2 dependence of structure functions as extracted from Deep Inelastic Scattering data at HERA. We have also provided some predictions for these observables at future colliders. In order to obtain the correct dependence on the virtuality of the photon at high values of the scattering energy, we have included in the BFKL kernel the main collinear contributions to all orders. We have also used optimal renormalization and an analytic running coupling in the infrared in order to accurately describe the regions of low Q 2 .
Summary
Deep-inelastic scattering (DIS) experiments make it possible to explore the proton in terms of its QCD content, i.e., quarks and gluons. At small values of x, the description in terms of linear DGLAP evolution and parton distribution functions is expected to break down, and non-linear effects, associated with high gluon densities, are believed to set in. To pin down such effects in DIS, new collider experiments are needed which allow both structure functions F 2 and F L to be measured with high accuracy in both electron-proton and electron-nucleus scattering. DGLAP evolution formulated in terms of physical evolution kernels allows for a direct evolution of structure function doublets. Apart from removing scale- and scheme-dependence in the description of structure functions, it further reduces the number of free parameters used in the parametrization of non-perturbative initial conditions. BFKL evolution, on the other hand, directly describes the evolution in x and hence drives the system into the non-linear regime, promising higher sensitivity to non-linear effects. As it is less explored than DGLAP evolution, we took a first step towards such applications by confronting BFKL evolution with the combined HERA data. In particular, we demonstrated that NLO BFKL evolution is capable of describing the combined HERA data if the NLO BFKL kernel is supplemented with collinear resummation and optimal scale setting for the QCD running coupling is used.
Compliant surface after ACL reconstruction and its effects on gait
Previous studies of gait analysis in patients following reconstructive anterior cruciate ligament (ACL) surgery have shown changes in kinematics, kinetics and energy patterns in the lower limb. These patients usually perform compliant-surface training during clinical treatment. The purpose of this study was to evaluate the changes in selected gait kinematic parameters following ACL reconstruction while walking on an unstable surface. We tested 16 subjects: eight patients who underwent ACL reconstruction, at four weeks after the surgical intervention, and eight healthy subjects (control group) matched by age and gender. Participants walked at a self-selected comfortable speed on an 8 m walkway while sagittal-plane kinematic data of the principal lower limb joints (hip, knee and ankle) were collected using 60-Hz cameras. We compared the joint angles under three conditions: (A) walking on stable ground, (B) walking on a foam mat (5 cm thick; 33 kg m −3 density) and (C) walking back on the stable ground. Results showed that ACL patients were slower and had smaller range of motion at all joints as compared to the control group under all conditions; however, repeated exposure to an unstable surface may help induce gait changes in such patients. Further investigation is necessary to expand our understanding and may support the development of more effective rehabilitation treatments.
Introduction
Changes in the gait patterns of subjects following anterior cruciate ligament (ACL) reconstruction have been assessed in a number of studies (BULGHERONI et al., 1997; DEVITA et al., 1997; KNOLL et al., 2004; HART et al., 2009; MORAITI et al., 2009; HART et al., 2010; GAO; ZHENG, 2010; SCANLAN et al., 2010; TSIVGOULIS et al., 2011), all using different techniques. Studies have demonstrated that these individuals walk with different gait parameters in the lower extremity as compared to healthy individuals, such as joint extensor/flexor torques, step length and walking base, joint excursions and muscle activity. These gait patterns may develop as a result of muscle adaptations and neuromuscular reprogramming, possibly in response to pain or instability, to stabilize the knee and to prevent re-injury during gait (FERBER et al., 2002; WEXLER et al., 1998). There is consensus that these adaptations are beneficial to these individuals because they reduce anterior displacement of the tibia relative to the femur and therefore reduce stress on the knee joint, while also enabling the subjects to perform the desired movement. It is likewise generally agreed that the adaptations are caused by subconsciously learned neuromuscular strategies owing to the injury. However, studies evaluating health-related quality of life after ACL reconstruction showed that these patients reported good conditions (MANSSON et al., 2011; MÖLLER et al., 2009).
Injuries to the ACL represent a significant portion of the knee injuries sustained by athletes in sport as well as by otherwise healthy individuals (CERULLI et al., 2003; CHAUDHARI et al., 2008; DELAHUNT et al., 2012). As ACL reconstruction becomes a more predictable and more frequently performed operation, there is an increasing desire on the part of surgeons and patients alike not only for a more rapid return to sporting activities but also to activities of daily living, including work and study (FELLER et al., 2001; MÖLLER et al., 2009). The surgical reconstruction is performed to reestablish the mechanical properties of the knee in the hope of returning the patient to an active lifestyle. Most of the advances responsible for allowing the return to pre-injury activity have resulted from improvements in surgical techniques and rehabilitation procedures. A scientifically based and well-designed rehabilitation program plays a vital role in the functional outcome of the ACL-reconstructed individual. Rehabilitation following ACL reconstruction has changed dramatically over the past few decades. The trend toward innovative rehabilitation of the ACL-reconstructed knee is partly the result of the improved outcomes documented with accelerated rehabilitation compared with more conservative programs. The inclusion of these patients in rehabilitation programs is strongly recommended and produces better functional outcomes, overcoming many of the complications that follow ACL reconstruction (prolonged knee stiffness, limitation of complete extension, delay in strength recovery, anterior knee pain) (SHELBOURNE; NITZ, 1992; SHELBOURNE; KLOTZ, 2006).

The ideal situation is one in which the patient with ACL deficiency undergoing surgical reconstruction will ultimately have excellent stability, full range of motion and strength, and normal function. Treatment techniques involving exercises on unstable surfaces may induce compensatory muscle activity and provide proprioceptive training that could improve knee stability and increase the probability of returning patients to high-level physical activity. Therefore, this study aimed to investigate the changes in selected gait kinematic parameters in the lower limb of ACL-reconstructed individuals during walking on an unstable surface and to compare these findings with an age-matched, injury-free control group.
Material and methods
The study was carried out with a group of 16 participants: eight subjects with ACL reconstruction (double semitendinosus tendon technique) (five males and three females) and eight healthy subjects with no history of musculoskeletal pathology, matched by age, gender, BMI (body mass index) and activity level. The mean age, body weight, and body height of the ACL-reconstructed subjects were 25.5 years (SD 7.1 years), 61.3 kg (SD 27.1 kg), and 1.68 m (SD 0.1 m), respectively. The ACL-reconstructed subjects were tested on average 32 days (± 6 days) after the surgical intervention. All of them had been advised by their surgeons to resume full weight-bearing on the affected limb, and none were taking medication. No subject reported a history of major back, hip, or ankle pathology/injury or a history of neurologic disease. Ethics approval was obtained from the University's Review Board for Health Sciences Research Involving Human Subjects (0049.0.186.000-06), and all subjects provided written informed consent before testing. This was an exploratory cross-sectional study.
The analysis of gait features was performed using three video cameras for the recording of the kinematic data (60 Hz sampling rate). Eight passive markers were placed at the following positions: (1) right greater trochanter; (2) left greater trochanter; (3) right femoral condyle; (4) left femoral condyle; (5) right lateral malleolus; (6) left lateral malleolus; (7) point between the heads of the second and third metatarsals (right side); and (8) point between the heads of the second and third metatarsals (left side). Each subject was asked to perform five trials of walking at their natural cadence on two different surfaces: (1) stable ground, and (2) a foam mat (5 cm thick and 33 kg m −3 density). After walking on the unstable surface (foam mat), all subjects went back to the stable ground and walked for another five trials; this was taken as our third testing condition.
The data obtained from the camera recording of the markers allowed the reconstruction of the lower limb segment model using dedicated gait analysis software (APAS - Ariel Performance Analysis System - Ariel Dynamics Inc.). The raw data were low-pass filtered to eliminate frequency components above 10 Hz. A single gait cycle (complete stride) was identified, and all the corresponding data sets were then reduced to 100 points. Joint angular positions were calculated at the ankle, knee and hip joints, and the joint range of motion during the stride was calculated to compare the two groups among conditions (pre-exposure, exposure and post-exposure). Custom computer algorithms for data analysis were written in IGOR Pro (Wavemetrics Inc.). Two different groups were defined: (P) ACL-reconstructed subjects, and (C) control subjects. Total joint range of motion was calculated at the ankle (angle between foot and leg segments), knee (angle between leg and thigh segments) and hip (angle between the thigh segment and the vertical axis) (see PERRY; BURNFIELD, 2010). The average trend for all variables was computed for each group. Means of individual dependent variables were analyzed using a one-way repeated measures analysis of variance (ANOVA), with group (patients (PG) and controls (CG)) as a between-subject factor, and exposure condition (pre-exposure (PR), exposure (foam mat) (EX), post-exposure (PO)) as the within-subject factor. Subjects were treated as a random factor. For the analysis we compared the patients' affected side, matched with the respective control side. For all analyses, statistical significance was tested using an alpha value of 0.05.
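The stride time-normalization and range-of-motion computation described above can be sketched as follows (an illustrative reimplementation, not the APAS/IGOR Pro code used in the study; the 10 Hz filtering step is omitted):

```python
import numpy as np

def normalize_cycle(angle, n_points=100):
    """Resample one gait cycle (joint-angle samples over a stride)
    onto n_points evenly spaced instants (0-100% of the cycle)."""
    t_old = np.linspace(0.0, 1.0, len(angle))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_new, t_old, angle)

def range_of_motion(angle):
    """Total joint excursion (max minus min) over the stride."""
    return float(np.max(angle) - np.min(angle))

# Toy knee trace sampled at 60 Hz over a ~1.1 s stride (66 frames),
# oscillating between roughly 5 and 65 degrees.
t = np.linspace(0.0, 1.0, 66)
knee = 35.0 - 30.0 * np.cos(2.0 * np.pi * t)
cycle = normalize_cycle(knee)
print(len(cycle), round(range_of_motion(cycle)))  # → 100 60
```

Reducing every stride to the same 100-point grid is what allows averaging angle curves across trials and subjects before the repeated-measures ANOVA.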
Results and discussion
Stride length was similar between the healthy and patient groups (Table 1); both groups walked approximately 1.23 m to complete a gait cycle. The initial effect of walking on the unstable surface was to increase stride length. In addition, patients were slower than the control subjects in all conditions, although both groups walked faster in the exposure condition (mean = 1.35 m s −1 (PG), mean = 1.53 m s −1 (CG)). The ANOVA showed a main effect of group and condition on joint range of motion (Table 1). Patients produced a significantly smaller (p = 0.0266) range of motion at the knee joint (mean = 34.9°) as compared to the control subjects (mean = 41.7°). Exposure condition influenced the hip joint (p < 0.0001), as the range of motion for the non-exposure conditions (mean = 32.1° (PG), mean = 33.8° (CG)) was smaller than that for the exposure condition (mean = 35.3° (PG), mean = 39.4° (CG)). The angular kinematics of the patients indicated that their range of motion was smaller than that of the healthy control subjects throughout the gait cycle. With respect to joint angles (Figures 1, 2 and 3), the patients showed changes as a consequence of the ACL reconstruction, mainly at the knee and ankle joints. During all phases of the gait cycle and under all conditions, both of these joints demonstrated reduced excursions with respect to the control group. However, the functional pattern of the flexion-extension angle was maintained. Ankle angular values (Figure 1) of the ACL-reconstructed group were slightly different among the conditions, particularly from early to late swing. Exposure caused the subjects to walk in more flexed knee and ankle positions, particularly at the early and mid-swing phases, as compared to the non-exposure conditions. The ACL-reconstructed and control groups' knee-position curves paralleled one another throughout stance and followed a flexion-extension-flexion pattern (Figure 2). After exposure, the kinematics of the patients' knee showed a partial return to pre-exposure values. Hip kinematics (Figure 3) was less affected by the exposure in the extension phase; essential changes were seen at late swing. Nevertheless, the patient group demonstrated a more flexed pattern in the swing phase post-exposure as compared to pre-exposure.
Before the exposure, joint kinematic patterns in the ACL patients had features similar to those in the control subjects, but with lower magnitude in flexion and extension. Exposure increased joint flexion responses for all subjects, whereas post-exposure control patterns approached pre-exposure patterns; for the patient group this was not always the case, except at the hip joint, where angular values of the ACL-reconstructed group approached those of the normal control subjects.
Studies on postural control adaptability to floor oscillations have suggested that postural responses are partially controlled by anticipatory mechanisms affecting joint movement (BUCHANAN; HORAK, 2001, 2003); furthermore, ACL patients may show adaptations to avoid knee instability (ZHANG et al., 2003). Other studies have shown lower extremity relative phase dynamics adjustments during walking and running, and altered gait patterns in ACL-reconstructed patients, evidencing a compensatory mechanism applied by this population (FERBER et al., 2002; KURZ et al., 2005; GAO et al., 2012). These deficits are identified as initial biomechanical gait responses to injury, surgery, and partial rehabilitation, also suggesting learning periods for gait adaptations that might be related to the learning process and the development of neuromuscular adaptations. Rehabilitation programs applying perturbation training showed satisfactory results, improving knee stability and strength and restoring coordinated movement patterns (WILK et al., 2003, 2006; MENDIGUCHIA et al., 2011). Lower extremity kinematics of ACL-reconstructed individuals, before and after exposure, were fairly different from those of normal gait. The most notable difference was the tendency toward increased knee flexion at the early-mid swing phase compared with the flexion response in healthy individuals. Despite the reduced number of subjects participating in this study, our results suggested that repeated exposure to an unstable surface allowed behavior to approach normal gait patterns, indicating that this kind of exposure could result in relevant performance modifications. Also, therapy programs targeted toward a symmetrical pattern could lead to normal gait and functional activities. Generally, repeated exposure resulted in patterns closer to normal when comparing pre and post results, and these modifications could possibly be emphasized if exposure time and repetitions were prolonged. Surgical treatment cannot give satisfactory results without intensive and comprehensive rehabilitation, as well as a more individualized program.
Conclusion
Distinct adaptations to ACL injury and reconstruction have been observed in lower extremity kinematics, kinetics and EMG patterns (DEVITA et al., 1997; DELAHUNT et al., 2012; WEBSTER et al., 2012; HART et al., 2010; GAO et al., 2012); therefore, an appropriate rehabilitation program can be a key factor in regaining a normal pattern. An ACL rehabilitation program often involves exercising on an unstable surface such as the one presented here, so biomechanical analyses of this type of testing can be used to improve rehabilitation protocols and promote a more individualized rehabilitation program (FELLER et al., 2004; SHELBOURNE; KLOTZ, 2006). Taking into account that the exposure introduced a challenge to subjects, postural compensatory mechanisms tended to be more evident in the control subjects than in the patients. However, further controlled investigations of rehabilitation procedures are necessary to better understand how these techniques can best be applied and manipulated.
Figure 1. Ankle angular displacement for patients (A) and control (B) subjects under the three conditions tested: Pre-exposure (PR), Exposure (EX) and Post-exposure (PO).
Figure 2. Knee angular displacement for patients (A) and control (B) subjects under the three conditions tested: Pre-exposure (PR), Exposure (EX) and Post-exposure (PO).
Figure 3. Hip angular displacement for patients (A) and control (B) subjects under the three conditions tested: Pre-exposure (PR), Exposure (EX) and Post-exposure (PO).
Identification of ActivinβA and Gonadotropin Regulation of the Activin System in the Ovary of Chinese Sturgeon Acipenser sinensis
Simple Summary

Activin is a dimeric growth factor with diverse biological activities in vertebrates. This study aimed to investigate the regulatory role of the activin signaling pathway in the ovary of cultured Acipenser sinensis. One activinβA subunit with a full-length cDNA sequence of 1572 base pairs was identified. Multiple sequence alignment and phylogenetic analyses indicated the conserved evolution of ActivinβA from mammals to fish species. Transcripts of activinβA were distributed ubiquitously in ovarian and non-ovarian tissues. In vitro incubation with human recombinant Activin A stimulated not only the activin system-related gene transcription of activinβA, follistatin, the receptors activinRIIA and activinRIIB, and smad2, smad3, and smad4, but also the ovary development-related genes cyp19a1a, erα, and erβ. Gonadotropin activated activin signaling by recruiting activinβA, follistatin, activinRIIA, and smad2. These results are helpful not only for the molecular exploration of activin signaling in fish species, but also for the regulation of ovarian maturation in A. sinensis.

Abstract

Activin is a dimeric growth factor with diverse biological activities in vertebrates. This study aimed to investigate the regulatory role of the activin signaling pathway in the ovary of the endangered, cultured sturgeon species Acipenser sinensis. One activinβA subunit was identified, with a full-length complementary DNA (cDNA) sequence of 1572 base pairs. Multiple sequence alignment suggested that ActivinβA shared high sequence identities with its counterparts in four other sturgeon species. Phylogenetic analysis indicated the conserved evolution of ActivinβA among vertebrates from mammals to fish species. Transcripts of activinβA were distributed ubiquitously in the liver, kidney, intestine, ovary, midbrain, hypothalamus, and pituitary, with the highest transcription found in the pituitary.
In Chinese sturgeon ovarian cells, in vitro incubation with human recombinant Activin A stimulated the activin system-related gene transcription of activinβA, follistatin, the receptors activinRIIA and activinRIIB, and the drosophila mothers against decapentaplegic proteins (smads) smad2, smad3, and smad4. mRNA levels of the ovary development-related genes cyp19a1a (aromatase) and the estrogen receptors erα and erβ were enhanced by Activin A or human chorionic gonadotropin (hCG) incubation. Furthermore, 15 IU/mL hCG treatment increased the transcription levels of activinβA, follistatin, activinRIIA, and smad2. This suggested that the activin system is functional in the regulation of ovary development in Chinese sturgeon, possibly under the regulation of gonadotropin, by recruiting activinβA, follistatin, activinRIIA, and smad2. These results are helpful for the molecular exploration of activin signaling in fish species, as well as for the regulation of ovarian maturation in A. sinensis.
Introduction
Activin belongs to the transforming growth factor β (TGFβ) superfamily, and was originally identified from follicular fluid as a gonadal peptide that stimulates follicle-stimulating hormone (FSH) secretion in pituitary cells [1,2]. It is a dimeric glycoprotein composed of two β subunits, namely the βA and βB subunits, and the dimerization of these two subunits leads to the formation of an Activin homodimer (Activin A with βA:βA subunits or Activin B with βB:βB subunits) or heterodimer (Activin AB with βA:βB subunits) [3]. Activin signaling is mediated through specific cell surface Activin type II receptors (either ACTRIIA or ACTRIIB), which then recruit and phosphorylate Activin type I receptors (ACTRIB, also known as Activin receptor-like kinase 4 (ALK4)) [3]. Subsequently, these receptors activate the downstream drosophila mothers against decapentaplegic protein (SMAD) signaling cascade by promoting the phosphorylation of SMAD2 and SMAD3, which then form heterotrimeric complexes with the common SMAD4 [4]. These complexes finally translocate to the nucleus and modulate gene expression as transcription factors [5].
In vertebrates, activins and their receptors exhibit a widespread tissue distribution and act as autocrine/paracrine factors for the regulation of diverse physiological activities, including tissue differentiation [6,7], wound repair [8,9], bone metabolism [10], immune responses [11], local regulation of pituitary hormones [12], spermatogenesis [13], and folliculogenesis [14]. In fish, both the βA and βB subunits of activin have been identified in several species, and their distribution in reproductive tissues has been demonstrated [15-19]. The activin βA and βB subunits were both expressed in the thecal cells of follicles in the rainbow trout Oncorhynchus mykiss [16]. Moreover, paracrine roles of activin in ovarian functions have been largely reported in zebrafish Danio rerio [20-22] and goldfish Carassius auratus [23]. In zebrafish, both activin and its type IIA receptor were expressed in the ovary, and both recombinant goldfish Activin B and recombinant human Activin A had potent stimulatory effects on final oocyte maturation [24,25]. Furthermore, the effect of Activin on final oocyte maturation could be blocked by co-treatment with the activin-binding protein Follistatin [24,25].
Using the zebrafish model, studies showed that pituitary gonadotropins such as hCG (a homolog of luteinizing hormone (LH) in teleosts) had a positive regulatory effect on the activin subunits, the type IIA receptor, and the activin-binding protein follistatin in both time- and dose-dependent manners in follicle cells [17,22,26]. Interestingly, Follistatin also suppressed hCG-induced zebrafish oocyte maturation, suggesting activin as a downstream mediator of hCG, which functioned specifically via the zebrafish LH receptor (Lhr) [27]. A series of experiments in zebrafish demonstrated that gonadotropin and activin promoted oocyte maturational competence, and their stimulatory effects could both be suppressed by follistatin [26].
The Chinese sturgeon Acipenser sinensis is a large-sized anadromous fish distributed in the Yangtze River and the East China Sea, and it is now critically endangered [28]. The natural spawning activities of Chinese sturgeon were interrupted for three consecutive years (2017-2019), which brought its natural population to the verge of extinction [29]. Controlled propagation has been successful in supporting species conservation [30]. However, it is rather difficult for breeding females to reach final sexual maturation due to a long period to sexual maturity (14-26 years) and a reproduction interval of 2-7 years. Therefore, a limited population of female broodstock is available for artificial propagation, which hampers the speed of species recovery. This study aimed to investigate the actions of gonadotropin and Activin on oocyte development in the Chinese sturgeon. The activin βA subunit was identified, and its sequence characterization and tissue distribution were further analyzed. In addition, in vitro incubation of ovarian cells with recombinant Activin A or hCG was performed to examine the transcriptional changes in activin signaling-related and oocyte development-related genes. These results should be meaningful not only for the molecular exploration of the activin system in fish species, but also for artificial regulation of ovarian maturation and species conservation of A. sinensis.
Experimental Fish and Sample Collection
The five-year-old, artificially propagated Chinese sturgeons (A. sinensis) (average body weight 4.37 ± 0.5 kg; average whole length 89.07 ± 10 cm) used in this study were cultured at the Taihu station, Yangtze River Fisheries Research Institute, Chinese Academy of Fishery Sciences. All fish handling procedures were performed with the approval of the Animal Care and Use Committee of the Yangtze River Fisheries Research Institute, Chinese Academy of Fishery Sciences (ID number YFI2021YHM01). Efforts were made to alleviate the suffering of the fish as much as possible.
Three female cultured Chinese sturgeons were anaesthetized with 0.05% MS222 (Sigma, Shanghai, China) and decapitated. Since the sturgeons were to be sacrificed, the number used was limited to three for the purpose of species resource conservation. Partial tissue samples of the liver, spleen, kidney, intestine, ovary, midbrain, hypothalamus, and pituitary were quickly dissected and preserved in RNAlater solution (Ambion, Austin, TX, USA). Samples were stored at 4 °C for 16 h and then kept in an ultralow freezer at −80 °C until RNA preparation for tissue distribution analysis. Another small piece of ovary was fixed in Bouin's solution for histological analysis. The rest of the ovary tissue was used for the subsequent in vitro culture experiment.
Histological Analysis
The ovary tissue fixed in Bouin's solution was embedded in paraffin, cut at 8 µm, and stained with hematoxylin and eosin (HE). Sections were observed under a light microscope (BX-51, Olympus, Tokyo, Japan) equipped with a digital camera (DP-73, Olympus).
Full-Length cDNA Sequence Cloning of ActivinβA
Total RNA of the Chinese sturgeon ovary was extracted with the RNeasy Plus Mini Kit (Qiagen, Dusseldorf, Germany) following the manufacturer's instructions. First-strand SMART cDNA was then amplified with the SMARTer® RACE 5′/3′ Kit (Takara, San Jose, CA, USA) as described. A fragmentary cDNA sequence of activinβA was retrieved from the ovary transcriptome database of Chinese sturgeon [31] and verified by PCR with the primer pair activin-F1/activin-R1 (Table 1). Subsequently, 5′ and 3′ RACE (rapid amplification of cDNA ends) together with two rounds of nested PCR were applied to obtain the remaining 5′ and 3′ partial sequences. For amplification of the 5′-end cDNA sequence, the first round of PCR was conducted using the first-strand SMART cDNA as the template and the primer pair activin-R1/UPM (Universal Primer Mix; Table 1). The obtained PCR product was then used as the template for the second round of PCR with the primer pair activin-R2/UPMS (Universal Primer Mix Short) (Table 1). The 3′-end cDNA sequence of activinβA was cloned similarly by two rounds of PCR with the primer pairs activin-F1/UPM and activin-F2/UPMS (Table 1), respectively.
Table 1. Primers used in this study.
Sequence Analysis
Nucleotide and amino acid sequence identities were searched with the BLAST program (NCBI, http://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 6 August 2021). Conserved domains were predicted in the Conserved Domain Database (NCBI, https://www.ncbi.nlm.nih.gov/cdd, accessed on 6 August 2021). Multiple amino acid sequence alignments were produced with the CLUSTAL X program (version 1.83) and refined with the GeneDoc software (version 2.7.0). The MEGA software (version X) was used for phylogenetic tree construction with the Maximum Likelihood method, based on the Poisson correction model with 1000 bootstrap replicates. The Activin sequence of Drosophila melanogaster was set as the outgroup root. All the amino acid sequences analyzed were downloaded from the NCBI website. The GenBank accession numbers were as follows:
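The Poisson correction model used for the tree construction estimates the evolutionary distance as d = −ln(1 − p), where p is the proportion of differing amino acid sites between two aligned sequences. A minimal sketch of this distance (not the authors' pipeline; the example sequences are hypothetical):

```python
import math

def poisson_corrected_distance(seq1: str, seq2: str) -> float:
    """Poisson-corrected amino acid distance d = -ln(1 - p), where p is the
    proportion of differing sites between two aligned, equal-length sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be pre-aligned to equal length")
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -math.log(1.0 - p)

# Hypothetical aligned peptide fragments, 1 difference over 10 sites (p = 0.1):
d = poisson_corrected_distance("MKVLAAGIVP", "MKVLSAGIVP")
```

For small p, the correction barely changes the raw p-distance (here d ≈ 0.105 versus p = 0.1); it matters for more divergent pairs, where multiple substitutions at one site become likely.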
Tissue Distribution Analysis
After total RNA extraction from the eight tissue samples preserved in RNAlater solution, reverse-transcribed cDNAs were obtained with the PrimeScript RT reagent Kit with gDNA Eraser (Takara, Kusatsu, Shiga, Japan) according to the described methods. Relative real-time PCR was performed for tissue distribution analysis. The PCR was performed in a volume of 20 µL with SYBR green real-time PCR master mix (Takara, Otsu, Shiga, Japan) on a QuantStudio 6 Flex real-time PCR system (Applied Biosystems, Foster City, CA, USA). The amplification protocol was as follows: 1 min at 95 °C, followed by 40 cycles of 15 s at 95 °C, 15 s at 54 °C, and 15 s at 72 °C. The housekeeping gene ef1α (ef1α-rF and ef1α-rR; Table 1) was chosen as the internal control, as suggested in [32]. The PCR amplification efficiency of each primer pair, including activin-rF/activin-rR, was evaluated by a standard curve. All primers used in this study met efficiency standards of 90-110% and R² ≥ 0.99. All samples were analyzed in triplicate, and relative transcription levels were calculated with the 2^−ΔCT method.
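The two acceptance quantities used here can be computed directly: amplification efficiency from the standard-curve slope (E = 10^(−1/slope) − 1, so a slope near −3.32 corresponds to ~100%), and relative transcription by the 2^−ΔCT method. A minimal sketch (function names are ours, not from the study):

```python
def primer_efficiency(slope: float) -> float:
    """Amplification efficiency (%) from a standard-curve slope:
    E = (10**(-1/slope) - 1) * 100; a slope of about -3.32 gives ~100%."""
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

def relative_level(ct_target: float, ct_ef1a: float) -> float:
    """Relative transcription level by the 2^-dCT method, normalized to ef1a."""
    return 2.0 ** -(ct_target - ct_ef1a)

eff = primer_efficiency(-3.32)      # ~100%, within the 90-110% criterion
level = relative_level(25.0, 24.0)  # target crosses 1 cycle after ef1a -> 0.5
```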
Human ActivinβA and hCG Treatment with In Vitro Ovarian Cell Culture
The freshly dissected ovary tissue of Chinese sturgeon was minced with stainless steel scissors and washed three times in DMEM medium (with FBS; Gibco, New York, NY, USA). The tissue was randomly dispersed into 6-well cell culture plates for treatment with recombinant human ActivinβA (R&D Systems, Minneapolis, MN, USA) or human gonadotropin hCG (Macklin, Shanghai, China), respectively. Three wells were treated as one group with the protein or hCG solutions.
The recombinant human ActivinβA was first dissolved in RNase-free sterile water at 10 µg/mL, and three graded concentrations of 50 ng/mL, 100 ng/mL, and 200 ng/mL were used for ovarian cell incubation [33]. The hCG powder was dissolved in RNase-free sterile water to 1300 IU/mL according to the manufacturer's instructions, and 15 IU/mL hCG was applied for ovarian cell incubation [17]. After 12 h of pre-incubation at 28 °C with 5% CO₂, the medium was discarded, and the cells were washed twice and incubated with the medium (control) or medium containing ActivinβA or hCG for 6 h. Ovarian cells treated in each well were collected for total RNA extraction. The treatment time was determined based on our preliminary experiment.
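The working concentrations follow from the dilution relation C₁V₁ = C₂V₂. A small sketch, assuming a hypothetical 2 mL final volume per well (the actual well volume is not stated in the text):

```python
def stock_volume_ul(stock_conc: float, final_conc: float, final_volume_ul: float) -> float:
    """Volume of stock needed for a dilution, from C1*V1 = C2*V2
    (stock_conc and final_conc in the same units)."""
    return final_conc * final_volume_ul / stock_conc

# ActivinbetaA: 10 ug/mL stock = 10000 ng/mL; assume 2000 uL of medium per well.
activin_ul = [stock_volume_ul(10000.0, c, 2000.0) for c in (50.0, 100.0, 200.0)]

# hCG: 1300 IU/mL stock diluted to 15 IU/mL in the same assumed volume.
hcg_ul = stock_volume_ul(1300.0, 15.0, 2000.0)
```

Under the assumed 2 mL volume this gives 10, 20, and 40 µL of ActivinβA stock and about 23 µL of hCG stock per well.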
Relative real-time PCR was conducted for related gene transcription analysis, as described above. The sequences of follistatin, activinRIIA, activinRIIB, and smad4 used for primer design were retrieved from the transcriptome data of Chinese sturgeon [31]. Primer sequences of smad2, smad3, cyp19a1a, erα, and erβ were taken from a previous study [34].
Statistical Analysis
All data are presented as mean ± SD. In the tissue distribution analysis, data were assessed by one-way analysis of variance (ANOVA) followed by Duncan's multiple range tests with SPSS 22.0 (SPSS Inc., Chicago, IL, USA). In the in vitro ovarian cell incubation experiment, an independent-samples Student's t-test was used, with Levene's test applied for equality of variances. A probability (p) of <0.05 was considered statistically significant.
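The two analysis designs (ANOVA across tissues; Student's t-test for treatment versus control) can be sketched by computing the underlying statistics directly. The triplicate values below are hypothetical, and significance is judged against tabulated critical values (F at α = 0.05 with 2 and 6 degrees of freedom ≈ 5.14; two-tailed t with 4 degrees of freedom ≈ 2.776) rather than exact p-values:

```python
from statistics import mean, variance

def one_way_anova_F(groups):
    """F statistic for one-way ANOVA: between-group MS / within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def student_t(a, b):
    """Independent-samples Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical triplicate relative-expression values for three tissues:
pituitary, hypothalamus, ovary = [3.1, 2.9, 3.0], [1.9, 2.1, 2.0], [1.1, 0.9, 1.0]
F = one_way_anova_F([pituitary, hypothalamus, ovary])  # compare with F_crit(2, 6) = 5.14

# Hypothetical treatment-vs-control comparison:
control, treated = [1.0, 1.1, 0.9], [1.8, 2.0, 1.9]
t = student_t(control, treated)  # compare |t| with t_crit(4) = 2.776
```

In practice a statistics package (as used in the study) also supplies the p-values and the Levene pre-test; the sketch only shows where the test statistics come from.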
Molecular Characterization of ActivinβA in Chinese Sturgeon
Histological analysis of the ovary tissue used in this study indicated that the oocytes were mainly in the cortical-alveolar stage (stage II) (Figure S1). The full-length cDNA sequence of activinβA (GenBank No. PQ118000) cloned from the ovary of Chinese sturgeon was 1572 bp, including a 206 bp 5′-terminal untranslated region (UTR), a 190 bp 3′-terminal UTR, and an open reading frame (ORF) of 1176 bp encoding a protein of 391 amino acids (aa). The deduced amino acid sequence was predicted to contain conserved domains of the transforming growth factor beta (TGF-β) propeptide (51-258 aa, underlined) and the TGF-β-like domain found in the Inhibin beta A chain (284-391 aa, boxed) (Figure S2).
Multiple amino acid sequence alignment showed that ActivinβA of Chinese sturgeon shared the highest sequence identity with that of Huso huso (99.23%), followed by the other two species in Acipenseridae, Acipenser ruthenus (98.72%) and Polyodon spathula (96.42%) (Figure 1). Further phylogenetic analysis showed that the analyzed vertebrate ActivinβA sequences formed two sub-clusters, a tetrapod cluster and a teleost fish cluster (Figure 2). ActivinβA of Chinese sturgeon was situated in the teleost fish cluster and shared the same branch with four other sturgeon species: Huso huso, Acipenser ruthenus, Acipenser oxyrinchus oxyrinchus, and Polyodon spathula. The sturgeon branch was further clustered with the ancient fish species Polypterus senegalus.
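Pairwise percent identity of the kind reported here (e.g., 99.23% to Huso huso) is simply the fraction of matching positions in the alignment. A minimal sketch with hypothetical aligned fragments (not the actual ActivinβA sequences):

```python
def percent_identity(aln1: str, aln2: str) -> float:
    """Percent identity between two aligned sequences of equal length,
    ignoring columns that are gaps in both sequences."""
    pairs = [(a, b) for a, b in zip(aln1, aln2) if (a, b) != ('-', '-')]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Hypothetical aligned fragments: 6 identical positions out of 7 compared.
ident = percent_identity("MKVL-AAG", "MKVL-SAG")
```

Alignment tools differ in how they treat single-sided gaps and terminal overhangs, so reported identities depend on the chosen convention as well as on the alignment itself.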
Tissue Distribution of ActivinβA in Chinese Sturgeon
Relative real-time PCR analysis demonstrated that activinβA mRNA of Chinese sturgeon was transcribed in liver, kidney, intestine, ovary, midbrain, hypothalamus, and pituitary tissues (Figure 3). The highest transcription level of activinβA was present in the pituitary, followed by the hypothalamus and ovary.
Effect of Human ActivinβA on Activin Signaling Pathway-Related Gene Transcription
Relative real-time PCR detection showed that incubation with 50 ng/mL recombinant human ActivinβA protein increased the mRNA levels of activinβA, follistatin, and activinRIIA in the in vitro ovary culture of Chinese sturgeon (p < 0.05) (Figure 4A). Additionally, activinRIIB transcription was significantly increased by the 100 ng/mL ActivinβA treatment (p < 0.05) (Figure 4A). The transcription of three smad genes was also investigated: smad3 transcription was increased by all three doses of ActivinβA incubation (p < 0.05) (Figure 4B), whereas increased mRNA levels of smad2 and smad4 were exhibited only in the 50 ng/mL ActivinβA treatment group (p < 0.05). Furthermore, 100 ng/mL ActivinβA led to an increase in cyp19a1a transcription, while mRNA levels of erα and erβ were enhanced by 50 ng/mL and 100 ng/mL ActivinβA incubation, respectively (p < 0.05) (Figure 4C).
Regulation of Activin Signaling Pathway-Related Genes by Gonadotropin
Treatment of the cultured Chinese sturgeon ovarian cells with hCG at 15 IU/mL caused a significant increase in the transcription of activinβA, follistatin, and activinRIIA (p < 0.05), with no significant change in activinRIIB transcription (p > 0.05) (Figure 5A). The smad2 mRNA level was increased by hCG incubation (p < 0.05), while no significant changes were found in the transcription levels of smad3 and smad4 (p > 0.05) (Figure 5B). Furthermore, hCG treatment led to significant enhancement of the mRNA levels of cyp19a1a, erα, and erβ (p < 0.05) (Figure 5C).
Discussion
In the present study, the βA subunit of activin was identified in the primitive species Chinese sturgeon, and sequence characterization suggested that it contains the typical conserved domains of the TGF-β superfamily. The spatial tissue distribution and the paracrine function of activinβA in the regulation of ovarian development of Chinese sturgeon were investigated as well. The results of this study will help to provide a theoretical basis and technological support for both ovary maturation regulation and species conservation of the endangered Chinese sturgeon.
The amino acid sequences of the Activin βA subunit have been recorded in five sturgeon species, as well as in the primitive fish species Polypterus senegalus (Figure 2). This suggests that the Activin βA subunit was conserved throughout evolution from mammals to fish, indicating its important and conserved physiological role in vertebrates. Earlier studies of activinβA in fish were limited to species such as zebrafish, goldfish, and rainbow trout; our research on activinβA in Chinese sturgeon enriches the molecular exploration of fish activin subunits. Subsequent spatial distribution analysis showed that activinβA was transcribed extensively in the tissues of Chinese sturgeon, including the pituitary and ovary (Figure 3). This result is in accordance with previous studies in both rodents [35-37] and fish models [15,38-40], revealing wide expression of activinβA both in the ovary and in non-ovarian tissues, such as the pituitary, placenta, liver, and different brain areas. This diverse distribution pattern corroborates the common finding that activin mainly serves as an autocrine/paracrine factor rather than an endocrine hormone [41]. Furthermore, the highest transcription of activinβA in the pituitary of Chinese sturgeon might indicate paracrine modulation of pituitary function by the activin system.
In goldfish, iodinated human Activin A bound to ActRIIB-transfected cells, and this binding could be completely blocked by unlabeled Activin, indicating the specific affinity of human Activin for fish ActRIIB [42]. Human recombinant Activin A incubation also elevated actRIB mRNA levels in pituitary cells of grass carp Ctenopharyngodon idellus [43]. Another report in tilapia demonstrated that human Activin A stimulated the expression of glycoprotein hormone, FSH, and LH mRNAs in pituitary cells [44]. Therefore, human Activin A is evidently useful for probing the activin system mediated by Activin receptors in fish species. Herein, 50 ng/mL human Activin A incubation promoted ovarian transcription of follistatin, activinRIIA, activinRIIB, smad2, smad3, and smad4 (Figure 4A,B), which indicates the existence of autocrine regulation of the activin signaling system in the ovary of Chinese sturgeon. Furthermore, Activin incubation stimulated the mRNA levels of cyp19a1a, erα, and erβ, which reinforces the ovary development regulatory role of the activin signaling pathway in Chinese sturgeon.
In previous studies of zebrafish, hCG upregulated Activin A protein expression [25] and activin βA1 and actRIIA mRNA levels [17,22]. Furthermore, the Activin-binding protein Follistatin blocked hCG-induced oocyte maturation in zebrafish [24,25]. The stimulatory effect of gonadotropin on ovarian activin is consistent with reports in mammals [45,46] and humans [47]. In Chinese sturgeon, hCG increased the transcription levels of cyp19a1a, erα, and erβ, suggesting its effective stimulation of ovary development. In addition, the mRNA levels of activinβA, follistatin, activinRIIA, and smad2 were upregulated by hCG incubation, while transcripts of activinRIIB, smad3, and smad4 were unchanged (Figure 5). This indicates that hCG stimulates ovary development by regulating the activin system via recruitment of activinRIIA and the downstream smad2 in Chinese sturgeon.
In conclusion, the activinβA subunit was characterized in Acipenser sinensis, and spatial distribution analysis demonstrated its diverse transcription across tissues. The activin system was able to regulate ovary development in an autocrine manner. Gonadotropin activated the activin system in the Chinese sturgeon ovary by increasing the transcription of activin, follistatin, its receptor activinRIIA, and the downstream factor smad2.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani14162314/s1. Figure S1: Histological analysis of ovary tissue by H.E. staining; scale bar is 50 µm.

Institutional Review Board Statement: All fish handling procedures were performed with the approval of the Animal Care and Use Committee of the Yangtze River Fisheries Research Institute, Chinese Academy of Fishery Sciences (ID number YFI2021YHM01).
Informed Consent Statement: Not applicable.
Figure 1.
Figure 1. Multiple amino acid sequence alignment of ActivinβA of Chinese sturgeon with other representative vertebrates. Identical and similar amino acids are highlighted with black and gray shading. Identical amino acids are further marked with asterisks. Sequence identities are indicated at right.
Figure 2.
Figure 2. The Maximum Likelihood phylogenetic tree of ActivinβA of representative vertebrates, constructed with the MEGA X software. Horizontal branch lengths are proportional to the estimated divergence of the sequence from the branch point.
Figure 3.
Figure 3. Tissue distribution analysis of activinβA evaluated by relative real-time PCR. Data are normalized to ef1α mRNA and represent mean ± SD of three separate experiments. Values with different letters above are significantly different (p < 0.05).
Figure 4.
Figure 4. Effect of human Activin A incubation on the transcription of activin, follistatin, activin receptors (A), smad genes (B), and ovary development-related genes (C) in ovarian cells, evaluated by relative real-time PCR. Data are normalized to ef1α mRNA and represent mean ± SD of three separate experiments. Asterisks denote significant difference from control at p < 0.05.
Figure 5.

Figure 5. Effect of hCG incubation on the transcription of activin, follistatin, activin receptors (A), smad genes (B), and ovary development-related genes (C) in ovarian cells, evaluated by relative real-time PCR. Data are normalized to ef1α mRNA and represent mean ± SD of three separate experiments. Asterisks denote significant difference from control at p < 0.05.
Figure S2. Full-length cDNA sequence and deduced amino acids of activinβA in Chinese sturgeon. Nucleotides (upper line) are numbered from 5′ to 3′. The transforming growth factor beta (TGF-β) propeptide is underlined. The TGF-β-like domain found in the Inhibin beta A chain is boxed. The asterisk (*) indicates the stop codon.

Author Contributions: H.Y. (Huamei Yue) conducted the experiment, analyzed the data, and wrote the manuscript. H.Y. (Huan Ye) performed the in vitro ovarian cell incubation experiment. R.R. helped with the sample collection; H.D. provided the experimental fish and helped with the fish rearing; C.L. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by the National Key R&D Program of China (2021YFD1200304), the Central Public-Interest Scientific Institution Basal Research Fund (2023TD23), and the National Natural Science Foundation of China (31802282).
DEVELOPMENT OF THE METAL RHEOLOGY MODEL OF HIGH-TEMPERATURE DEFORMATION FOR MODELING BY FINITE ELEMENT METHOD
It is shown that when modeling forging and stamping processes it is necessary to take into account not only the hardening of the material but also the softening that occurs during hot working. Otherwise, the power parameters of the deformation processes are determined inaccurately (overestimated), which leads to the choice of more powerful equipment than necessary. Accounting for softening (stress relaxation processes) makes it possible to determine accurately the stress and strain state (SSS) of the workpiece as well as the power parameters of the deformation processes. This will expand the technological capabilities of these processes. Existing commercial software systems for modeling hot plastic deformation based on the finite element method (FEM) do not allow this, because these software products lack a model relating the deformation-rate components and stresses that would take stress relaxation into account. Therefore, on the basis of the Maxwell visco-elastic model, a relationship is established between deformation rates and stresses. The developed model makes it possible to take into account metal softening during a pause after hot deformation. The resulting mathematical model is verified experimentally on different steels at different deformation temperatures. The softening of the steels is measured using plastometers. It is established experimentally that the developed model describes the rheology of the metal during hot deformation with 89...93% accuracy. The established relationship between the components of the deformation rates and stresses makes it possible to obtain a direct numerical solution of plastic deformation problems without FEM iterative procedures, taking into account the real properties of the metal during deformation. As a result, the number of iterations and the amount of calculation decreased significantly.
Introduction
In FEM studies of hot stamping and forging operations, difficulties arise from the nonlinearity of the material properties during high-temperature deformation [1,2]. The main idea of the existing methods for taking this nonlinearity into account is to solve the problem in an elastic formulation and use additional iterations (successive approximations) to switch to the plastic properties of the deformable metal [3]. As a result, the total computation time increases, which makes the FEM less efficient compared with other numerical methods [4]. A more complete account of the mechanical characteristics of the deformable metal is one of the most important reserves for intensifying and increasing the efficiency of modeling forging and stamping operations [5].
In the process of hot deformation the metal is strengthened, while at the same time dynamic processes of recovery, polygonization, and recrystallization occur, leading to relaxation of stresses (softening) in the material at forging and stamping temperatures [6]. Accounting for the thermal softening of metals and alloys makes it possible to improve the technical and economic indicators of the production of metal products manufactured by hot deformation [7]. Practice shows that hot deformation with pauses allows operations to be carried out with lower energy consumption [8]. Therefore, establishing a valid rheology of a metal that hardens and softens during hot deformation, in order to determine the stress-strain state, is an important task in mechanical engineering [9]. The aim of the work is to develop a mathematical model that reproduces the rheology of the material during forging and stamping operations, which will improve the accuracy of FEM determination of power parameters when forging large-sized forgings.
To achieve the aim, the following objectives are set:
- establish an analytical model of stress relaxation in the alloy during hot deformation;
- verify the established model against the actual behavior of the metal during hot deformation.
Establishment of an analytical model of stress relaxation
When solving problems by FEM, it is advisable to establish a real relationship between the deformation rates {ε̇} and the stresses {σ} under varying temperature and rate regimes of hot deformation, when viscosity appears in the alloy [10]. This relationship is necessary for setting up the plasticity matrix [K] used in FE modeling and for determining the stress components [11]: {σ} = [K]{ε̇}. The main difference of irreversible (viscous) deformations from those of plastic solids is that the former depend on the deformation rate, especially at elevated temperatures [12]. The alloy exhibits viscosity when the deformation rate affects the stress, σ = σ(ε̇). The viscosity of the metal is manifested in the fact that after deformation the internal stresses change with time. For forging and stamping operations, when the material softens while being hardened, the Maxwell relaxation model takes this rheology into account.
According to this model, the deformation degree consists of elastic and viscous components: ε = ε_e + ε_ν.
Taking into account the stress σ(0) at the time t = 0 and a fixed deformation (dε/dt = 0), the stress relaxes according to σ(t) = σ(0)·e^(−t/T), where T is the pause (relaxation) time, s.
T represents the time over which the initial stress decreases by a factor of e ≈ 2.718. Thus, it can be assumed that the Maxwell medium takes into account the real behavior of the metal during high-temperature plastic deformation (hardening as well as softening). The model provides a reduction of the resistance to deformation (in this case, by an exponential dependence) at constant deformation [13].
After introducing the notation T = ν/E (7) into the Maxwell equation dε/dt = (1/E)·dσ/dt + σ/ν (6), Eq. (6) can be rewritten as dσ/dt + σ/T = E·(dε/dt). (8) Solving (8) for σ under the initial condition σ = σ(0) at time t = 0, when the body deforms at a constant rate ε̇, the stress changes in time according to the law σ(t) = E·ε̇·T·(1 − e^(−t/T)) + σ(0)·e^(−t/T). (9) In real deformation processes the rate is not constant [14]; therefore, to solve equation (8), a function of the deformation degree must be defined. This function must be increasing, since the deformation degree grows during deformation. A monotonically increasing exponential function, which corresponds to actual deformation processes (Fig. 1), can be chosen: ε(t) = ε̇₀·T·(1 − e^(−t/T)). (10) The deformation rate in this case is ε̇(t) = ε̇₀·e^(−t/T). (11) Using functions (10) and (11), graphs of the changes of the degree (ε) and rate (ε̇) of deformation in time are plotted (Fig. 1). The initial data for the calculations are: pause time T = 2.0 s; deformation rate ε̇₀ = 0.002 s⁻¹; deformation time t varying from 0 to 6 s. These parameters correspond to actual deformation processes (stamping, or one press stroke during forging).
These exponential dependences correspond to forging and stamping operations. In particular, with an increase in the degree of deformation, hardening of the material and growth of the deformation zone occur, which leads to an increase in the deformation force [15]. As a result, the deformation rate decreases exponentially [16].
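The two regimes of the Maxwell model can be illustrated numerically. A minimal sketch, assuming the standard closed form for a Maxwell body under constant strain rate, σ(t) = E·ε̇·T·(1 − e^(−t/T)) + σ(0)·e^(−t/T), and the example values from the text (T = 2.0 s, ε̇ = 0.002 s⁻¹); the modulus E is a hypothetical illustration value:

```python
import math

def maxwell_stress(t, E, T, strain_rate, sigma0=0.0):
    """Stress in a Maxwell body at constant strain rate:
    sigma(t) = E*rate*T*(1 - e^(-t/T)) + sigma0*e^(-t/T), relaxation time T = nu/E."""
    return E * strain_rate * T * (1.0 - math.exp(-t / T)) + sigma0 * math.exp(-t / T)

# Example values from the text: T = 2.0 s, rate = 0.002 1/s, t in [0, 6] s.
# E is a hypothetical high-temperature modulus chosen only for illustration.
E, T, rate = 1.0e5, 2.0, 0.002
sigma = [maxwell_stress(t, E, T, rate) for t in (0.0, 2.0, 6.0)]  # rises toward E*rate*T

# During a pause (rate = 0) the stress decays by a factor e over each interval T:
decay = maxwell_stress(T, E, T, 0.0, sigma0=100.0)
```

Under load the stress saturates toward E·ε̇·T (hardening bounded by relaxation), and during a pause it decays exponentially with time constant T, which is the softening behavior the model is meant to capture.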
The resulting equation is solved by the method of variation of a constant. First the homogeneous differential equation is considered: dσ/dt + σ/T = 0, with general solution σ = C·e^(−t/T). To solve the inhomogeneous equation (8), the method of variation of the constant is applied, replacing C with an unknown function φ(t): σ = φ(t)·e^(−t/T). (12) Differentiating (12) gives dσ/dt = (dφ/dt)·e^(−t/T) − (φ(t)/T)·e^(−t/T). (13) After substitution of (13) into equation (8): dφ/dt = E·(dε/dt)·e^(t/T).
Integrating, we define φ(t) = φ(t₀) + E·∫ from t₀ to t of (dε/ds)·e^(s/T) ds. (14) After substituting (14) into (12), we obtain σ(t) = e^(−t/T)·[φ(t₀) + E·∫ from t₀ to t of (dε/ds)·e^(s/T) ds], where t₀ and t are the integration limits: t₀ is the beginning of the pause and t is the end of the pause.
Assuming the initial condition σ(t₀) = σ(0) and carrying out the appropriate transformations yields the final dependence of the stress on time (15). When A = 1, the expression takes the indeterminate form 0/0, which must be resolved by taking the limit as A → 1, leading to a particular case of the solution. Taking the above into account, with A = 1 the functions describing the degree and rate of deformation (10) can be simplified. The mathematical procedure of taking the limit as A → 1 does not change the form of the functions describing the degree and rate of deformation; they are similar to those shown in Fig. 1. Analysis of the established model (15) allows determining results important for materials science:
- the maximum stress is governed by the Young's modulus at the given deformation temperature and deformation rate;
- the peak of the function (15) corresponds to the time T = ν/E, that is, the moment when the pause (metal unloading) begins, which does not contradict the mechanics of the deformation process. This time T can be calculated for a given degree and rate of deformation; it is part of the initial data for solving the problem.
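The variation-of-constants solution can be spot-checked numerically. The sketch below integrates dσ/dt + σ/T = E·ε̇(t) with forward Euler for an exponentially decaying rate (the case where the rate and relaxation time constants coincide, i.e., A = 1) and compares it with the corresponding closed form; E, ε̇₀, and σ₀ are hypothetical illustration values. It also checks that the stress peak falls at t = T, as stated above.

```python
import math

# Numerical spot-check (forward Euler) of the variation-of-constants solution of
# d(sigma)/dt + sigma/T = E * rate(t), with an exponentially decaying rate
# rate(t) = rate0 * e^(-t/T) (the A = 1 case). Parameter values are hypothetical.
E, T, rate0, sigma0 = 1.0e5, 2.0, 0.002, 0.0

def sigma_exact(t):
    # Closed form for this rate function: sigma(t) = e^(-t/T) * (sigma0 + E*rate0*t)
    return math.exp(-t / T) * (sigma0 + E * rate0 * t)

dt, t, sigma = 1e-4, 0.0, sigma0
while t < 6.0:
    sigma += dt * (E * rate0 * math.exp(-t / T) - sigma / T)
    t += dt

err = abs(sigma - sigma_exact(t))

# The stress peak falls at t = T = nu/E, consistent with the model analysis:
peak_t = max((k * 0.001 for k in range(6001)), key=sigma_exact)
```

With these values the Euler trace agrees with the closed form to within a small fraction of the stress magnitude, and the maximum of σ(t) sits at t = T = 2.0 s.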
In addition, the model allows the viscosity ν of a material to be determined from relation (7) through the product T·E, or by fitting: for a known deformation time, rate, and degree of deformation, the Young's modulus is adjusted until the values of function (15) match the experiment. Thus, accounting for the viscous properties of the body reduces to establishing the exact value of the Young's modulus as a function of temperature.
The solution to this problem is not difficult when the material's tensile diagram is known or a hardening curve is available for different temperatures. The Young's modulus can also be taken from the reference literature; it decreases exponentially with increasing temperature.
The obtained model also allows the Young's modulus to be determined by fitting until the computed dependence coincides with the hardening-softening curve. According to the model, the stress level in the material after deformation (during a pause) decreases exponentially, which is consistent with the actual behavior of the material after the load is removed and requires no additional factors.
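Since the exact form of function (15) is not reproduced above, the following sketch assumes only the standard Maxwell-body relations the text describes: the stress peaks at the time T = ν/E and then decays exponentially during the pause, so the viscosity can be recovered from the observed peak time as ν = T·E. All numeric values are illustrative, not experimental data.

```python
import math

def relaxation_time(E, nu):
    """Characteristic Maxwell time T = nu/E: the peak of the stress curve,
    i.e. the moment the pause (unloading) begins."""
    return nu / E

def pause_stress(t, sigma_max, E, nu, t0):
    """Exponential stress decay during the pause, as the model predicts:
    sigma(t) = sigma_max * exp(-(t - t0) * E / nu) for t >= t0."""
    return sigma_max * math.exp(-(t - t0) * E / nu)

def viscosity_from_peak(T_peak, E):
    """Invert T = nu/E to recover the viscosity from the observed peak time,
    as suggested for identifying nu from relation (7)."""
    return T_peak * E
```

With illustrative values E = 1.0e5 and ν = 2.0e3 (consistent units assumed), the peak occurs at T = 0.02, and the stress during the pause decays monotonically from its maximum.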
Results of experimental studies of steel rheology during hot deformation
To test the developed model of steel rheology during hot deformation, experimental studies were conducted and compared with the analytical model. For this, the mechanical properties of the deformed material had to be established; the main factors affecting them are temperature, degree of deformation, and deformation rate. The investigated steels are 40Х, 9ХФ, ХВГ, and 10Х16Н8. The temperature of the steel samples varied from 800 to 1200 °C in steps of 100 °C, the degree of deformation varied from 0 to 0.4, and the deformation rate varied in the range (2…6)×10⁻³ s⁻¹, covering the deformation and rate regimes of forging and stamping processes. The experiments were planned using a 3³ full factorial design. The high-temperature mechanical properties of the steels were studied jointly with Czestochowa University of Technology (Poland) on a Gleeble 3800 unit for physical modeling of thermomechanical compression and tension tests. After conducting the experiments and processing the results using the theory of experiment planning, the coefficients of the regression equations relating deformation resistance to the degree, rate, and temperature of deformation were determined; non-significant coefficients were excluded, giving (16). For comparison and verification of the theoretical and experimental results, equations (15) and (16) are used to construct the dependences of the deformation resistance during deformation with a pause at hot pressure treatment temperatures (Fig. 2). The experimental data are shown by the dashed line and the theoretical data by the solid line.
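The regression step can be illustrated as follows. This is a hedged sketch: the coded 3³ factorial design is standard, but the response values and coefficients below are synthetic stand-ins, not the measured data behind equation (16).

```python
import numpy as np

# Coded factor levels (-1, 0, +1) of a 3^3 full factorial plan for
# temperature (T), degree of deformation (e) and deformation rate (v).
levels = np.array([-1.0, 0.0, 1.0])
T, e, v = np.meshgrid(levels, levels, levels, indexing="ij")
X = np.column_stack([np.ones(27), T.ravel(), e.ravel(), v.ravel(),
                     (T**2).ravel(), (e**2).ravel(), (v**2).ravel(),
                     (T*e).ravel(), (T*v).ravel(), (e*v).ravel()])

# Synthetic "true" coefficients and noisy response standing in for the
# measured deformation resistance; NOT the values behind equation (16).
true_b = np.array([200.0, -40.0, 25.0, 10.0, 5.0, -3.0, 0.0, 2.0, 0.0, 0.0])
y = X @ true_b + np.random.default_rng(0).normal(0.0, 0.1, 27)

# Least-squares estimate of the regression coefficients; coefficients whose
# magnitude falls below a significance threshold would then be excluded,
# as was done to obtain (16).
b, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted coefficients `b` recover the planted values to within the noise level, which is the check one would apply before dropping insignificant terms.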
The dependence (15) developed in this work is shown graphically in Fig. 2; the function has regions of hardening and of stress relaxation (softening) upon removal of the load. Fig. 2 also shows experimental flow curves (curves 1 and 3) for various steels at a given deformation rate.
Discussion of the results obtained using the developed model of the visco-elastic behavior of steel at pressure treatment temperatures
Analysis of the obtained results shows that the developed rheological model describes the physical processes occurring in hot-deformed steels: material hardening and stress relaxation. Based on the developed mathematical model, the maximum stress is determined by the Young's modulus for a given material, deformation temperature, and deformation rate; the peak of the function corresponds to the time T=ν/E. The analytical dependence asymptotically approaches zero stress, whereas the experimental data asymptotically approach a certain stress, namely the yield strength of the material. This difference is explained by the fact that during the experimental study the specimen remained under load after deformation, owing to the research method and the design of the cam plastometer. As a result, the stress cannot drop to zero, since the load continues to act after the deformation process stops; this makes it possible to record the relaxation in the material after deformation ceases. The difference between the experimental and calculated stress values in the relaxation region equals the yield strength of the material at the given temperature. Taking into account in the resulting model the load on the sample equal to the yield strength of the material (σ_T), the dependences coincide with a deviation of 7…11 %. Thus, the reason for the discrepancy between the experimental data and the theoretical values of hardening and softening is established, which gives reason to consider the developed model reliable, since it describes the rheology of the metal during hot deformation to within 89…93 %.
In contrast to existing methods for accounting for the mechanical properties of a material in FEM modeling, the developed analytical model makes iterative procedures unnecessary. A distinctive feature of the developed model relating the deformation rate and stress during hot deformation is that it takes into account the mechanism of stress relaxation in the metal (alloy) after deformation. This opens up broad prospects for its use in FE modeling.
A limitation of this approach is the need to specify the Young's modulus of a given material as a function of temperature; such data are still scarce in the literature. Moreover, it is advisable to verify the resulting model on other metals and alloys.
Conclusions
1. Based on Maxwell's viscoelastic rheological model, the relationship between the components of deformation rates and stresses is established. This makes it possible to obtain a direct numerical solution of nonlinear problems of hot plastic deformation in finite element modeling, taking into account the real properties of the metal at high temperatures. Using the developed model of material rheology during hot deformation, the computation time decreases by a factor of 4 compared with the elastic and elastic-plastic material models used in commercial FEM software products, because additional iterative procedures to establish the real resistance of the material for given thermo-rate deformation conditions are no longer required. The developed model accounts not only for material hardening during deformation but also for softening (stress relaxation) in the pause after deformation.
2. Resistance to hot deformation and stress relaxation after deformation are experimentally established for the steels at different temperatures, degrees, and rates of deformation. Stress relaxation after hot deformation is explained by recrystallization processes. The obtained results are compared with the theoretical data derived from the developed model of material rheology during hot deformation. It is experimentally shown that the developed model describes the steel rheology during hot deformation to within 89…93 %.
Dynamic Modeling and Analysis of a High Pressure Regulator
A pressure regulator is a common device used to regulate the working pressure of plants and machines. In aerospace propulsion systems, it pressurizes liquid propellant rocket tanks at a specified pressure to obtain the required propellant mass flow rate. In this paper, a generalized model is developed to perform dynamic analysis of a pressure regulator so that a constant outlet pressure can be attained. A nonlinear mathematical model of the pressure regulator is developed that consists of the dynamic equations of pressure and temperature, the equation of mass flow rate, and the equation of the moving shaft inside the regulator. The system of nonlinear, coupled differential equations is numerically simulated, and pressure and temperature are computed for the required conditions and given design parameters. Valve opening and mass flow rate are also found as functions of the given inlet pressure and time. Finally, an analytical solution based on a constant mass flow rate assumption is compared with the nonlinear formulation. The results demonstrate a high degree of confidence in the nonlinear modeling framework proposed in this paper. The proposed model solves a real problem of a liquid rocket propulsion system: for the real system under consideration, the regulator inlet pressure decreases linearly from 150 bar to 60 bar, and an outlet pressure of nearly 15 bar is required from the pressure regulator for the complete operating time of 19 s.
Introduction
A pressure regulator is normally a dynamic open valve that takes a high, varying inlet pressure and converts it to a nearly constant, lower desired outlet pressure. It consists of a set screw, working spring, main shaft, valve seat, inlet and outlet chambers, sensing orifice, pressure chamber, rolling diaphragm plate, and return spring, as shown in Figure 1. A constant downstream pressure is obtained through a variable valve opening area, which is generated by the opposing balance between the spring load and the pressure acting on the diaphragm plate in the pressure chamber. The rolling diaphragm moves upward as the downstream pressure increases and moves back as it decreases, thereby maintaining a constant outlet pressure.
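The balance described above fixes the regulated set point: at equilibrium the pressure force on the diaphragm matches the spring preload. A minimal sketch with hypothetical numbers (the diaphragm area and preload below are illustrative, not taken from the paper):

```python
# Equilibrium of the diaphragm plate: P_out * A_diaphragm = F_preload,
# so the spring setting fixes the regulated outlet pressure.
A_diaphragm = 2.0e-3               # m^2, hypothetical effective diaphragm area
F_preload = 3.0e3                  # N, hypothetical control-spring preload

P_set = F_preload / A_diaphragm    # Pa; here 1.5e6 Pa = 15 bar
```

Changing the set-screw compression changes `F_preload` and hence the regulated pressure, which is exactly how the 15-bar set point is configured later in the paper.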
Vujic and Radojkovic [1] studied the procedure of forming a nonlinear dynamic model of a gas pressure regulator. The model exhibited self-excited oscillations of the system with a certain amplitude and frequency in the absence of outside disturbances. The results reported the effect of each design parameter on the self-excited oscillations and suggested methods to correct them. It was proposed that a linear model is sufficient for evaluating stability and transient response if the flow through the valve is laminar, dry friction is negligible, and the motion of the valve and diaphragm is unconstrained. These assumptions contradict actual operation significantly: mixing triggers highly turbulent phenomena inside pressure regulators, and the situation worsens further under high-pressure environments. Similarly, Sunil et al. [2] developed a linear mathematical model of a pressure regulator for cryogenic applications. The model was validated against experimental tests of a cryogenic pressure regulator developed by the Liquid Propulsion Systems Centre (LPSC) of the Indian Space Research Organization. The effect of cryogenic temperature on the regulated pressure was measured, taking into account spring load variation, design changes, and fluid properties. It is interesting to note that in both studies the pressure differential equation in the regulator flow volume was considered without the valve clearance factor and under the assumption of constant temperature in all regulator volumes. Shahani et al.
[3] studied the dynamic equations of pressure and temperature under the assumption of an adiabatic process in a high pressure regulator. The model was simulated numerically and verified experimentally. It was found that as the outlet volume increases, the stability of the outlet pressure increases, and as the spring load increases, the outlet pressure increases proportionally. The control spring stiffness and the diaphragm area were identified as the two sensitive parameters that affect the downstream pressure of the regulator; it was suggested that, for better control of the outlet pressure, these two parameters should be designed and manufactured carefully.
Zafer and Luecke [4] investigated the stability characteristics, showed the cause of vibration, and proposed design modifications for eliminating the unstable vibrating mode. A comprehensive linear dynamic model of a self-regulating high pressure gas regulator was developed for stability analysis, and the root locus technique was used to study the effect of varying design parameters on the system dynamics. It was concluded that the damping coefficient, the diaphragm area, and the upper and lower volumes are the most important design parameters affecting stability, and an improvement in stability was reported when the flow path between the regulator body and the lower pressure chamber was reduced. Similarly, Delenne and Mode [5] performed experiments and numerical simulations to identify the relative influence of several parameters on the emergence and amplitude of oscillations; however, no discussion of regulator design and model parameters was provided. Experimental work of this kind cannot readily be generalized, as it gives results tied to a specific geometric shape.
Rami et al. [6] developed a mathematical model, performed experimental analysis, measured performance, and identified operating conditions that increase stability. The numerical and experimental results were in good agreement. It was concluded that oscillations in downstream pressure increase for small volumes and higher upstream pressures, while the valve opening and driving pressure have less effect on the oscillation of the downstream pressure. Moreover, the length of the sensing lines from the downstream chamber to the damping chambers of the regulator has only little influence on the oscillation of the downstream pressure.
With the advancement of Computational Fluid Dynamics (CFD), several studies have addressed complex flow generation and visualization inside pressure regulators. Shipman et al. [7] carried out steady and unsteady simulations for the analysis of a gas pressure regulator for a rocket feed system; comparison of the numerical results with experimental data from NASA Stennis Space Center was satisfactory. Ortwig and Hubner [8] worked on a mechatronic pressure controller for Compressed Natural Gas (CNG) engines. CFD simulations were performed to analyze shock and vortex formation near the valve opening, and the potential for geometry improvement with the help of the CFD results was discussed. It was concluded that the flow forces depend on the pressure differential and that steady-state flow forces are almost independent of the valve opening. Saha [9] performed numerical simulation of a pressure-regulated valve to find the characteristics of a passive control circuit. Commercial software was used to analyze the flow forces on the different interfaces of the moving shaft of the regulator, and a special User Defined Function (UDF) was written for varying the inlet pressure and reducing the valve opening by a small increment during the transient analysis. Flow convergence was checked at each time step using the UDF, and the calculation advanced to the next time step only after the required convergence criterion was met. Similarly, Du and Gao [10] carried out a numerical study of complex turbulent flow through valves in a steam turbine system; their main aim was to understand the flow behavior through this complex configuration under different operating conditions so as to derive the information needed to optimize the valve design. However, dynamic prediction cannot be performed using CFD simulations, as they are very expensive and impractical in a real-time environment. The model presented in this paper tries to fill this void.
From an operations perspective, Yanping et al. [11] discussed the dynamic model of a pressure reducing valve. The dynamic processes of the valve during pressurizing, startup, and different operating conditions were examined through numerical simulation and experimental investigation. It was concluded that increasing the area of the damping hole or decreasing the volume of the damping cavity can reduce not only the redressing time of the pressure regulating valve but also the overshoot; increasing the stiffness of the main spring can likewise reduce the redressing time. From a review of the relevant studies, it is clear that complex phenomena occur inside a pressure regulator, including compressibility, choking, turbulence, fluid-structure interaction, flow expansion with recirculation, and flow separation.
There is a dearth of published literature discussing the dynamic simulation of high-differential-pressure regulators. In this study, a generalized nonlinear mathematical model is developed that computes the regulator outlet pressure by incorporating, at the same time, the dynamic equation of valve clearance, the differential equation of temperature in the regulator volumes, and the equation of mass flow rate, by finding the pressure gradients in the inlet and outlet volumes as functions of time. The model is simulated for two cases. In the first case, the inlet pressure of the reservoir is increased from 60 bar to 150 bar at a rate of 4.74 bar/s, and the pressure, temperature, mass flow rate, and valve opening are computed. The model is of utilitarian nature and, in the second case, is simulated for the particular application of a rocket tank feed system: the reservoir pressure is reduced from 150 bar to 60 bar at a rate of 4.74 bar/s and the model is run for 19 s while controlling the outlet pressure near 15 bar. The mass flow rate and valve opening are required as functions of time to maintain 15 bar at the outlet port against the varying high inlet pressure. The initial parameters of the regulator are set so that the initial valve opening is 2.3 mm. With the known constant mass flow rate for the varying high inlet pressure and constant inlet and outlet temperatures, an analytical solution is also proposed for the dynamic equation of pressure with time-varying inlet pressure.
Geometric Description
A pressure regulator normally has a tightening screw on its top to provide the load force on the working spring; this load force controls the outlet pressure. The other main part is the inner moving shaft with a valve, whose basic function is to restrict the flow and provide the pressure drop.
The overall flow volume of the pressure regulator is divided into four regions, and for each flow volume (Figure 2) a mathematical model for pressure and temperature as functions of time is developed. A first-order forward differencing scheme is used for the discretization of the dynamic equations of pressure and temperature. The equation of the moving inner shaft is reduced to a set of two first-order differential equations with initial conditions and computed numerically using a Runge-Kutta method. Mass flow rate, valve opening, and outlet pressure are computed as functions of time and inlet pressure. An analytical solution for the controlled outlet pressure as a function of time is found under certain assumptions.
Here V_in is the inlet volume, V_1 the intermediate volume, V_out (or V_reg) the outlet volume, and V_dam the damping volume, in cubic meters. P (bar) and T (Kelvin) are the pressure and temperature in the corresponding volumes. ṁ_in is the inlet mass flow rate (kg/s) from the high-pressure compressed air cylinder into the inlet volume, and the other mass flow rates correspond to flow from one volume to another (Figure 3).
Mathematical Modeling
For the development of the dynamic equation of pressure, the process inside the regulator ducts is assumed adiabatic and reversible, so the isentropic relation P/ρ^γ = const between pressure (P), density (ρ), and specific heat ratio (γ) can be used. The entire volume is divided into four subvolumes: V_in (inlet), V_1 (intermediate), V_out or V_reg (outlet), and V_dam (damping). Differentiating with respect to time and simplifying yields the dynamic equation of pressure, in which A is the valve clearance area, the valve clearance length is in mm, R is the ideal gas constant, T_in is the inlet temperature, T_out is the outlet temperature, T_dam is the temperature at the damping orifice, and ṁ_dam is the mass flow through the damping orifice.
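The paper's full pressure equation (with the valve-clearance terms) is not reproduced above, so the sketch below uses the standard adiabatic control-volume form dP/dt = γRT(ṁ_in − ṁ_out)/V as a stand-in, together with the first-order forward differencing the paper applies in its discretized equations (8) and (9). All numeric values are illustrative.

```python
def dP_dt(T, V, mdot_in, mdot_out, gamma=1.4, R=287.0):
    """Adiabatic charging/discharging of a fixed control volume:
    dP/dt = gamma * R * T * (mdot_in - mdot_out) / V  (standard form,
    standing in for the paper's exact equation)."""
    return gamma * R * T / V * (mdot_in - mdot_out)

# Forward-Euler (first-order forward differencing) update of the pressure,
# as in the discretized equations (8)-(9); illustrative values.
P, T, V, dt = 1.0e5, 290.0, 1.0e-3, 1.0e-3   # Pa, K, m^3, s
for _ in range(100):
    P += dt * dP_dt(T, V, mdot_in=0.01, mdot_out=0.008)
```

With a net inflow of 0.002 kg/s into a 1-liter volume, the pressure rises at a constant rate here; in the full model the mass flow rates themselves depend on the evolving pressures, which is what couples the four volume equations.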
The dynamic equation of temperature is derived from the first law of thermodynamics. Accounting for the density change occurring in the regulator duct, the sum of all energy added to the ducts of the pressure regulator is written via the energy equation, where U represents the total internal energy, Ẇ is the rate of work, and Q̇ is the heat transfer into and out of the pressure regulator (assumed here to be zero). Ḣ_in and Ḣ_out are the inlet and outlet enthalpy flows of the air mass in the regulator ducts. Simplifying the energy equation yields the dynamic equation of temperature. For the mass flow from one flow volume to another, the compressible orifice equation is used; when P_out/P_in < 0.528 the flow is choked and ṁ = C_d A P_in √(γ/(R T_in)) (2/(γ+1))^((γ+1)/(2(γ−1))), where C_d is the discharge coefficient (taken as 0.8, 0.9, or 1) and A is the corresponding cross-sectional area for mass flow from one volume to another. The equation of the moving shaft follows from the Newton formulation of its upward and downward motion, where k_spring and k_valve are the control spring and return spring rates, m is the mass of the moving shaft, P_in is the inlet pressure, P_reg is the regulated (outlet) pressure, F_0 is the control spring preload (the initial force set on the spring), x_valve is the return spring precompression, and F_f is the dry (Coulomb) friction between the shaft seal and the regulator. Here sign is the MATLAB signum function, which opposes the moving shaft's direction based on its velocity. For all four volumes, the time derivatives of pressure and temperature are replaced by their finite-difference approximations; the discretized equations of pressure and temperature for the outlet volume, based on a first-order forward differencing scheme, are given in (8) and (9), respectively.
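The mass flow equation with its choked branch can be sketched as follows. Both branches are the standard compressible-orifice relations consistent with the 0.528 critical ratio quoted above for air (γ = 1.4); the discharge coefficient, area, and states in the usage comment are illustrative.

```python
import math

def orifice_mdot(P_in, P_out, T_in, A, Cd=0.9, gamma=1.4, R=287.0):
    """Compressible mass flow through an orifice of area A. The flow chokes
    at the critical pressure ratio (2/(gamma+1))**(gamma/(gamma-1)),
    which is ~0.528 for air, matching the criterion in the text."""
    r = P_out / P_in
    r_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    if r < r_crit:
        # Choked: mass flow depends on upstream conditions only
        return (Cd * A * P_in * math.sqrt(gamma / (R * T_in)) *
                (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))
    # Subsonic branch
    return (Cd * A * P_in *
            math.sqrt(2.0 * gamma / (R * T_in * (gamma - 1.0)) *
                      (r ** (2.0 / gamma) - r ** ((gamma + 1.0) / gamma))))
```

Example: with P_in = 10 bar, T_in = 290 K, and A = 1 cm², any outlet pressure below ~5.28 bar gives the same (choked) mass flow, while subsonic outlet pressures give a smaller flow.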
Simulation Results and Discussion
The above nonlinear, coupled differential equations for each flow volume are simulated numerically by the finite difference method. The second-order differential equation of the moving shaft is converted into a set of two first-order differential equations and solved numerically by the MATLAB built-in routine ode45, based on the Runge-Kutta method. Simulation results are derived for two cases. In the first case, the reservoir inlet pressure is increased from 60 bar to 150 bar at a rate of 4.74 bar/s and the regulator outlet pressure is observed. In the second case, the pressure in the reservoir (compressed air cylinder) is reduced from 150 bar to 60 bar at a rate of 4.74 bar/s, and the overall regulator performance with respect to outlet pressure is checked. For each case, the outlet pressure, mass flow rate, temperature, and valve opening are computed.
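The solution procedure, reducing the second-order shaft equation to two first-order ODEs and integrating with a Runge-Kutta scheme, can be sketched without MATLAB as follows. The spring rate, preload, and friction values are hypothetical placeholders, not the paper's design parameters, and a fixed-step classical RK4 stands in for the adaptive ode45.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h * ki  for yi, ki in zip(y, k3)])
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Shaft equation reduced to first order: y = [x, v].
# m*dv/dt = F0 - k*x - Ff*sign(v); parameters are illustrative only.
m, k, F0, Ff = 0.5, 2.5e4, 300.0, 2.0

def shaft_rhs(t, y):
    x, v = y
    sgn = (v > 0) - (v < 0)          # signum, as in the MATLAB model
    return [v, (F0 - k * x - Ff * sgn) / m]

# Integrate 0.5 s from an initial opening of 2.3 mm at rest
y, h = [0.0023, 0.0], 1e-4
for i in range(5000):
    y = rk4_step(shaft_rhs, i * h, y, h)
```

The Coulomb friction term damps the oscillation of the shaft about its spring-force equilibrium, which is the mechanism behind the small steady-state valve-opening oscillations reported in the results.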
Increasing Inlet Pressure from 60 bar to 150 bar

The load (control) spring compression of the pressure regulator is initially set so that the initial valve clearance is 2.5 mm. When the inlet valve of the regulator is opened, high-pressure fluid begins to flow from the reservoir into the different volumes of the regulator, and the pressure in each volume begins to rise. The pressure in the outlet volume is continuously sensed in the damping chamber through the damping orifice. As the pressure in the outlet volume rises above 15 bar, it creates a force on the diaphragm plate against the load force of the spring, which was set for a 15-bar outlet pressure. As a result, the main shaft of the regulator moves upward and the valve clearance reaches its minimum opening position. After that, the moving shaft oscillates up and down so that a pressure of nearly 15 bar is maintained in the outlet volume, as shown in Figure 4. The outlet pressure approaches 15 bar at around 0.8 s and the valve opening is reduced to its minimum level, settling near 0.25 mm; steady-state oscillations of about 0.1 mm in magnitude, which is negligible, are observed afterwards. The initial outlet pressure was 1 bar; at about 0.8 s the outlet pressure reaches 15 bar, after which only a minor oscillation in outlet pressure is seen for the whole simulation time (Figure 5).
The initial valve clearance was 2.5 mm, due to which a high mass flow rate is seen initially from volume V_1 to volume V_out. As the pressure in V_out reaches 15 bar, the valve opening reduces and a sudden decrease in mass flow rate can be seen in Figure 6. Subsequently, the mass flow rate oscillates near about 0.24 kg/s to maintain nearly 15 bar at the outlet. The initial temperature of V_out was 290 K (standard atmospheric temperature). With the increase in pressure from 1 to 15 bar, the temperature also increases; due to compressibility effects, it then decreases smoothly rather than sharply, as shown in Figure 7.
Decreasing Inlet Pressure from 150 bar to 60 bar

This case corresponds to the simulation of the real problem of the liquid propulsion system. The initial pressure and temperature in each flow volume are taken as 1 bar and 290 K, and the time increment (Δt) is taken as 0.001 s. The valve opening (Figure 8) stays at 100% (2.3 mm) for nearly 0.7 s; afterward the outlet pressure reaches nearly 15 bar and the valve opening drops to its minimum level.
The valve clearance oscillates with a magnitude of 0.02 mm and also increases as the inlet pressure decreases, in order to satisfy mass conservation. Initially the pressure in the outlet volume is 1 bar; after nearly 0.7 s it reaches an average value of 15 bar, and the control spring load force on the diaphragm plate is balanced by the opposing pressure force on the plate. Thus a controlled pressure of nearly 15 bar is achieved, as shown in Figure 9.
Initially the mass flow rate increases until about 0.7 s; once the pressure in the outlet volume reaches 15 bar, the valve clearance decreases and the mass flow rate drops suddenly, as shown in Figure 10.
The initial temperature at the outlet volume was set to 290 K at a pressure of 1 bar. Due to the sudden pressure increase in the outlet chamber, the average temperature rises sharply for a very short time and then settles at a steady-state value as the outlet pressure is maintained, as shown in Figure 11.
Closed Form Solution of Pressure Equation
For a quick analysis of the outlet pressure without involving the full set of design parameters, an analytical solution of the dynamic equation of pressure is obtained under the assumptions of a constant mass flow rate and distinct inlet and outlet temperatures. For the development of the analytical solution only two flow volumes are considered: the inlet volume V_in before the valve opening and the outlet (regulated) volume V_reg after the valve opening, as shown in Figure 12.
By assuming an average value of the mass flow rate and using the mass flow rate equation, the valve opening for inlet pressures from 150 bar down to 60 bar is found as a function of time in (10) and is shown in Figure 13. Here the analytical valve opening is compared with the numerically computed valve opening once the outlet pressure reaches nearly 15 bar. The difference in the results arises because the numerical computation uses a variable mass flow rate with the full design parameters, while the analytical solution uses only a constant average mass flow rate with distinct inlet and outlet temperatures.
Substituting (10) into the dynamic equation of pressure gives a linear differential equation, which is solved analytically. The outlet pressure from the numerical computation is compared with the analytical solution in Figure 14. The constant average mass flow rate used for the valve opening in the analytical solution introduces no oscillation in the mass flow rate, so the analytical outlet pressure smoothly reaches 15 bar over the whole simulation time. This is because, under the constant mass flow rate assumption, the valve opening is a function only of the inlet pressure and inlet temperature: the equation of the moving shaft is bypassed and the valve opening is found from the mass flow rate equation. The constant mass flow rate in the steady-state condition can thus be used for a quick analysis of some initial design parameters.
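The comparison in Figure 14 can be illustrated with a generic linear pressure ODE of the kind obtained after substituting (10): a closed-form exponential approach to the set point versus its forward-Euler counterpart. The coefficients below are illustrative only, chosen so that the steady state is 15 bar; they are not the paper's actual terms.

```python
import math

# Generic linear pressure ODE dP/dt = a - b*P standing in for the analytical
# constant-mass-flow case; steady state is a/b = 15 bar, initial pressure 1 bar.
a, b, P0 = 30.0, 2.0, 1.0

def P_exact(t):
    """Closed-form solution: exponential approach to the set point a/b."""
    return a / b + (P0 - a / b) * math.exp(-b * t)

# First-order forward differencing, as used for the full nonlinear model
dt, P, t = 1e-3, P0, 0.0
for _ in range(5000):
    P += dt * (a - b * P)
    t += dt
```

After 5 s both the Euler solution and the closed form sit essentially at the 15-bar set point; the analytical curve is smooth by construction, mirroring the oscillation-free analytical outlet pressure in Figure 14.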
Conclusion
A mathematical model for the flow inside a generic pressure regulator is developed and numerically simulated. The dynamic equations of pressure and temperature are developed from the continuity and energy equations to capture the pressure and temperature change in the regulator ducts as a function of time, and the equation of the moving shaft is developed from Newton's second law of motion. A first-order forward differencing technique is used to discretize the dynamic equations. Results include the dynamic simulation of pressure, mass flow rate, temperature, and valve opening as a function of time and inlet pressure.
Figure 1: Cross-sectional view of pressure regulator and its components: three-dimensional view (a) and two-dimensional cross section (b).
Figure 2: Flow volumes of pressure regulator and corresponding quantities.
Figure 3: Force balance on various area of moving shaft.
Figure 4: Valve opening as a function of time for increasing inlet pressure.
Figure 5: Outlet pressure (N/m²) variation against inlet pressure (N/m²) over the entire simulation time.
Figure 6: Mass flow rate at valve opening as a function of time.
Figure 7: Temperature in outlet volume as a function of time.
Figure 8: Valve opening as a function of time.
Figure 12: Two main flow volumes of pressure regulator used for analytical estimation.
Figure 13: Comparison of valve opening as a function of time.
Raw agro-industrial orange peel waste as a low cost effective inducer for alkaline polygalacturonase production from Bacillus licheniformis SHG10
The current study underlines the biotechnological valorization of accumulated and inefficiently utilized agro-industrial orange peel waste to produce polygalacturonase (PGase), an industrially important enzyme with growing demand in enzyme markets, from Bacillus licheniformis SHG10. Sequential statistical optimization of PGase production was performed through a one-variable-at-a-time (OVAT) approach, a Plackett-Burman (PB) design, and response surface methodology (RSM). The impact of introducing six raw agro-industrial wastes (orange, lemon, banana, pomegranate, and artichoke peel wastes and wheat bran) and other synthetic carbon sources separately into the fermentation broth on PGase productivity was studied through the OVAT approach. Orange peel waste as the sole raw carbon source in basal medium proved to be the best PGase inducer: it promoted PGase productivity with a relative specific activity of 166% compared with the effect of synthetic citrus pectin as a reference inducer. Three key determinants (orange peel waste concentration, pH of the production medium, and incubation temperature) had RSM optimal levels of 1.76% (w/v), 8.0, and 37.8 °C, respectively, yielding a maximal PGase level (2.69 μg galacturonic acid·min⁻¹·mg⁻¹) within 48 h. Moreover, SHG10 PGase exhibited activity over a wide pH range (3-11) and optimal activity at 50 °C. These data greatly encourage pilot-scale PGase production from B. licheniformis SHG10.
Background
Citrus fruit is one of the commercial crops in the Egyptian market (Mohamed et al. 2010), and orange juice is one of the most consumed beverages today (Martin et al. 2010). Consequently, a high percentage of citrus fruit is used for the manufacturing of juice and marmalade, and approximately 50-60% of the fruit is transformed into citrus peel waste (Wilkins et al. 2007). This results in the accumulation of large quantities of citrus peel waste as a by-product of the citrus-processing industry. These accumulated quantities of orange peel waste, along with environmental considerations to avoid the health hazards of unsatisfactory disposal methods, underline the need for alternative biotechnological solutions for waste valorization (Martín et al. 2013; Rivas et al. 2008). According to current environmental legislation, any waste can be considered a raw material as long as a method for its valorization can be developed (Möller et al. 2001). High-value products can be manufactured from orange peel waste as a potentially valuable low-cost resource (Martin et al. 2010; Rivas et al. 2008; Balu et al. 2012). Orange peel waste has been reported to contain 16.9% soluble sugars, 9.21% cellulose, 10.5% hemicellulose, and 42.5% pectin as its most important components (Rivas et al. 2008). A vast number of promising methods for efficient utilization of orange peel waste have been described in the literature; among them is its use in the enzyme industry (Siles and Thompson 2010).
The term pectinolytic enzymes (pectinases) is the generic name of a family of enzymes involved in pectin degradation. This complex bioprocess is achieved mainly by a set of enzyme-catalyzed reactions (e.g., hydrolysis, trans-elimination and de-esterification of the ester bond between the carboxyl and methyl ester groups of pectin) (Rehman et al. 2012). Four types of these enzymes, classified according to their mode of action, fall under this generic name: polygalacturonases, pectin lyases, pectate lyases and pectin methyl esterases (Alkorta et al. 1998; Hoondal et al. 2002; Kuhad et al. 2004). These enzymes have numerous industrial applications in food processing (e.g., juice clarification, refinement of vegetable fibers, extraction of vegetable oils, and curing of coffee and cocoa beans) (Silva et al. 2002; Gummadi and Panda 2003; Demir et al. 2012; Quattara et al. 2008; Pedrolli et al. 2009), paper biopulping (Sittidilokratna et al. 2007) and textiles (Basu et al. 2009). Among pectinolytic enzymes, polygalacturonases (PGases) are of particular interest to industry. Endo-PGase (E.C. 3.2.1.67) and exo-PGase (E.C. 3.2.1.82) catalyze the hydrolysis of internal and external α-1,4 glycosidic bonds linking α-galacturonic acid residues in pectin, respectively, producing shorter pectin molecules, decreasing viscosity, increasing juice yield and determining the crystalline structure of the final product (Souza et al. 2003).
An up-to-date review of the literature reports a vast number of microorganisms as PGase producers, mainly fungi such as Aspergillus spp., Rhizopus stolonifer, Alternaria mali, Fusarium oxysporum, Neurospora crassa and Penicillium italicum ACIM F-152, and to a lesser extent bacteria, confined to Agrobacterium tumefaciens, Bacteroides thetaiotamicron, Ralstonia solanacearum, Bacillus spp. and Enterobacter aerogenes NBO2 (Jayani et al. 2005, 2010; Darah et al. 2013). Microbial pectinases, particularly those of fungal origin, account for 25% of global food and industrial enzyme sales (Demir et al. 2012). Although fungi are considered potent pectinase producers, the drawbacks in the physicochemical properties of their enzymes greatly limit their utilization on a wide industrial scale (Soares et al. 1999).
The rationale for the current study can be outlined as follows: a) the increased demand for commercial PGases in enzyme markets worldwide; b) the need to valorize the accumulated, under-utilized raw agro-industrial orange peel waste biotechnologically; and c) the need to search continuously for novel PGases with new characteristics that overcome the shortcomings of fungal PGases, which greatly limit their utilization on a wide industrial scale. In this context, the present study addresses the sequential statistical optimization of PGase production from the Bacillus licheniformis SHG10 strain using raw agro-industrial orange peel waste as the sole PGase inducer and sole carbon source in a very low-cost medium.
OVAT results
The influence of different agro-industrial wastes (orange, lemon, pomegranate, banana and artichoke peel wastes and wheat bran) and synthetic carbon sources (citrus pectin, glucose, fructose, maltose, xylose, glycerol, sucrose, peptone, beef extract and tryptone) on PGase production by the B. licheniformis SHG10 strain was studied. Table 1 reveals that orange peel waste at a concentration of 1% (w/v) was the best co-inducing sole carbon source: it enhanced PGase production with a relative specific activity of 166% compared to the effect of citrus pectin (0.5% w/v). (All relative specific activities of PGase are expressed relative to the U/mg PGase obtained with the citrus pectin-based basal medium as production medium.) Substituting lemon peel waste (1% w/v) for orange peel waste improved the PGase level to a relative specific activity of 133% of that induced by citrus pectin.
Adding wheat bran (0.5% or 1% w/v) or banana peel waste (1% w/v) separately in place of citrus pectin achieved PGase levels nearly identical to those obtained with citrus pectin. Conversely, other carbon sources tested as alternatives to citrus pectin, such as pomegranate and artichoke peel wastes, glucose, fructose, maltose, xylose, sucrose, glycerol, peptone, beef extract and tryptone, at the concentrations listed in Table 1, suppressed PGase productivity by B. licheniformis SHG10. Moreover, the effect of adding certain salts as supplements to the production medium was tested. The data of Table 2 demonstrate that separate introduction of NaNO3, KNO3, CaCl2, FeSO4 and MgSO4 into the control production medium (citrus pectin-based basal medium), each at a concentration of 0.2% (w/v), enhanced the PGase level to varying degrees, with relative specific activities of 145%, 163.8%, 198.9%, 228.57% and 237.7%, respectively, compared to the level obtained with the control production medium. Addition of (NH4)2SO4 at a final concentration of 0.2% (w/v) resulted in a PGase level almost equivalent to that of the control production medium. On the other hand, a low level of PGase was detected in the production medium containing NH4Cl at a concentration of 0.2% (w/v).
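The relative specific activities quoted above come from a trivial normalization against the citrus pectin control. A minimal sketch, in which the U/mg figures are hypothetical placeholders rather than measured SHG10 values:

```python
def relative_specific_activity(sample_u_per_mg, control_u_per_mg):
    """Relative specific activity (%) of a test carbon source versus the
    citrus pectin-based basal medium control, as used in the OVAT screen."""
    return 100.0 * sample_u_per_mg / control_u_per_mg

# Illustrative values only (these U/mg figures are hypothetical):
control = 1.05         # citrus pectin based-basal medium
orange_peel = 1.743    # orange peel waste, 1% (w/v)
print(round(relative_specific_activity(orange_peel, control)))  # 166
```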
Concerning the effect of agitation speed on PGase production, three agitation speeds (100, 150 and 200 rpm) were studied. The PGase level obtained at 150 rpm was higher than that obtained at 100 rpm, and comparable with that obtained at 200 rpm (data not shown). Accordingly, an agitation speed of 150 rpm was selected for the further optimization experiments.
PBD results
The PBD matrix, coded and real values of the independent variables, and experimental vs. predicted PGase values are shown in Table 3, while the regression analysis and the independent variables showing significant effects on PGase levels are presented in Table 4. The detected PGase activity ranged from 0.0 to 3.04 U/ml, reflecting the clear need for optimization in order to attain the highest possible PGase levels. ANOVA showed a model P-value of 0.00087 and a model F-value of 10.26. This F-value reflects the significance of the model, and the P-value implies that there is only a 0.087% chance that an F-value this large could occur due to noise. Values of "Prob. > F" less than 0.05 were considered significant. In general, the significance of the coefficients has been reported to be directly proportional to the t-value and inversely proportional to the P-value (Douglas 2001; Heck et al. 2005). Regression analysis suggested that the PGase level was significantly affected by only three of the ten tested independent variables. These three independent variables, showing significant effects at P < 0.05, were the orange peel waste percentage, the pH of the production medium and the incubation temperature. The Pareto chart (Figure 1) conveniently illustrates the order of significance of the independent variables affecting PGase production based on their P-values. After exclusion of the insignificant model terms (P > 0.05), a modified first-order polynomial equation (Equation 1) was set up in terms of the coded independent variables to describe the linear effects of the orange peel waste percentage, pH of the production medium and incubation temperature on the PGase level.
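The Pareto-style ordering of PB terms can be sketched as follows: for a balanced two-level design, each main effect is the difference between the mean response at the high and low levels, and ranking by absolute effect reproduces the order of the Pareto chart (the published chart ranks by P-value, which for equal-variance effects yields the same ordering). The 8-run design and responses below are hypothetical illustrations, not the Table 3 data:

```python
# Main effects and Pareto-style ranking for a balanced two-level design.
# design[i][j] holds the coded level (-1/+1) of variable j in run i.
def main_effects(design, y):
    n = len(y)
    k = len(design[0])
    # effect_j = mean(y | x_j = +1) - mean(y | x_j = -1)
    return [sum(design[i][j] * y[i] for i in range(n)) * 2.0 / n
            for j in range(k)]

design = [
    [+1, -1, +1], [-1, +1, +1], [+1, +1, -1], [-1, -1, -1],
    [+1, -1, -1], [-1, +1, -1], [+1, +1, +1], [-1, -1, +1],
]
y = [2.8, 1.1, 2.6, 0.3, 2.5, 0.9, 3.0, 0.4]  # hypothetical PGase (U/ml)

effects = main_effects(design, y)
ranking = sorted(range(3), key=lambda j: -abs(effects[j]))
print(ranking)  # variable indices ordered by decreasing |effect|
```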
These three chosen independent variables recognized by PB were considered to be the main significant key determinants for PGase production by B. licheniformis SHG10. They were further studied in the next stage of the optimization plan via RSM.
RSM results
(Figure 1 caption: Pareto chart, in descending order, of the PB parameter estimates for the ten tested independent variables.)
The Box-Behnken design, a type of RSM approach, was employed in this study to locate the optimal levels of the three independent key determinants, identified through PB, that control PGase production. The design matrix along with the experimental and predicted PGase levels is displayed in Table 5. ANOVA showed a model F-value of 24.3 and a model P-value of 0.0013, implying the significance of the model and a likelihood of only 0.13% that this F-value could occur due to noise. Moreover, the adequacy of the model in explaining the relationship between the response (output) and the significant independent variables is indicated by the small model P-value (0.0013) and the large lack-of-fit P-value (0.276). The lack-of-fit F-value (2.77) is not significant relative to the pure error, and a non-significant lack of fit reflects the goodness of the model. The aptness of the model is further inferred from the R² value of 0.977. Regression coefficients were calculated in terms of the coded values of the independent variables, and the data were fitted to a second-order polynomial equation (Equation 2).
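The second-order fit behind Equation 2 can be sketched by ordinary least squares on the 15-run Box-Behnken design. The coefficient vector below is synthetic, chosen only so that negative quadratic terms give an interior maximum; it is not the fitted SHG10 model, and the R² shown is for noise-free synthetic data:

```python
import numpy as np

# 15-run Box-Behnken design for 3 factors: 12 edge midpoints + 3 center runs
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)

def quad_features(X):
    """Design matrix for the full quadratic model of Equation 2's form."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# Synthetic "true" coefficients: intercept, linear, quadratic, cross terms
true_beta = np.array([2.5, 0.1, 0.4, -0.3, -0.2, -0.9, -1.0, 0.3, 0.0, 0.0])
y = quad_features(X) @ true_beta          # noise-free synthetic response

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
resid = y - quad_features(X) @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 3))                        # 1.0 on noise-free data
```

The Box-Behnken layout supports estimation of all ten quadratic-model coefficients, which is why the fit recovers them exactly here.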
Our data revealed that only four of the nine model terms had a significant effect (P < 0.05) on PGase production (Table 6). The independent variable incubation temperature showed both linear and quadratic effects, at P-values of 0.0023 and 0.00022, respectively. In contrast, the independent variable pH of the production medium exhibited both a linear effect and a cross-interaction effect with orange peel waste, at P-values of 0.00039 and 0.0026, respectively.
To attain the optimized conditions, canonical analysis was carried out. Canonical analysis characterizes the overall shape of the response surface and determines whether the stationary point is a maximum, a minimum or a saddle point. The shape of the response is characterized by the eigenvalues and eigenvectors of the second-order coefficient matrix: the eigenvectors determine the principal directions of orientation of the surface, while the signs and magnitudes of the eigenvalues indicate the surface shape in those directions. Two rules of thumb explaining eigenvalues and their mathematical implications have been reported previously (Myers 1976). The first rule states that upward and downward curvature of the response is evidenced by positive and negative eigenvalues, respectively; the second states that the larger an eigenvalue is in absolute value, the more pronounced is the curvature of the response surface in the associated direction. Our model has eigenvalues of λ1 = -0.06128758, λ7 = -1.00026881 and λ9 = -1.02619361. Applying Myers' first rule, these negative eigenvalues indicate that the predicted stationary point is a maximum. Based on Myers' second rule, the two eigenvalues largest in absolute value (1.00026881 and 1.02619361) confer a pronounced curvature in the directions of the two independent variables X7 and X9 (the pH of the production medium and the incubation temperature, respectively; X1 is the orange peel waste percentage). This finding largely confirms the regression analysis, which indicated that X7 and X9 exhibited the most significant linear, quadratic and cross-interaction effects on the PGase level.
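The canonical analysis described above reduces to linear algebra on the fitted quadratic: writing the model as y = b0 + b·x + x'Bx, the stationary point solves 2Bx + b = 0, and the eigenvalues of B drive Myers' curvature rules. A minimal sketch with illustrative coefficients (not the fitted SHG10 model):

```python
import numpy as np

# Second-order model y = b0 + b.x + x'Bx. Coefficients are illustrative,
# chosen so all eigenvalues are negative (an interior maximum); they are
# NOT the actual fitted SHG10 model.
b0 = 2.5
b = np.array([0.05, 0.10, -0.08])       # linear terms for X1, X7, X9
B = np.array([[-0.30, 0.05, 0.00],       # diagonal = quadratic coefficients,
              [0.05, -0.95, 0.00],       # off-diagonal = half the
              [0.00, 0.00, -1.00]])      # cross-interaction coefficients

xs = -0.5 * np.linalg.solve(B, b)        # stationary point (coded units)
eigvals = np.linalg.eigvalsh(B)          # curvature along principal axes

# Myers' 1st rule: all eigenvalues negative -> stationary point is a maximum.
# Myers' 2nd rule: the largest |eigenvalue| marks the steepest curvature.
print(bool(np.all(eigvals < 0)))         # True
```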
Based on the canonical analysis, the predicted coded stationary point was at X1 = 0.260620063, X7 = 0.005901057 and X9 = -0.215413876, giving a predicted Y of 2.69 μg galacturonic acid min⁻¹ mg⁻¹. Moreover, the predicted stationary point lies clearly inside the explored domain (model constraints).
To further explore the nature of the response surface at the stationary point, three-dimensional contour surface plots were generated (Figures 2, 3 and 4). Each contour surface plot is based on the model, holding one independent variable constant at its optimal level while varying the other two within the domain. Figure 2 illustrates the response of the dependent variable (PGase) at the optimal incubation temperature: the maximal predicted PGase level of 2.69 μg galacturonic acid min⁻¹ mg⁻¹ was observed at 1.76% (w/v) orange peel waste and pH 8.0 of the production medium, close to the center point of the model. The contour surface plot in Figure 3 reveals that the maximal PGase level at the optimal production-medium pH is reached at 1.76% (w/v) orange peel waste and 37.8°C. Correspondingly, these predicted levels of the dependent and independent variables are further evidenced by the contour surface plot in Figure 4. Concentrations of orange peel waste greater than 1.76% (w/v) did not further enhance the PGase level, as revealed by the contour surface plots in Figures 2 and 3, reflecting that a stationary point is reached at this concentration. Conversely, production-medium pH and incubation temperature beyond 8.0 and 37.8°C, respectively, adversely affected the PGase level, as illustrated by the contour surface plot in Figure 4. These results are consistent with the canonical analysis regarding the signs and magnitudes of the model eigenvalues.
In addition, the model for PGase production was validated experimentally using the aforementioned predicted levels of the independent variables; the experimental data showed the model adequacy to be approximately 100%.
Optimum pH and temperature for crude PGase
The data revealed that SHG10 PGase showed an appreciable level of activity over a wide pH range (3.0-11.0) (Figure 5a), while the optimum temperature for enzyme activity was found to be 50°C (Figure 5b).
Levels of pectic oligosaccharides
The level of pectic oligosaccharides accumulated in the fermentation broth of this bioprocess was estimated and found to be 200 μg galacturonic acid/mL of fermentation broth after 24 hrs. No higher levels of these substances were found in the fermentation broth beyond 24 hrs of incubation.
Discussion
PGases, members of the pectinase family with numerous industrial applications, continue to attract the attention of many researchers worldwide. Owing to the broad potential applications of pectinases, and of PGases in particular, laboratories worldwide continue to report the isolation and characterization of novel PGases, mainly from fungi and rarely from bacteria. Current commercial PGase preparations available in enzyme markets are derived exclusively from fungal species. Fungal PGases normally have pH optima in the range 3.5-5.5, and this restricted acidic range greatly confines the extent of their utilization. Commercialization of any new enzyme process is usually constrained by the high cost of production; the culture medium alone accounts for 30-40% of the overall production cost (Bayoumi et al. 2008). At the same time, under-utilized agro-industrial wastes accumulate in considerable amounts that cannot be ignored, and improper waste disposal methods give rise to health hazards and environmental problems. Here, the process of PGase production by B. licheniformis SHG10 was studied thoroughly from the standpoint of low cost combined with appreciable yield.
In the search for a low-cost medium composition that would simultaneously support the growth of B. licheniformis SHG10 and induce PGase production, different raw agro-industrial wastes (orange, lemon, pomegranate, banana and artichoke peel wastes) were introduced separately into the fermentation broth as sole PGase inducers. Our data demonstrate that the carbon source has a profound impact on PGase production by B. licheniformis SHG10. The present finding reveals that raw orange peel waste is a superior PGase inducer compared with the other raw agro-industrial wastes, synthetic citrus pectin and the other synthetic carbon sources tested in this study. This disagrees with the finding of Dey et al. (2011), who reported that synthetic citrus pectin was the best PGase inducer from Bacillus sp. AD1 relative to several agro-industrial wastes, particularly lemon and orange peel wastes. Synthetic citrus pectin was also reported to be the best PGase inducer from E. aerogenes NBO2 (Darah et al. 2013). In accordance with our findings, orange peel waste was the best inducer of PGase from Aspergillus niveus in submerged fermentation (Maller et al. 2011). With regard to the impact of wheat bran on PGase productivity from our bacterial strain, the data revealed that nearly identical PGase levels were obtained upon adding synthetic citrus pectin and wheat bran separately to the fermentation broth; comparable reports for other producers, including B. sphaericus (MTCC 7542), are in good agreement with this finding. Moreover, glucose, xylose, maltose and sucrose as sole carbon sources were reported to inhibit PGase production from B. firmus I-10104 in solid-state fermentation (SSF), and glucose and sucrose at a concentration of 1% (w/v) were reported to inhibit PGase productivity from E. aerogenes NBO2 (Darah et al. 2013).
The noticeable reduction in PGase levels in the presence of sugars as sole carbon source and PGase inducer could be attributed to the phenomenon of catabolite repression (Ahlawat et al. 2009; Cavalitto et al. 1996; Solís-Pereira et al. 1993).
Productivity of bacterial and fungal PGases is reportedly greatly affected by certain mineral salts added to the fermentation broth. To explore their role in PGase productivity by B. licheniformis SHG10, the effect of the mineral salts mentioned above was investigated. The present data revealed that introduction of MgSO4, FeSO4, CaCl2, KNO3 and NaNO3, each at a concentration of 0.2% (w/v), into fermentation broth containing synthetic citrus pectin as co-inducing sole carbon source resulted in variably higher PGase levels compared with those obtained at zero concentrations of these salts. In accordance with the present finding, KNO3 at a concentration of 0.2% (w/v) was the best nitrogen-source PGase inducer from B. licheniformis growing on agro-industrial potato peel waste in SSF (Dharmik and Gomashe 2013). In contrast to our finding, neither NaNO3 nor KNO3 at a concentration of 0.1% (w/v) showed stimulatory effects on PGase productivity from B. sphaericus (MTCC 7542) in citrus pectin-containing fermentation broth (Jayani et al. 2010). Similarly, NaNO3 exhibited an inhibitory effect on PGase production from E. aerogenes NBO2 (Darah et al. 2013). Other mineral salts, namely NH4Cl and (NH4)2SO4, inhibited PGase production from B. licheniformis SHG10, whereas separate addition of these two salts to fermentation broth has been reported to yield variable PGase levels from different producers, either higher or lower than those obtained at zero concentration of each salt (Jayani et al. 2010; Darah et al. 2013; Bayoumi et al. 2008; Dharmik and Gomashe 2013; Kashyap et al. 2003). Concerning yeast extract, the literature likewise reports varied impacts on PGase production from different producers (Rehman et al. 2012; Kapoor et al. 2000).
Low constitutive levels of PGase from B. licheniformis SHG10 were detected in fermentation broth containing peptone, beef extract or tryptone but lacking any source of pectin, whether synthetic or raw. Other studies, however, have reported varied PGase levels from different producers in fermentation broth containing peptone or tryptone as additional nitrogen sources in the presence of a pectin source.
In the course of high-yield PGase production from B. licheniformis SHG10, statistical optimization was applied to maximize the PGase yield. Factors roughly identified through OVAT as stimulating PGase production from B. licheniformis SHG10 growing on citrus pectin-based basal medium were studied further through PB. Statistical analysis of the PB data revealed that none of the mineral salts identified through OVAT (MgSO4, KNO3, NaNO3, CaCl2 and FeSO4) had a significant impact on PGase productivity in fermentation broth containing orange peel waste instead of synthetic citrus pectin. Agro-industrial orange peel waste appears to be a rich substrate that can simultaneously provide the bacterium with all the elements needed for growth and PGase induction, a finding that greatly reduces the cost of the PGase production medium.
Substituting raw solid substrates such as agro-industrial wastes for high-cost synthetic substrates in enzyme-production bioprocesses is feasible but poses challenges. One major obstacle is whether a solid waste can cover all of a microorganism's requirements for the organic and inorganic substances needed for growth and enzyme induction. The more the essential nutritional elements required for microbial growth and enzyme induction are present at suboptimal levels in the raw solid substrate, the greater the need to add external supplements to the fermentation broth (Sneath 1986), and consequently the higher the cost of the production medium.
Optimized production of PGase from B. licheniformis SHG10 is achievable using a low concentration of orange peel waste (1.76% w/v) in the fermentation broth as the sole carbon source and the only PGase inducer, which can be attributed to the high pectin content of this waste. Moreover, the low percentage of waste needed to support PGase production simplifies the downstream removal of the remaining undegraded orange peel waste, decreasing the downstream processing cost and thus the overall cost of PGase production.
With regard to the optimal temperature and production-medium pH for PGase production, our findings agree with some reports and disagree with others (Soares et al. 1999; Dey et al. 2011; Maller et al. 2011; de Andrade et al. 2011; Das et al. 2011; Deshmukh et al. 2012). Regarding the pH optimum for PGase activity, the crude SHG10 PGase worked efficiently over a wide pH range from 3.0 to 11.0, with a slight decline in activity at pH 11, and its optimal activity was confined to the neutral-alkaline range. In addition, the crude SHG10 PGase was active over a wide temperature range (37-50°C), with optimal activity at 50°C. The alkaline pH range (8.0-11.0) required for optimal SHG10 PGase activity highlights the potential applications of this enzyme in textile processing, degumming of plant bast fibers, treatment of pectic wastewaters, paper making, and coffee and tea fermentations. The wide ranges of pH and temperature under which SHG10 PGase works efficiently suggest broad and promising industrial applications, since the selected combination of pH and temperature is a crucial factor in the success of any given industrial application. Consequently, SHG10 PGase would have an advantage over fungal PGases, which have restricted ranges of optimal pH and temperature.
Regarding pectic oligosaccharide production, the literature contains a plethora of chemical methods for synthesizing pectic oligosaccharides, which have reported medical applications (Olano-Martin et al. 2003a,b). In this study, the data revealed the possibility of obtaining an appreciably high level of these substances (200 μg galacturonic acid/mL) in the fermentation broth of B. licheniformis SHG10 growing on agro-industrial orange peel waste as the sole carbon source. These levels of pectic oligosaccharides resulted from the activity of the PGase produced in the fermentation broth on its complex substrate, orange peel waste; indeed, the more PGase is produced in the broth, the more pectic oligosaccharides are liberated. This finding greatly motivates future optimization of orange peel waste biodegradation toward maximal pectic oligosaccharide production. Moreover, the nature of the obtained pectic oligosaccharides could be controlled through the cascade of pectinases that B. licheniformis SHG10 may produce under the stated conditions. This is a promising, though challenging, alternative approach to pectic oligosaccharide biosynthesis from the standpoints of cost effectiveness, quality and yield.
Conclusions
The present work describes a cheap, rapid biotechnological method to support the PGase and pectic oligosaccharide industries through bioprocessing of agro-industrial orange peel waste by B. licheniformis SHG10. The characteristics of SHG10 PGase, particularly its wide range of optimal pH, confirm its potential biotechnological applications.
In future work, the authors plan to further maximize the yield of PGase from B. licheniformis SHG10 through molecular gene cloning, to improve the physicochemical properties and activity of the enzyme toward pectin-containing substrates through directed-evolution methodologies, and to optimize the yield of pectic oligosaccharides accumulated in the fermentation broth as a result of orange peel waste biodegradation in this bioprocess.
Materials and methods
Bacterial strain
Bacillus licheniformis strain SHG10 was used in this study as the PGase producer. This bacterium was previously isolated from Egyptian soil and identified as the B. licheniformis SHG10 strain (unpublished data).
Pectin-containing materials
Different pectin-containing materials were co-utilized in this study as sole carbon sources for the growth of the producer bacterial strain and as PGase inducers. They included synthetic citrus pectin, wheat bran, orange peel waste, lemon peel waste, banana peel waste, artichoke peel waste and pomegranate peel waste. The last five were collected from various sites (local Egyptian markets, domestic effluents and agricultural fields). The collected pectin-containing materials were washed with distilled water, dried at 60°C for five hrs, and cut into small pieces before incorporation into the fermentation broth. Synthetic citrus pectin was purchased from Sigma-Aldrich Co., while wheat bran was obtained from flour mill companies in Alexandria, Egypt.
Media
Peptone yeast (PY) broth (Bernhardt et al. 1978) was used to activate the bacterial producer strain; PA medium is PY broth with 1.5% agar. The polygalacturonase core production medium of Soares et al. (1999), with slight modifications, was used during the initial steps of the optimization process. The modified PGase core production medium contained the following components (g per 100 mL, i.e., % w/v): 0.5 pectin, 0.14 (NH4)2SO4, 0.6 K2HPO4, 0.2 KH2PO4, 0.01 MgSO4 and 0.3 yeast extract, unless otherwise stated.
Inoculum preparation
A loopful of B. licheniformis SHG10 preserved on a PA slant was streaked on PA agar and incubated at 37°C overnight. One colony was picked to inoculate 20 ml of PY medium in a 100 ml Erlenmeyer flask. The inoculated broth was incubated at 37°C with an agitation speed of 200 rpm for 4 hrs until the culture OD at 420 nm reached 0.5. This growing culture (seed broth) was then used to inoculate the fermentation broth (production medium) at an inoculum size of 2% (v/v) unless otherwise stated.
PGase assay
PGase activity was assayed by estimating the amount of reducing sugars released under the assay conditions. The amount of released reducing sugars, expressed as galacturonic acid, was determined as reported previously (Miller 1959) using 2-hydroxy-3,5-dinitrobenzoic acid (DNSA; Shanghai Orgpharma Chemical Co., Ltd., China). Briefly, the reaction mixture contained 0.5 mL of 0.5% citrus pectin (Sigma-Aldrich Co.) as substrate (dissolved in 50 mM Tris-HCl, pH 7.6) and 0.5 mL of crude enzyme (fermentation broth). The mixture was incubated at 37°C for 20 min, after which the enzymatic reaction was stopped by adding 1 mL of DNSA followed by boiling for 10 min. The final volume was then brought to 4 mL with distilled water, and the developed color was measured at 540 nm. Control reactions were prepared as above, except that DNSA was added before the crude enzyme. A standard curve was established with α-galacturonic acid (Sigma-Aldrich Co.). One unit (arbitrary unit) of enzyme activity was defined as the amount of enzyme that releases one μg of α-galacturonic acid per min from citrus pectin as substrate in 50 mM Tris-HCl, pH 7.6 at 37°C.
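The unit definition above translates into a simple calculation from the A540 reading and the galacturonic acid standard curve. A sketch in which the standard-curve slope and the absorbance value are hypothetical examples, not measured data:

```python
# Sketch of the unit calculation behind the DNSA assay. The standard-curve
# slope and absorbance below are hypothetical example values.
def pgase_units_per_ml(a540, slope_ug_per_abs, assay_min=20.0, enzyme_ml=0.5):
    """One unit = 1 ug galacturonic acid released per min; activity is
    expressed per mL of crude enzyme added to the reaction."""
    ug_released = a540 * slope_ug_per_abs   # read off the standard curve
    return ug_released / (assay_min * enzyme_ml)

# e.g. A540 = 0.42 with a curve slope of 250 ug per absorbance unit:
print(round(pgase_units_per_ml(0.42, 250.0), 2))  # 10.5
```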
Protein determination
The protein content of the crude enzyme solution was determined as reported previously using the Folin-Lowry reagent (Lowry et al. 1951). A standard curve was established using bovine serum albumin.
Pectic oligosaccharides determination
Pectic oligosaccharides were determined as reported previously by the method of Miller (1959). Briefly, 0.5 mL of fermentation broth was added to 1 mL of DNSA and the reaction mixture was boiled for 10 min. The absorbance of the developed color was then measured at 540 nm against a blank (the same reaction with water added instead of fermentation broth).
Optimizing the production of PGase from B. licheniformis SHG10
Optimizing the production of PGase from the producer strain was accomplished through a plan of three successive steps: a one-variable-at-a-time (OVAT) approach, a Plackett-Burman design and a Box-Behnken design.
OVAT approach
OVAT was employed in this study to screen different independent variables that might either stimulate or inhibit PGase production. This approach changes one variable at a time without studying interactions among the tested variables. The effect of different agro-industrial wastes (orange, lemon, banana, artichoke and pomegranate peel wastes and wheat bran) and several synthetic carbon sources (citrus pectin, tryptone, peptone, beef extract, glucose, maltose, sucrose, xylose, fructose and glycerol) on PGase productivity by B. licheniformis was assessed. In addition, the effect of different salts, namely NaNO3, KNO3, NH4Cl, CaCl2, MgSO4 and FeSO4, was studied. The exact concentrations of the tested substances are displayed in Tables 1 and 2.
Plackett-Burman design (PBD)
Identification of the significant key determinants (physicochemical independent parameters) of a bioprocess, along with the linear effect of these tested variables, was achieved by applying a powerful statistical approach, the Plackett-Burman design, developed by the statisticians Plackett and Burman (1946). In this approach, the linear effect of N independent variables on the dependent variable (the output of a bioprocess) is evaluated in N + 1 experiments. Normally, each independent variable is studied at two levels, -1 and +1 (the low and high coded levels, respectively). The design matrix was generated with the statistical software package Minitab version 15. Here, twenty experimental runs (trials) were conducted. The following first-order polynomial equation (Equation 3) was used to evaluate the linear effect of the ten tested independent variables on the level of PGase enzyme:

Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6 + β7X7 + β8X8 + β9X9 + β10X10 (Equation 3)

where Y is the level of PGase activity, β0 is the model intercept, X1-X10 are the tested independent variables (orange peel waste, NaNO3, MgSO4, CaCl2, FeSO4, KNO3, pH, inoculum size, incubation temperature, and incubation time, respectively), and β1-β10 are the coefficients of the ten tested independent variables. The experimental runs were conducted according to the PBD matrix in 250 mL Erlenmeyer flasks with a working volume of 25 mL. All experimental runs were conducted at an agitation speed of 150 rpm.
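Fitting the first-order model of Equation 3 reduces to ordinary least squares. The sketch below uses a placeholder ±1 design matrix and simulated responses, since the actual Minitab-generated matrix and activity data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 20-run, 10-factor two-level (-1/+1) design matrix; the real
# Plackett-Burman matrix was generated with Minitab 15, so random levels
# stand in here for illustration only.
X = rng.choice([-1.0, 1.0], size=(20, 10))

# Simulated PGase responses following Equation 3 with assumed coefficients
true_beta = np.array([5.0, 3.0, 0.0, 0.0, -2.0, 0.0, 4.0, 0.0, 1.5, 0.0])
y = 50.0 + X @ true_beta

# Least-squares fit of Y = b0 + sum(bi * Xi)
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, betas = coef[0], coef[1:]
```

Large-magnitude coefficients flag the significant factors to carry forward into the response surface step.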
Box-Behnken design
Three key determinants (orange peel waste percent, pH of the production medium, and incubation temperature) identified through the PBD had significant effects on the level of PGase. To determine the optimal level of each key determinant (independent variable) along with the maximal level of PGase (dependent variable), a response surface methodology approach was applied. The Box-Behnken design, developed by Box and Behnken (1960), was employed in this study. Fifteen experimental runs (trials) were conducted. The following second-order polynomial equation (Equation 4) was used to estimate the effect of all possible forms of interaction among the three independent variables on the level of PGase enzyme:

Y = β0 + β1X1 + β7X7 + β9X9 + β11X1² + β77X7² + β99X9² + β17X1X7 + β19X1X9 + β79X7X9 (Equation 4)

where Y is the level of PGase activity, β0 is the model intercept, X1, X7, and X9 are the tested independent variables (orange peel waste percent, pH of the production medium, and incubation temperature, respectively), β1, β7, and β9 are linear coefficients, β11, β77, and β99 are quadratic coefficients, and β17, β19, and β79 are cross-interaction coefficients. For statistical calculations, each independent variable X was coded as Xi according to Equation 5.
Xi = (xi − x0) / Δxi (Equation 5)

where Xi is the dimensionless coded value of the independent variable, xi is the real value of this variable at this coded value, x0 is the real value of this variable at the center point (zero level), and Δxi is the step-change value. The experimental runs were conducted according to the Box-Behnken matrix in 250 mL Erlenmeyer flasks with a working volume of 25 mL. All experimental runs were conducted at an agitation speed of 150 rpm.
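The coding step of Equation 5 can be sketched in a few lines; the peel-waste levels, center point, and step size below are hypothetical illustrations, not the study's actual design points:

```python
def code_variable(x, x0, dx):
    """Equation 5: dimensionless coded value of an independent variable,
    where x0 is the real value at the center point (zero level) and dx is
    the step-change value."""
    return (x - x0) / dx

# Hypothetical Box-Behnken levels for orange peel waste: 1%, 2%, 3% (w/v)
# with center point x0 = 2 and step change dx = 1 code to -1, 0, +1.
levels = [code_variable(x, 2.0, 1.0) for x in (1.0, 2.0, 3.0)]
```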
Statistical, canonical analyses and contour plots
The rsm package (R Development Core Team 2009), available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=rsm, was used in this study to carry out multiple regression, canonical analyses, and graphing of three-dimensional contour surface plots.
Effect of different pH and temperature on the activity of PGase
Two buffers were used in this study to cover a wide pH range: 50 mM sodium acetate buffer (pH 3-6) and 50 mM phosphate buffer (pH 7-11). Five temperatures (37°C, 40°C, 50°C, 60°C, and 70°C) were used to determine the optimal activity of PGase.
Digital Commons @ Michigan Tech
Leveraging very-high spatial resolution hyperspectral and thermal UAV imageries for characterizing diurnal indicators of grapevine physiology
Abstract: Efficient and accurate methods to monitor crop physiological responses help growers better understand crop physiology and improve crop productivity. In recent years, developments in unmanned aerial vehicles (UAV) and sensor technology have enabled image acquisition at very-high spectral, spatial, and temporal resolutions. However, potential applications and limitations of very-high-resolution (VHR) hyperspectral and thermal UAV imaging for characterization of plant diurnal physiology remain largely unknown, due to issues related to shadow and canopy heterogeneity. In this study, we propose a canopy zone-weighting (CZW) method to leverage the potential of VHR (≤9 cm) hyperspectral and thermal UAV imageries in estimating physiological indicators, such as stomatal conductance (Gs) and steady-state fluorescence (Fs). Diurnal flights and concurrent in-situ measurements were conducted during grapevine growing seasons in 2017 and 2018 in a vineyard in Missouri, USA. We used a neural net classifier and the Canny edge detection method to extract pure vine canopy from the hyperspectral and thermal images, respectively. Then, the vine canopy was segmented into three canopy zones (sunlit, nadir, and shaded) using K-means clustering based on the canopy shadow fraction and canopy temperature. Common reflectance-based spectral indices, sun-induced chlorophyll fluorescence (SIF), and the simplified canopy water stress index (siCWSI) were computed as image retrievals. Using the coefficient of determination (R2) established between the image retrievals from the three canopy zones and the in-situ measurements as a weight factor, weighted image retrievals were calculated and their correlation with in-situ measurements was explored. The results showed that the most frequent and highest correlations with Gs and Fs were found for the CZW-based photochemical reflectance index (PRI), SIF, and siCWSI (PRICZW, SIFCZW, and siCWSICZW), respectively.
When all flights were combined for a given field campaign date, PRICZW, SIFCZW, and siCWSICZW significantly improved the relationship with Gs and Fs. The proposed approach takes full advantage of VHR hyperspectral and thermal UAV imageries and suggests that the CZW method is simple yet effective in estimating Gs and Fs.
Introduction
Grapevine (Vitis spp.) is one of the most commercially important berry crops in the world [1]. In grapevines, moderate water deficit is necessary to achieve desired berry quality and yield, although the effects of deficit irrigation on berry yield and quality are dependent upon the weather during the growing season, soil type, grapevine variety, and timing of water application [2][3][4][5]. Understanding the physiological responses of grapevine to mild to moderate water stress is fundamental to optimize deficit irrigation timing and amount [5,6]. Additionally, vine physiology is sensitive to diurnal cycles and vineyard microclimates, with even temporary stress having the potential to alter berry chemistry and vine growth [7,8]. Therefore, it is critical to account for physiological changes associated with these factors in field conditions throughout diurnal cycles rather than focusing only on pre-dawn or midday measurements, which are often used [5,6].
Current methods for estimating physiological processes include quantification of gas exchange, stomatal conductance, canopy temperature, and stem water potential [9]. These approaches are time-consuming, labor-intensive, and destructive (leaves need to be detached for stem water potential). Further, they are unsuitable for automation, subject to measurement and sampling errors, and the instrumentation required can be prohibitive in terms of cost [10,11]. More importantly, the data collected with traditional tools represent incomplete spatial and temporal characterization of key vine physiological parameters due to the time involved in taking the measurements and the capacity of the instruments themselves [12,13]. Therefore, it is necessary to have efficient monitoring systems that enable accurate tracking of key parameters governing vine function at high spatial and temporal resolution to obtain a reliable overview of vine physiology.
Hyperspectral and thermal sensors installed on field robot-, aircraft-, and satellite-based platforms are an increasingly common approach used to characterize plant physiology [14][15][16][17]. In hyperspectral remote sensing, sensors measure radiative properties of plants with hundreds to thousands of continuous narrow bands in the optical domain (0.35-2.5 µm). This abundant spectral information increases the chance of detecting subtle physiological changes compared to multispectral data, which have a small number of bands averaged over a wide spectral region and are insensitive to narrow spectral signatures [18][19][20]. The photochemical reflectance index (PRI) and sun-induced fluorescence (SIF) retrieved from hyperspectral remote sensing are the most widely used indicators in the remote assessment of plant photosynthetic activity [21][22][23][24]. The PRI was formulated to track the xanthophyll cycle, which relates to plant oxidative stress associated with photosynthesis, using changes in green reflectance centered at 531 nm [25]. SIF is a direct proxy of photosynthesis, because it detects re-emitted excess light energy at 600-800 nm, from photosystems I and II, released to minimize photosystem damage as part of the plant photo-protective mechanism [26][27][28][29].
Thermal remote sensing (8-14 µm) is a popular tool, but unlike the aforementioned indices, which rely on factors related to photosynthetic activity, thermal data are a strong proxy for transpiration activity. Therefore, the rationale behind the application of thermal remote sensing for plant stress detection is the correlation between stress level and plant temperature increase, which is triggered by stomatal closure and reduced transpiration [30]. To overcome the effects of varying meteorological conditions on the stress-temperature relationship, the canopy water stress index (CWSI) was developed by normalizing the canopy (Tc) and air temperature (Ta) difference with the evaporative demand [31,32].
When satellite- or aircraft-based hyperspectral and thermal observations are made, the above-mentioned remotely sensed indices are affected by many factors, including soil/background, canopy architecture, and shadow, due to the lack of spatial resolution [17,[33][34][35][36]. This is particularly true for highly heterogeneous fields of perennial woody crops (e.g., orchards and vineyards), where plants are planted in rows with cover crops or bare soil between the rows [15,37,38]. Further, low revisit frequency, high cost, and potential cloud occurrence limit the suitability of satellite remote sensing in agriculture, while operational complexity presents a major constraint for manned airborne platforms [39][40][41]. Alternatively, remotely sensed data from field-based platforms (poles/towers and manned/unmanned vehicles) have the capacity to assess plant health status [42,43]. However, there are shortcomings in these as well; for example, field-based remote sensing platforms are not easily transported and often offer a limited footprint [44][45][46].
Within the past few years, huge strides have been made in unmanned aerial vehicles (UAVs) and sensor technologies, which have enabled image acquisition at high spectral, spatial, and temporal resolutions over small to medium fields. Inexpensive and agile UAVs equipped with lightweight miniaturized sensors offer attractive alternatives for field-scale phenotyping and precision agriculture [44,47,48]. However, UAV-based studies have been limited by cost, experienced pilot shortages, lack of methods for fast data processing, and strict airspace regulations [44]. With the availability of lower-cost commercial UAV platforms (which are easy to operate) and sensors, improved image processing methods, and effective airspace regulations, those limitations are becoming less relevant [44,49,50]. Indeed, high spatial resolution images acquired at low altitudes have a favorable signal-to-noise ratio; further, it is possible to eliminate soil and shadow pixels with high confidence [51][52][53][54][55][56]. Additionally, image information (radiance, reflectance, and temperature) extracted from pure vegetation pixels is likely to reduce the effects of shadows and background soils, thus improving the estimation of crop biochemical, biophysical, and physiological parameters [22,35,[56][57][58].
Recently, questions have arisen regarding the effects of background and within-canopy heterogeneity on SIF (sun-induced fluorescence) and CWSI (canopy water stress index) [17,38]. Hernández-Clemente et al. [17] demonstrated the effects of background pixels on SIF retrievals in monitoring forest health impacted by water stress and Phytophthora infections. Camino et al. [38] showed an improved relationship between SIF and photosynthetic rate when SIF was retrieved from the sunlit pixels of the almond tree canopy, while Gs had the best correlation with CWSI calculated using the coldest and purest canopy pixels (below the 25th and the 50th percentiles of the canopy pixels). Therefore, it is critical to first separate non-vegetation pixels (shadows and background soils) from pure vegetation pixels before establishing the relationship between remote sensing stress indicators, such as SIF and CWSI, and in-situ measurements. Importantly, the studies by Hernández-Clemente et al. [17] and Camino et al. [38] highlighted the significance of very-high spatial resolution (VHR) hyperspectral and thermal images for further understanding the effects of background and canopy structure on remote sensing stress indicators. Additionally, there is a lack of consensus on determining canopy temperature in vineyards due to the unique canopy architecture, which can be divided into sunlit, nadir, and shaded zones [59]. Reinert et al. [60] and Pou et al. [61] found a high correlation between sunlit canopy zone temperature and Gs and stem water potential. In contrast, Baluja et al. [62] and Möller et al. [63] showed promising results when the nadir canopy zone was used to extract canopy temperature. These findings warrant further investigation of the effect of canopy structure on commonly used remote sensing indicators in order to take full advantage of the information contained within VHR hyperspectral and thermal images.
In this study, we build on previous work and further explore applications of VHR hyperspectral and thermal images in the quantification of physiological parameters in plants. The objectives of this study are (i) to investigate the relationship between information extracted from VHR aerial images over three different canopy zones (sunlit, nadir, and shaded zones) and in-situ physiological indicators, such as stomatal conductance (G s ) and steady-state fluorescence (F s ), and (ii) to test the canopy zone-weighting (CZW) method's capacity to use aerial data to approximate diurnal physiological indicators.
Experimental Site Description and Meteorological Measurements
Ground and aerial data were collected during the 2017 and 2018 growing seasons in a 0.9 ha experimental vineyard at the University of Missouri-Columbia Southwest Research Center in Mount Vernon, Missouri, USA (37°4′27.17″ N, 93°52′46.70″ W). The climate of the region is continental, with an average annual temperature of 15.6 °C and a mean annual rainfall of 1067 mm. The experimental vineyard consists of ungrafted 'Chambourcin' vines and 'Chambourcin' scions grafted to one of the following rootstocks: Selection Oppenheim 4 (SO4), 1103 Paulsen (1103P), and 3309 Couderc (3309C). In total, the vineyard includes four scion/rootstock combinations: 'Chambourcin ungrafted', 'Chambourcin/SO4', 'Chambourcin/1103P', and 'Chambourcin/3309C'. The ungrafted and grafted vines are planted in rows with 3 m row spacing and 3 m vine spacing along the row (Figure 1a). The vineyard has an east-west row orientation. Vines were planted in 2009 and were eight years old at the beginning of sampling in this study. 'Chambourcin' vines were trained to a high-wire cordon trellis and spur-pruned. The soil in the vineyard is a combination of sandy loam, silt loam, and loam, with an average pH of 6. Additional details of the study site are available in Maimaitiyiming et al. [64] and Maimaitiyiming et al. [65].
The 'Chambourcin' experimental vineyard consists of nine rows, each of which is treated with one of three different irrigation treatments, replacing 0%, 50%, and 100% of evapotranspiration (ET) losses. Each irrigation treatment is replicated three times (in three of the nine rows) and ET is obtained from a weather station installed at 270 m from the site. Each vineyard row includes 32 vines planted in cells of four adjacent vines of the same type ('Chambourcin' ungrafted or grafted to the same rootstocks). Within each four-vine cell, the two central vines were monitored through ground measurements.
Measurements of hourly air temperature (°C), relative humidity (%), solar radiation (W/m²), and wind speed (m/s) were obtained from the weather station. Vapor pressure deficit (VPD, hPa), which represents the water demand of the atmosphere better than relative humidity [66], was calculated from air temperature and relative humidity using the equations of Struthers et al. [67].
Diurnal Physiological Measurements
Stomatal conductance (Gs) and steady-state chlorophyll fluorescence (Fs) were employed as important indicators of plant physiology because stomatal closure is one of the first responses to water deficit occurring in the leaves, and both Gs and Fs are closely correlated with net photosynthesis in grapevines and other species [9,[68][69][70][71]. Measurements of Gs and Fs were taken at veraison (the stage at which the berries begin shifting from green to dark red, usually late July/early August). Table 1 presents details on the field and aerial data acquisition campaigns in the two growing seasons. In 2017, Gs and Fs were measured on 2-3 sunlit, youngest fully-matured leaves (one leaf per shoot) from main exterior shoots per vine using a porometer (SC-1, Decagon, Pullman, Washington, USA) and a fluorometer (FluorPen FP 110, Photon Systems Instruments, Drásov, Czech Republic), respectively. The porometer has a manufacturer-claimed measurement range of 0 to 1000 mmol H2O m−2 s−1 with an accuracy of 10% [72]. The physiological measurements were taken from leaves located on the upper third and sunny (south-facing) side of the canopy to represent whole-vine physiology, following previous similar studies [17,38,59]. In 2018, leaf gas exchange (Gs, photosynthetic CO2 assimilation rate, etc.) and Fs measurements were performed on a single leaf (the same leaf-selection standard from the previous year was applied) per vine using a portable LI-6400XT infrared gas analyzer equipped with a pulse amplitude modulated leaf fluorometer chamber (Li-Cor Biosciences Inc., Lincoln, NE, USA). The leaves were measured at a controlled CO2 concentration of 400 µmol mol−1, at a photosynthetic photon flux density of 1000 µmol photons m−2 s−1, and at ambient air temperature and relative humidity.
Eighteen vines (four scion/rootstock combinations, plus additional 'Chambourcin ungrafted' and 'Chambourcin/SO4' combinations for each irrigation treatment) in 2017 and twelve vines (four scion/rootstock combinations for each irrigation treatment) in 2018 were monitored for physiology at the time of each diurnal flight, representing both irrigation and rootstock treatments. This experimental design was expected to produce a wide range of vine physiological variation, making the site ideal for capturing high variability in a relatively small area. Additionally, given the time required per measurement, it would have been unfeasible to representatively sample each subgroup for correlation with the diurnal hyperspectral data.
Aerial Image Acquisition and Pre-Processing
The diurnal aerial campaigns were carried out in 2017 and 2018 using thermal and hyperspectral cameras onboard UAVs ( Figure 2). Ideally, ground and aerial data collection should be carried out with the same instruments and cameras. In our case, this ideal scenario was not attainable due to limited resources and field crew availability. However, this also afforded us the opportunity to validate certain findings across multiple systems.
On 15 September 2017, a DJI S1000+ octocopter, rotary-wing platform (DJI Technology Co., Ltd., Shenzhen, China) was employed to carry an ICI (Infrared Cameras Inc.) 8640 P-Series thermal camera (Beaumont, Texas, USA) (Figure 2a,b). The aerial platform was equipped with a Pixhawk autopilot system, which enables autonomous flights based on user-defined waypoints. The ICI thermal camera was mounted on a custom-designed two-axis gimbal. During the aerial campaigns, the UAV was flown at 30 m altitude above the ground with a fixed speed of 5 m/s. The ICI thermal camera has a resolution of 640 × 512 pixels in a spectral range of 7-14 µm with a 13 mm focal length, providing a ground sampling distance (GSD) of 4 cm at 30 m flight height. To successfully construct thermal orthomosaics, the flight missions were planned to have 90% forward and 80% side overlap of the thermal images, and the ICI camera was configured to capture a 14-bit radiometric JPEG image every second during the flight.
On 1 August 2018, aerial hyperspectral and thermal images were acquired using a visible and near-infrared (VNIR, 400-1000 nm) push-broom hyperspectral camera (Nano-Hyperspec VNIR model, Headwall Photonics, Fitchburg, MA, USA) installed in tandem with a FLIR (forward-looking infrared) thermal camera (FLIR Vue Pro R 640, FLIR Systems, Inc., Wilsonville, OR, USA) onboard a DJI hexacopter (Matrice 600 Pro, DJI Technology Co., Ltd., Shenzhen, China) (Figure 2c,d). The Matrice 600 Pro is equipped with a DJI 3A Pro Flight Controller, real-time kinematic (RTK) positioning, and a Global Navigation Satellite System (GNSS), which can provide ±0.5 m vertical and ±1.5 m horizontal accuracy. The cameras and an Applanix APX-15 global positioning system (GPS)/inertial measuring unit (IMU) (Applanix, Richmond Hill, ON, Canada) were fixed on a DJI Ronin-MX 3-axis gimbal, which was directly connected to the aerial platform. The flight missions were carried out at an altitude of 80 m above the ground with 2 m/s forward velocity, which produced ground sampling distances (GSDs) of 5 and 9 cm for the hyperspectral and thermal images, respectively. The hyperspectral camera records 640 spatial pixels and 270 spectral bands in the VNIR range at 2.2 nm/pixel with a radiometric resolution of 12 bits. The full width at half maximum of the camera is 6 nm with an entrance slit width of 20 µm. The hyperspectral images were acquired at 40 frames per second with a 2.5 ms integration time using a 12 mm focal length lens, yielding a 25° field of view at nadir. The FLIR thermal camera has an image array of 640 × 512 pixels with a 13 mm focal length, providing a 45° field of view (FOV). It acquires longwave radiation in the 7.5-13.5 µm range at 14 bits; the thermal images were recorded in 14-bit JPEG format every second during the flights.
The flight missions were planned for the hyperspectral camera with a 40% side overlap, which ensured the FLIR thermal images had at least 80% forward and side overlap because of the wide thermal camera FOV.
On 19 September 2018, thermal images were acquired using the ICI thermal camera. The flight missions and the ICI thermal camera settings were consistent with the 2017 aerial campaigns. Both thermal cameras used in this study are radiometrically calibrated sensors composed of uncooled microbolometers. Even with the radiometric calibrations provided by the manufacturers, the performance of thermal cameras may be affected by atmospheric conditions, target emissivity, and distance. To improve the accuracy of the derived surface temperature, these factors were accounted for by entering background temperature, ambient humidity, barometric pressure, target emissivity, and distance as calibration parameters and using algorithms in the associated software tools. For the ICI camera, the raw thermal images were converted to surface temperature (°C) in 32-bit TIFF format by a proprietary equation within IR (Infrared) Flash version 2.18.9.17 software (ICI, Beaumont, TX, USA), which allows adjusting for atmospheric conditions and target properties. Similarly, before FLIR imaging, the temperature calibration parameters were entered into the FLIR UAS version 2.0.16 app, which can control the camera settings through a Bluetooth connection from a mobile device. The calibrated thermal images were mosaicked using Pix4DMapper version 4.3.31 software (Pix4D SA, Lausanne, Switzerland) and georeferenced using the GCPs. Before each flight, the thermal cameras were turned on for at least 30 min for stabilization, and the local atmospheric parameters were acquired from the on-site weather station. Additionally, a calibrated black body Model 1000 (Everest Interscience Inc., USA) and surface temperatures of different objects measured with a thermal spot imager FLIR TG167 (FLIR Systems, USA, ±1.5 °C accuracy) were used for the assessment of the thermal products. For further details, see Sagan et al. [73] and Maimaitijiang et al. [74].
The hyperspectral image preprocessing included radiometric correction, orthorectification, and atmospheric correction. In the radiometric correction step, digital numbers of the raw 12-bit hyperspectral images were converted to calibrated radiometric values using SpectralView version 5.5.1 software (Headwall Photonics, Fitchburg, MA, USA). In the same software, the calibrated radiance images were orthorectified using the Applanix IMU data as input. Further, the radiance was converted to reflectance in ENVI (Environment for Visualizing Images) version 5.5 software (Harris Geospatial Solutions Inc., Boulder, CO, USA) by the empirical line correction (ELC) method [75]. During the overpass of the aerial platform, a portable field spectroradiometer PSR-3500 (Spectral Evolution Inc., Lawrence, MA, USA) was available to provide reflectance spectra of an aerial calibration tarp with three known reflectance levels (56%, 32%, and 11%), as well as of grapevines, grass, and soil, which were used as spectral references in the ELC method. The spectroradiometer records upwelling radiant energy in the 350-2500 nm range with a spectral resolution of 3.5 nm in the 350-1000 nm range, 10 nm in the 1000-1900 nm range, and 7 nm in the 1900-2500 nm range. The targets were measured 3-5 times at nadir from a 30 cm distance, and a 99% Spectralon calibration panel (Labsphere, Inc., North Sutton, NH, USA) was used to convert the target radiance to reflectance. The averaged target reflectance spectra were resampled to match the spectral resolution of the Headwall spectral camera. The radiance of the aerial images was extracted from a 25-pixel (5 by 5 pixels) region at the center of each target. Finally, the relationship between the reference spectra and the image radiance was established using the ELC method.
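The per-band ELC fit amounts to a linear regression of field-measured tarp reflectance on image radiance. The sketch below uses made-up radiance values for the three tarp levels; only the reflectance levels (56%, 32%, and 11%) come from the text:

```python
import numpy as np

# Hypothetical at-sensor radiances for the three calibration tarp levels
# (arbitrary units); the paired reflectances are the field-measured values
# described in the text.
tarp_radiance = np.array([120.0, 70.0, 25.0])
tarp_reflectance = np.array([0.56, 0.32, 0.11])

# Per-band linear model of the empirical line correction:
# reflectance = gain * radiance + offset
gain, offset = np.polyfit(tarp_radiance, tarp_reflectance, 1)

def radiance_to_reflectance(radiance):
    """Apply the fitted empirical line to image radiance values."""
    return gain * radiance + offset
```

In a full pipeline this fit is repeated independently for every spectral band, since gain and offset vary with wavelength.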
Extraction of Grapevine Canopy Row
To avoid the effects of shadow and soil components, and due to the spatial resolution discrepancy between thermal and hyperspectral images, extraction of grapevine rows was accomplished differently for thermal and hyperspectral images. Over each monitored target grapevine, a 1 m wide region of interest (the width was determined by canopy size) was defined, and pixel values within this region were extracted for further analysis.
Canopy Row Extraction From Hyperspectral Images
Neural net classifiers were trained to extract pure grapevine row pixels from the high-resolution hyperspectral images. We chose a neural net classifier due to the recent success of neural networks in hyperspectral image classification [76][77][78][79][80]. The neural net classifiers were implemented using ENVI version 5.5 software (Harris Geospatial Solutions Inc., Boulder, CO, USA) by providing regions of interest for five different classes, including grapevine, grass, vine canopy, shadow, and soil (Figure 3a). More than 6000 pixels were selected for these classes, accounting for target and illumination variability. The best performance of the neural net classifiers was found using 2 hidden layers and a logistic activation function with 500 training iterations. For the other necessary parameters, default values were used; these included the training threshold contribution, training rate, training momentum, and training root mean square exit criteria. The trained neural net classifiers showed an overall accuracy better than 99.33% and a Kappa coefficient of 0.991. Figure 3b shows the delineated grapevine canopy boundary.
Canopy Row Extraction from Thermal Images
Pure grapevine canopy pixels required for thermal image retrieval were extracted using the Canny edge detection method [81]. Canny edge detection is a multi-step algorithm designed to detect the magnitude and orientation of image intensity changes. This method has proven effective in extracting pure canopy pixels from high-resolution thermal images [82,83]. Canny edge detection was implemented using the open-source Computer Vision library OpenCV (version 4.0.0) with Python 2.7. The detected vine canopy edges were dilated by up to five pixels along the direction of the thermal gradient for a more conservative extraction of the vine pixels. Finally, the vine canopy edges were converted into a polyline vector, and the thermal orthomosaics were clipped using the vine edge polyline vector (Figure 4).
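A minimal numpy stand-in for the core of this step is shown below: thresholding the thermal gradient magnitude. The real pipeline used OpenCV's Canny implementation, which adds Gaussian smoothing, non-maximum suppression, and hysteresis on top of this gradient step; the temperatures and threshold here are synthetic:

```python
import numpy as np

def gradient_edges(temp, thresh):
    """Flag pixels whose thermal gradient magnitude exceeds `thresh`.

    `temp` is a 2-D canopy temperature array. This is only the
    gradient-magnitude core of Canny; a real pipeline would call
    cv2.Canny on the thermal image.
    """
    gy, gx = np.gradient(temp.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh

# Synthetic thermal tile: a cool canopy strip (25 C) inside warm soil (40 C);
# edge pixels are flagged around the canopy/soil boundary.
tile = np.full((8, 8), 40.0)
tile[3:5, :] = 25.0
edges = gradient_edges(tile, thresh=5.0)
```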
Estimation of Pixelwise Canopy Shadow Fraction
Shadowing estimates of the grapevine canopy were produced using the Sequential Maximum Angle Convex Cone (SMACC) method [84]. SMACC was developed to generate spectral endmembers and endmember abundances with less expert knowledge and time [85]. SMACC can also calculate shadow fractions from the reflectance abundances when the sum of the endmember fractions for a pixel is constrained to one or less. The shadow fraction of each grapevine canopy pixel within the hyperspectral images was estimated using the SMACC tool in ENVI software. Figure 5a,b show hyperspectral reflectance spectra of different canopy zones and canopy shadow fractions modeled with the SMACC method.
Grapevine Canopy Segmentation
Extracted grapevine canopies from the hyperspectral and thermal images were segmented into three canopy regions, corresponding to sunlit (slt), nadir (ndr), and shaded (shd) zones, using K-means clustering, a machine learning-based unsupervised clustering algorithm [86] (Figures 6 and 7). K-means clustering is an iterative algorithm that tries to find homogeneous and non-overlapping clusters within data. The algorithm uses Euclidean distance as a similarity measure to minimize within-cluster variation while also keeping the clusters as different as possible. We employed K-means clustering for vine canopy segmentation, grouping canopy pixels with similar shadow fraction (in the case of hyperspectral images) or canopy temperature (in the case of thermal images), depending on the type of input image. The K-means clustering was implemented to divide vine canopies into three regions (K = 3) with a maximum of 100 iterations in ENVI software.

Commonly used hyperspectral reflectance indices closely related to plant physiology, pigment concentration, structure, and water content were calculated from the hyperspectral images to assess their ability to track diurnal physiological changes. Table 2 summarizes the specific spectral indices grouped by their categories associated with (1) xanthophyll, (2) chlorophyll, (3) structure, (4) water content, and (5) chlorophyll fluorescence. Sun-induced chlorophyll fluorescence (SIF) emission was quantified from the hyperspectral radiance images using the Fraunhofer Line Depth (FLD) principle [87]. The FLD is a radiance-based in-filling method, which takes advantage of both the narrow O2-A absorption feature and high spectral resolution [28]. Furthermore, the FLD method has shown reliable SIF retrieval performance using hyperspectral cameras with relatively broad spectral bandwidths (between 5 and 7 nm FWHM), fine spectral sampling (below 2.5 nm), and signal-to-noise ratios of 300:1 or higher [22,88,89].
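The zone-segmentation step above can be illustrated with a tiny 1-D K-means over shadow fractions. The study used ENVI's K-means tool; the shadow-fraction values and the quantile-based initialization below are illustrative assumptions:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=100):
    """Tiny 1-D K-means used to split canopy pixels into sunlit, nadir,
    and shaded zones by shadow fraction (or canopy temperature).

    Deterministic quantile initialization stands in for ENVI's K-means;
    a production pipeline would use ENVI or sklearn.cluster.KMeans.
    """
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new = np.array([values[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical shadow fractions: ~0 (sunlit), ~0.5 (nadir), ~0.9 (shaded)
frac = np.array([0.02, 0.05, 0.08, 0.48, 0.52, 0.55, 0.88, 0.92, 0.95])
labels, centers = kmeans_1d(frac, k=3)
```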
The FLD was calculated from a total of three bands (FLD3), in and out of the O2-A feature, using Equation (1), as described in References [22,36]:

SIF = (E out × L in − E in × L out) / (E out − E in)    (1)
where E out is the average incident solar irradiance at 750 and 780 nm, L out is the average target radiance at 750 and 780 nm, and E in and L in are the incident solar irradiance and target radiance at 760 nm, respectively. The incident solar irradiance was measured at the time of flight using the PSR spectroradiometer fitted with a cosine corrector-diffuser (180°) covering the entire spectral region (350-2500 nm). In Table 2, R stands for reflectance, and numbers denote wavelengths in nanometers (nm).
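Under the standard FLD assumption that reflectance and fluorescence are constant across the in- and out-of-band wavelengths, fluorescence can be recovered as (E out × L in − E in × L out) / (E out − E in). A minimal sketch of the FLD3 computation (the function name and example values are illustrative, not from the paper):

```python
def sif_fld3(e_in, l_in, e_out_750, e_out_780, l_out_750, l_out_780):
    """Sun-induced fluorescence via the 3-band Fraunhofer Line Depth (FLD3).

    e_in, l_in: solar irradiance and target radiance inside the O2-A
    absorption band (760 nm). e_out_*, l_out_*: irradiance and radiance
    outside the band (750 and 780 nm), averaged to single values.
    """
    e_out = (e_out_750 + e_out_780) / 2.0  # average out-of-band irradiance
    l_out = (l_out_750 + l_out_780) / 2.0  # average out-of-band radiance
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)
```

As a sanity check, a purely reflective target (L = ρE at every band, no fluorescence) yields SIF = 0, while adding a constant fluorescence term F to both in- and out-of-band radiances returns exactly F.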
Simplified Canopy Water Stress Index (siCWSI)
The CWSI was proposed by Jones [94] as:

CWSI = (T canopy − T wet) / (T dry − T wet)    (2)

where T canopy is the average canopy temperature extracted from pure vine pixels, T wet is the temperature of fully transpiring leaves, and T dry is the temperature of non-transpiring leaves. Determining T wet and T dry for the conventional CWSI calculation is climate-dependent, complex, and time-consuming [95][96][97]. Conversely, histogram analysis-based CWSI uses thermal images as the major input, reducing the dependency on meteorological data and field measurements [82,83]. When T wet and T dry are obtained from a canopy temperature histogram, which follows a Gaussian distribution [98], it is necessary to remove mixed pixels that partially cover canopy and background, including soil, shadow, and weeds [82,83]. In particular, the simplified CWSI (siCWSI) developed by Bian et al. [82] was used in this study, as this method outperformed other forms of CWSI in monitoring plant water status from UAV-based high-resolution thermal images. To obtain the siCWSI parameters, T wet and T dry were determined as the means of the lowest 0.5% and the highest 0.5% of canopy temperatures, respectively. More details about the calculation of siCWSI can be found in Bian et al. [82].
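The siCWSI computation described above can be sketched as follows, taking T wet and T dry as the means of the coolest and warmest 0.5% of pure canopy pixels. The function name is illustrative and temperatures are assumed to be in °C:

```python
import numpy as np

def si_cwsi(canopy_temps, tail=0.005):
    """Simplified CWSI after Bian et al.: T_wet and T_dry are the means of
    the coolest and warmest `tail` fraction (0.5%) of pure canopy pixels."""
    t = np.sort(np.asarray(canopy_temps, dtype=float).ravel())
    n = max(1, int(round(tail * t.size)))  # at least one pixel per tail
    t_wet = t[:n].mean()    # coolest 0.5% -> fully transpiring proxy
    t_dry = t[-n:].mean()   # warmest 0.5% -> non-transpiring proxy
    t_canopy = t.mean()
    return (t_canopy - t_wet) / (t_dry - t_wet)
```

For a symmetric temperature distribution the index is 0.5; values approach 1 as the mean canopy temperature approaches the dry (stressed) reference.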
Proposed Canopy Zone-Weighting (CZW) Method
Different canopy zones are expected to provide complementary information on the targeted vine physiology measurements by mitigating effects caused by illumination, viewing angle, and canopy structure. Considering that certain canopy zones show higher coefficients of determination (R 2) than others in vine physiology estimation [59,61], weighting the contributions of different canopy zones can be beneficial for estimating vine physiology. Inspired by our previous work on fusing various water quality variables and spectral reflectance through a weighted prior decision-level fusion scheme [99], we herein propose a new zone-weighting method, namely canopy zone-weighting (CZW), to systematically integrate canopy zone contributions, in which the R 2 of each canopy zone is used to determine the contributing weight, denoted ω i in Equation (3). The integration of multiple regression model outputs may explain a wider range of variation in a target variable than a single regression model [100]. The proposed CZW method is expected to leverage the strengths, and limit the potential biases, of single zone-based estimation.
A summary of the implementation steps of the CZW method is described as follows: (1) Establish relationships between aerial images' retrievals (hyperspectral indices and siCWSI) extracted from three canopy zones (slt, ndr, and shd) and grapevine physiological parameters (G s and F s ), (2) determine the contributing weight (ω) and contribution weight ratio (c) of the canopy zones to the target grapevine physiological indicator using the best relationships obtained in the previous step, and (3) calculate canopy zone-weighted image retrieval (ξ czw ) from three canopy zones using the corresponding c.
Relationships between the aerial image retrievals and grapevine physiological parameters were established in the form of the coefficient of determination (R 2) using five different linear and non-linear regression models (linear, second-degree polynomial, logarithmic, exponential, and power) [24], and the best R 2 value of each canopy zone was used to calculate its ω through Equation (3):

ω i = R 2 i    (3)

where ω i is the contributing weight of the ith canopy zone, and R 2 i is the coefficient of determination that indicates the strength of the relationship between the ith canopy zone retrieval and the grapevine physiological parameter.
The c of the three canopy zones to the grapevine physiological parameters were calculated by:

c i = ω i / (ω slt + ω ndr + ω shd)    (4)

where c i is the contribution weight ratio of the ith canopy zone. Based on the obtained c and the original image retrievals, ξ czw can be formulated as:

ξ czw = c slt × ξ slt + c ndr × ξ ndr + c shd × ξ shd    (5)

where c slt is the contribution weight ratio of the sunlit canopy zone, c ndr that of the nadir canopy zone, and c shd that of the shaded canopy zone; ξ slt, ξ ndr, and ξ shd are the original image retrievals of the sunlit, nadir, and shaded canopy zones, respectively. The aerial image retrievals derived from the three canopy zones (slt, ndr, and shd), the combinations of any two canopy zones (slt + ndr, slt + shd, and ndr + shd), the average value of the entire canopy (avg), and ξ czw (CZW) were compared against G s and F s across all measurement time points and dates. To explore the performance of the image retrievals on the entire diurnal dataset, separate analyses were carried out with all flights together on a given field campaign date. Full relationships and their significance obtained with the linear and non-linear regression models are included in Supplementary Materials Tables S1-S3. The workflow, from aerial image acquisition and preprocessing, through vine canopy extraction and segmentation, to implementation of the CZW method and performance assessment, is shown in Figure 8.
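The weighting steps above can be sketched as follows, assuming the contributing weight ω i is taken directly as the best R 2 of each zone (an assumption consistent with the description, since only the ratios c i enter the final retrieval):

```python
def czw_retrieval(r2, retrievals):
    """Canopy zone-weighting (CZW): combine per-zone image retrievals
    weighted by their R^2-based contribution ratios.

    r2: dict of best R^2 per zone ("slt", "ndr", "shd"), i.e. omega_i.
    retrievals: dict of the zone's image retrieval (e.g., PRI, SIF, siCWSI).
    """
    zones = ("slt", "ndr", "shd")
    total = sum(r2[z] for z in zones)
    c = {z: r2[z] / total for z in zones}            # contribution ratios
    return sum(c[z] * retrievals[z] for z in zones)  # zone-weighted retrieval

# Hypothetical example: the sunlit zone correlates best, so it dominates.
xi = czw_retrieval({"slt": 0.8, "ndr": 0.6, "shd": 0.2},
                   {"slt": 1.2, "ndr": 1.0, "shd": 0.7})
```

Note that when all zones have equal R 2, the method reduces to the plain average of the entire canopy, so CZW can only depart from the avg retrieval when the zones differ in explanatory power.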
Results
In the following sections, we report on several aspects of the experiments. First, we describe the environmental conditions of the vineyard and vine physiological indicators. Next, correlations of image retrievals (calculated from slt, ndr, shd, slt + ndr, slt + shd, ndr + shd, and avg canopy zone pixels and CZW method) with physiological indicators were analyzed to assess the effectiveness of our proposed approach. Finally, we show the diurnal changes in representative image retrievals within the vine canopy and their ability to track diurnal variations of vine physiological indicators.
Environmental Conditions and Diurnal Physiological Indicators of Grapevine
There was no major difference in the trends of environmental conditions across the three measurement dates (Figure 9). Air temperature and VPD increased gradually until 16:00 and then decreased sharply. Solar radiation followed a similar pattern, but rose rapidly only until midday. Relative humidity was high in the morning, followed a decreasing trend, and then began to increase from 16:00. However, environmental conditions on 19 September 2018 were characterized by higher air temperature and lower relative humidity throughout the day compared to the previous two measurement dates, resulting in higher VPD values and thus a more water-demanding atmosphere. All three measurement dates were calm, with wind speeds of 3 m/s or lower, suitable for UAV data collection. The diurnal variations of the grapevine physiological indicators were similar on all measurement dates. G s values were low in the morning, increased sharply, and reached a maximum around midday (Figure 10a,c). Afterwards, G s decreased into the afternoon. This trend was most likely driven by environmental conditions. In general, the diurnal change of F s followed the same pattern as G s (Figure 10b,d). On most measurement dates, maximum F s values occurred around noon, while lower F s values occurred in the morning and afternoon. There were no significant differences in in-situ physiological measurements at each flight time among the irrigation and rootstock treatments. Daily G s and F s showed relatively high coefficients of variation (CV), with G s exhibiting larger daily variability than F s on all measurement dates. Both G s and F s on 15 September 2017 were 1-2 times higher than the values on 1 August 2018, which could be due to the different instruments used to determine the physiological indicators.
Additionally, when the same instruments were used, lower G s and F s values were observed on 19 September 2018 than the corresponding values on 15 September 2017. This agreed with higher VPD occurring on 19 September 2018.
Relationship between the Aerial Image Retrievals and Grapevine Physiology
On 15 September 2017, the strongest correlations were established with the midday and afternoon measurements of G s for siCWSI czw (R 2 = 0.61, p < 0.01 and R 2 = 0.60, p < 0.01, respectively), followed by siCWSI avg in the midmorning (R 2 = 0.30, p < 0.05) (Figure 11a). siCWSI czw provided the strongest relationship to F s in the midmorning and midday (R 2 = 0.82, p < 0.001 and R 2 = 0.83, p < 0.001, respectively), followed by siCWSI ndr in the afternoon (R 2 = 0.52, p < 0.01) (Figure 11b). Comparing the correlations from the three pooled flights, both G s and F s were strongly correlated with siCWSI czw (R 2 = 0.71, p < 0.01 and R 2 = 0.73, p < 0.01), followed by siCWSI avg for G s and siCWSI slt+ndr for F s, respectively.
On 1 August 2018, PRI, SIF, and siCWSI captured diurnal changes in G s and F s as a function of grapevine physiological response. There were no noticeable canopy structural effects, pigment degradation, or canopy water content changes between irrigation treatments during the hyperspectral imaging campaign. This resulted in weak correlations (R 2 ≤ 0.30) between RE, NDVI, SR, and WI and the grapevine physiological parameters (G s and F s). Similar results were observed both for separate flights and when all flights were combined. Therefore, the results and discussion sections focus on PRI and SIF for the hyperspectral data analysis.
Figure 11. Circular bar plots show the relationships (R 2) established between siCWSI from different canopy zones and stomatal conductance (G s, (a)) and steady-state fluorescence (F s, (b)) for the three thermal flights on 15 September 2017 and all three flights combined. slt is the sunlit canopy zone, ndr the nadir canopy zone, shd the shaded canopy zone, avg the average value of the entire canopy, and CZW the canopy zone-weighting method.
PRI correlated more strongly with G s during the afternoon hours (2:00 p.m. and 3:45 p.m.) than the earlier hours (11:00 a.m. and 12:45 p.m.), while SIF showed a strong correlation with G s for all four flights (Figure 12a,b). When the three canopy zones were considered separately, PRI slt and PRI ndr correlated well with G s. For SIF, no single canopy zone showed a consistently strong relationship with G s (R 2 ≤ 0.60). In most cases, PRI and SIF derived from two combined canopy zones had a slightly higher correlation with G s than those from a single canopy zone. Meanwhile, slightly weaker relationships were found between PRI avg and SIF avg and G s, except for PRI avg in the afternoon, where PRI avg showed the highest correlation (R 2 = 0.96, p < 0.001) compared to any PRI from a single canopy zone or a combination of two. Generally, PRI czw and SIF czw improved the relationship with G s. In particular, PRI czw showed the strongest correlations in the midmorning and afternoon (R 2 = 0.35, p < 0.01 and R 2 = 0.98, p < 0.001). On the other hand, SIF czw had the closest relationship with G s in the midmorning (R 2 = 0.45, p < 0.05). When all flights were analyzed together, the relationships of PRI czw and SIF czw were stronger than those of PRI and SIF derived from any single, combined, or averaged canopy zone pixels (R 2 = 0.70, p < 0.01 and R 2 = 0.89, p < 0.001).
Regarding F s, the results for all the flights showed trends similar to those found for G s (Figure 12c,d). Both PRI and SIF showed strong and significant relationships with F s. PRI slt (midmorning and midday) and PRI slt+ndr (midafternoon and afternoon) showed the best fit with F s, while SIF czw was well-correlated with F s for all flights except midmorning. When the four flights were combined, the best correlations were obtained with PRI czw and SIF czw (R 2 = 0.76, p < 0.001 and R 2 = 0.89, p < 0.001).
The highest correlations emerged between siCWSI czw and G s in the midmorning and afternoon (R 2 = 0.98, p < 0.001 and R 2 = 0.47, p < 0.01) (Figure 12e). However, in the midday and midafternoon, siCWSI slt+shd and siCWSI ndr, respectively, showed the strongest relationships with G s (R 2 = 0.89, p < 0.001 and R 2 = 0.96, p < 0.001), followed by siCWSI czw at both time points (R 2 = 0.87, p < 0.001 and R 2 = 0.91, p < 0.001). The relationship of siCWSI ndr was strongest with F s in the midafternoon (R 2 = 0.96, p < 0.001), followed by siCWSI czw (R 2 = 0.95, p < 0.001) (Figure 12f). The strongest correlations were observed between siCWSI slt+shd and F s in the midmorning and afternoon, respectively (R 2 = 0.80, p < 0.01 and R 2 = 0.88, p < 0.001). siCWSI czw was strongly correlated with F s only at midday (R 2 = 0.72, p < 0.01). Using all four diurnal datasets, the strongest correlations were found between siCWSI czw and both G s and F s (R 2 = 0.75, p < 0.01 and R 2 = 0.74, p < 0.01).
Figure 12. (a,b,e) show the R 2 values between PRI, SIF, and siCWSI from different canopy zones and stomatal conductance (G s), and (c,d,f) show the R 2 values between PRI, SIF, and siCWSI from different canopy zones and steady-state fluorescence (F s). slt is the sunlit canopy zone, ndr the nadir canopy zone, shd the shaded canopy zone, avg the average value of the entire canopy, and CZW the canopy zone-weighting method.
On 19 September 2018, siCWSI czw showed the strongest relationship with G s in the morning and for the combined datasets of the two flights carried out on this date (R 2 = 0.70, p < 0.01 and R 2 = 0.60, p < 0.01) (Figure 13a). During midday, the highest correlation was found between siCWSI ndr+shd and G s (R 2 = 0.59, p < 0.01). For F s, siCWSI ndr+shd and siCWSI slt+ndr had the best correlations in the morning and midday (R 2 = 0.59, p < 0.01 and R 2 = 0.73, p < 0.001), respectively, followed by siCWSI czw at both time points (R 2 = 0.56, p < 0.01 and R 2 = 0.70, p < 0.01) (Figure 13b). When the datasets from the two flights were combined, siCWSI czw was the most highly correlated with F s (R 2 = 0.71, p < 0.01).
Figure 13. Circular bar plots show the relationships (R 2) established between siCWSI from different canopy zones and stomatal conductance (G s, (a)) and steady-state fluorescence (F s, (b)) for the two thermal flights on 19 September 2018 and all flights combined. slt is the sunlit canopy zone, ndr the nadir canopy zone, shd the shaded canopy zone, avg the average value of the entire canopy, and CZW the canopy zone-weighting method.
Diurnal Changes in Aerial Image Retrievals and Tracking Physiological Indicators
SIF and siCWSI retrievals from the VHR images on 1 August 2018 showed diurnal changes in the vine canopy (Figures 14 and 15). It can be noted that there was no visual difference between irrigation and rootstock treatments, and this was consistent with the in-situ physiological measurements. However, visual differences between sunlit, nadir, and shaded canopy zones could be easily recognized in both SIF and siCWSI images. Within-canopy variability of SIF increased until midday and then decreased (Figure 14b-e), following the gradual decline in solar radiation. The highest within-canopy variability of siCWSI was observed in the afternoon (Figure 15d), when air temperature and VPD reached the maximum values for the day. In general, the effects of diurnally changing environmental factors such as solar radiation, air temperature, and VPD on the vine canopies corroborate the need to separate canopy zones, as is shown with SIF and siCWSI retrievals. Furthermore, wide ranges of SIF and siCWSI values were found (1-6.5 W sr −1 m −2 nm −1 and 0.1-1, respectively) for each flight, which confirmed the relevance of canopy heterogeneity and the pertinence of accounting for the variability related to canopy structure. The in-situ physiological indicators measured on 1 August 2018 were compared against PRI, SIF, and siCWSI to assess the diurnal trends ( Figure 16). Diurnal PRI, SIF, and siCWSI values for each measurement time point showed agreements with G s and F s (regardless of the opposite direction shown in Figure 16a,c). Generally, these figures showed that diurnal PRI, SIF, and siCWSI followed the same pattern as that followed by vine physiological indicators during the experiment.
Discussion
Using diurnal VHR aerial images and in-situ physiological measurements, the current study investigated relationships between aerial image retrievals from different canopy zones and grapevine physiological indicators. Implemented irrigation treatments and rootstock/scion combinations in this study provided a wide range of grapevine physiological status for testing the capability of aerial images in characterizing grapevine physiology. The pure grapevine canopy pixels were extracted with high confidence from the high spatial resolution images coupled with neural network and computer vision-based methods. Then, the vine canopy was segmented into three different canopy zones using an unsupervised machine learning algorithm. Additionally, the siCWSI values were well within the range of the theoretical CWSI limit (<1), confirming that calculated siCWSI was based on pure vegetation pixels and not subjected to soil background contamination. It is worth mentioning that the employed methods in this study stood out among the limited similar studies [17,38,59] by taking full advantage of spectral information and temperature data as the basis for identifying different canopy zones and applying automated methods to streamline canopy zone segmentation from hyperspectral and thermal images.
Contribution of Different Canopy Zones to Grapevine Physiology: Hyperspectral Image Retrievals
In line with previous studies [22,[101][102][103], PRI and SIF derived from high-resolution aerial images closely followed the diurnal physiological changes of grapevine indicated by G s and F s (Figure 16). The results of sunlit, nadir, and shaded canopy zones showed different levels of correlation with grapevine physiology depending on the type of aerial images and retrievals. There were only a few instances where a single canopy zone (either sunlit or nadir) showed the strongest correlations. PRI and SIF derived from either combination of two canopy zones (slt + shd more frequent than slt + ndr) or averaged entire canopy zone pixels showed the most frequent and the strongest correlation with G s and F s measurements.
Among the recent efforts to improve the ability of PRI, some studies indicated the strong dependence of PRI on canopy shadow fraction and confirmed the importance of shaded leaves in the simulation of canopy PRI [104,105]. Zhang et al. [106] used a two-leaf (sunlit and shaded leaves) approach to improve the ability of PRI as a proxy of light use efficiency, which is closely related to G s, by accounting for the sunlit and shaded leaf portions with a weighted leaf area index. Takala and Mõttus [84] explained the illumination-related apparent variation in canopy PRI by considering shadow effects. The results of this study confirmed the importance of considering both sunlit and shaded canopy zones to improve the ability of PRI to track diurnal physiological changes. SIF is also influenced by illumination, sunlit/shaded canopy, and soil background, and these influences intensify when very-high-resolution hyperspectral images are used for analysis [17,36]. Hernández-Clemente et al. [17] and Camino et al. [38], who considered the effects of canopy heterogeneity on SIF, demonstrated the significance of sunlit canopy pixels in determining tree physiological status and the underperformance of entire canopy pixels. This is somewhat contradictory to what was observed in this study, where SIF from entire canopy pixels also performed well in multiple instances; this could be attributed to the discrepancy in spatial resolution between the hyperspectral images used in this study (5 cm) and previous studies (20 and 60 cm, respectively), where canopy pixels tend to be more easily contaminated by soil and shaded background. In general, our results showed improvements from combined (slt + ndr and slt + shd) or entire canopy zones over sunlit zones alone, implying the importance of heterogeneity within the canopy structure in determining the relationship between SIF and in-situ physiological measurements.
Contribution of Different Canopy Zones to Grapevine Physiology: Thermal Image Retrieval
The results from this study regarding the relationship between siCWSI and the physiological indicators were in agreement with previous reports [103,107]. Canopy temperature is affected by tree structure, which in turn affects thermal image indices such as CWSI [31,32]. Furthermore, there is a lack of consensus regarding the selection of sampling zones (sunlit, nadir, and shaded). Sepúlveda-Reyes et al. [59] and Belfiore [108] suggested taking into consideration the effect of selecting canopy zones when analyzing thermal images. Suárez et al. [33] and Sepulcre et al. [107] found a stronger relationship between canopy temperature and plant physiology in the morning than during midday hours and indicated the importance of high spatial resolution thermal images to minimize the intensified midday soil effects. This was not the case in this study, as siCWSI consistently showed strong diurnal correlations with in-situ measurements, suggesting that midday soil thermal effects may have been avoided with the pure canopy pixels extracted from high-resolution thermal images using the computer-vision-based method. However, the results showed a slightly weaker relationship between siCWSI and the physiological indicators at preharvest (15 September 2017 and 19 September 2018) than at veraison (1 August 2018), even though the thermal images collected at veraison had a lower spatial resolution. This finding may be attributable to the different growing stages, because later in the season grapevines are more likely to be senescent [109] and in-situ measurements may not represent the physiology of the whole canopy well [59,63].
Canopy Zone-Weighting (CZW) Method Provided the Most Robust Estimates of Correlations between Aerial Image Retrievals and Grapevine Physiology
Traditionally, image retrievals have been calculated over canopy pixels and averaged for each sampling location of the in-situ measurements. With recent advancements in UAV and sensor technology, high spatial resolution aerial images have made it possible to identify pure canopy pixels, and sunlit and shaded portions within tree canopies [17,38,56]. For high-resolution hyperspectral imagery, Hernández-Clemente et al. and Camino et al. [17,38] suggested separating sunlit and shaded canopy zones to minimize the effects of within-canopy shadows caused by illumination conditions and canopy heterogeneity. Similarly, Sepúlveda-Reyes et al. and Pou et al. [59,61] divided the vine canopy into sunlit, nadir, and shaded zones in thermal imagery and found conflicting results in terms of selecting an effective canopy zone as a proxy for physiology. The CZW method proposed in this study utilizes the accumulated contributions of different canopy zones within the tree canopy instead of a mean value averaged over a certain portion of the canopy or the entire canopy. Compared to conventional ways of calculating image retrievals (using only sunlit pixels while ignoring shaded pixels), the CZW better characterized grapevine canopy heterogeneity, and thus grapevine physiology. To our knowledge, this study is the first to reveal the complementary relationships between different canopy zones using the CZW method for tracking diurnal physiological changes.
The performance of the different image retrievals from different canopy zones was evaluated for each field measurement time point and for all measurement time points together on a given field date. The availability of thermal images enabled us to investigate the robustness of the CZW method over multiple growing seasons (2017 and 2018) and stages (veraison and preharvest). When a single measurement point was considered, our results showed that PRI czw, SIF czw, and siCWSI czw were in some cases better related to G s and F s than the image retrievals from a single canopy zone, the combination of any two canopy zones, or the entire canopy. When all flights were analyzed, CZW-based retrievals always performed better than any other retrieval approach, suggesting that CZW worked well for both hyperspectral and thermal images when there were large variations in image data and field measurements driven by illumination, the bidirectional reflectance distribution function (BRDF), within-canopy heterogeneity, and diurnal physiological changes. It is worth noting that the CZW method would likely have performed even better had the irrigation treatments induced strong physiological changes during the flights (which was not the case here).
It is commonly accepted that recorded radiance and temperature from vegetation canopies are subject to effects of sun angle, BRDF (especially in the case of hyperspectral imaging), shadows, canopy background, structure, and leaf angle distribution, which in turn contribute to the changes in the results to a different degree. This is also true even though the aerial images used in this study were acquired at low altitudes on sunny days with stable atmospheric conditions. Additionally, these effects may vary depending on the different canopy zones used in this study. Therefore, it would be a logical follow-up effort to carry out a full analysis of the sensitivity of the relationship between image retrievals and plant physiology, focusing on normalization or mitigation of these effects specifically for each canopy zone.
Outlook
While the proposed CZW method has been demonstrated to be a promising advancement, further improvements may include the application of a pixel-weighting approach and data fusion. Indeed, leaf photosynthetic status, water content, pigment concentration, leaf angle distribution, and canopy architecture vary spatially within each canopy zone considered in this study. Some of these spatial changes can be explained through the rich spectral information within a pixel from hyperspectral images, and thus provide important information about plant physiology. LiDAR (Light Detection and Ranging) or photogrammetric point clouds, on the other hand, allow us to obtain accurate information about the canopy architecture and the heterogeneity within the canopy. Weighted pixels containing spectral, thermal, and structural information can be combined and will contribute to an improved understanding of plant physiology. Note that the integration of multi-sensor image data requires highly accurate co-registration. Despite the relatively diverse dataset (i.e., diurnal measurements, two growing seasons and stages for thermal images, and a single growing season and stage for hyperspectral images), further work is needed to explore the robustness of the CZW method across different growing stages of grapevines, in the presence of a wide range of structural and pigment changes, and on tree species other than grapevines.
Conclusions
This study aimed to maximize the benefits of VHR hyperspectral and thermal UAV images to improve the relationship between aerial image retrievals and diurnal indicators of grapevine physiology. Ultimately, this work allowed us to characterize physiological parameters at a scale and speed not possible through traditional measurements of vine physiology. Besides enabling extraction of the grapevine canopy with high confidence from both hyperspectral and thermal images, the VHR images made it feasible to quantitatively analyze the contributions of different canopy zones for characterizing diurnal physiological indicators. We proposed the CZW method and evaluated its performance against traditional image retrieval methods. The results indicated that PRI, SIF, and siCWSI from the sunlit and nadir zones provided the best estimates of G s and F s when a single canopy zone was considered. For a single flight, PRI, SIF, and siCWSI computed over the combination of two canopy zones (sunlit + nadir, sunlit + shaded, or nadir + shaded) or entire canopy pixels showed a better relationship with diurnal G s and F s changes than those calculated over a single canopy zone. Importantly, the most frequent and highest correlations of G s and F s were found with PRI czw, SIF czw, and siCWSI czw. When all flights were combined for a given field campaign date, PRI czw, SIF czw, and siCWSI czw always significantly improved the relationship with G s and F s. In summary, this study introduced the CZW concept to VHR hyperspectral and thermal UAV imagery and offered a new perspective for the research and application of VHR images in the remote assessment of plant physiological indicators.
Evidence of male alliance formation in a small dolphin community
The photo-identification of uniquely marked individuals has revealed much about mammalian behaviour and social structure in recent decades. In bottlenose dolphins (Tursiops spp.), for example, the long-term tracking of individuals has unveiled considerable variation in social structure among populations and various spatio-temporal aspects of group formation. In this study, we investigated associations among individual males in a small community of Indo-Pacific bottlenose dolphins (T. aduncus) residing in an urbanized estuary in southwestern Australia. Given the relative proximity of our study area to other populations in which complex male alliances form for the purpose of mate acquisition, we used long-term photo-identification records and social analyses to assess whether such alliances also occur in smaller and more isolated settings. Our work revealed strong social bonds and long-term, non-random associations among individual males, suggesting the occurrence of male alliances. Behavioural observations of alliances interacting with potentially receptive adult females from the estuary community and from adjacent communities, and exhibiting sexual display behaviours near females, suggest that these alliances occur in a reproductive context. As the first formal analysis indicating the occurrence of male alliances outside Shark Bay along the vast western coastline of Australia, this study complements previous research and extends our understanding of the evolutionary and ecological processes that drive alliance formation.
Introduction
Photo-identification is a widely used and largely non-invasive tool that forms an integral component of many field studies in conservation biology. This technique uses natural marks recorded in images of both terrestrial and marine taxa to recognize individual animals with idiosyncratic fur and coat patterns (e.g., zebras Equus burchelli, Petersen 1972; monk seals Monachus monachus, Forcada and Aguilar 2000; giraffes Giraffa camelopardalis, Le Pendu et al. 2000); pigmentation and spots (e.g., whale sharks Rhincodon typus, Meekan et al. 2006); and scars or other marks (e.g., nicks and notches on the flukes or dorsal fins of cetaceans, Würsig and Jefferson 1990). It has proven a remarkably useful approach in understanding life history patterns (Hammond et al. 1990) and allowing mark-recapture studies to assess abundance and apparent survival (e.g., Lebreton et al. 1992; Meekan et al. 2006; Chabanne et al. 2017b). When combined with spatial and temporal information, photo-identification can be used to assess traits such as density, sex, site fidelity, movement patterns and home range size (Brown et al. 2012; Chabanne et al. 2012; Brown et al. 2016a, b).
Photo-identification has also been combined with behavioural sampling and other quantitative methods to investigate social structure (e.g., Connor et al. 1992a, b; Le Pendu et al. 2000). Indeed, these techniques have enabled researchers to elucidate the variation in, and complexity of, phenomena such as male alliance formation in various delphinids, including bottlenose dolphins (Tursiops spp.), across a broad range of habitat types and population densities (Table A1). To date, male alliances have been documented in common bottlenose (Tursiops truncatus), Risso's (Grampus griseus), Australian humpback (Sousa sahulensis) and, perhaps most notably, Indo-Pacific bottlenose dolphins (T. aduncus, Owen et al. 2002; Hartman et al. 2008; Connor and Krützen 2015; Allen et al. 2017). Among coastal bottlenose dolphin populations, alliance formation spans the spectrum from no alliances (e.g., Baker et al. 2019), through a first level of alliance consisting of 2-4 closely-bonded individuals (e.g., Wells et al. 1987; Wiszniewski et al. 2012a, b), to the nested, multi-level alliance system found in Shark Bay, Western Australia (Connor et al. 1992a, b, 1999; Table A1). In Shark Bay, 'first-order' alliances consist of pairs or trios that cooperate in herding single oestrus females, and these pairs or trios are members of stable teams of 4-14 males at the 'second-order' level (Connor and Krützen 2015). Second-order alliances, the core social unit of males in the population, cooperate to attack other alliances for access to females, and to defend against such attacks (Connor and Krützen 2015). Remarkably, a third level of alliance formation is evident, involving two or more second-order alliances supporting each other in the capture and defence of females from other alliances (Connor et al. 2011; Randić et al. 2012; King et al. 2021). 
These studies have demonstrated how photo-identification, coupled with rigorous observational sampling and quantitative analyses, can provide behaviourally meaningful data on taxa that spend much of their time undetectable to researchers (Whitehead and Dufault 1999;Whitehead 2008a, b).
Several bottlenose dolphin populations in different habitats around the expansive western coastline of Australia have been studied (Allen et al. 2016; Sprogis et al. 2015; Brown et al. 2016a, b; Chabanne et al. 2017a; Raudino et al. 2018; Haughey et al. 2020) but, to date, male alliance formation has only been quantified in Shark Bay (Connor et al. 1992a, b), or speculated to exist elsewhere (Sprogis et al. 2015). For this study, we focused on a small community of Indo-Pacific bottlenose dolphins inhabiting the Swan Canning Riverpark (SCR), an urbanized estuarine system located 800 km south of Shark Bay, in which photo-identification has been conducted seasonally each year since 2011.
Here, we used sighting histories of the resident adult dolphins to investigate the potential occurrence of male alliances. We tested home range overlap, relatedness, and gregariousness as variables that may explain male associations with multiple regression quadratic assignment procedures (MRQAP). We also examined male association patterns using hierarchical clustering analysis, lagged association rates (LARs) and permutation procedures. Given the stable associations between individuals of the same sex and the strong bonds between some males described previously (Chabanne et al. 2012, 2017a), we predicted that, despite the small community size, males residing in the SCR also form alliances. We discuss the findings in light of the challenges of studying mammalian social behaviour, including alliance formation, within small communities.
Study site and data collection
The Swan Canning Riverpark, Western Australia, is a 55 km² micro-tidal estuary comprising two rivers running through the city of Perth, reaching the Indian Ocean through the Inner Harbour of the Port of Fremantle (Fig. 1). The SCR is home to a small community of around 16 adult Indo-Pacific bottlenose dolphins that are year-round residents, exhibiting long-term site fidelity (Chabanne et al. 2017a, b). However, the dolphin community is not isolated, with genetic and demographic exchange occurring with adjacent communities residing in a semi-enclosed embayment with large areas of shallow habitats (Fig. 1; Chabanne et al. 2021). Dolphin density in the SCR is estimated to be 0.29 individuals per km² (min: 0.18; max: 0.42; Chabanne et al. 2017b).
Photographic identification and behavioural data were collected during boat-based surveys conducted between June 2011 and March 2017 in the SCR and following the protocols described in Chabanne et al. (2012, 2017b). A group was defined as any individual engaging in the same behaviour and within 10 m of another (Smolker et al. 1992). Data collected for each encountered group included dolphin group size and composition, predominant behaviour recorded during the first 5 min of encountering the group (i.e., > 50% of individuals within a group were engaged in the same behaviour [travel, forage, socialise, rest, or unknown], Mann 1999), location (GPS), and environmental conditions. We estimated and reviewed the age-class of the individuals during the study period as described in Chabanne et al. (2012). Individuals were sexed through molecular analyses of tissue samples (Chabanne et al. 2021) collected via remote biopsy sampling (Krützen et al. 2002).
Data restrictions for association patterns
We examined male associations using group sightings composed of at least one adult male and for which all individuals were identified with high-quality images (Chabanne et al. 2017b). Given the long-term dataset and survey frequency, we also used temporary marks for identification of individuals with non-distinct dorsal fins. We restricted our dataset to adult males that were alive over the entire course of the study period to account for any biases on the associations between individuals associated with demographic changes. Using SOCPROG 2.9 (Whitehead 2019), we checked for the accuracy of the social representation obtained with the restricted dataset by examining the social differentiation (S, measure of variability of the associations) and Pearson's correlation coefficient (r, measure of the quality of the representation of the association pattern) following Whitehead (2008a, b). We set a daily sampling period and generated a matrix of association based on the Simple Ratio Index (SRI); the choice for using this index is justified by the high proportion of dolphins encountered being identified and the assumption that all associations were measured accurately being met (Cairns and Schwager 1987;Ginsberg and Young 1992).
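The Simple Ratio Index used above can be sketched as follows. The authors computed SRIs in SOCPROG 2.9; this is an illustrative Python stand-in, and the daily sighting records and individual IDs in the example are hypothetical:

```python
def simple_ratio_index(periods, a, b):
    """SRI = x / (x + yAB + yA + yB): sampling periods in which a and b were
    seen together, over periods in which either individual was sighted."""
    x = denom = 0
    for groups in periods:                 # each period: a list of groups (sets)
        seen_a = any(a in g for g in groups)
        seen_b = any(b in g for g in groups)
        together = any(a in g and b in g for g in groups)
        x += together
        denom += seen_a or seen_b
    return x / denom if denom else 0.0

# Hypothetical sighting records over three daily sampling periods:
periods = [
    [{"ARR", "BOT", "HII"}, {"EXT"}],
    [{"ARR", "BOT"}, {"EXT", "PRI"}],
    [{"ARR"}, {"BOT", "HII"}],
]
print(round(simple_ratio_index(periods, "ARR", "BOT"), 3))  # → 0.667
```

An SRI of 1 means two individuals were always sighted together; 0 means they were never in the same group despite being sighted.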
We performed a multiple regression quadratic assignment procedure (MRQAP) with 1000 permutations to test the significance (2-tailed test with 0.05 p value) of three individual variables (home range overlap, relatedness, and gregariousness) on the SRIs while controlling for each other (Whitehead 2019). This preliminary test allowed the identification of individual variables that may exert undue influences on the true SRI values. We calculated home range overlap between each pair of adults based on a kernel-based utilization distribution overlap index (UDOI, Fieberg and Kochanny 2005) using the R package "Adehabitat" (Calenge 2006), followed by building a matrix using the UDOIs between each pair of males (see Fig. A1 for the 95% kernel density of each of the eight males). The matrix for gregariousness, a measure of the tendency of an individual to associate with other individuals (Godde et al. 2013), was created in SOCPROG following Whitehead and James (2015). Finally, we estimated relatedness of each pair of males using the TrioML estimator from the R package "related" (Pew et al. 2015) based on 10 microsatellite loci (Chabanne et al. 2021).
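The MRQAP itself was run in SOCPROG. As a rough illustration of the idea, the sketch below regresses the dyadic SRI matrix on predictor matrices and derives two-tailed p-values from node-relabelling permutations of the response; this is a simplification of the double-semi-partialling procedure actually implemented in SOCPROG, and all data in the usage example are hypothetical:

```python
import numpy as np

def mrqap(y, predictors, n_perm=1000, seed=0):
    """Regress the dyadic matrix y (e.g. SRIs) on symmetric predictor matrices
    (home range overlap, relatedness, gregariousness, ...) and estimate
    two-tailed p-values by permuting the node labels of y."""
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    iu = np.triu_indices(n, k=1)                    # one entry per dyad
    X = np.column_stack([np.ones(len(iu[0]))] + [p[iu] for p in predictors])

    def fit(m):
        return np.linalg.lstsq(X, m[iu], rcond=None)[0]

    obs = fit(y)
    exceed = np.zeros_like(obs)
    for _ in range(n_perm):
        order = rng.permutation(n)                  # relabel nodes of y only
        exceed += np.abs(fit(y[np.ix_(order, order)])) >= np.abs(obs)
    return obs, (exceed + 1) / (n_perm + 1)
```

With a toy gregariousness matrix that fully determines y, the gregariousness coefficient is recovered and its permutation p-value is small, mirroring the result reported in Table 1.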
All association analyses were run using non-corrected SRIs. However, all pairwise association analyses including permutation tests were also carried out using Generalized Affiliation Indices (GAI), i.e., SRIs corrected for gregariousness (Whitehead and James 2015). In Appendix 3, we report the results using the GAIs, which did not differ from the analyses using the uncorrected SRIs.
Evaluation of male associations
We determined association strength for each male by comparing his maximum SRI with the mean SRI for all males. Associations were defined as 'strong' when their respective SRI values were at least twice the mean of all male dyads, and where each member of the male dyad ranked as each other's closest associate (Wells et al. 1987;Connor et al. 1992a, b). We further produced a hierarchical average cluster analysis to visualize the degree of associations between males and calculated a cophenetic correlation coefficient (CCC), for which a CCC > 0.8 indicates a good representation of the hierarchical structure among individuals based on their association matrix (Bridge 1993). We also carried out a changepoint analysis using the Pruned Exact Linear Time (PELT) method in R 3.6.2 (R Development Core Team 2013), package 'changepoint' (Killick and Eckley 2014), to find the threshold values of SRIs characterizing multiple levels of strength among male dyads (Bizzozzero et al. 2019;Gerber et al. 2020).
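The 'strong' association criterion (SRI at least twice the mean of all male dyads, with both members ranking each other as closest associate) can be expressed compactly; the SRI matrix and names in the example are hypothetical:

```python
import numpy as np

def strong_dyads(sri, names):
    """Dyads with SRI at least twice the mean of all dyads AND whose
    members are each other's closest associate (Wells et al. 1987 criterion)."""
    sri = np.asarray(sri, dtype=float).copy()
    n = len(names)
    iu = np.triu_indices(n, k=1)
    threshold = 2 * sri[iu].mean()
    np.fill_diagonal(sri, -1.0)          # a male is not his own associate
    top = sri.argmax(axis=1)             # index of each male's closest associate
    return [(names[i], names[j], float(sri[i, j]))
            for i, j in zip(*iu)
            if sri[i, j] >= threshold and top[i] == j and top[j] == i]

# Hypothetical SRI matrix for four males:
names = ["EXT", "PRI", "PEB", "KWL"]
sri = [[0.0, 0.8, 0.2, 0.1],
       [0.8, 0.0, 0.15, 0.1],
       [0.2, 0.15, 0.0, 0.1],
       [0.1, 0.1, 0.1, 0.0]]
print(strong_dyads(sri, names))  # → [('EXT', 'PRI', 0.8)]
```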
We tested for the existence of long-term preferences between males by following an established Monte-Carlo resampling (i.e., permutation test) procedure. We considered the pattern significant when the coefficient of variation of the real association indices (SRI) was higher than expected by chance (Whitehead and James 2015; Whitehead 2019). We also extended the test to dyadic SRI values (Bejder et al. 1998) and reported 'preferred' associations when the observed number of significant dyads was larger than expected (Whitehead 2019). Association estimates of a dyad at or above the 97.5 percentile were considered a 'preference' (Whitehead 2019).
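The logic of the permutation test can be sketched as follows. The shuffling step here only preserves daily group sizes, a strong simplification of the Bejder et al. (1998) swap procedure used in SOCPROG, and the sighting data in the test are hypothetical:

```python
import random
import statistics

def sri(periods, a, b):
    """Simple Ratio Index between a and b over daily sampling periods."""
    x = seen = 0
    for groups in periods:
        x += any(a in g and b in g for g in groups)
        seen += any(a in g for g in groups) or any(b in g for g in groups)
    return x / seen if seen else 0.0

def shuffle_periods(periods, rng):
    """Reassign individuals to groups within each day, preserving group sizes
    (a simplification of the Bejder et al. 1998 swap test)."""
    out = []
    for groups in periods:
        pool = [i for g in groups for i in g]
        rng.shuffle(pool)
        day, k = [], 0
        for g in groups:
            day.append(set(pool[k:k + len(g)]))
            k += len(g)
        out.append(day)
    return out

def preferred_association_test(periods, ids, n_perm=500, seed=0):
    rng = random.Random(seed)
    def cv(ps):
        vals = [sri(ps, a, b) for i, a in enumerate(ids) for b in ids[i + 1:]]
        return statistics.pstdev(vals) / statistics.mean(vals)
    real = cv(periods)
    hits = sum(cv(shuffle_periods(periods, rng)) >= real for _ in range(n_perm))
    return real, hits / n_perm        # small p => long-term preferences
```

A high observed CV relative to the permuted CVs (small p) indicates that some dyads associate far more, and others far less, than chance predicts.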
We also evaluated the long-term stability of associations by calculating Lagged Association Rates (LARs, Whitehead 1995) for males using real and random SRIs. The latter were available after we tested for preferred associations using a permutation test. They reflected the random distribution of associations without the temporal context (in comparison to the null association rate [NAR], which we also plotted). We obtained standard errors and precision estimates of LARs and NARs with a temporal jackknife procedure using a grouping factor of one day (Whitehead 2008a, b). The shapes of real and random LARs were compared to understand the implications of any changing patterns of associations over time (Whitehead 2019). This analysis was repeated with males having 'strong' bonds only (see results).
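A minimal estimator of the lagged association rate, simplified from Whitehead (1995) and computed in SOCPROG in the actual analysis, can be sketched as follows; the sighting dictionary in the test is hypothetical:

```python
def _pairs(groups):
    return {frozenset((a, b)) for g in groups for a in g for b in g if a != b}

def lagged_association_rate(obs, lag):
    """Fraction of dyads associated on day t that are associated again on
    day t + lag, given both members were resighted (simplified after
    Whitehead 1995).  obs: dict day -> list of groups (sets of IDs)."""
    num = den = 0
    for t in obs:
        u = t + lag
        if u not in obs:
            continue
        seen_u = {i for g in obs[u] for i in g}
        later = _pairs(obs[u])
        for p in _pairs(obs[t]):
            if p <= seen_u:               # both individuals resighted at t+lag
                den += 1
                num += p in later
    return num / den if den else float("nan")
```

Plotting this rate against lag, and comparing it with the null association rate and the rate from permuted data, shows whether associations persist or decay over time.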
The small sample size (i.e., number of males) and our sampling method (i.e., a 5-min scan sample to determine predominant group activity and group size and composition, with ad libitum recording of some behavioural events) did not allow further exploration of the functional behaviour of the strongly bonded males as part of this study. Instead, we checked for any dependency between the presence of allied males and the presence of females and their residency area (Chabanne et al. 2017a) using Pearson's chi-square statistics in the 'Stats' package in R (R Core Team 2013). Graphic mosaic plots were carried out using the 'vcd' package in R (Meyer et al. 2021).
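The Pearson chi-square test of independence used here (run with R's 'Stats' package in the actual analysis) reduces to a few lines; the contingency table in the test is hypothetical:

```python
def chi_square(table):
    """Pearson chi-square statistic, degrees of freedom, and Pearson
    residuals (O - E) / sqrt(E) for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    exp = [[rows[i] * cols[j] / total for j in range(len(cols))]
           for i in range(len(rows))]
    x2 = sum((table[i][j] - exp[i][j]) ** 2 / exp[i][j]
             for i in range(len(rows)) for j in range(len(cols)))
    resid = [[(table[i][j] - exp[i][j]) / exp[i][j] ** 0.5
              for j in range(len(cols))] for i in range(len(rows))]
    return x2, (len(rows) - 1) * (len(cols) - 1), resid
```

The Pearson residuals are the quantities highlighted in the mosaic plot of Fig. 4: cells with residuals below about −2 occur far less often than independence would predict.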
Results
From June 2011 to March 2017, we conducted 187 surveys and tallied 304 dolphin group sightings, of which 250 group sightings were retained after excluding those with any unidentified or poorly photographed individuals. In our social analyses, we retained only sightings containing at least one adult male. A well-differentiated and accurate representation of the true social network was obtained when males were observed more than 12 times (r = 0.8 ± 0.04 SE; S = 0.9 ± 0.07 SE), leaving eight males in our dataset (Supplementary Material 1). Thus, patterns of male associations were assessed based on 175 group sightings. During the study period, no permanent emigration or deaths were recorded, meaning there was no demographic effect that could bias the males' associations (Analysis of Lagged Identification Rate, Supplementary Material 2). Thirteen adult females residing in the SCR estuary and eleven visiting from adjacent communities were present in 61% of these sightings.
MRQAP tests indicated that neither relatedness nor home range overlap explained the strength of association between males. However, gregariousness did so, suggesting that the SRI was affected by gregariousness (Table 1).
Table 1 Tests of the effectiveness of structural predictor variables in explaining association indices for adult male bottlenose dolphins residing in the SCR and seen alive for the entire study period (i.e., seen more than 12 times), with MRQAP (partial correlation coefficients tested using 1000 permutations in SOCPROG)
Mean SRI between all adult male dyads was 0.25 (SD 0.05), with an average maximum SRI of 0.65 (SD 0.27, Table 2). The first changepoint occurred at 0.77, revealing a dyad (EXT/PRI) and a triad (ARR/BOT/HII) of males (Fig. 2), with all five males having a maximum SRI higher than the average maximum (Table 2). Such bond strengths would qualify these males as first-order alliances in Shark Bay (Fig. 2). A third changepoint was identified at 0.23. Since the value was lower than the mean SRI, the associations of BLA with the triad (ARR/BOT/HII) were not considered close under the criteria for male alliance formation (Fig. 2). The permutation test for long-term preferred associations among males was significant (coefficient of variation CV real data = 1.067; CV random data = 0.856, p < 0.0001). Among the closest associates, we identified the triad ARR/BOT/HII and the dyad EXT/PRI as first-order allies (Table 3).
Both real and random LAR projections for first-order ally males were well above the NAR projection, supporting non-random association over the entire study period. In addition, the real LAR was higher than the random LAR, confirming the long-term preferred associations among the first-order allies, which did not change over time (Fig. 3).
Triad formation (Appendix Fig. A3) among our first-order alliance members (ARR/BOT/HII) was significantly lower in the absence of females but highly dependent upon the presence of females from adjacent waters (X² = 53.311, df = 4, p < 0.001, Fig. 4), with these females never seen in the SCR without the resident allied males.
On two occasions, we observed two males (BOT and BLA) performing a 'rooster strut', i.e., a sexual display performed by individual males in the presence of oestrus females, during which the male bobs his head up and down at the water surface while moving forward (Connor et al. 2000, Appendix Fig. A4).
Discussion
This study aimed to identify the occurrence of male alliances in a small community of Indo-Pacific bottlenose dolphins residing in a south-western Australian estuary. Using photo-identification data of resident dolphins in the Swan Canning Riverpark from 2011 to 2017, we inferred the occurrence of male alliances, based on three lines of evidence. First, our detailed quantitative analysis clearly indicated the presence of closely bonded adult males who associate over a long temporal scale. Second, we invariably observed non-resident females, in some cases as much as 20 km outside their normal ranges, in the same group as the allied triad. This is suggestive of these females having been herded outside their normal ranges, as previously documented in Shark Bay (Connor et al. 1996; Scott et al. 2005; Tsai and Mann 2013). Third, we documented opportunistic behavioural observations, such as the rooster struts performed by adult males around oestrus females (Connor et al. 2000). All three lines of evidence suggest the presence of male alliances linked to reproductive purposes, as observed elsewhere (Krützen et al. 2004; Wiszniewski et al. 2012a, b). Social bond strength among males was not influenced by their relatedness or home range overlap, but by differences in gregariousness, which is common in animal populations (Godde et al. 2013). All SCR residents share the same range (i.e., they all use the entire estuary, Chabanne et al. 2017a), and the narrow geography of the Fremantle Inner Harbour and the adjoining lower reaches of the estuary make encounters likely (Chabanne et al. 2012). At a larger scale, the SCR dolphins are more related to each other than to those of adjacent communities (Chabanne et al. 2021), likely affecting analytical detectability of the influence of the homogeneously high relatedness on the strength of male bonds in SCR.
The strongly bonded males in this small community satisfied the criteria for being classified as first-order alliances, similar to those identified in other studies (e.g., Connor et al. 1992a, b; Ermak et al. 2017). Allies within each alliance (the triad ARR/BOT/HII and the dyad EXT/PRI) were ranked closest or second closest to each other, and their bonds were described as long-term preferred and stable associations. Two single males (PEB and KWL) shared moderately strong associations (SRI > 0.20; e.g., Ermak et al. 2017; King et al. 2018) with one first-order alliance (EXT/PRI). However, their lack of mutually strong bonds does not qualify them as members of a second-order alliance, which is defined by at least two first-order alliances having moderate associations (Connor et al. 1992a, b). In addition, KWL's true affiliation (GAI) suggested that his association was primarily due to being highly gregarious (i.e., avoidance with all males when corrected for gregariousness) compared to the other males (Appendix 1). With only eight adult males residing in the SCR, the presence of only one alliance level is perhaps unsurprising. It is likely that there is a minimum encounter rate within a population at which the social structure might evolve to include more than one level of alliance formation (Connor et al. 2017, 2019; Table A1). This is supported by the fact that formation of multi-level alliances in bottlenose dolphin populations has thus far only been reported from Shark Bay, Western Australia, and St. John's River, Florida, sites which have some of the highest reported densities and/or conspecific encounter rates in the genus Tursiops (e.g., Nicholson et al. 2012; Ermak et al. 2017).
Fig. 3 Lagged Association Rates from the real data (LARs; solid line) for all males (black) and for males of first-order alliances (blue) within the SCR resident dolphins. Their respective null association rates are indicated in dotted lines of the respective colour, and their LARs from the random data generated from the permutation test are in dashed coloured lines. Jackknife error bars are shown as vertical lines
Fig. 4 Mosaic plot of the count for occurrence of the male alliances (triad and dyad) with females residing in the SCR or from adjacent waters. Pearson residuals with values lower than −2 are highlighted, with p value provided
Three criteria are typically proposed as driving the formation of male alliances in dolphin populations: high density or encounter rate, a male-biased operational sex ratio, and little or no sexual size dimorphism (Möller 2012). There is no sexual dimorphism in the community (unpubl. data). However, density appears to fall within the range of those populations that do not exhibit male alliance formation (Table A1; e.g., Brusa et al. 2016; Baker et al. 2019).
Nevertheless, several factors complicate our understanding of the drivers of alliance formation in the SCR. First, the community is not strictly estuarine, but coastal-estuarine, as males occasionally range to the nearest adjacent coastal areas throughout the year, thereby gaining access to receptive females (i.e., they are not limited to females from the estuary only). Second, despite some level of genetic structure between communities, the SCR community is genetically connected to adjacent coastal communities (Chabanne et al. 2021), suggesting that male alliance formation may be best understood in the social-environment context of the broader coastal population. Third, the SCR is a relatively narrow estuarine river system, which may drive encounter rates up, thus favouring male alliance formation.
Several studies have affirmed that social factors are more important than environmental factors in the evolution of complex coalitions in mammals (Olson and Blumstein 2009;Ostner and Schülke 2014). However, He et al. (2019) highlighted a more complex evolutionary system in animal societies situated within environments where habitat configuration can drive social and ecological factors. Dolphins in the SCR appear to have stronger and more enduring associations with their peers than do dolphins in coastal habitats (Chabanne et al. 2017a). The shallow and protected estuary habitat provides resources that allow dolphins to reside year-round, with prey availability being continuous and more dependable than that in coastal habitats (McCluskey et al. 2016). Habitat and prey selection is therefore likely to influence how dolphins associate (Holyoake et al. 2010;O'Brien et al. 2020;McCluskey et al. 2021;Nicholson et al. 2021).
The difficulties in inferring the determinants of alliance formation through photo-identification-based studies are amplified by the small size of the SCR community. Studies such as this may obtain a rich, long-term, individual-specific dataset of social behaviour, which includes both group association data collected systematically and opportunistic behavioural observations. However, such studies might then be restricted in terms of the quantitative social analyses that can be conducted because of the small sample sizes and, thus, lack of power. Further, the small size of a community or population, and its discreteness as a socio-ecological unit, may, as in the SCR, reflect the unique social and ecological milieu in which the community occurs (e.g., Giménez et al. 2018). Likewise, the uniqueness of this socio-ecological context and the small community/population size means that the factors determining male alliance formation may be exceptional in small populations, in the sense that the general relationships between social and ecological determinants of alliance formation might not apply. In the SCR, for example, the geography of the estuary (i.e., narrow area), the small community size, and occasional coastal-estuarine ranging patterns of males are factors that are distinctively different than those for the larger coastal communities nearby (Chabanne et al. 2017a).
Photo-identification of individually recognizable animals is a versatile and powerful approach for the field study of marine and terrestrial mammals and has provided significant insights into our understanding of mammalian ecology and behaviour. Although our results were limited by the small size of the community in the SCR, the apparent occurrence of male alliances across the coastal waters of the Perth region provides a promising opportunity to extend our understanding of the evolutionary and ecological processes driving alliance formation. A larger, comparative approach, in which genetic, environmental, and behavioural data from several populations are used to model parameters that are predictive of complex alliances, will help us to understand the evolutionary drivers of alliance formation. The SCR dolphins could serve as one important piece in this puzzle.
Appendix 3: Evaluation of the true associations between male dolphins residing in the Swan Canning Riverpark using the Generalized Affiliation Indices (GAI)
Based on the MRQAP test, values of SRIs were significantly correlated to gregariousness while controlling for home range overlap and relatedness. We therefore verified the strength and true preferred associations among males using the deviance residual of the generalized affiliation indices (GAIs). GAIs are the residuals of a generalized linear model that was built with SRI as the dependent variable and gregariousness as the structural variable (Whitehead and James 2015) and allow the assessment of the true social affiliations unaffected by the gregariousness that could confound why some males have strong associations (Whitehead and James 2015).
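The authors derived GAIs as deviance residuals of a GLM fitted in SOCPROG. As a simplified linear-model stand-in for the same idea (residual association after controlling for gregariousness), with hypothetical data:

```python
import numpy as np

def affiliation_residuals(sri, greg):
    """Residuals of a least-squares fit of dyadic SRIs on dyadic
    gregariousness: positive = affiliation beyond gregariousness,
    negative = avoidance (a linear stand-in for GLM deviance residuals,
    after Whitehead and James 2015)."""
    n = sri.shape[0]
    iu = np.triu_indices(n, k=1)
    X = np.column_stack([np.ones(len(iu[0])), greg[iu]])
    beta = np.linalg.lstsq(X, sri[iu], rcond=None)[0]
    res = np.zeros_like(sri)
    res[iu] = sri[iu] - X @ beta
    return res + res.T                   # symmetric residual matrix
```

If association strength were entirely explained by gregariousness, every residual would be near zero; strongly positive residuals flag dyads whose bond exceeds what their sociability alone predicts.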
As with the SRI values, we tested for the existence of long-term preferences between males by following the same Monte-Carlo resampling (i.e., permutation test) procedure. With GAIs, we considered the pattern significant when the standard deviation of the observed associations was higher than expected by chance (Whitehead and James 2015). Pairs of males with a positive deviance residual of the GAI were considered preferred companionships given the structural predictor variables, while those with negative values indicated avoidance.
The mean and maximum affiliation indices (GAIs) were 0.01 (SD 0.11) and 0.21 (SD 0.23), respectively. The permutation test for long-term preferred associations among males was significant (SD real data = 0.185; SD random data = 0.147, p value < 0.0001). Among the closest associates, the triad ARR/BOT/HII had high positive deviance residuals (ranging from 5.90 to 6.61), supporting their strong affiliations. The dyad EXT/PRI followed with a positive deviance residual of 1.92. PEB had positive deviance residuals with each member of the dyad, although the values (lower than 1.50) would best describe casual companionships (e.g., Hunt et al. 2019). Since the differences between the deviance residuals of the male BLA and the triad were large (a minimum difference of 4.06, compared to a maximum difference of 0.71 within the triad), he would best be considered a casual companion to the triad. The deviance residual values of KWL with the dyad (and any other males) revealed a strong effect of his gregariousness (GAI < −1.50), with KWL significantly avoiding all males (Fig. A2).
Fig. A2
Dendrogram produced using hierarchical average cluster analysis with GAIs (CCC = 0.96682) of the eight male bottlenose dolphins (i.e., three-letter codes) in the SCR. Levels of affiliation are displayed above, with the dashed black vertical lines indicating the thresholds (−1.50 and 1.50). Coloured solid lines denote the males forming strong affiliations (GAI > 1.50, describing a dyad and a triad). Black solid lines are non-significant relationships, while black round-dot lines denote avoidance.
Author contributions DBHC conceived and designed the study, collected, and processed the data. DBHC analysed the data with advice from MK and SJA. DBHC wrote the manuscript, with contributions to drafting, review and editorial input from MK, HF and SJA. All authors contributed to the article and approved the submitted version.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions. Funding for this research was provided by the Swan River Trust and the Department of Biodiversity, Conservation and Attractions (grant numbers IRMA: 15544, 16207, 17707 and 19031) with additional support from Fremantle Ports and Catherine O'Neill. DBHC also acknowledges the Australian Postgraduate Award from Murdoch University (Ph.D.) and the Swiss Government Excellence Scholarship (Postdoctoral Fellowship).
Data availability Given the long-term and ongoing research on the dolphin community in the Swan Canning Riverpark, data will be made available upon request to the corresponding author (DBHC).
Code availability R scripts for analysis are available in Figshare.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Ethical approval
The animal study was reviewed and approved by the Animal Ethics Committee, Murdoch University.
Consent for publication N/A.
Fig. A3 A triad of allied SCR resident males herding a female that is resident to adjacent coastal waters in the estuary. The allied males travelled 'in formation' behind the female
Fig. A4 Behavioural observations of the 'rooster strut' display (seen as a series of consecutive photos) performed by a SCR resident male in the presence of a female. As the male is at the surface, his head is arched above the surface and bobbed up and down while moving forward
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Environmental Science",
"Biology"
] |
Modification of PMMA Cements for Cranioplasty with Bioactive Glass and Copper Doped Tricalcium Phosphate Particles
Cranioplasty is the surgical repair of bone defects or deformities in the cranium arising from traumatic skull bone fracture, cranial bone deformities, bone cancer, and infections. The current gold standard in cranioplasty surgery involves the use of biocompatible materials, and repair or regeneration of large cranial defects is particularly challenging from both a functional and an aesthetic point of view. PMMA-based bone cements are the most widely adopted biomaterials in the field, with at least four different surgical approaches. Modifications for improving the biological and mechanical functions of PMMA-based bone cements have been suggested. To this aim, the inclusion of antibiotics to prevent infection has been shown to reduce mechanical properties in bending. Therefore, the development of novel antibacterial active agents to overcome issues related to mechanical properties and bacterial resistance to antibiotics is still encouraged. In this context, the mechanical, biological, and antibacterial features against P. aeruginosa and S. aureus bacterial strains of surgical PMMA cement modified with BG and recently developed Cu-TCP bioactive particles have been highlighted.
Introduction
Cranioplasty is a common technique for repairing bone defects in the cranium arising from cranial bone deformities, traumatic skull bone fracture, bone cancer, and infections. Surgery involves the use of biocompatible materials, and repair or regeneration of large cranial defects is particularly challenging from both a functional and an aesthetic point of view [1][2][3]. Poly-methyl-methacrylate (PMMA) is the biomaterial which has been most widely adopted for cranioplasty, and in some instances PMMA showed better long-term outcomes compared to other approaches based on frozen autologous bone [2,[4][5][6]. Regarding PMMA for cranioplasty purposes, at least four surgical approaches can be distinguished: The first and simplest one is the in situ application and polymerization of PMMA, consisting of a single-step procedure applied intra-operatively [7][8][9]; the second ex vivo approach uses a plaster impression taken over the cranial defect for realizing a mould into which the PMMA
Cu-TCP Particles
Cu2+-substituted TCP powders were obtained using the precipitation technique as previously described [36]. Briefly, a 0.5 mol/L solution of Ca(NO3)2 was mixed with a 0.5 mol/L solution of Cu(NO3)2, and a calculated amount of 0.5 mol/L (NH4)2HPO4 solution was added dropwise to the solution. The pH was kept at the 6.5-6.9 level by the addition of ammonia solution. After 30 min, the precipitate was filtered, washed with distilled water, dried at 80 °C, and calcined at 900 °C to form the whitlockite structure.
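The "calculated amount" of phosphate follows from the tricalcium phosphate stoichiometry, (Ca + Cu)/P = 1.5 for Ca3-xCux(PO4)2. A small sketch, where the solution volumes in the example are hypothetical (only the 0.5 mol/L concentrations come from the text):

```python
def phosphate_volume_ml(v_ca_ml, v_cu_ml, conc_mol_l=0.5):
    """Volume of (NH4)2HPO4 solution giving the TCP stoichiometry
    (Ca + Cu)/P = 1.5, with all three solutions at the same molarity."""
    n_cation = conc_mol_l * (v_ca_ml + v_cu_ml)   # mmol of Ca2+ + Cu2+
    return n_cation / 1.5 / conc_mol_l

# E.g., 90 mL of Ca(NO3)2 solution plus 10 mL of Cu(NO3)2 solution:
print(round(phosphate_volume_ml(90, 10), 2))      # → 66.67
```

Because all solutions share the same molarity here, the phosphate volume is simply the total cation volume divided by 1.5.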
The liquid MMA phase was added to the solid phase and hand-mixed; the resulting paste was then poured into prismatic and cylindrical moulds in order to obtain specimens suitable for mechanical and biological investigations. For each modified bone cement composition, the batch consisted of 10 g of PMMA.
Mechanical Properties in Bending
The three-point bending test was performed using the Instron universal materials testing system (Model 5566, Instron, High Wycombe, UK) equipped with a load cell of 1 kN. The span length was set at 18 mm and the loading rate was 1 mm/min. PMMA, PMMA/BG, and PMMA/CuTCP composites were poured into prismatic Teflon moulds in order to obtain specimens suitable for the three-point bending test (Figure 1a). Five specimens for each PMMA, PMMA/BG, and PMMA/CuTCP composite formulation were stored in a dark environment at room temperature for one week before mechanical testing. The bending modulus and strength were determined according to ASTM D790.
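The bending modulus and strength follow from the standard ASTM D790 three-point-bending formulas; a minimal sketch is given below. The 18 mm span is from the text, while the specimen cross-section b x d is an assumed value, since the mould dimensions are not given in this excerpt.

```python
def flexural_properties(F_max, slope, span=18.0, b=10.0, d=4.0):
    """Flexural strength (MPa) and modulus (MPa) from the standard
    three-point-bending formulas (ASTM D790):
        strength = 3 F L / (2 b d^2),  modulus = L^3 m / (4 b d^3).
    F_max: peak load (N); slope: initial slope of the load-deflection
    curve (N/mm); span, b, d in mm. The cross-section b x d here is
    an ASSUMED value, not taken from the paper."""
    strength = 3.0 * F_max * span / (2.0 * b * d**2)
    modulus = span**3 * slope / (4.0 * b * d**3)
    return strength, modulus
```

With these formulas, each of the five specimens per formulation yields one (strength, modulus) pair, which are then averaged and compared across formulations.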
Compression Strength
The compression test on cylindrical specimens was performed using the Instron universal materials testing system (Model 5566, Instron, High Wycombe, UK) equipped with a load cell of 5 kN. The loading rate was 1 mm/min. PMMA, PMMA/BG, and PMMA/CuTCP composites were poured into cylindrical Teflon moulds in order to obtain specimens suitable for the compression test (Figure 1b). Five specimens for each PMMA, PMMA/BG, and PMMA/CuTCP composite formulation were stored in a dark environment at room temperature for one week before mechanical testing. A preload of 50 N was applied to each specimen for 60 s, then the compressive test was carried out up to 50% of strain, and the compression strength was measured.
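The compression strength reported below is the plateau stress, i.e. the plateau load divided by the specimen cross-section; a minimal sketch follows. The cylinder diameter is an assumed value, since the mould dimensions are not given in this excerpt.

```python
import math

def compressive_strength(F_plateau, diameter=6.0):
    """Compressive strength (MPa) from the plateau load (N) of a
    cylindrical specimen; diameter in mm is an ASSUMED value (the
    mould dimensions are not given in this excerpt)."""
    area = math.pi * (diameter / 2.0) ** 2  # cross-section in mm^2
    return F_plateau / area

# e.g. the 70 MPa limit recommended by ISO 5833 corresponds, for a
# 6 mm cylinder, to a plateau load of 70 * pi * 9 N (about 1979 N)
```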
Cell Viability Assay
Bone marrow was collected from a two-year-old butchered horse into tubes containing sodium citrate (Vacumed). Mononuclear cells, including mesenchymal stem cells, were collected from the diluted bone marrow aspirate by density gradient centrifugation at 800 g for 10 min. The cell pellet was rinsed thrice with phosphate buffered saline (PBS, Invitrogen AG, Basel, Switzerland), then the cells were re-suspended in alpha-minimal essential medium (α-MEM, Gibco BRL, Life Technologies Limited, Inchinnan, UK) with 20% fetal calf serum (FCS, Gibco BRL, Life Technologies Limited, Inchinnan, UK), seeded into a 75 cm² flask (Corning, Oneonta, NY, USA), and expanded in an incubator at 5% CO2 at 37 °C. The MTT (Sigma-Aldrich, Merck, Darmstadt, Germany) tetrazolium salt colorimetric assay was performed to determine the cytotoxicity and proliferation of cells cultured on PMMA and the PMMA-based composites. Bone marrow mesenchymal stem cells (BMMSCs) of passage 3 and 75-80% confluency were enzymatically detached and distributed at a concentration of 40,000 cells/mL into tissue culture plates, each consisting of 24 wells (Corning, Oneonta, NY, USA). Culture plates were incubated for 24 h at 5% CO2 at 37 °C. Then, PMMA, PMMA/BG, and PMMA/CuTCP-based composites were layered into the wells. For each PMMA-based biocomposite, the MTT assay was performed in triplicate. BMMSCs growth and viability were evaluated after 24 h of incubation. The culture medium was removed from each well, replaced by 0.3 mL of a 0.5 mg/mL solution of MTT in α-MEM, and incubated for 3 h at 5% CO2 at 37 °C. The MTT solution was then removed, replaced with 1.5 mL isopropanol, and incubated for 1 h. Finally, the concentration of formazan was quantified by optical density measurement at 600 nm using the BioPhotometer (Eppendorf AG, Hamburg, Germany).
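The MTT readout reduces to comparing formazan optical densities against a control well. A minimal sketch of one common normalization is given below; the exact normalization used in the paper, and the OD values, are not specified in this excerpt and are assumed for illustration.

```python
def relative_viability(od_sample, od_blank, od_control):
    """Percent viability of cells on a test substrate relative to the
    control, from formazan OD readings. This is a COMMON way of
    reporting MTT data; the paper's exact normalization is not given
    in this excerpt."""
    return 100.0 * (od_sample - od_blank) / (od_control - od_blank)

# triplicate readings (hypothetical values) -> mean viability
ods = [0.82, 0.79, 0.85]
mean_v = sum(relative_viability(od, 0.05, 0.90) for od in ods) / len(ods)
```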
Cell Differentiation Assay
The capability of BMMSCs to differentiate into the chondrogenic lineage was evaluated for all PMMA and PMMA composite formulations. BMMSCs of passage 3 and 75-80% confluency were enzymatically detached and distributed at a concentration of 40,000 cells/mL into tissue culture plates, each consisting of 24 wells. Culture plates were incubated for 24 h at 5% CO2 at 37 °C. Then, PMMA, PMMA/BG, and PMMA/CuTCP-based composites were layered into the wells. After 24 h of incubation, the cell monolayer was stimulated toward chondrogenic differentiation for three weeks by using α-MEM media supplemented with 1% FCS, 0.1 µM dexamethasone (Sigma-Aldrich, Merck, Darmstadt, Germany), 6.25 µg/mL insulin (Sigma-Aldrich, Merck, Darmstadt, Germany), 50 nM ascorbic acid (Sigma-Aldrich, Merck, Darmstadt, Germany), and 10 ng/mL TGFα (Sigma-Aldrich, Merck, Darmstadt, Germany). The negative control was obtained by using α-MEM containing 1% FCS. BMMSCs differentiation was evaluated through the Alcian Blue colorimetric assay (Sigma-Aldrich, Merck, Darmstadt, Germany) by incubating each sample in a 10% v/v formalin solution for 1 h at room temperature, washing and rinsing in distilled water, staining for 15 min at room temperature with a solution of alcian blue in acetic acid (1% alcian blue in a 3% acetic acid solution, pH = 2.5), and finally washing thrice with a 3% acetic acid solution. Intra- and extracellular blue-stained glycosaminoglycans were observed through inverted optical microscopy (Nikon Eclipse TE2000-U, Iowa City, USA). For each PMMA-based biocomposite, the cell differentiation assay was performed in triplicate.
Viability of Bacterial Strains
P. aeruginosa, S. aureus, and E. coli strains were used. Each bacterial strain was grown in brain heart infusion (BHI, Sigma-Aldrich, Merck, Darmstadt, Germany) for 24 h at 37 °C with shaking at 250 rpm. For each of the PMMA, PMMA/BG, and PMMA/CuTCP composites, three samples were used, each kept for 72 h in BHI. Then, 10 µL of bacterial suspension, collected from the overnight growth, was transferred to each well, and the anti-bacterial activity of each PMMA-based substrate was investigated following incubation at 37 °C for 24 h. For each PMMA-based biocomposite the test was performed in triplicate, and the optical density at 600 nm was measured using the BioPhotometer (Eppendorf AG, Hamburg, Germany).
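The antibacterial readout is a relative OD600 comparison between the modified cements and plain PMMA; for instance, the ~80% P. aeruginosa reduction reported later corresponds to the treated OD falling to one fifth of the plain-PMMA value. A minimal sketch (the OD values themselves are hypothetical):

```python
def growth_reduction(od_treated, od_control):
    """Percent reduction in bacterial growth on a modified cement
    relative to the plain-PMMA control, from OD600 readings after
    24 h of incubation."""
    return 100.0 * (1.0 - od_treated / od_control)

# e.g. OD600 dropping from 1.0 (plain PMMA) to 0.2 is an 80% reduction
```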
Statistical Analysis
All mechanical and biological measurements were evaluated through multi-way analysis of variance, and differences between the means of the PMMA, PMMA/BG, and PMMA/CuTCP composites were considered significant at p < 0.05.
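For reference, the F statistic underlying such significance tests comes from the standard between/within sum-of-squares split. The sketch below implements the simpler one-way case from first principles (the paper used a multi-way analysis); the data values are hypothetical.

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA, a simplified stand-in for the
    multi-way analysis used in the paper. Each argument is one group
    of measurements (e.g. the five bending-strength values of one
    cement formulation)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    # mean squares: between has k-1 dof, within has n-k dof
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The resulting F value is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p-value.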
Mechanical Properties in Bending
PMMA and the PMMA-based composites showed a mechanical behaviour in bending typical of brittle materials: the stress increases linearly with strain, and brittle fracture occurs at strain values of about 3% (Figure 1a). The bending modulus and strength of the PMMA, PMMA/BG, and PMMA/CuTCP composites are reported in Figure 2a,b, respectively. Low amounts (i.e., 2.5 wt%) of CuTCP particles significantly increased the bending modulus of PMMA (p < 0.05). However, at this low concentration of CuTCP particles, the increase in bending strength is not significant. BG particles produced a slight increase of the bending modulus, but this increase is not statistically significant. For both BG and CuTCP particles, a decrease of the bending properties (i.e., modulus and strength) is observed as the amount of particles is increased. In particular, only the bending modulus and strength of PMMA/BG 90/10 are significantly lower (p < 0.05) than those of plain PMMA.
Compression Strength
PMMA and the PMMA-based composites showed a mechanical behaviour in compression typical of ductile materials: the stress increases linearly with strain up to the yielding point, after which an extended stress plateau is observed (Figure 1b). No specimen failed up to a deformation of 50%. The compressive strength (i.e., the stress plateau level) of the PMMA, PMMA/BG, and PMMA/CuTCP composites is reported in Figure 2c. At none of the investigated amounts of CuTCP does the compressive strength differ from that of the plain PMMA cement. A similar result was observed for the BG particles; only for PMMA/BG 90/10 was a significant decrease observed (p < 0.05).
However, with the exception of the PMMA/BG 90/10 composite, all bioactive formulations show a compressive strength higher than the limit value of 70 MPa recommended by ISO 5833.
Cell Differentiation Assay
Figure 4 shows the capability of BMMSCs to differentiate, after three weeks of incubation, into the chondrogenic lineage on PMMA, PMMA/BG, and PMMA/CuTCP composites, as suggested by alcian blue staining. Plain PMMA supplemented with a chondrogenic medium (Figure 4a), BMMSCs supplemented with a chondrogenic medium (positive cell control, Figure 4b), and untreated BMMSCs (negative cell control, Figure 4c) were used as controls.
Figure 5 shows bacterial growth on PMMA, PMMA/BG, and PMMA/CuTCP composites. The results suggest that modification of the bone cements with BG and CuTCP particles is effective in reducing P. aeruginosa growth (Figure 5a). In particular, 2.5 wt% of BG or CuTCP particles already produces a significant reduction of P. aeruginosa growth (p < 0.05), while 5 wt% of BG or CuTCP particles reduces P. aeruginosa growth by about 80% and 75%, respectively. Modification of the bone cements with BG and CuTCP particles is also effective in reducing S. aureus growth (Figure 5b): at a concentration of 5 wt% of BG or CuTCP, a significant reduction (p < 0.05) of S. aureus growth is observed. No significant inhibition of E. coli growth was observed (Figure 5c).
Discussion
Over the past decade, a wide range of polymer-based composites have been investigated to repair cranial bone, and PMMA-based bone cements have been the most widely adopted for cranioplasty [7][8][9]. However, surgical PMMA showed a rate of graft infection higher than 10% [6,8].
BG and HA represent the most common types of particles adopted for modifying bone cements [14,35], and we investigated the effects of BG and Cu-TCP particles on the mechanical properties, biological behaviour, and antibacterial capability.
The brittle behaviour observed through bending tests of PMMA and the PMMA-based composites, as well as the bending modulus and strength (Figure 2a,b), is consistent with literature data reported for the Palamed bone cement [37]. On the other hand, the ductile behaviour in compression observed for PMMA and the compression strength measurements (Figures 1b and 2c) are consistent with literature data reported for the stress vs. strain curves and strength of the Palamed bone cement [28,29]. The addition of 2.5 wt% of CuTCP particles significantly increased the bending modulus of PMMA (p < 0.05). However, the bending modulus decreased as the amount of CuTCP particles was increased, and a similar trend was observed in bending for the PMMA/BG composites (Figure 2). No significant effect of CuTCP particles on compression strength was observed for the PMMA/CuTCP composites, while a slight but significant decrease (p < 0.05) of the compression strength was observed for the PMMA/BG 90/10 composite (Figure 2c). With the exception of the PMMA/BG 90/10 composite, all bioactive formulations show a compressive strength higher than 70 MPa, thus satisfying the recommendation of ISO 5833. These mechanical results suggest that both BG and CuTCP particles provide a reinforcement effect for the PMMA matrix only at low amounts (i.e., 2.5 wt%). The negligible reinforcement effect of the investigated particles at higher concentrations (>5 wt%) may be ascribed to particle clustering and to a weak particle-polymer matrix interface. Further research is needed to improve particle dispersion and the particle-polymer matrix interface.
From a biological point of view, the PMMA, PMMA/BG, and PMMA/CuTCP composites proved to be non-toxic, as the MTT assay suggests that the investigated substrates induce no significant inhibition of BMMSCs growth (Figure 3). This result is corroborated by the capability of BMMSCs to differentiate, after three weeks of incubation, into the chondrogenic lineage on PMMA, PMMA/BG, and PMMA/CuTCP composites (Figure 4).
Most infections related to PMMA-based bone cements for cranioplasty can be ascribed to strains that are resistant to common antibiotic therapies. Therefore, over the last decade, substantial research has focused on the development of novel antibacterial agents to overcome bacterial resistance to antibiotics [14,[30][31][32]. Both BG and CuTCP particles have proven to be interesting candidates for the modification of surgical bone cements. In particular, BG and CuTCP particles are effective in reducing P. aeruginosa growth (Figure 5a). At a concentration of 2.5 wt%, BG or CuTCP particles already produce a significant reduction of P. aeruginosa growth (p < 0.05), while 5 wt% of BG or CuTCP particles reduces P. aeruginosa growth by about 80% and 75%, respectively (Figure 5a). S. aureus is the most common pathogen affecting cranioplasty [38]. Modification of the bone cements with BG and CuTCP particles has also proven effective in reducing S. aureus growth (Figure 5b). In particular, at a concentration of 5 wt% of BG or CuTCP, a significant reduction (p < 0.05) of about 50% of S. aureus growth is observed. This result is consistent with literature data reported for the growth reduction of S. epidermidis strains on PMMA cements modified with copper-doped BG particles [34]. No significant inhibition of E. coli growth was observed (Figure 5c). It is reported that E. coli strains can survive in copper-rich environments as they possess plasmid-encoded genes that confer copper resistance [39]. Accordingly, the antibacterial results suggest that BG and CuTCP particles can be considered promising candidates for modifying PMMA-based bone cements. However, further research is needed to improve their activity toward bacterial strains such as E. coli.
A number of factors may be involved in the mechanism of bacterial growth inhibition or bacterial killing [40,41]. One of the key mechanisms causing cell damage has been termed contact killing. This mechanism consists of the following steps: copper dissolution from the external surface, rupture of the cell membrane, loss of cytoplasmic content and membrane potential, production of reactive oxygen species, and degradation of genomic and plasmid DNA [40,41].
Conclusions
Within the limitations of this investigation, it can be concluded that the incorporation of bioactive particles (i.e., CuTCP and bioactive glass particles) into PMMA-based bone cements does not alter the mechanical properties in bending and compression. With the exception of the PMMA/BG 90/10 composite, all bioactive formulations show a compressive strength higher than 70 MPa, thus satisfying the recommendation of ISO 5833. The PMMA/BG and PMMA/CuTCP biocomposites are non-toxic, and they show antibacterial activity against P. aeruginosa and S. aureus bacterial strains. However, further research is needed to improve the activity of PMMA-based bone cements toward bacterial strains such as E. coli.
"Medicine",
"Materials Science"
] |
Spontaneous symmetry breaking in a honeycomb lattice subject to a periodic potential
Motivated by recent developments in twisted bilayer graphene moir\'e superlattices, we investigate the effects of electron-electron interactions in a honeycomb lattice with an applied periodic potential using a finite-temperature Wilson-Fisher momentum shell renormalization group (RG) approach. We start with a low-energy effective theory for such a system, at first giving a general discussion of the most general case in which no point group symmetry is preserved by the applied potential, and then focusing on the special case in which the potential preserves a $D_3$ point group symmetry. As in similar studies of bilayer graphene, we find that, while the coupling constants describing the interactions diverge at or below a certain "critical temperature" $T = T_c$, it turns out that {\it ratios} of these constants remain finite and in fact provide information about what types of orders the system is becoming unstable to. However, in contrast to these previous studies, we only find isolated fixed rays, indicating that these orders are likely unstable to perturbations to the coupling constants. Our RG analysis leads to the qualitative conclusion that the emergent interaction-induced symmetry-breaking phases in this model system, and perhaps therefore by extension in twisted bilayer graphene, are generically unstable and fragile, and may thus manifest strong sample dependence.
I. INTRODUCTION
Moiré superlattices of various kinds have recently become a topic of great theoretical and experimental interest, spurred in large part by the discovery of superconductivity at a surprisingly high transition temperature 1,2 in twisted bilayer graphene (tBLG) with the twist angle close to some small "magic angle". In addition, other experiments 3,4 have uncovered a Mott-like correlated insulator phase in tBLG, and various experimental and theoretical investigations of this and other, similar, systems have been undertaken. For example, a gate-tunable correlated insulator phase was also found in trilayer graphene moiré superlattices 5 , and a crystal field-induced gap in encapsulated twisted double bilayer graphene has been observed 6 . A consensus has emerged that electron-electron interaction, strongly enhanced by the flat-band nature of the moiré system, is the driving mechanism producing the various symmetry-breaking phases in twisted bilayer graphene, although the possibility that superconductivity itself may arise from electron-phonon interactions cannot be ruled out [7][8][9][10] . One work proposes a number of potential correlated insulating phases in tBLG 11 . Another, also considering tBLG, considers the interplay between van Hove singularities and the symmetry-breaking phases, and proposes different types of s-wave superconductivity 12 . Yet another 13 considers tBLG at two different fillings, 1 and 2 electrons per moiré unit cell, finding a ferromagnetic stripe phase in the former case and an approximately SU (4) symmetric insulating state in the latter. Two other works consider other moiré superlattice systems, with one 14 describing the existence of Chern bands in twisted double bi- and trilayer graphene and in hexagonal boron nitride in the presence of an applied electric field, while another 15 investigates a number of models of generic moiré systems.
The subject is vast, with hundreds of theoretical papers proposing different mechanisms for symmetry-breaking ground states using many different approximations and models, and we do not attempt a review of the subject, only mentioning a few representative recent publications where renormalization group (RG) techniques were used to study the moiré ground states [16][17][18][19][20][21][22] .
We will theoretically investigate, using extensive RG techniques, a system related to those described above: a honeycomb lattice, such as that formed by graphene, subject to a periodic potential. While we expect differences with tBLG, we expect that similar physics should arise generically in both systems. Our goal is to obtain the universal effects of interactions in the most general situation, which necessitates leaving out many important, but nonessential, realistic complications of the experimental moiré systems such as strain, substrates, disorder, and band-structure details. Such a generic theory using the minimal model of our work could be the starting point for more realistic future theories. We therefore restrict ourselves to the seemingly simple model of a honeycomb lattice subjected to an external periodic potential to mimic the essential features of twisted bilayer graphene. As we will see, even this simple system is extremely difficult to handle from the perspective of dealing with electron-electron interactions because of the greatly reduced symmetries of the interacting lattice Hamiltonian.
We begin with a low-energy effective theory, which consists of two Dirac cones. In general, the periodic potential could completely eliminate all point group symmetry from the system, leaving only (reduced) translational, time reversal, and spin SU (2) symmetries. In such a case, we may see terms that shift the Dirac cones away from their usual positions at the corners of the Brillouin zone in addition to a mass term that opens a gap (there are experimental indications of such Dirac point gaps in tBLG samples). Our main focus, however, will be on potentials that preserve a D 3 point group symmetry, in which case only the mass term may appear without any shifts in the Dirac cones. We then construct the four-fermion interaction terms that the symmetries of the system allow; we find that 22 such terms are allowed purely by symmetry, and that this number may be reduced to 10 by the use of Fierz identities. The interaction problem facing us is therefore formidable, involving in general a 22-parameter RG flow even within this minimal model of the moiré system. Even the reduced problem of a 10-parameter flow has never before been attempted in the graphene literature.
We employ a finite-temperature Wilson-Fisher momentum shell RG procedure in this work [23][24][25] . In such a procedure, we rewrite the partition function for our system as a path integral in terms of anticommuting Grassmann fields and impose a momentum cutoff Λ on the electronic modes. Next, we divide these modes into "fast" modes, which are those within a thin shell near the momentum cutoff, and "slow" modes, which are the remaining modes. We then integrate out the "fast" modes perturbatively to one-loop order and rescale the "slow" modes and the various coupling constants to recover an action of the same form as that which we started with. This procedure yields corrections to the various terms in the action, which yield a set of differential equations, which we call RG equations, describing how the various coupling constants evolve as we integrate out modes. We show that, if we set the temperature to a "critical temperature" T = T c , the constants describing the four-fermion interactions diverge exponentially, but that ratios thereof tend to finite values. Therefore, we find that the coupling constants tend toward "fixed rays" rather than fixed points. These ratios in fact contain information about what symmetry-breaking phases the system is unstable to. To determine which phases these are, we note that, as we integrate out electronic modes, we also generate contributions to the free energy, which is simply given by F = −k B T ln Z, where Z is the partition function. We use this fact to calculate the free energy in the presence of "source terms" corresponding to various symmetry-breaking orders; we may then calculate susceptibilities towards these orders by taking second derivatives with respect to the source term coefficients. We find that, just above the critical temperature, any divergent susceptibility does so as a power of T − T c , with the power related to the coupling constant ratios.
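The "fixed ray" behaviour described above can be illustrated with a deliberately simplified two-coupling toy flow (not the actual 10- or 22-parameter RG equations of this work), in which the couplings diverge at a finite RG scale while their ratio tends to a finite value:

```python
# Toy one-loop flow with a fixed ray (ILLUSTRATIVE ONLY; these are
# not the RG equations of the paper):
#     dg1/dl = g1^2 + g2^2,    dg2/dl = 2 g1 g2
# The combinations g1 +/- g2 each obey dy/dl = y^2, so both couplings
# diverge at a finite l while the ratio g2/g1 flows to the fixed
# rays g2/g1 = +/-1.

def flow(g1, g2, dl=1e-4, g_max=1e3):
    """Forward-Euler integration of the toy flow until g1 'diverges'
    (exceeds g_max); returns the final couplings."""
    while abs(g1) < g_max:
        g1, g2 = g1 + dl * (g1**2 + g2**2), g2 + dl * (2.0 * g1 * g2)
    return g1, g2

g1, g2 = flow(0.1, 0.05)
ratio = g2 / g1   # approaches 1 even though g1, g2 individually diverge
```

The ratio extracted at the divergence scale plays the role of the fixed-ray data from which the dominant susceptibilities, and hence the leading instabilities, are read off.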
If a given susceptibility diverges, then we say that the system is unstable to the associated symmetry-breaking order. Our work is a highly nontrivial generalization of the earlier RG work on bilayer graphene 26,27 , the nontriviality arising because the considerably reduced symmetry of the system leads to a multi-parameter RG flow. The imposition of the additional periodic potential considerably complicates the technical aspects of the RG flow compared with these earlier works, and leads to some qualitative differences in the results, as discussed below.
We find that, in contrast to similar studies of bilayer graphene 26,27 , the only fixed rays that appear here are isolated. A number of these fixed rays correspond to multiple instabilities. As a result of these two facts, we expect that the results of integrating the RG equations will be sensitive, even qualitatively, to changes in initial conditions. This is not surprising, given the diverse orders found in the literature in tBLG. A necessary conclusion is that the symmetry-breaking phases in the system would be fragile and highly sensitive to all the details of the specific sample, and there could be considerable sample-to-sample variations in the observed phase diagram (or even in the same sample under thermal cycling). We emphasize that this finding of "unstable symmetry breaking" in our minimal model, arising from just pristine interaction effects, can only become more complex in real tBLG samples, where many nonessential realistic effects (e.g., strain, phonons, disorder, substrate) would come into play well beyond our effective low-energy RG theory. The rest of this work is organized as follows. In Sec. II, we introduce the system and our low-energy effective theory. We then describe and implement our RG procedure in Sec. III. In Sec. IV, we describe how we obtain the fixed rays, and then describe how we determine what instabilities they correspond to in Sec. V. We give our conclusions in Sec. VI, and provide further technical details of the calculation in the appendices.
II. SYSTEM AND MODEL
We consider here electrons on a tight-binding honeycomb lattice subject to a periodic potential. Such a lattice, in the absence of the periodic potential, possesses a D 6 point group symmetry, along with time reversal, spin SU (2), and discrete translational symmetries. In the "worst-case" scenario, the potential removes all point group symmetries of the honeycomb lattice, leaving only time reversal, spin SU (2), and translation symmetries (the last of these reduced by the applied potential). We assume that the applied potential is commensurate with the honeycomb lattice. If the potential is arranged in such a way as to place a maximum or minimum at a lattice site, however, then the full system will possess a D 3 point group symmetry; we will in fact focus on this case after a brief discussion of the case in which we have no point group symmetry at all.
We will adopt a low-energy effective theory of this system, which consists of two Dirac cones (valleys ± K), a sublattice degree of freedom (A/B), and spin (↑/↓). We also include all "mass" terms allowed by the symmetries of the system. We list the valley and sublattice components of all bilinears that may be formed, along with the representations under which they transform both with and without the D 6 point group symmetry, in Table I. Some of the bilinears transform nontrivially under translations; these instead transform under representations of the D 3 "small group" of the wave vector K. For the sake of a self-contained presentation, we also list the representations 28 of D 6 and D 3 and their associated characters in Tables II and III, respectively. Here, each matrix is of the form τ i σ j s k , where the τ matrix operates on the valley pseudospin, the σ matrix on the sublattice pseudospin, and the s matrix on the actual spin. We should note here that the form of the low-energy theory is completely independent of the exact details of the applied potential, though determining the values of the constants appearing therein requires a more detailed calculation.
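As a consistency check on the representation content of Tables II and III, the rows of each character table must obey the standard orthogonality relation Σ_c w_c χ_i(c) χ_j(c) = |G| δ_ij, where w_c is the class size. A small script verifying this for D6 (|G| = 12) and D3 (|G| = 6); the class ordering used here is an assumption of this sketch, not necessarily the one used in the paper's tables:

```python
import numpy as np

# D6 classes: E, 2C6, 2C3, C2, 3C2', 3C2''  (class sizes as weights)
w_D6 = np.array([1, 2, 2, 1, 3, 3])
chi_D6 = np.array([
    [1,  1,  1,  1,  1,  1],   # A1
    [1,  1,  1,  1, -1, -1],   # A2
    [1, -1,  1, -1,  1, -1],   # B1
    [1, -1,  1, -1, -1,  1],   # B2
    [2,  1, -1, -2,  0,  0],   # E1
    [2, -1, -1,  2,  0,  0],   # E2
])

# D3 classes: E, 2C3, 3C2
w_D3 = np.array([1, 2, 3])
chi_D3 = np.array([
    [1,  1,  1],   # A1
    [1,  1, -1],   # A2
    [2, -1,  0],   # E
])

def check(chi, w, order):
    # Gram matrix sum_c w_c chi_i(c) chi_j(c); should equal |G| * identity
    gram = chi @ np.diag(w) @ chi.T
    return np.allclose(gram, order * np.eye(len(chi)))

print(check(chi_D6, w_D6, 12), check(chi_D3, w_D3, 6))  # True True
```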
The model Hamiltonian for the case with no point group symmetry is similar in form to the low-energy effective theory for a honeycomb lattice, but includes additional terms that transform trivially, i.e., under the A+ representation, under the remaining, reduced, symmetry group of the system with an applied periodic potential. This Hamiltonian is given in Eq. (1). The first two "mass" terms, the 1σ x 1 and τ z σ y 1 terms, simply displace the Dirac cones away from ± K, while the third, the 1σ z 1 term, opens a gap in the cones. We have so far covered the noninteracting Hamiltonian; we now turn our attention to the interaction terms. All such terms must, as with the noninteracting terms, be invariant with respect to the symmetries of the system. We may form such invariant terms by use of the generalized Unsöld theorem; all of these will have the form, where the matrices S i,j both belong to the same "row" of a given representation 28 . In the D 6 -symmetric case, there is only one matrix per "row" for each representation; for example, there is only one interaction term that we can associate with the E 2 + representation. On the other hand, in the case of no point group symmetry, we see that there are three matrices in the same "row" of, say, the A+ representation. There are therefore six distinct ways to form interaction terms within this representation. Overall, in the case with no point group symmetry, we find that there are 54 allowed interaction terms, while, in the case of D 3 point group symmetry, the number is reduced to 22. These numbers may be reduced further through the use of Fierz identities, which are, for 8 × 8 matrices, of the form, where the Λ n are all possible matrices of the form τ i σ j s k , and we omit the position dependence of the operators ψ for the sake of brevity (we assume that all are at the same position). As may be seen, these identities relate the various interaction terms to one another.
Using these identities, we may reduce the number of independent couplings to 22 in the case of no point group symmetry and 10 in the case of D 3 point group symmetry. Even though we will not consider the D 6 case here, we note that such a symmetry allows 18 interaction terms, which may be reduced to 9 using Fierz identities.
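The Fierz identities rest on the completeness of the 64 matrices Λ_n = τ_i σ_j s_k, which are orthogonal under the trace inner product, Tr(Λ_m Λ_n) = 8 δ_mn, and satisfy the completeness relation Σ_n (Λ_n)_{ab}(Λ_n)_{cd} = 8 δ_{ad} δ_{cb}. A direct numerical check of these two relations (a sketch of the underlying algebra, independent of the specific identities used in the paper):

```python
import numpy as np
from itertools import product

# Pauli matrices plus the 2x2 identity
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

# all 64 matrices tau_i (x) sigma_j (x) s_k
basis = [np.kron(np.kron(t, s), sp) for t, s, sp in product(paulis, repeat=3)]

# orthogonality: Tr(L_m L_n) = 8 delta_mn
gram = np.array([[np.trace(a @ b) for b in basis] for a in basis])
print(np.allclose(gram, 8 * np.eye(64)))  # True

# completeness: sum_n (L_n)_{ab} (L_n)_{cd} = 8 delta_{ad} delta_{cb}
comp = sum(np.einsum('ab,cd->abcd', L, L) for L in basis)
target = 8 * np.einsum('ad,cb->abcd', np.eye(8), np.eye(8))
print(np.allclose(comp, target))  # True
```

Contracting a four-fermion term against this completeness relation is precisely what reshuffles one pairing of the ψ operators into the others, which is why the number of independent couplings drops from 22 to 10 in the D 3 case.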
III. RENORMALIZATION GROUP (RG) PROCEDURE
We now turn our attention to describing the Wilson-Fisher momentum shell RG technique that we will employ 24 . This technique is as follows. We begin by writing down the partition function for our system as a path integral: where the ψ are now Grassmann numbers corresponding to the coherent states of the corresponding operators and the action S is given by where H is the interacting Hamiltonian written in "normal order" (i.e., Hermitian conjugates to the left of nonconjugates) and τ is an "imaginary time". The next step is to integrate out electronic modes in momentum shells. We divide the fields into "slow" modes, denoted by ψ < , and "fast" modes, denoted by ψ > , where the fast modes are those modes with momenta within the shell, Λe^{−δℓ} ≤ k ≤ Λ, and δℓ is a small change in the scale parameter ℓ used to characterize how many modes have been integrated out. Finally, we rescale all momenta of the remaining, slow, fields to k′ = k e^{δℓ} to restore the region of integration over momentum to k ≤ Λ, and then rescale the fields and constants, thus recovering the original overall form of the action. This procedure may be done exactly for the noninteracting system, but must be done perturbatively once we introduce interactions. We may express this renormalization of constants in the form of differential equations (RG equations), as we will see shortly.
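Schematically, with dynamical exponent z = 1 in two spatial dimensions, the rescaling step generates tree-level flows of the form below. This is a reconstruction consistent with the tree-level behavior quoted in the next section (T′ = T e^{δℓ}, and the −g_i term that appears in the one-loop coupling equations), not a verbatim copy of the paper's equations:

```latex
\frac{dT}{d\ell} = T, \qquad
\frac{d\mu}{d\ell} = \mu, \qquad
\frac{dm}{d\ell} = m, \qquad
\frac{dg_i}{d\ell} = -\,g_i .
```

In other words, the temperature, chemical potential, and mass are tree-level relevant, while the four-fermion couplings are tree-level irrelevant and can only diverge through the one-loop terms.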
We now apply this procedure to the system under consideration and, from this point on, we will specialize to the case of D 3 point group symmetry for the sake of relative simplicity.

TABLE I: Valley and sublattice components of the bilinears, along with the representations under which they transform: those of the full symmetry group (D 6 , time reversal, and translations), those of the reduced symmetry group with only D 3 point group symmetry, and those with no point group symmetry. The + or − in each representation name indicates how the bilinears transform under time reversal (even or odd, respectively). Corresponding to each of these "charge" representations are "spin" representations, in which the spin part of the matrix is s k , k = x, y, z; these transform oppositely with respect to time reversal to the "charge" representations (e.g., the A 1 spin representation is odd under time reversal).

TABLE II: Representations of the group D 6 , along with how they transform under time reversal (a "+" means the representation is even, while "−" means that it is odd).

TABLE III: Representations of the group D 3 , along with how they transform under time reversal (a "+" means the representation is even, while "−" means that it is odd).

In this case, v F x = v F y and k 0x = k 0y = 0, as the associated terms no longer transform trivially under the symmetries of the system. We first determine how the various constants determining the theory rescale at "tree" level, i.e., at lowest order in the perturbative expansion. Performing a Fourier transform, we find that the noninteracting part of the action S 0 is where kω is a shorthand for the Matsubara sum (here, the sum is over fermionic Matsubara frequencies, ω n = (2n+1)π/β, where n is an integer) and momentum integral. The interaction terms, on the other hand, are given by where 1234 is a shorthand for the integrals and sums appearing in this expression along with delta functions expressing momentum and energy conservation, and ψ(n) = ψ(k n , ω n ).
If we now perform the procedure summarized above, we find that the various constants in our theory rescale at tree level as follows: The last may be rewritten in terms of temperature as T′ = T e^{δℓ}. These may also be recast as differential equations; letting x′ = x(ℓ + δℓ) and x = x(ℓ), where x is any one of the above constants, we may easily show that We now turn to one-loop corrections. These are the highest-order corrections that will appear within our RG analysis, as multi-loop corrections will be of order (δℓ)^k, k > 1, and thus will vanish in the resulting RG equations. We obtain contributions to m, µ, and g i at this order; we show the diagrams corresponding to the m and µ corrections in Fig. 1 and those for the corrections to g i in Fig. 2. Before we begin determining these corrections, we need the bare Green's function for the system, G 0 (k, ω); it is given by

A. One-loop corrections to mass and chemical potential

We will begin with the contributions to m and µ, and with the "tadpole" diagram, Fig. 1a. Both of these contributions are first order in the interaction terms, and come from those terms containing two "slow" and two "fast" modes. This corresponds to terms of the form, Using Wick's theorem and the fact that G 0 (k, ω) = ψ(k, ω)ψ † (k, ω) , we find that This simplifies to where the notation > kω simply means to integrate only over the "fast" momenta. These sums and integrals may easily be evaluated; we obtain and where and If we now substitute this into the expression for ∆S(tadpole), we find that We see that this represents a correction to one of either the chemical potential term or the mass term, depending on whether S i,1 = 1 8 or S i,1 = 1σ z 1. In fact, the traces in this expression will only be nonzero if S i,2 = 1 8 or S i,2 = 1σ z 1, meaning that only those terms corresponding to the A 1 + representation give a nonzero contribution via the "tadpole" diagram. We now consider the "sunrise" diagram, Fig.
1b, which corresponds to terms of the form,

FIG. 2: Diagrams representing the one-loop corrections to the four-fermion couplings g i . The solid red lines represent "fast" modes, the solid black lines "slow" modes, and the dashed lines interactions.
Evaluating the averages as before, we get Using the formulas given earlier for the integrals over the "fast" momenta, this becomes Unlike the "tadpole" contribution, the "sunrise" contribution will yield nonzero terms proportional to the coupling constants coming from all representations, not just A 1 +. The full RG equations for µ and m are of the form, When we evaluate the coefficients, however, we find that all of the B z µ,i = 0 and that all of the B 0 m,i = 0. This simplifies our later calculations, as we will see below. This also implies, as may be seen from the form of the equations, that, if we set either µ or m to zero, then we will never generate them, i.e., they will not renormalize to nonzero values.
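The Matsubara frequency sums entering these diagram evaluations are of a standard closed form; the simplest example is T Σ_n (ω_n² + E²)⁻¹ = tanh(E/2T)/(2E) over the fermionic frequencies ω_n = (2n+1)πT. A quick numerical check of that identity (illustrative only; the paper's actual F 0 and F z also involve the momentum-shell integrals):

```python
import numpy as np

T, E = 0.7, 1.3                       # arbitrary illustrative values
n = np.arange(-1_000_000, 1_000_000)
wn = (2 * n + 1) * np.pi * T          # fermionic Matsubara frequencies
S = T * np.sum(1.0 / (wn**2 + E**2))  # truncated Matsubara sum
exact = np.tanh(E / (2 * T)) / (2 * E)
print(abs(S - exact) < 1e-6)  # True
```

The truncation error of the sum falls off as 1/N, so two million terms are more than enough for six-digit agreement.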
B. One-loop corrections to four-fermion coupling constants
We next determine the corrections to the four-fermion coupling constants g i , which are depicted in Fig. 2.
Evaluating these five contributions, we find that the RG equations for the four-fermion couplings all have the form, In arriving at this form, we evaluate integrals and sums of the form, We present the results of doing so, in the form of the functions Φ a , in Appendix A. We list the expressions obtained from each of the diagrams in Fig. 2 in Appendix B.
IV. FIXED RAYS
We now determine what the various outcomes of integrating the RG equations are. We start by showing that, if the temperature is tuned to what we will call the critical temperature, T = T c , integrating the RG equations for the four-fermion couplings g i will result in at least one of said couplings diverging exponentially. However, it turns out that ratios of these couplings tend toward fixed values; we will show later that these "fixed rays" in the space of the g i tell us what symmetry-breaking orders the system is unstable to. If the temperature is above this critical temperature, then the g i all saturate to finite values as ℓ → ∞. If, on the other hand, the temperature is below the critical temperature, then one or more of the g i will diverge to infinity at some finite value of ℓ.
In all of these cases, we need to first solve for the fixed ratios themselves. To do this, we first derive the equations for ratios of the g i with one of the couplings, which we will call g r (which of course is assumed to diverge), Doing this, we obtain We find that the behavior of the equations for large ℓ depends on how rapidly T , µ, and m increase, and breaks down into three cases:
1. T runs faster than µ and m.
2. µ and m run faster than T , with |µ| > |m| (the chemical potential lies outside the gap).
3. µ and m run faster than T , with |µ| < |m| (the chemical potential lies inside the gap).
Which of these three cases we consider determines the form of the equations that we have to solve to determine the fixed rays and what instabilities they represent. We now discuss each case in turn.
Case 1: In this case, the temperature T increases more quickly than µ or m for large ℓ. We will determine the large-ℓ behavior of µ, m, and the g i under this assumption. In the limit of large ℓ, the Φ a functions that decrease the most slowly are In addition, the functions F 0 and F z appearing in the equations for µ and m behave as follows.
We first determine the asymptotic behavior of the four-fermion couplings g i for T = T c . The equations will all take the form, where Ā ijk = A 2,+ ijk + A 2,− ijk . By the definition of T c , we know that, for T = T c , g i (ℓ → ∞) → ∞. We now substitute in g i (ℓ) = g i,0 e^{δ g ℓ}, obtaining (δ g + 1) g i,0 e^{δ g ℓ} = (Λ e^{(2δ g −1)ℓ} / 8πT c ) Σ jk Ā ijk g j,0 g k,0 .
This equation is satisfied if δ g = 2δ g − 1, or δ g = 1. This is consistent with our assertion that g i diverges to infinity as ℓ → ∞. We now determine the constants g i,0 . We mentioned earlier that ratios of any two divergent g i tend to finite values; rewriting the above equation in terms of these ratios ρ i , we obtain, after simplification, Solving for g r,0 , we obtain where All other g i,0 may be obtained simply by multiplying g r,0 by the appropriate ratio. Next, we consider the equations for µ and m. In the limit of large ℓ, these become If we substitute the asymptotic expressions for g i obtained above into these equations, we get where We see that the equations for µ and m reduce to a pair of first-order linear differential equations. Our earlier results imply that B z µ = B 0 m = 0, so that these equations are decoupled. We just need one of either µ or m to increase more slowly than T (or even decrease); if the other increases more quickly, then that means that the corresponding parameter must be set to zero to obtain that outcome.
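The structure of Case 1 can be illustrated with a one-coupling caricature, dg/dℓ = −g + e^{−ℓ} g², where the e^{−ℓ} factor mimics Λ/8πT(ℓ) with T(ℓ) = T₀ e^{ℓ} and the quadratic coefficient is purely illustrative (it is not one of the paper's Ā ijk). This toy flow exhibits exactly the behaviors described above: subcritical initial couplings flow back to zero, the critical value g(0) = 2 gives g(ℓ) = 2e^{ℓ} (i.e., δ g = 1), and supercritical values diverge at finite ℓ. The critical case can be checked against the closed form:

```python
import math

def flow(g0, ell_max=5.0, h=1e-4):
    """RK4 integration of the toy flow dg/dl = -g + exp(-l) g^2."""
    f = lambda l, g: -g + math.exp(-l) * g * g
    g, l = g0, 0.0
    for _ in range(int(ell_max / h)):
        k1 = f(l, g)
        k2 = f(l + h / 2, g + h * k1 / 2)
        k3 = f(l + h / 2, g + h * k2 / 2)
        k4 = f(l + h, g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        l += h
    return g

print(abs(flow(2.0) / (2 * math.exp(5.0)) - 1) < 1e-6)  # True: critical, g ~ 2 e^l
print(flow(1.0) < 0.05)                                 # True: subcritical, flows to zero
```

In a multi-coupling version of this toy, ratios of diverging couplings tend to constants along the critical trajectory, which is the fixed-ray structure exploited in the text.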
Case 2: Next, we consider the case in which T increases more slowly than µ and m, and in which |µ| > |m|, i.e., the chemical potential is outside the gap. In this case, the most slowly-increasing of the Φ a functions will be We will assume for now that the g i , µ, and m all increase exponentially for large ℓ; we will show here that this assumption is consistent. For concreteness, we assume that g i (ℓ) = g i,0 e^{δ g ℓ}, µ(ℓ) = µ 0 e^{ηℓ}, and m(ℓ) = m 0 e^{ηℓ}. In this case, the equations for g i become Substituting our ansatz for g i into this equation, we get This equation is satisfied if we let δ g = 2δ g − η, or δ g = η.
We may now solve for g i,0 with a similar procedure to the previous case. Rewriting the above equation in terms of the ratios ρ i and simplifying, we get If we now denote the sum over j and k by A r (µ 0 , m 0 ), we get As before, we may obtain the other g i,0 by multiplying the above by the appropriate ratio.
With this result, we can now consider the equations for µ and m. In the limit of large ℓ, only F 0 is nonzero: We then obtain If we now make our earlier ansatz and substitute the expression for the g i obtained earlier, we obtain We note, however, that B 0 m,i = 0, so that the second equation simplifies to We thus find that, unless we set m 0 = 0, we must take η = 1, violating our assumption that T increases more slowly than µ or m. If we take m 0 = 0, then the equation for µ becomes However, our expression for A r (µ 0 , m 0 ) reduces to We expect the most likely outcomes of actual integration of the RG equations to fall within the previous case, and thus we will not treat this case further here. In concluding this, we are guided by a similar study of bilayer graphene with a band gap opened by, for example, an applied electric field undertaken in Ref. 26. We also note that the requirement that m 0 = 0 would imply a complete absence of a periodic potential, i.e., we are dealing with just a honeycomb lattice.
Case 3: Finally, we consider the case in which, once again, µ and m increase more rapidly than T for large ℓ, but this time |µ| < |m|, i.e., the chemical potential is inside the gap. In this case, the most slowly-decreasing of the Φ a are If we now make the same ansatz as before, taking g i (ℓ) = g i,0 e^{δ g ℓ}, µ(ℓ) = µ 0 e^{ηℓ}, and m(ℓ) = m 0 e^{ηℓ}, then the equations for the g i become Once again, this equation is satisfied if δ g = η. We may solve for this in terms of the fixed ratios as before, obtaining or, denoting the sum on j and k by A r (µ 0 , m 0 ), Now we consider the equations for µ and m. In this case, only F z is nonzero for large ℓ: The equations then become If we now make our earlier ansatz and substitute the expression for the g i obtained earlier, we obtain We note, however, that B z µ,i = 0, and thus the above becomes Similarly to the previous case, we conclude that we must set µ 0 = 0 in order to obtain consistency with our assumptions in this case. If we do so, however, we find that A r (µ 0 , m 0 ) simplifies to For similar reasons as in the previous case, we will save further treatment of this case for future work.
V. ANALYSIS OF FIXED RAYS
We now describe how we determine what symmetry-breaking phases each fixed ray represents an instability towards. To do this, we calculate the susceptibility of the system as a function of temperature near the "critical temperature". We can, in turn, do this by noting that, as we integrate out electronic modes in our RG analysis, we produce a multiplicative constant contribution to the partition function that we have been ignoring so far. These multiplicative constants in fact represent contributions to the free energy of the system. Our basic strategy for determining the susceptibilities is as follows. We start by adding "source terms" to the action, which have the form, where the matrices M i run over all possible 8 × 8 matrices of the form τ i ⊗ σ j ⊗ s k . Note that some of the ∆ may be equal to one another if their associated matrices belong to the same representation of D 3 .

FIG. 3: Diagrams corresponding to one-loop corrections to the particle-hole (ph) source terms. Solid black lines correspond to "slow" modes, solid red lines to "fast" modes, dashed lines to four-fermion interactions, and wavy lines to a source term vertex.

These source
terms correspond to different "particle-hole" (ph), or excitonic, and "particle-particle" (pp), or superconducting, order parameters. We provide a list of the representations that each of these source terms correspond to, along with what order they represent, in Tables IV (ph) and V (pp). We note that the excitonic states are very similar to those that are possible in bilayer graphene 27 due to the mathematical similarity to that case, though the physical interpretation will be slightly different. In the pp case, only those terms with antisymmetric matrices M i appear. We determine the contributions to the free energy as we integrate out modes up to second order in these source terms. Finally, we can calculate the susceptibilities to various order parameters by calculating second derivatives of the free energy: where f is the free energy per unit area.
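The logic of reading susceptibilities off second derivatives of the free energy can be illustrated with a Landau-type caricature, f(Δ, T) = a₀(T − T_c)Δ² + bΔ⁴, in which the inverse curvature of f at Δ = 0 plays the role of the susceptibility and diverges as 1/(T − T_c). This is not the paper's actual free energy (there the T-dependence emerges from the RG flow, and the derivatives are taken with respect to the source amplitudes), but the divergence criterion is the same in mean field:

```python
a0, b, Tc = 1.0, 0.5, 1.0   # illustrative Landau coefficients

def free_energy(delta, T):
    return a0 * (T - Tc) * delta**2 + b * delta**4

def chi(T, h=1e-5):
    # susceptibility as the inverse finite-difference curvature at delta = 0
    f2 = (free_energy(h, T) - 2 * free_energy(0.0, T) + free_energy(-h, T)) / h**2
    return 1.0 / f2

# chi grows as 1/(T - Tc) on approach to Tc from above
print(abs(chi(1.1) * 2 * a0 * 0.1 - 1) < 1e-6)    # True: chi = 1/(2 a0 (T - Tc))
print(abs(chi(1.01) / chi(1.1) - 10) < 1e-3)      # True: tenfold closer, tenfold larger
```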
A. One-loop RG equations for the source terms
Before we determine the free energy, we must also determine how these source terms renormalize to one loop. At tree level, the renormalized source terms are given by ∆(ℓ) = ∆ 0 e^{ℓ}, or We now determine the one-loop corrections, depicted in Figs. 3 and 4. The corrections for the ph source terms are given in Fig. 3. The first diagram, Fig. 3a, yields

FIG. 4: Diagram corresponding to one-loop corrections to the particle-particle (pp) source terms. Solid black lines correspond to "slow" modes, solid red lines to "fast" modes, dashed lines to four-fermion interactions, and wavy lines to a source term vertex.
The second, Fig. 3b, yields where Now we consider the pp source terms. In this case, we have only one diagram contributing to one-loop renormalization, shown in Fig. 4. This diagram yields Overall, these contributions lead to RG equations of the form, We now consider the behavior of the source terms for large ℓ and at T = T c . For reasons stated earlier, we will focus only on the case where T increases more rapidly than µ and m.

TABLE IV: List of representations of D3 and the corresponding particle-hole (excitonic) order parameters. The ± after each representation represents how the corresponding charge order transforms under time reversal. Only the valley and sublattice components of the matrices are shown. We list both the charge and spin variants of each order in the same row; the spin order has the opposite time reversal symmetry to the corresponding charge order (e.g., the ferrimagnetic state is odd under time reversal).

TABLE V: List of representations of D3 and the corresponding particle-particle (superconducting) order parameters. The letter after each representation name denotes whether the order is a singlet (s) or triplet (t) order. We omit the spin matrix in the list of matrices; it is s y for singlet orders and 1, s x , or s z for triplet orders.

As stated before, the Φ a functions that decrease the most slowly in this case are Φ 2,+ ≈ Φ 2,− ≈ Λ/(8πT). The equations for the ∆ i then become, after substituting the large-ℓ expression for the g i , We will thus have, at most, a system of two linear equations describing a given source term. If a given term does not couple to another, then it will simply be given by ∆ i (ℓ) = ∆ i (ℓ 0 ) e^{η i ℓ}, where Otherwise, we simply solve the system of equations using standard techniques.
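For a pair of coupled source terms, the linear system d∆/dℓ = M∆ (with M effectively ℓ-independent in the large-ℓ regime) is solved by diagonalizing M; the growth exponents η are its eigenvalues, and the instability criterion η > 2 derived in the next subsection is applied to the largest one. A sketch with an illustrative symmetric 2 × 2 matrix (not derived from the paper's coefficients):

```python
import numpy as np

M = np.array([[2.5, 0.3],    # illustrative coupling matrix, not the paper's
              [0.3, 1.8]])
d0 = np.array([1.0, 0.5])    # initial source-term amplitudes

eta, V = np.linalg.eigh(M)                       # growth exponents and eigenmodes
def delta(ell):
    # closed-form solution Delta(l) = V exp(eta l) V^T Delta(0)
    return V @ (np.exp(eta * ell) * (V.T @ d0))

# cross-check against direct Euler integration of d(Delta)/dl = M Delta
ell, h, d = 3.0, 1e-4, d0.copy()
for _ in range(int(ell / h)):
    d = d + h * (M @ d)
print(np.allclose(d, delta(3.0), rtol=1e-2))  # True
print(eta.max() > 2)                          # True: dominant exponent exceeds 2
```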
B. Free energy
Now that we have derived the RG equations, we turn our attention to the free energy. More specifically, we will calculate the contribution to the free energy per unit area from the source terms alone.

FIG. 5: Diagrams representing one-loop contributions to the free energy from the (a) particle-hole source terms and (b) particle-particle source terms. The wavy lines represent the source term vertices and the red lines fast electronic modes.

The diagrams representing these contributions are shown in Fig. 5. The first diagram, Fig. 5a, represents the contribution from the ph source terms, and yields the following result for the free energy per unit area from the source terms: The second diagram, Fig. 5b, represents the contribution from the pp terms, and yields With these results, we may now derive the susceptibilities and thus determine which of them diverge for a given fixed ray. More specifically, we will determine their behavior for T close to, and just above, T c . We start by revisiting the equations for the g i . In this case, we may still use the large-ℓ approximations for the Φ a , but now we assume that g i tends to a very large, but finite, value as ℓ → ∞. If we solve the equation for the g r that we divide by to obtain the ratios ρ i in this case, we get where ℓ 0 ≫ 1. We now make use of the fact that, at to rewrite the above as We may now use the fact that, to first order in T − T c , where c r is a constant. We now perform a similar analysis of the equations for the source terms. Doing so, we find that, for a ∆ i that is not coupled to any other ∆ j , where C r is a constant proportional to the constant c r appearing in the equation for g r (ℓ, T ).
If we now substitute these results into the free energy and determine the contribution from the integral on ℓ for ℓ > ℓ 0 (which is where we expect the divergence to come from), we find that it goes as (T − T c )^{2−η i }, and thus the susceptibility will diverge with the same exponent provided that η i > 2. We note that ∆ i (ℓ = ℓ 0 ) ∝ ∆ i (ℓ = 0) simply due to the linearity of the equations giving ∆ i . Therefore, if the condition that η i > 2, or is satisfied, then the corresponding susceptibility diverges, and we claim that the system is unstable to the associated order. A similar analysis for the case of coupled source terms shows that, provided that the system of RG equations yields at least one solution e^{η i ℓ} that satisfies the condition that η i > 2, then we have an instability towards the corresponding order. We note that coupling of source terms only occurs if they correspond to the same representation of the symmetry group of the system.
With these results, we are now ready to determine the leading instabilities of the system that the various fixed rays correspond to. Unlike in the case of bilayer graphene, considered in Refs. 26 and 27, we do not find any one-parameter (or larger) families of fixed rays; all of the fixed rays are isolated. However, we find a very large number (thousands) of solutions. A number of these rays correspond to multiple instabilities simultaneously present. The fact that we only obtain isolated fixed rays, rather than any multiparameter families, indicates that the system is unstable to perturbations in the initial conditions, causing the system to converge to a different fixed ray. These two facts are not surprising, given the diversity of symmetry-breaking states found so far in the related, but not identical, twisted bilayer graphene system.
VI. CONCLUSION
We have investigated the possibility of instabilities to interaction-induced symmetry-breaking orders in a honeycomb lattice away from half-filling, i.e., with chemical potential µ ≠ 0, subject to a periodic potential. For simplicity, we assumed that the periodic potential preserved a D 3 point group symmetry, along with translational symmetry (though reduced from that of the unmodified lattice), time reversal, and spin SU (2). This allows for a mass gap m to be present. We employ a finite-temperature Wilson-Fisher momentum shell RG procedure in this work. We derived the RG equations for the four-fermion coupling constants g i , the chemical potential µ, the mass m, and the temperature T . Also for simplicity, we focused on the case in which T increases more quickly at large RG scaling parameter ℓ than µ or m. We then showed that, at some "critical temperature" T c , the coupling constants diverge exponentially, but that ratios thereof remain finite. We finally showed how these ratios could be used to determine what symmetry-breaking orders the system would be unstable to. We found that there were thousands of isolated fixed rays, in contrast to similar studies of bilayer graphene 26,27 , where a two-parameter family of fixed rays was found in addition to only a few isolated rays. In some cases, these rays corresponded to instabilities toward several different orders, which is not surprising given the diverse orders detected in experiments on the related, though not entirely identical, twisted bilayer graphene.
Our detailed multi-parameter RG analysis within a simple minimal lattice model points to the real possibility that moiré superlattices (e.g., twisted bilayer graphene near the magic angle) may manifest "unstable symmetry breaking", where the symmetry-breaking phases are intrinsically fragile and the physics depends sensitively on all the details and initial conditions, even excluding the realistic complications of disorder, strain, phonons, substrate, etc. Our work is consistent with there being considerable sample dependence in the observed phenomenology of various correlated exotic phases in tBLG. Such a generic fragile "unstable symmetry breaking" scenario leads us to conclude that continued experimental development is likely to lead to the observation of many exotic ground states in the system. Similar considerations in these previous works on bilayer graphene concerning the exponents apply here as well; we expect that our procedure captures the basic qualitative behavior of the susceptibilities (i.e., whether or not they diverge), even if the exact exponents are not quite correct. We also note that many of these fixed rays correspond to multiple instabilities. Our method does not give further information other than the possibility of these orders emerging. Other methods are required to determine which of these orders actually emerges.
On the Electromagnetic Propagation Paths and Wavefronts in the Vicinity of a Refractive Object
Maxwell's equations are transformed from a Cartesian geometry to a Riemannian geometry. A geodetic path in the Riemannian geometry is defined as the raypath on which the electromagnetic energy efficiently travels through the medium. Consistent with the spatial behavior of the Poynting vector, the metric tensor is required to be functionally dependent on the refractive index of the medium. A symmetric nonorthogonal transformation is introduced, in which the metric is a function of an electromagnetic tension. This so‐called refractional tension determines the curvature of the geodetic line. To verify the geodetic propagation paths and wavefronts, a spherical object with a refractive index not equal to one is considered. A full 3‐D numerical simulation based on a contrast‐source integral equation for the electric field vector is used. These experiments corroborate that the geodesics support the actual wavefronts. This result has consequences for the explanation of the light bending around the Sun. Next to Einstein's gravitational tension there is room for an additional refractional tension. In fact, the total potential interaction energy controls the bending of the light. It is shown that this extended model is in excellent agreement with historical electromagnetic deflection measurements.
Introduction
The transmission of electromagnetic energy along rays has long been of interest in our community. The question has always been (see, e.g., Cheney, 2004) how accurate the ray type of approximation is in representing the actual state of affairs of sending and receiving electromagnetic signals. In fact it is a high-frequency approximation derived from Maxwell's equations, leading to the eikonal equation; see p. 111 of Born and Wolf (1959). It is assumed that the signal is well quantified by the raypath description honoring the travel times of the signal, while its amplitude is stationary. This would then be a sufficient basis to, for example, image an object. In this paper, we argue that the ray approximation does not live up to these expectations. It fails to image the boundary of an object correctly, due to the fact that the method does not support all frequencies.
In our analysis we start with Maxwell's equations in tensor format going from a Cartesian to a Riemannian geometry with a given metric tensor which is fundamental for that geometry. We argue that the geodetic path in the Riemannian geometry determines the raypath on which the electromagnetic energy efficiently travels through the medium. From the behavior of the stationarity of the Poynting vector we conclude that the metric tensor is functionally dependent on the refractive index of the medium. We introduce a symmetric nonorthogonal transformation. The metric is a function of an electromagnetic potential, which gives rise to a tension that determines the curvature of the geodetic line. This tension is also present in vacuum outside the object. To illustrate the propagating wavefronts, we consider a ray of electromagnetic energy passing a spherical object with a refractive index not equal to 1. We carry out a full 3-D numerical simulation based on the contrast-source integral equation for the electric field vector. These experiments corroborate our wave-front analysis that the geodesics support the wavefront of the numerical simulation. This result has consequences for the explanation of light bending around the Sun. Next to Einstein's gravitational tension, there is room for an additional refractional tension. In fact, the total potential interaction energy controls the bending of the light. We conclude this paper by showing that this model is in excellent agreement with historical electromagnetic deflection measurements.
10.1029/2019RS007021
was caused by the "heaviness" of the light in reaction to the gravitational force of the Sun. Einstein's (1911) conclusion was that the gravitational force is nothing else but a curvature of space. We are not debating this conclusion, but we see, based on Maxwell's equations, that there is room, next to the gravitational tension, for an additional refractional tension. In fact, the total potential interaction energy controls the bending of the light. The gravitational tension is proportional to the inverse distance to the center of the Sun. The refractional tension is frequency dependent and proportional to the third power of the inverse distance. We conclude this paper by showing that our model is in excellent agreement with historical electromagnetic deflection measurements.
Scaled Maxwell's Equations
We consider electromagnetic wave propagation with complex time factor exp(−iωt), where i is the imaginary unit, ω is the radial frequency, and t is the time. In a source-free domain, with Cartesian coordinates x ∈ ℝ³, we write Maxwell's equations in the frequency domain, where E = E(x, ω) is the electric field vector, H = H(x, ω) is the magnetic field vector, ε = ε(x, ω) is the electric permittivity, and μ = μ(x, ω) is the magnetic permeability. We neglect absorption, so that all material parameters are real valued.
Next we introduce the scaled electromagnetic field vectors, in which Z = √(μ∕ε) is the electromagnetic wave impedance. Using this scaling in Maxwell's equations, we obtain Equation 3, where the refractive index n = n(x, ω) follows from n = c₀∕c, in which c = 1∕√(εμ) and c₀ = 1∕√(ε₀μ₀) are the electromagnetic wave speeds in a material medium and in vacuum, respectively. Note that Equation 3 represents the Maxwell equations for the scaled electromagnetic field vectors in a "background" medium with permittivity ε₀ and permeability μ₀. At this point, we may not assume that the waves in this background medium travel with the wave speed c₀. The curl operators in Maxwell's equations are replaced by medium-dependent curl operators, which determine the spatial dependency of the electromagnetic wavefield. Let us denote the fastest path of the wave as the geodetic line. For vacuum in the whole of ℝ³ we have constant electric permittivity ε = ε₀ and magnetic permeability μ = μ₀ (hence n = 1). Then, the curl operators are medium independent and the electromagnetic waves travel with wave speed c₀ along straight geodetic lines. In a vacuum subdomain in the vicinity of an object with refractive index n ≠ 1, we are not allowed to conclude that the geodetic lines in that subdomain are straight. The geodetic lines are not equivalent to the raypaths in optics. The latter paths follow from a high-frequency approximation of Maxwell's equations. In the neighborhood of the object domain, these optical rays in vacuum remain straight when they pass the object, because within the ray approximation the interaction with matter in the object is neglected. However, the presence of the object leads to diffraction of the incident wave, and this may influence the path of propagation. In fact, the geodetic line may become curved. Although, with the help of present-day computer codes, a more or less complete solution of Maxwell's equations is possible, the structure of the geodetic lines is hard to observe from the numerical solution.
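As a sketch of this scaling step: the precise form of the scaled fields below is an assumed standard convention, while the expression for n follows directly from the wave speeds just defined.

```latex
% Assumed standard impedance scaling of the field vectors:
\hat{\mathbf{E}} = Z^{-1/2}\,\mathbf{E}, \qquad
\hat{\mathbf{H}} = Z^{+1/2}\,\mathbf{H}, \qquad
Z = \sqrt{\mu/\epsilon},
% Refractive index from the stated wave speeds:
n(\mathbf{x},\omega) = \frac{c_0}{c}
  = \sqrt{\frac{\epsilon\,\mu}{\epsilon_0\,\mu_0}},
\qquad c = \frac{1}{\sqrt{\epsilon\mu}}, \qquad
c_0 = \frac{1}{\sqrt{\epsilon_0\mu_0}} .
```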
We therefore investigate the nature of Maxwell's equations in a different coordinate system.
Maxwell's Equations in Tensor Notation
We introduce a Riemannian geometry with position vector x̄ and symmetric metric tensor g_ij and its conjugate g^ij, as in, for example, Synge and Schild (1978):
Note that the Einstein summation convention over repeated indices is employed. In tensor notation, the scaled Maxwell's equations of (3) are written as the contravariant equations, where E_i, H_i, and ∂_i are the electric field vector, the magnetic field vector, and the partial derivative in the Riemannian geometry, respectively. The permutation tensor ε^ijk is related to the Levi-Civita symbol as ε^ijk = e^ijk∕√g, where g is the determinant of the metric tensor g_ij. Using this relation, we obtain the covariant equations. It is obvious that any solution of the Maxwell equations in a Riemannian geometry needs the specification of the refraction index and the impedance in the whole space. Both material parameters occur in the curl operators.
To investigate the energy transport in the Riemannian space in more detail, we introduce the complex Poynting vector S^k, in which the asterisk denotes the complex conjugate. Next we contract the complex conjugate of the left equation of (8) with E_i and the right equation of (8) with H_i^*, add the two results, and combine various terms. Taking the real part of (10) and multiplying the result with n√g, we arrive at (11). This equation represents the conservation law of energy transport in the Riemannian geometry. In the Cartesian space (√g = 1), this conservation law is written as (12). Within the accuracy of geometrical optics, the curvature of the ray is determined by the refractive index only; see p. 114 of Born and Wolf (1959). In view of (11) and (12), we choose the metric tensor g_ij to be a function of the refraction index n only.
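For reference, a plausible explicit form of the quantities involved is the standard convention below; both the Poynting-vector definition and the exact placement of the factor n√g are assumptions here, chosen to be consistent with the multiplication step described above.

```latex
% Complex Poynting vector (assumed standard definition):
S^{k} = \epsilon^{kij}\, E_i\, H_j^{*},
% Conservation of energy transport, Riemannian and Cartesian forms:
\partial_k\!\left( n \sqrt{g}\, \operatorname{Re}\{S^{k}\} \right) = 0,
\qquad
\partial_k \operatorname{Re}\{S_k\} = 0 \quad (\sqrt{g} = 1).
```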
Specification of the Metric Tensor
In this section, we make two specific choices for the metric tensor and investigate the consequences for the wave propagation.
Orthogonal Transformation
Choosing the simple orthogonal transformation of (13) yields the diagonal metric tensor of (14). This choice of transformation just scales the local spatial behavior of the refraction index; it does not take into account the global refraction dependency. Later in this paper, we show that the geodetic line evaluation based on this metric leads to the well-known ray theory. In a vacuum domain outside the object, it leads to propagation along a straight line in the Cartesian space. Inside the object, this transformation leads to curved paths according to the standard ray theory based on a high-frequency approximation of the wavefield.
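A reconstruction of the orthogonal choice, consistent with the trace relation used later (the divergence of the transformed coordinate equals 3n), reads:

```latex
% Orthogonal (local) transformation and the resulting diagonal metric:
\frac{\partial \bar{x}_k}{\partial x_i} = n\,\delta_{ki}
\quad\Longrightarrow\quad
g_{ij} = n^{2}\,\delta_{ij}, \qquad \sqrt{g} = n^{3} .
```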
Nonorthogonal Transformation
If the refractive index is equal to 1 in the whole space (vacuum), the transformation is trivial (g = 1). As a consequence, the raypaths are straight lines, both in the Cartesian and the Riemannian space. But we surmise that the presence of the object changes the geodetic structure in its vicinity. Let us assume that g is a twice differentiable function not equal to one inside the object. This implies that g is a harmonic function determined by its values at the interface of the object, so that g ≡ 1 does not hold at all points in vacuum. In other words, the presence of the object changes the direction of energy transport in each finite domain, because g ≢ 1. Physically, it means that a wave approaching the object is "feeling" the object before it has reached it. It will propagate along a path where the variation of √g Re{S^k} is stationary. The direction will change at locations where √g ≢ 1.
In this way the object reveals its presence to the incident wavefield; see also Feynman (1964), Chapter 26-5, "A more precise statement of Fermat's principle."
In order to develop a transformation, and hence a geodetic formulation, that adheres to the global dependency of the refraction index, we proceed as follows. The divergence of x̄_k(x) (the trace of the operator) is taken from the contraction of the second relation of (13) as in (15). Subsequently, we introduce the difference vector between the spatial points x̄_k and x_k as in (16), in which f_k is a continuous and differentiable vector field. Next, we take the divergence of f_k and use (15) to arrive at (17). According to the Helmholtz decomposition theorem, see p. 38 of Helmholtz (1858), a twice continuously differentiable vector field is uniquely determined by its curl-free component and its divergence-free component. Note that the curl-free component of f is uniquely determined by (17). The divergence-free part of f would be obtained from curl f = curl x̄, because curl x = 0; however, x̄ is still unknown. Hence, a coordinate transformation can only be constructed if we require that the tension f is curl free, or in other words that (18) holds. This means that the transformation matrix is required to be symmetric. After a differentiation of (16) with respect to x_k, it follows that the new nonorthogonal transformation is given by (19). Using the property that the tension f is curl free, together with the expression for the divergence of f, see (17), the Helmholtz decomposition theorem for a curl-free vector provides us with the unique expression:
Obviously, f_k is the tension due to the difference in refractive index with respect to vacuum. We define Φ as the refractional potential and we denote f_k as the refractional tension. This representation is valid under the condition that n − 1 vanishes at the boundary surface of the object domain. Equations 18 and 19 define our transformation matrix. They hold for any distribution of the refractive index inside the object. Note that the expression for the refractional potential yields a nonzero value outside the object, and this confirms that the refractive index distribution inside the object determines the spatial coordinate transformation not only inside this object but also outside it. Hence, the geodetic lines in the vacuum domain around the object are influenced by the inner refractive index of the object.
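Combining the divergence relation div f = 3(n − 1) with the curl-free requirement, the Helmholtz decomposition yields the refractional potential and tension in the form below; the sign convention and the symbol 𝔻 for the object domain are assumptions of this reconstruction.

```latex
% Curl-free tension derived from a potential:
f_k = \partial_k \Phi, \qquad \nabla^2 \Phi = 3\,(n - 1),
% Unique solution vanishing at infinity:
\Phi(\mathbf{x}) = -\frac{3}{4\pi} \int_{\mathbb{D}}
  \frac{n(\mathbf{x}') - 1}{|\mathbf{x} - \mathbf{x}'|}\, \mathrm{d}V' .
```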
Substituting the expression for the tension f into (18) yields the transformation matrix. In order to compare this transformation matrix with the one of (13), we analyze the domain integral on the right-hand side of (20) in more detail. When x lies inside the object, the evaluation of the domain integral has to be interpreted as the Cauchy principal value, where the contribution around the singularity is excluded symmetrically and calculated analytically (Fokkema & Van den Berg, 1993). To this end, we consider the contribution of the integration over a spherical domain with vanishing radius and center point x.
We then observe that (20) becomes (22), where ⨍ denotes that the integral has to be interpreted as its Cauchy principal value. We now assume that the refraction index is a slowly varying function in space. For decreasing distance |x − x′|, we observe that in the integral on the right-hand side of (22) the value of n(x) − n(x′) vanishes. In addition, for increasing distance, the value of 1∕|x − x′| vanishes. If we neglect the integral completely, we have established that the transformation matrix becomes identical to the one of (13). Hence, we confirm the well-known fact that the standard ray theory is only applicable for a slowly varying refraction index and for locations far away from significant changes in the refraction index. In conclusion we remark that, on the right-hand side of (22), the first term represents the local and orthogonal part of the transformation, while the second term stands for the global part.
Construction of the Geodetic Line
In the construction of the geodetic line we consider the scalar arclength ds̄ along the curved geodetic line, given by (24). At this point, we switch to the matrix representation of the tensors and introduce the curvature matrix as a representation of the symmetric transformation tensor ∂x̄_j∕∂x_i of (18). Since this matrix is real and symmetric, an eigenvalue decomposition with positive eigenvalues exists, and the sum of the eigenvalues is equal to the trace. By inspection of (18), we learn that in the transformed geometry the coordinate axes are spanned by the components of the tension f. The latter is directed along the normal to the surface Φ = constant. This normal is an eigenvector of the curvature matrix.
In our further analysis we choose this normal as the principal unit vector of the transformation, with eigenvalue ν, complemented by the other two eigenvectors τ₁ and τ₂ with corresponding eigenvalues λ₁ and λ₂. Since the trace of a symmetric operator is equal to the sum of the eigenvalues, from (15) it follows that ν + λ₁ + λ₂ = 3n. The orientation of the local frame is such that the vectors τ₁ and τ₂ are tangential to the surface Φ = constant, while the principal vector is perpendicular to it. Next we consider the scalar arclength ds̄ of (24). Using the eigenvalue decomposition and introducing the unit vector ŝ, we may write (27). To investigate the dynamic behavior, see p. 114 of Born and Wolf (1959), we consider the optical length of the geodetic path, which is defined as the actual length of the path times the index of refraction. Hence, the left-hand side of (27) represents the optical length of the path. Therefore, we introduce the virtual refractive index n_g along the geodetic path as in (28). In general, the eigenvalue decomposition has to be determined numerically, except for a rotationally symmetric medium or a horizontally layered medium. In the latter case, the eigenvalues are identical and equal to n. Then, the ray theory applies, because n_g = n and the horizontally layered medium has no curvature.
The virtual refractive index n_g(x, ŝ) controls the path of the geodetic line in a similar way as the refractive index n(x) controls the path of optical rays. Note that the virtual refractive index is not only determined by the local position of the geodetic line but also depends on the direction of the geodetic line at this position. We construct this geodetic line by considering the classic differential equation for the evolution of an optical raypath, see p. 121 of Born and Wolf (1959), but we replace the physical refractive index n by its virtual counterpart n_g, namely (29), where x_j = x_j(s) is the trajectory of the geodetic line, s is the parametric distance along this trajectory, and ŝ is the tangential unit vector along the geodetic line. We note that this differential equation applies to refractive indices that are invariant for the direction of the geodetic path. However, the explicit Euler integration of this differential equation updates the ray position and ray direction, so that only the previous information about the position and direction of the associated path segment is used. The path directions are taken to be constant during each integration step. This is consistent with keeping the refractive index constant during the update step.
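As a minimal sketch, n_g can be evaluated directly from the eigen-decomposition of the symmetric transformation matrix. It is assumed here that (28) reads n_g² = Σᵢ λᵢ² ŝᵢ², with ŝᵢ the components of ŝ in the eigenframe; all names are illustrative.

```python
import numpy as np

def virtual_refractive_index(A, s_hat):
    """n_g along unit direction s_hat for a symmetric transformation
    matrix A = d(xbar)/dx.  Assumes the arclength relation
    dsbar^2 = sum_i lambda_i^2 s_i^2 ds^2 implied by (27)-(28)."""
    lam, V = np.linalg.eigh(A)   # eigenvalues and orthonormal eigenvectors
    s_eig = V.T @ s_hat          # direction expressed in the eigenframe
    return float(np.sqrt(np.sum((lam * s_eig) ** 2)))
```

For an isotropic transformation A = n I the eigenvalues coincide and n_g = n in every direction, reproducing the standard ray theory limit stated in the text.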
Radially Inhomogeneous Medium
At this point we use spherical coordinates. For a rotationally symmetric configuration, the eigenvalues can be determined analytically. The refractional potential Φ is given below. The tension depends on R only and is directed in the radial direction. Hence, the tangential components of the tension vanish, and the radial component f_R = ∂_R Φ is given by
From (25) it follows that the eigenvalue in the radial direction is ν = 1 + ∂_R f_R, while the eigenvalues in the tangential directions follow from the trace relation ν + λ₁ + λ₂ = 3n. In view of the axial symmetry of our configuration, the tangential eigenvalues are the same. We therefore confine our analysis to the plane in which the geodetic path is defined, and it suffices to compute the single tangential eigenvalue λ = 1 + f_R∕R. The eigenvalues depend only on f_R and ∂_R f_R. The virtual refractive index is then obtained as, compare (28), where ŝ_R = cos(φ − θ) and ŝ_θ = sin(φ − θ) are the components of the unit vector ŝ. Here, θ is the angle between r̂ and the x₁ direction, while φ is the angle between ŝ and the x₁ direction.
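A small numeric check of these relations, assuming the eigenvalue expressions ν = 1 + ∂_R f_R (radial) and λ = 1 + f_R/R (tangential) for a radially directed tension; the uniform-sphere tension f_R = (n − 1)R used in the check is an assumed test case that satisfies div f = 3(n − 1).

```python
import numpy as np

def radial_eigenvalues(R, f_R, df_R_dR):
    """Eigenvalues of the transformation for a radial tension f = f_R(R) r_hat:
    nu along the radius, lam (doubly degenerate) in the tangential directions."""
    nu = 1.0 + df_R_dR
    lam = 1.0 + f_R / R
    return nu, lam

# Inside a homogeneous sphere with n = 1.5, div f = 3(n - 1) is satisfied by
# f_R = (n - 1) R, so nu = lam = n and the trace equals 3n.
n = 1.5
R = 10.0
nu, lam = radial_eigenvalues(R, (n - 1.0) * R, n - 1.0)
```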
Numerical Reconstruction of the Geodetic Paths for a Sphere
We consider a sphere with radius a = 60 m. In the inner domain, the refractive index is n = 1.5. To avoid numerical problems, we require that the refractive index is a continuously differentiable function of R. We apply a cosine-type tapering of the refractive index in the boundary region, with width Δa = 0.01 m. Within this region, the refractive index varies from 1 to 1.5 for decreasing R, namely, n(R) = 1.25 + 0.25 cos(π(R − a + Δa)∕Δa). Differentiation with respect to R yields ∂_R n(R) = −0.25 (π∕Δa) sin(π(R − a + Δa)∕Δa). Note that this derivative is continuous for all R; it vanishes everywhere, except in the small boundary region of 0.01 m width. We remark that, for the present refractive index, the integral of the tension f_R is calculated analytically. The numerical construction of the geodetic path is performed by an Euler integration of (29). In order to have enough integration steps for a geodetic line passing the boundary region of the sphere, we take an integration step of 0.1Δa. The starting point of all geodetic paths is x_S = (−120, 0, 0) m. To facilitate the computation of the wavefront curvature propagating along a set of geodetic lines in the plane x₃ = 0, we compute a large set of geodesics starting with an angle of 0.1° between each other. For each path in the plane x₃ = 0, we store the locations and the travel times to reach each location. The wavefront curvatures are computed by connecting the points of equal travel time of the various paths. In Figures 1 and 2, we show some geodetic lines in the plane x₃ = 0 of the Cartesian space and some wavefronts for travel times of t = 0.25, 0.50, and 0.75 μs, respectively. In Figure 1, we present the raypaths using the orthogonal metric tensor of (14). In other words, in our computer code we have replaced the virtual refractive index n_g with the actual index n, and we obtain the results of the standard ray theory. There are two remarkable observations. First, there is always a "shadow zone," where there is no wavefront.
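The Euler construction can be sketched as follows for the standard-ray variant (n_g replaced by the physical n, as used for Figure 1). The taper is assumed to be n(R) = 1.25 + 0.25 cos(π(R − a + Δa)/Δa) inside the boundary region, and all names are illustrative.

```python
import numpy as np

A_RAD, DA, N_IN = 60.0, 0.01, 1.5   # sphere radius, taper width, inner index

def n_of_R(R):
    """Cosine-tapered refractive index (assumed taper form)."""
    if R >= A_RAD:
        return 1.0
    if R <= A_RAD - DA:
        return N_IN
    return 1.25 + 0.25 * np.cos(np.pi * (R - A_RAD + DA) / DA)

def grad_n(x):
    """Gradient of n; nonzero only inside the thin taper region."""
    R = np.linalg.norm(x)
    if A_RAD - DA < R < A_RAD:
        dndR = -0.25 * (np.pi / DA) * np.sin(np.pi * (R - A_RAD + DA) / DA)
        return dndR * x / R
    return np.zeros(3)

def trace_ray(x0, s_hat, step, n_steps):
    """Explicit Euler update of d/ds (n s_hat) = grad n, i.e. (29) with n_g -> n."""
    x = np.array(x0, dtype=float)
    p = n_of_R(np.linalg.norm(x)) * np.array(s_hat, dtype=float)  # "momentum" n*s_hat
    path = [x.copy()]
    for _ in range(n_steps):
        x = x + step * p / np.linalg.norm(p)   # advance along the current direction
        p = p + step * grad_n(x)               # update n*s_hat from the index gradient
        path.append(x.copy())
    return np.array(path)
```

Away from the taper region the gradient vanishes, so rays remain straight there, which matches the ray-theory behavior described for Figure 1.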
In this domain the wavefront of the arrival time seems to disappear abruptly. Second, at the points where these shadow zones arise, there is a duplication point. This is a point where a very small displacement of the raypath results in large refraction. Note that this phenomenon occurs despite the fact that the refractive index is continuously differentiable.
These types of artifacts are due to the local character of ray theory, which is based on a high-frequency approximation of the wave equations, in which the object does not "feel" the arrival of a passing wave. In Figure 2, we present the geodesics using the virtual refractive index n g . First, we note that there are no shadow zones and that all geodesics are bent outside the sphere. This bending of the geodesic line is stronger for a path closer to the sphere. It is obvious that the wave inside the sphere travels slower than outside. Given the continuous property of n g , the wavefront cannot break and travels in a rather peculiar way along the interface of the sphere. One may wonder to what extent this theoretical model of wave propagation along geodesics describes the actual situation of wave propagation. In the next section, we discuss a validation against the numerical solution of a full-wave integral-equation model as solution of Maxwell's equations.
Verification Using Contrast-Source Integral Equations
In the case that the magnetic permeability is equal to the vacuum permeability, the unknown electric-field vector E(x, ω), for x in the object domain, may be obtained from a contrast-source type of integral equation. For a frequency-independent permittivity, we define a contrast function χ and a contrast source w^E, respectively. We then have the integral equation for the 3-D unknown contrast-source distribution w^E; see, for example, Zwamborn and Van den Berg (1992) and Abubakar and Van den Berg (2004). Here, E^inc(x, ω) represents the known electric-field vector of the incident wave in the whole space, in the absence of the contrasting domain. In the present case we take the electric-field distribution of an electric dipole oriented in the x₁ direction. Furthermore, the Green function G is given in closed form. For the numerical solution of the integral equation we define a regular grid of square subdomains. This grid includes the scattering object. At grid points outside the object, we enforce the contrast function to be 0. The integral equation is solved iteratively, using the BiCGSTAB method developed by Van der Vorst. In view of the convolution structure of the integral operator, the operator is applied efficiently by using the fast Fourier transform (FFT). In Figure 3, for t = 0.5 μs, we show the power distribution, both for the incident wavefield E^inc and for the total wavefield E. In the left picture, the center of the spherical wavefront is the blue curve. This is the reference for the position of the wavefront. Note that the wavefront vanishes at x₁ = 0, because the dipole is oriented in the horizontal direction. The right picture of the total wavefront is now used as a benchmark for the wavefronts predicted by the ray theory and the geodesic theory. We make similar images for t = 0.25 μs and for t = 0.75 μs and use the three images as overlays over those of Figures 1 and 2 to arrive at Figures 4 and 5.
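A toy 1-D scalar analog illustrates the contrast-source structure. The paper's production code is 3-D and vectorial and uses FFT-accelerated operators with BiCGSTAB; this sketch instead solves the small dense system directly, and the 1-D Helmholtz Green function and all names are illustrative assumptions.

```python
import numpy as np

def solve_contrast_source(chi, u_inc, k, dx):
    """Toy 1-D scalar analog of the contrast-source integral equation:
    w = chi*u_inc + chi * k^2 * (G applied to w), with w = chi*u."""
    n = chi.size
    x = np.arange(n) * dx
    # 1-D Helmholtz Green function G(x, x') = (i/(2k)) exp(i k |x - x'|),
    # discretized with the quadrature weight dx folded in.
    G = (1j / (2 * k)) * np.exp(1j * k * np.abs(x[:, None] - x[None, :])) * dx
    A = np.eye(n) - chi[:, None] * (k**2) * G
    w = np.linalg.solve(A, chi * u_inc)       # contrast source
    u = u_inc + (k**2) * (G @ w)              # total field from the contrast source
    return w, u
```

By construction the solution satisfies w = χu, the defining relation between contrast source and total field; the full 3-D code exploits the convolution structure of G by applying the operator with FFTs instead of forming the dense matrix.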
Comparing the middle pictures of Figures 4 and 5, we observe that the inner wavefront is delayed and an extra bending of the wavefront occurs to bridge the difference in wave speed along the interface of the sphere. The standard ray method leads to a shadow zone at the boundary of the object, while the geodetic theory indeed predicts this extra bending. In the right picture of Figure 5, the extra curvature of the actual wavefront, consisting of two wavefronts with different wave speeds, is predicted by the geodetic theory. The transition between outer and inner wavefront is bridged by a wavefront propagating along the interface. Physically, we interpret it as a surface wave which is launched along the interface of the sphere; see Figure 6. Note that the wavefronts in ray theory are orthogonal to the raypaths, but in the latter figure it is obvious that the wavefronts in the vicinity of the object are not orthogonal to the geodetic lines. In the three pictures, we observe that the wavefronts at the boundary of the object propagate along this boundary, with an amplitude decaying in the perpendicular direction: acting as a surface wave.
Additional Bending of Radio Waves by Sun's Refractive Index
We now turn to the consequences of our findings. They shed new light on the bending of electromagnetic waves closely passing an object with a contrasting index of refraction. Fokkema and Van den Berg (2019) discussed the phenomenon that a light ray passing the Sun experiences an additional deflection outside the object, which is controlled by the refractional potential in a similar way as the mass-density potential changes the path in the theory of gravity, as shown by Einstein (1911). This means that the total potential energy is a superposition of a gravitational and an electromagnetic constituent of the Sun's interior. The electric permittivity and magnetic permeability determine the velocity of light c₀ in vacuum. In material media the electromagnetic wave speed c is less. As we have shown, this permits us to characterize the medium by its index of refraction n = c₀∕c. The deflection is composed of the refractional deflection, which is frequency dependent and proportional to R⁻³ (Fokkema & Van den Berg, 2019), and the gravitational deflection, which is frequency independent and proportional to R⁻¹. In the remainder of this article we look at historical data for evidence for our claim.
Validation on Historical Data
An overview of optical deflection measurements is given by Von Klüber (1960), Will (2015), and Shapiro (1999). Based on the gravitational model, the deflection angle is given by d_GR = α R_S∕R, where α = 1.75 (in arcsec) is the Einstein value and R_S is the effective radius of the Sun. Mikhailov (1959) analyzed in detail the six eclipses during the period 1919–1952. These historical optical deflection measurements are tabulated by Pathria (2003), who concluded that the spatial dependence is correct, but that the spread around the Einstein value is significant in the near region of the Sun. Shapiro (1964) and Shapiro (1971) suggested that a more accurate deflection measurement follows from radio interferometry. In radio experiments, the Sun's corona affects the (frequency-dependent) deflection to a larger extent than in the optical experiments. Seielstad et al. (1970) showed discrepancies of up to 20% when neglecting the coronal effects. Muhleman et al. (1970) incorporated the coronal plasma effect and observed a spread of 10% to 15%. Later radio experiments confirm this frequency dependence, while the spatial variation differs from the inverse-distance relation. This was explained by extending the GR model with the local bending due to the frequency-dependent coronal medium. However, satisfactory fitting to the measurements was only possible in a restricted range of R; see, for example, Figure 1 of Merat et al. (1974). Fokkema and Van den Berg (2019) investigated the optical-deflection data collected by Merat et al. (1974) and showed that the fit to the measurements over the whole radial range of observations improved substantially once the additional bending by the Sun's interior refractive index was taken into account. In that model the frequency dependence of the data was not taken into account. In this paper, we consider the radio deflection data, which are certainly frequency dependent. At this point, we return to the work of Merat et al. (1974).
They conclude, on the basis of radio deflection observations made by Muhleman et al. (1970), that for R < 5 R_S deviations from the Einstein prediction become statistically significant. They have collected the whole set of radio deflection measurements into four samples; see the fifth column of Table 3 of their paper. The weighted mean of the distance R∕R_S is given, together with the range of deviations of every measurement. The deviations from the GR effect are significant for R∕R_S < 5, but that is mainly due to the frequency dependency of the data. After subtraction of the gravitational term, 1.75 R_S∕R, we obtain the electromagnetic constituents of the radio deflections and denote them as d_EM. Since we surmise that the upper and lower bounds are related to different frequency ranges, we consider the upper and lower bounds of the measured deviations separately, denoted by their superscripts. Hence, we have two sets of four data points, namely, d_EM := d^upper and d_EM := d^lower, respectively.
Influence of the Naked Sun
Let us first consider the additional bending of an electromagnetic wave passing the Sun while neglecting the presence of the corona. Following the pure-gravity light-bending theory of Maccone (2009), we also denote this as the naked-Sun situation. For small values of the refraction index of the Sun, Fokkema and Van den Berg (2019) have shown that the electromagnetic deflection is asymptotically given by (46). To find the unknown factor B from the four data points d_EM := d^upper, we carry out a least squares fit, which minimizes the residuals, where R₀ is the smallest value of R on the geodetic line. The value of R₀∕R_S is often denoted as the impact parameter. The minimum residuals are given in the third column of Table 1. Substituting the resulting value of B in (46), the deflection function d_EM(R) is presented as the solid blue line in the left picture of Figure 7. A similar procedure for the four data points d_EM := d^lower is carried out. The minimum residuals are given in the third column of Table 2. The mean square of these residuals amounts to 7.3%. Substituting the resulting value of B in Equation 46, the deflection function d_EM(R) is presented as the solid blue line in the right picture of Figure 7. We now observe that the deflection curve is negative. This is typically the effect of the plasma of the outer region of the Sun. The large discrepancies of the two curves with the measured data may be explained by the "coronal mantle" outside the Sun.
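The single-parameter fit has a closed-form solution, sketched below. The model form d_EM = B (R_S/R)³ follows the R⁻³ asymptotic stated in the text; the numbers used in any example run are hypothetical, not the paper's data.

```python
import numpy as np

def fit_naked_sun(R_over_Rs, d_em):
    """One-parameter least-squares fit of d_EM(R) = B (R_S/R)^3."""
    g = np.asarray(R_over_Rs, dtype=float) ** -3   # basis function (R_S/R)^3
    d = np.asarray(d_em, dtype=float)
    B = np.dot(g, d) / np.dot(g, g)                # closed-form minimizer of ||d - B g||^2
    rel_res = (d - B * g) / d                      # relative residuals per data point
    return B, rel_res
```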
Influence of the Coronal Mantle
In the corona, we only take into account the local effect of the refractive index of the corona. In order to include the plasma effects of the corona, we start with a refractive index described as a superposition of powers of R_S∕R, with constant factors. The data under consideration are obtained for R > 3R_S, and we employ the refractive index described in Muhleman et al. (1970), with exponents p₁ = 6 and p₂ = 2.33. We conclude that the electromagnetic deflection may be written as (50). For the range R > 3R_S we determine the coefficients C_p₁ and C_p₂ by a least squares fitting of (50) to the four data points. For the upper bounds we define the residual error as in (51). The minimum residuals are given in the fourth column of Table 1. The mean of these residuals amounts to 6.3%. Substituting the resulting values of the coefficients C_p₁ and C_p₂ in (50), the deflection function d_EM(R) is presented as the dashed blue line in the left picture of Figure 7. A similar procedure for the lower bounds of the data yields the dashed blue line in the right picture. The discrepancies with these data points are presented in the fourth column of Table 2, with a mean error of 3.3%.
Influence of the Naked Sun and the Coronal Mantle
For small deflections, we take a linear superposition of the naked-Sun part and the mantle part (the corona). We conclude that the total electromagnetic deflection may be written as (52). When we apply a least squares fitting procedure of this function with three unknown coefficients to four data points, we observe that the system matrix is heavily ill posed and impossible to invert numerically. A stable result is obtained by preconditioning. We rewrite (52) with C₁ = C_p₁∕B and C₂ = C_p₂∕B. This nonlinear equation is solved with an iterative Gauss-Newton method. As starting values we take zero values for C₁ and C₂ and determine B by a direct least squares minimization. After carrying out a few Gauss-Newton iterations, a stable result is obtained. The minimum residuals are given in the fifth column of Table 1. The mean of these residuals amounts to 1.0%. Substituting the resulting values of the coefficients in (52), the deflection function d_EM(R) is presented as the red line in the left picture of Figure 7. A similar procedure for the lower bounds of the data yields the red line in the right picture. The discrepancies with the data points are presented in the fifth column of Table 2, with a mean error of 0.2%.
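The preconditioned Gauss-Newton procedure can be sketched as follows. The model form d_EM = B[(R_S/R)³ + C₁(R_S/R)^p₁ + C₂(R_S/R)^p₂] is assumed from the superposition described above, the starting strategy (C₁ = C₂ = 0, B from a direct least squares) follows the text, and all names are illustrative; the simple backtracking line search is an added safeguard, not part of the paper's method.

```python
import numpy as np

P1, P2 = 6.0, 2.33   # coronal exponents from Muhleman et al. (1970)

def model(theta, r):
    # Preconditioned deflection model, r = R/R_S (parameter names assumed).
    B, C1, C2 = theta
    return B * (r**-3 + C1 * r**-P1 + C2 * r**-P2)

def gauss_newton_fit(r, d, n_iter=100):
    """Gauss-Newton fit with B initialized by a direct one-parameter
    least squares and C1 = C2 = 0."""
    g3 = r**-3
    theta = np.array([np.dot(g3, d) / np.dot(g3, g3), 0.0, 0.0])
    for _ in range(n_iter):
        B, C1, C2 = theta
        res = d - model(theta, r)
        # Jacobian of the model with respect to (B, C1, C2)
        J = np.column_stack([r**-3 + C1 * r**-P1 + C2 * r**-P2,
                             B * r**-P1,
                             B * r**-P2])
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        # Backtracking keeps the residual norm non-increasing (added safeguard)
        t = 1.0
        while (np.linalg.norm(d - model(theta + t * step, r))
               > np.linalg.norm(res) and t > 1e-8):
            t *= 0.5
        theta = theta + t * step
    return theta
```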
Under the condition that we keep the GR prediction unchanged, we claim that the near-field correction due to the tension of the Sun's interior refractive index is a prerequisite for an accurate model of solar gravitational lensing; see, for example, Eshleman (1979) and Maccone (2009).
Conclusions
The propagation of electromagnetic energy over the fastest paths has been investigated (1) using the standard ray theory and (2) using a novel approach based on the theory of geodesics. The analysis of raypaths showed that there is always a "shadow zone." Moreover, where this zone arises there is a duplication point. These types of artifacts are due to the local character of ray theory and are not present in the geodetic description, which has a global character. In addition, the geodesics bend in the vicinity of the object and the wavefronts are nonorthogonal to the geodetic lines. At a curved boundary of the object, the theory predicts the propagation of surface waves, where both the wavefront and the geodetic lines propagate parallel to the boundary surface. For a spherical object, the geodetic wavefronts are verified using a full 3-D numerical simulation based on a contrast-source integral equation. The conclusion is that the present theory of geodesics offers reliable physical insight into the actual propagation of electromagnetic waves.
The theory of geodesics has consequences for the explanation of the light bending around the Sun; namely, next to Einstein's gravitational tension there is room for an additional refractional tension. With this extended model, historical "radio light" deflection measurements have been investigated. The conclusion is that the model explains these measurements very well. It adds a significant correction to solar gravitational lensing and interstellar radio communication. On 12 August 2018, the Parker Solar Probe mission (http://parkersolarprobe.jhuapl.edu) was launched with dedicated instruments for measuring electromagnetic fields and two-way radio transmissions with the Earth station at different frequencies (Sokol, 2018). This mission will create excellent conditions for collecting the electromagnetic properties of the Sun.
Data Availability Statement
All numerical results can be reproduced by using data and information available in the listed references, tables and figures included in the paper; in particular, the geodetic lines are constructed numerically via a predictor-corrector version of the recursive scheme of (38) of Fokkema and Van den Berg (2019).
Radio Frequency Interference Site Survey for Thai Radio Telescopes
Radio astronomical observations have been increasingly threatened by the march of today's telecommunication and wireless technologies. The performance of radio telescopes is constrained by the fact that astronomical sources are extremely weak. The National Astronomy Research Institute of Thailand (NARIT) has initiated a 5-year project, known as the Radio Astronomy Network and Geodesy for Development (RANGD), which includes the establishment of 40-meter and 13-meter radio telescopes. Possible locations have been narrowed down to three candidates, situated in the northern part of Thailand, where the atmosphere is sufficiently dry and suitable for 22 and 43 GHz observations. Radio Frequency Interference (RFI) measurements were carried out with a DC spectrum analyzer and directional antennas at 1.5 meters above ground, from 20 MHz to 6 GHz with full azimuth coverage. The data from each 3-minute pointing were recorded for both horizontal and vertical polarizations, in maxhold and average modes. The results, which we used to make a preliminary site selection, show signals from typical broadcast and telecommunication services and aeronautical applications. The signal intensity varies according to the presence of nearby population and the topography of the region.
Introduction
In addition to their specifications, environmental conditions are key factors in determining a telescope's performance and therefore limit the research topics of interest. The first key consideration is Radio Frequency Interference (RFI) from telecommunication and wireless technology, where the incident power can be more than 15 orders of magnitude stronger than the brightest radio sources [1]. Strong interference may saturate the receiver system, so that the system response is no longer in the linear regime, and therefore completely prohibit scientific observations. Weaker interference may instead degrade data quality and cause data loss. The Office of the National Broadcasting and Telecommunications Commission of Thailand (NBTC) has fully complied with the International Telecommunication Union (ITU) Region 3 spectrum allocation [2][3], in which a number of spectrum windows have been assigned for radio astronomy observations. However, several preferred frequency bands indicated by ITU-R documentation [4] are not protected by the ITU and NBTC. In addition to the RFI environment, at radio frequencies above ~10 GHz atmospheric conditions also become important, due to the resonant absorption of the rotational molecular bands of water (H2O) at 22 GHz and oxygen (O2) at 60 and 119 GHz; more absorption lines are found at higher frequencies [1]. The water absorption line is the main reason for the candidate locations to be in the northern part of Thailand, where the atmosphere is dry during winter. The oxygen absorption can only be mitigated by choosing a high-altitude site.
The National Astronomical Research Institute of Thailand (NARIT) started the project called "Radio Astronomy Network and Geodesy for Development" (RANGD) in 2016 to expand the potential of science and technology development in the country. The main infrastructure consists of 40-meter and 13-meter radio telescopes and advanced engineering laboratories. Here, we focus on the frequency range from 20 MHz to ~115 GHz, which will be covered by the two radio telescopes. Radio Frequency Interference (RFI) measurements were conducted at the three candidate sites in 2016, as described in Section 2; the results and discussion are given in Section 3.
Measurement
The equipment setup consists of an R&S ZVL6 spectrum analyzer, an R&S HE300 directional antenna (20 MHz to 7.5 GHz) and a low-loss 2-foot RF cable. The settings are summarized in Table 1. The antenna was mounted on a rotator for azimuth-scan automation. For each pointing, i.e. each combination of polarization and azimuth direction, the data were recorded in maxhold mode, where only the peak intensity was recorded, and in average mode, where the values were averaged together. The two modes complement each other: maxhold mode has the advantage of detecting strong intermittent signals, while average mode can detect weak persistent RFI.
ITU documentation [5] defines the spectral flux density (SFD), Sf, as in Equation (1) below, which expresses the relationship between the received power and the antenna gain [5][6]:
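Equation (1) itself is not reproduced in this excerpt. As a hedged sketch of the standard conversion it describes, the effective antenna aperture A_eff = Gλ²/(4π) turns a power reading P in a resolution bandwidth Δf into a flux density S = P/(A_eff·Δf); the function name and the example numbers below are illustrative, not taken from the paper.

```python
import math

def sfd_dbw_m2_hz(p_dbm, gain_dbi, freq_hz, rbw_hz):
    """Convert a received power reading to spectral flux density.

    p_dbm    : power measured by the spectrum analyzer, dBm
    gain_dbi : directional antenna gain at this frequency, dBi
    freq_hz  : center frequency, Hz
    rbw_hz   : resolution bandwidth of the analyzer, Hz

    Uses the effective aperture A_eff = G * lambda^2 / (4 pi), so
    S = P / (A_eff * rbw), returned in dBW/m^2/Hz.
    """
    p_w = 10 ** ((p_dbm - 30) / 10)                   # dBm -> W
    gain_lin = 10 ** (gain_dbi / 10)                  # dBi -> linear
    wavelength = 3e8 / freq_hz                        # m
    a_eff = gain_lin * wavelength**2 / (4 * math.pi)  # m^2
    s = p_w / (a_eff * rbw_hz)                        # W/m^2/Hz
    return 10 * math.log10(s)                         # dBW/m^2/Hz

# Example: -80 dBm in a 100 kHz RBW at 1.42 GHz with a 6 dBi antenna
print(round(sfd_dbw_m2_hz(-80.0, 6.0, 1.42e9, 1e5), 1))
```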
Results and Discussion
The spectral flux density (SFD) is calculated with Equation (1). The SFD plots, which include the data from all azimuth directions, are shown for the three sites in Figure 1 for maxhold mode and in Figure 2 for average mode. Firstly, the noise floor in average mode is lower than that in maxhold mode by approximately 10 dBW/m²/Hz, and hence has a better detection limit for persistent signals. Typical broadcast and telecommunication emissions, i.e. FM (87-108 MHz), TV (510-790 MHz) and mobile (870-960 MHz, 1,805-2,170 MHz and 481 MHz), are present at all sites, with Site A having the strongest and most polluted RFI environment. The level at Site B is slightly better than at Site C in the observing window between ~1,000-1,800 MHz, which is used for spectral-line observations of HI (~1,420 MHz) and OH (1,600-1,700 MHz) and for pulsar observations. This window is normally allocated to aeronautical radar and communication (960-1,400 MHz) and satellite applications (1,525-1,660 MHz). This means the ideal location should not be too close to airports and takeoff/landing paths, while the telescope location makes no difference to interference from geostationary-orbit satellites. Site C appears to have the best overall RFI conditions throughout the frequency range in both maxhold and average modes, and the fact that it is only thirty kilometers from the NARIT head office makes it our best candidate. Although the RFI detrimental threshold recommended in [5] (approximately -230 dBW/m²/Hz) is 80 dBW/m²/Hz lower than our noise floor, such a limit assumes that the RFI source is located in the main beam of the radio telescope; ITU recommendation ITU-R RA.1513-2 clarifies that this case would cause only 2% data loss at 5-degree elevation [7]. More intensive RFI measurements at Site C are planned for the near future.
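To put the -230 dBW/m²/Hz threshold on the scale radio astronomers use, it can be converted to janskys (1 Jy = 10⁻²⁶ W/m²/Hz). The helper below is a small illustrative sketch of that dB-to-linear conversion (the function name is ours):

```python
def dbw_m2_hz_to_jansky(sfd_db):
    """Convert spectral flux density from dBW/m^2/Hz to janskys (1 Jy = 1e-26 W/m^2/Hz)."""
    return 10 ** (sfd_db / 10) / 1e-26

# An RFI level of -230 dBW/m^2/Hz corresponds to 10^-23 W/m^2/Hz, i.e. 1000 Jy,
# which is still far brighter than most astronomical sources.
print(round(dbw_m2_hz_to_jansky(-230.0)))
```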
Study of the effects of burner configuration and jet dynamics on the characteristics of inverse diffusion flames
The effects of the geometric parameters and jet dynamics of a port-array inverse diffusion flame (IDF) burner on the flame characteristics were investigated experimentally using liquefied petroleum gas (LPG) fuel. The geometric parameters are the central air flow area, the fuel jets flow area, the number of fuel jets and the radius of the pitch circle around which the fuel ports are arranged. The jet dynamics are the air and fuel flow rates and their momentum fluxes. Three burners were used for this purpose. The analysis of the IDF jet dynamics showed that two flame fronts form in the entrainment zone of the IDF: one in the co-flowing jet formed by the fuel and the central air jet, and the other in the peripheral submerged jet formed by the fuel jets and the ambient air. Beyond the neck, the central air jet, the fuel jets and the two flame fronts flow as one flame torch. At high air Reynolds number and low fuel Reynolds number, corresponding to primary equivalence ratios near unity, the IDFs produced by the burner of pitch circle radius 10 mm are short, blue, sootless flames. As the air Reynolds number was gradually decreased to a low value corresponding to a primary equivalence ratio of 4.5, the flame changed to a yellow diffusion flame with high soot concentration. The results also showed that the flame characteristics were highly affected by the number of fuel ports: the flame became more bluish and shorter as the number of fuel ports increased to 36.
INTRODUCTION
Flames formed by combustion of gaseous fuels may take two forms: premixed flames or diffusion flames. Premixed flames occur when the fuel and the oxidizer are mixed upstream of the delivery to the combustion zone. Diffusion flames occur when the fuel and oxidizer are separately delivered to the combustion zone (Elmahalawy et al., 2002).
Due to their non-premixed nature, normal diffusion flames have a wide flammability range even in turbulent states, but their high soot loading limits their application in domestic settings where clean combustion is required. Premixed flames are cleaner and burn more intensely, but their operating range is narrow due to flashback or lift-off of the flame (Sze Lip Kit, 2007). These drawbacks of premixed flames, together with their relatively small effective high-temperature heating area and safety requirements, limit their use in industrial applications and impingement heating.
The IDF is a kind of diffusion flame in which an inner air jet is surrounded by outer fuel jets, in either confined or unconfined conditions; it shows no flashback, less soot loading than a normal diffusion flame, low NOx and a wide flammability range (Salem, 2007). These features make IDF burners feasible for, and of growing interest in, industrial heating processes, but few studies have been performed on the IDF configuration compared with the literature on normal diffusion flames.
The majority of the previous studies of IDFs are concerned with flame shape, height, temperature distribution, oxygen, carbon dioxide and carbon monoxide concentrations, as well as NOx and soot emissions. Wu et al. (1984) described a map of different IDF types against different combinations of air jet and fuel jet velocities. The IDF reported by Wu et al. (1984) is a blue bell-shaped laminar flame attached to the air jet exit; at high overall equivalence ratio, an orange-yellow cap develops on top of the blue reaction zone. They also compared laminar normal diffusion flames (NDFs) and IDFs. Huang et al. (1997) conducted similar research on double concentric jets using a central air jet and an annular fuel jet. Takagi et al. (1996) studied experimentally and numerically the difference between normal and inverse laminar diffusion flames, using nitrogen-diluted hydrogen fuel. Partridge et al. (1999) appear to be the first to offer literature on the NO formation characteristics of IDFs. Sze et al. (2006) carried out experiments to investigate the shape, height, temperature distribution and NOx emission of two unconfined IDFs, one with circumferentially arranged ports (CAP) and the other with co-axial (CoA) jets, both using LPG as fuel. They observed that the entrainment zone can be seen only at air Reynolds numbers larger than 2500. The CoA flame is in most cases similar to a diffusion flame. They also reported that a negative pressure region is formed between the jets, resulting in the flow of the fuel jets towards the central air jet within the entrainment zone. Sze et al.
(2004) conducted experiments on an IDF burning butane. The IDF had a central air jet surrounded by 12 circumferentially arranged fuel jets, each 2.4 mm in diameter, with a center-to-center distance of 11.5 mm, similar to that used by Sze et al. (2006). They investigated the temperature distribution, heat transfer characteristics, flame height and species concentrations of an impinging IDF. They reported that the height of the entrainment zone increased slightly with increases in the air Reynolds number and the overall equivalence ratio Φo. They found that the IDF has two regions of maximum temperature around the flame centerline, with the maximum temperature zone appearing close to the top of the entrainment zone. Sze et al. (2006) concluded that the centerline oxygen concentration in CAP flames decreases steeply with flame height until reaching a minimum value, as it is depleted by combustion, and then increases due to the entrainment of atmospheric air into the flame tail, where combustion is almost complete. Dong et al. (2007) stated that seven IDF structures have been observed altogether; their experiments were carried out on an IDF burner as used by Sze et al. (2004, 2006). They found that fuel/air jet entrainment is a crucial factor determining the flow and flame structures of the concentric co-flowing IDF. Mikofski (2004) studied the flame structure and the soot and carbon monoxide (CO) formation in laminar co-flowing co-annular IDFs. Choy et al. (2012) report an experimental investigation of the pollutant emission and noise radiation characteristics of both open and impinging IDFs, produced by five burners of different air port diameter (dair) and air-to-fuel spacing (S). The effects of dair, S, the overall equivalence ratio Φo and the nozzle-to-plate spacing, H, on the pollutant emissions of CO and NOx and on the noise radiation were examined. They concluded that, for each flame, the fuel jets are shifted towards and entrained into the air jet once they flow out of the fuel ports. This is because a low
pressure zone is created around the root of the air jet due to its high velocity. They also concluded that as dair increases, the pale blue outer layer overlapping the deep blue inner reaction cone changes its color, with more yellow present in this layer. They stated that the reason is simply that, at a fixed air flow rate, a smaller dair results in a higher air jet velocity or Reynolds number, which enhances air/fuel mixing and promotes premixed combustion. Mikofski et al. (2007) studied the flame structure of laminar IDFs to gain insight into soot formation and growth in underventilated combustion. In the work of Zhen et al. (2010), the thermal and emission characteristics of a swirl-stabilized turbulent IDF burning LPG were studied.
A study of the effect of the pitch circle distance, S, between the center of the air port and the centers of the concentrically arranged fuel ports, and of the air jet Reynolds number, at constant fuel flow rate, constant air and fuel jet discharge areas, constant fuel Reynolds number and jet velocity, and a constant number of fuel ports, was undertaken by Salem (2007). Four burners having fuel-port pitch circle radii of 10, 15, 20 and 25 mm were used to investigate the effect of the pitch circle distance, S, on the flame characteristics. Each burner had 12 fuel ports of 2 mm diameter each. The 10 mm pitch circle radius burner gave the most favorable flame. He concluded that the nature of the flow of the fuel and air jets in CAP burners makes their flame either an IDF or an NDF: at large center-to-center spacing or low air Reynolds number, the flame becomes a normal diffusion flame.
From the above studies, it is found that the IDF characteristics are affected by the jet dynamics and the burner configuration. The jet dynamics are defined by the central air and fuel jet momentum fluxes, which for any given burner determine the primary equivalence ratio (Φp) and the fuel and air jet Reynolds numbers. The burner configuration parameters are the pitch circle distance, the number of fuel ports, and the fuel and air flow areas. Moreover, no study has been performed on the number of fuel ports, and few studies have examined the effect of changing the pitch circle distance on the flame structure. Therefore, the present investigation is concerned with the theoretical and experimental study of the influence of the geometry of port-array IDF burners and the jet dynamics on the nature of their free jet flames. The development of the co-flowing and submerged jet flows is discussed. The formation of a fuel-air mixture boundary layer in both types of jet flow and the associated flame fronts are presented. The ambient air entrained by the submerged jet, its role and its magnitude are investigated. The effect of the number of fuel jets and of the variation of the air jet Reynolds number, and consequently the primary equivalence ratio, on these characteristics is given while keeping both the total fuel flow rate and the fuel jet velocity unchanged. The results of the set of burners whose pitch circle radius is 10 mm are presented in this paper. The results are discussed against the background of the theory of heat, momentum and mass transfer in jet flows, to cast some light on the effect of the burner geometry and the jet dynamics on the characteristics of IDFs.
EXPERIMENTAL SETUP
The experimental setup is shown schematically in Figure 1. The burner consists of two parts, namely the burner head, which is made of stainless steel plate, and the fuel chamber, as shown in Figure 2.
Gaseous fuel LPG (60% C4H10 and 40% C3H8 by volume) was fed into the gas burner from a 37-liter LPG bottle at a gauge pressure of (5000 ± 100) Pa, after passing through a pressure reducer valve fitted at the LPG bottle outlet. This valve is used to keep the outlet gaseous fuel pressure at 3000 Pa. The feed fuel line is equipped with a calibrated pressure gauge of 6000 Pa maximum pressure, placed after the pressure reducer valve (Figure 1). A regulator valve is used to control the fuel flow rate. The fuel delivery line is constructed from a seamless pipe of 12 mm diameter, whose end is connected to a calibrated fuel rotameter used to measure the fuel flow rate. Air is forced through a pipe of 15 mm inner diameter and 125 cm length into the IDF burner from a 1000-liter compressor tank (Figure 1). The IDF is produced by supplying the air to the burner through one central port. The air mass flow rate is measured using a calibrated rotameter. The gaseous fuel supplied to the burner issues through a number of equal-diameter fuel ports. The fuel ports of each burner are equally spaced around the air port on the same pitch circle. The inlet temperatures of the air and the fuel are measured by two pairs of calibrated type J thermocouples; the measured inlet temperatures were 28°C and 25°C, respectively.
Three sets of IDF burners, with three different pitch circle radii, were used. The distances, S, which are the pitch circle radii, are 10, 15 and 20 mm (Figure 3); the corresponding pitch circle diameters are 20, 30 and 40 mm. Each set consists of three IDF burners that have the same pitch circle radius but different numbers of fuel ports, being either 12, 24 or 36. The total fuel flow area, which is the sum of the areas of all the fuel ports of a burner, was the same for all burners regardless of the number of ports. The air flow area was also unchanged, as the air outlet port was the same 6 mm for all burners, as shown in Table 1. The total fuel flow area in this work was 1.25 times that of the air port. The fuel flow area was equal to that used by Salem (2007), while the air flow area used by Salem (2007) is 1.36 times the area in this study.
In the present study, the fuel flow area was kept constant by decreasing the port diameter as the number of ports was increased, as shown in Table 1. The fuel ports were opened by a laser drilling machine.
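The constant-area constraint above fixes the port diameter once the port count is chosen: each port's area is the total divided by N, so the diameter scales as 1/√N. The sketch below illustrates this, deriving the total fuel area from the stated 6 mm air port and the 1.25 area ratio; the resulting diameters are our illustration, not the values of Table 1 (which is not reproduced here).

```python
import math

def port_diameter(n_ports, total_area_mm2):
    """Diameter (mm) of each fuel port when n_ports share a fixed total flow area."""
    area_each = total_area_mm2 / n_ports
    return 2 * math.sqrt(area_each / math.pi)

# Total fuel flow area = 1.25 x the 6 mm air port area, as stated in the text
air_area = math.pi * (6.0 / 2) ** 2   # mm^2
fuel_area = 1.25 * air_area           # mm^2

# Diameter shrinks as 1/sqrt(N) while the total area stays fixed
for n in (12, 24, 36):
    print(n, round(port_diameter(n, fuel_area), 2))
```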
The appearance of the IDF under different operating conditions was recorded using a 12-megapixel digital camera with a 50 frames per second imaging rate. The position of the camera relative to the flame was the same during all shots in order to keep the height and size scales the same for all shots. The flame temperature was measured using type S (Pt-Pt/Rh 10%) thermocouples with a wire diameter of 0.1 mm. The diameter of the ceramic tube in which the thermocouple wires were enclosed is 3 mm, and the bead size is almost 200 µm, to minimize flame disturbance during measurements. During the experiments, the thermocouple was aligned with the burner such that it could be moved in a 2-D plane in order to obtain the flame temperature distribution in the axial and radial directions. The thermocouple bead was aligned co-axially with the air jet centerline before each measurement. The reported temperatures have been corrected for radiation loss with the method suggested by Bradley and Matthews (1968); the maximum temperature correction in this work was 144°C. The gas species concentrations, on a dry basis, were measured using a flue-gas analyzer, model Land Lancom III.
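The Bradley and Matthews (1968) correction is cited but not reproduced in the text. A common form of such a radiation correction balances convective heat gain against radiative loss from the bead, h(T_gas − T_bead) = εσ(T_bead⁴ − T_surr⁴) with h = Nu·k/d; the sketch below uses this generic energy balance with illustrative property values (emissivity, Nusselt number and gas conductivity are assumptions, not the paper's), giving a correction of the same order as the 144°C maximum quoted above.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiation_corrected_temp(t_bead_k, t_surr_k, emissivity, bead_d_m, nu, k_gas):
    """Gas temperature inferred from a bead reading via a steady-state
    energy balance: h*(T_gas - T_bead) = eps*sigma*(T_bead^4 - T_surr^4),
    with the convective coefficient h = Nu * k_gas / d."""
    h = nu * k_gas / bead_d_m
    correction = emissivity * SIGMA * (t_bead_k**4 - t_surr_k**4) / h
    return t_bead_k + correction

# Illustrative values (not from the paper): 200 um bead, eps = 0.2,
# Nu = 2 (small sphere), k_gas = 0.10 W/m/K near flame temperature
t_gas = radiation_corrected_temp(1600.0, 300.0, 0.2, 200e-6, 2.0, 0.10)
print(round(t_gas - 1600.0, 1))   # radiation correction in kelvin
```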
All tests were carried out with a fixed fuel flow rate of 0.025 L/s (25×10⁻⁶ m³/s). The fuel Reynolds number varied due to the decrease of the fuel port diameter as the number of ports was increased. The decrease of the fuel Reynolds number is limited, being 377, 266 and 218 for the 12, 24 and 36 fuel ports, respectively, compared with the change in the air jet Reynolds number, which varied from 10446 to 4703, as shown in Table 2. Four experiments were carried out on each of the burner groups at four primary equivalence ratios of 0.9, 1, 1.2 and 2, corresponding to air jet Reynolds numbers of 10446, 9379, 7831 and 4703, respectively. Other experiments were carried out as needed to investigate the flame characteristics at operating conditions outside this range of air or fuel flow rates.
As the fuel flow rate during these tests was kept constant, any change in the air Reynolds number necessarily leads to a corresponding variation of the primary equivalence ratio Φp. In this case, any change of the flame characteristics as the central air Reynolds number is varied is the combined effect of the variation of both the central air Reynolds number and the primary equivalence ratio Φp.
The base case selected to present the subject of this paper, which is concerned with the flame nature, the co-flowing and submerged fuel-air mixture boundary layers, the flame fronts and the entrained ambient air at different numbers of fuel ports, is the set of results obtained with the three burners whose pitch circle radius is 10 mm. Reference to results obtained with other burners is given where necessary.
To ensure repeatability, each experiment was conducted twice, and the averaged values are reported and used for the error analysis. The error analysis was performed with the method of Kline and McClintock (1953). The precision of all flame height measurements was ±0.5 mm, and the precision of the air flow rate measurements was ±10%. With a 95% confidence level, the minimum and maximum uncertainties in the flame temperature measurements are 2.6% and 10.4%, respectively.
The minimum and maximum uncertainties of the CO measurement are 2.1% and 9.5%, respectively. The uncertainty of the NO measurement ranges from 2.3% to 12.3%. The minimum uncertainties of CO2 and O2 are 1.1% and 1.3%, respectively, while their maximum uncertainties are 8.3% and 9.2%, respectively.
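The Kline and McClintock method referenced above propagates instrument uncertainties into a derived result by a root-sum-square of sensitivity-weighted terms, w_R = sqrt(Σ(∂R/∂xᵢ · wᵢ)²). The sketch below applies it to an equivalence-ratio-like quantity; the numerical values are illustrative (only the ±10% air flow uncertainty comes from the text), not the paper's actual error budget.

```python
import math

def kline_mcclintock(partials_and_uncerts):
    """Root-sum-square uncertainty: w_R = sqrt(sum((dR/dx_i * w_i)^2))."""
    return math.sqrt(sum((dRdx * w) ** 2 for dRdx, w in partials_and_uncerts))

# Illustrative: phi proportional to F/A (fuel over air flow rate), so
# dphi/dF = phi/F and dphi/dA = -phi/A, and relative uncertainties
# combine in quadrature.
phi, F, A = 1.0, 0.025, 0.6          # example values, not the paper's
w_F, w_A = 0.02 * F, 0.10 * A        # assumed 2% fuel; 10% air matches the text
w_phi = kline_mcclintock([(phi / F, w_F), (-phi / A, w_A)])
print(round(100 * w_phi / phi, 1))   # percent uncertainty in phi
```

Because the terms add in quadrature, the 10% air-flow uncertainty dominates the combined result.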
Relationship between burner design parameters and jet dynamics
The influence of the burner design parameters on the value of the Reynolds number is examined bearing in mind that the Reynolds number can be written in the form Re = 4ṁ/(πdμ), where ṁ is the mass flow rate through a port, d the port diameter and μ the dynamic viscosity. Re can thus be raised for the same fluid by decreasing the diameter while keeping the mass flow rate constant, or by increasing the mass flow rate while keeping the port diameter unchanged; a decrease of the Reynolds number can be achieved in a similar manner. The effect of the variation of the Reynolds number on the jet momentum flux and on the flame characteristics, such as the flame structure, its temperature distribution and the combustion species concentrations, will not be the same in the two cases, even if the resulting value of the Reynolds number is the same. Similarly, the primary equivalence ratio, Φp, may be set to a particular value by varying the fuel flow rate, by changing the central air flow rate, or by varying both; here also, the nature of the flame will not be the same in each case.
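The diameter trade-off described above can be checked numerically: at fixed volumetric flow, Re = 4Q/(πdν) is inversely proportional to the port diameter. The kinematic viscosity below is an assumed room-temperature value for LPG, not taken from the paper; only the 25×10⁻⁶ m³/s flow rate comes from the text.

```python
import math

def reynolds_from_volume_flow(q_m3_s, d_m, nu_m2_s):
    """Re = v*d/nu with v = Q / (pi d^2 / 4), i.e. Re = 4Q / (pi d nu)."""
    return 4 * q_m3_s / (math.pi * d_m * nu_m2_s)

# Illustrative check: halving the port diameter at fixed flow rate doubles Re.
nu_lpg = 4.0e-6    # m^2/s, assumed kinematic viscosity (not from the paper)
q_fuel = 25e-6     # m^3/s, the stated total fuel flow rate
re1 = reynolds_from_volume_flow(q_fuel, 0.002, nu_lpg)
re2 = reynolds_from_volume_flow(q_fuel, 0.001, nu_lpg)
print(round(re2 / re1, 2))   # -> 2.0
```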
The distance between the centerline of the central air jet and the centerlines of the peripherally arranged fuel ports, S, as well as the number of fuel ports, Nf, strongly influence the characteristics of the IDF through their effect on the momentum and mass transport between the air and fuel jets, and on the peripheral surface area of the fuel jets in contact with both the central air jet and the ambient air. A lower pitch circle distance, S, allows better entrainment and mixing of the co-flowing air and fuel jets, while an increase of the fuel jets' peripheral area increases the rate of central air-fuel mixing; in this case, better mixing of the entrained ambient air is also achieved.
The above discussion clearly shows that the burner construction, the air and fuel flow rates, their momentum fluxes and their Reynolds numbers are strongly interrelated. Together, these parameters determine what type of flame is produced under any particular set of burner configuration and fuel and air flow rates. For these reasons, the influence of the CAP burner design parameters, as well as the fuel and central air Reynolds numbers and the primary equivalence ratio, Φp, on the nature of IDFs should be examined on the basis of the currently well-developed theory of the dynamics of jet flows. Such an analysis is necessary because IDFs encompass, at the same time, two distinct types of jets, namely the co-flowing jet stream and the submerged jet. A flame front is formed at the air-fuel interfacial surface of each jet in the entrainment zone, a feature which normal diffusion flames (NDFs) do not possess.
Co-flowing central air and fuel jets
The central air jet and each of the fuel jets, at their plane of emergence from the burner top surface, are separated by a distance, ds, which is equal to the center-to-center distance, S, minus half the sum of the air jet and fuel jet exit diameters.
As the two jets leave the ports, they immediately expand transversally. After a very short distance from the plane of emergence, the two jets adjoin and flow in the downstream direction along the jet path, forming a co-flowing jet (Schlichting, 1960; Abramovich, 1963). As the two jets emerge in the same direction at two different velocities, an unstable surface with a velocity discontinuity is formed at the boundary of the two jets. This interfacial instability gives rise to the formation of eddies that move randomly, transversally and along the flow, causing high momentum exchange between the two co-flowing streams and a smoothing of the velocity profile. Eddy formation continues downstream, with the flow, beyond the point at which the two jets merge. This gives rise to a continuing turbulent exchange of momentum, mass and heat at the interface of the two streams, leading to the formation of a fuel-air mixture boundary layer of finite thickness at the interface of the air and fuel streams. This boundary layer is often termed the shear layer.
The co-flowing boundary layer thickness at the point of merging is zero, and it increases as the jets move upward from the point of merging. In this work, this layer will be referred to briefly as the co-flowing, or inner, boundary layer. Figure 4 depicts the velocity distribution in this case, though the boundary layer thickness will be much less than that shown in Figure 4. Point (a) identifies the inner, i.e. air-side, edge of this boundary layer, while point (f) identifies the outer, i.e. fuel-side, edge of the co-flowing boundary layer. The transversal distances b1 and b2 in Figure 4b represent the portions of the boundary layer on the air jet and fuel jet sides, respectively.
In the initial region of the co-flowing boundary layer, the transport of air into the developing mixture boundary layer is by turbulent mass transfer. The laminar as well as the turbulent boundary layer equations for momentum, energy and mass transport, for which similarity solutions exist, apply in this case and give good results for the dimensionless distributions of both temperature and concentration in jet flows of gaseous mixtures.
The air concentration on the fuel-stream side of the boundary layer is not transversally or axially uniform, and the fuel concentration on the air side is likewise non-uniform. The boundary conditions shown in Figure 4b for the concentrations of the fuel and the air at points (a) and (f) of the boundary layer imply that the fuel and air concentrations in the air-fuel mixture boundary layer are mirror images. The fuel-air mixture in the boundary layer is lean in the vicinity of the air jet, while it is rich in the neighborhood of the fuel jets. Therefore, somewhere in the co-flowing boundary layer there will exist a region in which a combustible air-fuel mixture is formed. This region extends between the central air port exit and the fuel jet ports. Once a sufficiently preheated combustible mixture is formed anywhere in this boundary layer, combustion takes place at that location in the co-flowing jet boundary layer.
Figures 5a, b and c show the radial temperature distribution at different flame heights above the burner rim for the B10N12 burner at different primary equivalence ratios, Φp. The thick arrow at the bottom of each figure refers to the position of the fuel jet ports' centerline; the dotted vertical line represents the ambient-air-side edge of the fuel port. In the analysis of the IDF centerline temperature distribution, it was found that, for a small vertical distance of about 20 mm from the burner tip, the flame temperature is lowest at the flame centerline. Beyond this height, the temperature increases with height to reach a peak value in the reaction zone before the flame end, and then begins to decrease as the post-flame zone is approached. These figures show the presence of high temperature zones in the region between the central air port and the fuel jet ports at equivalence ratios of 0.9, 1.2 and 2, at low distances near the burner tip, from Z = 1.5 mm up to Z = 20 mm, which represents the entrainment zone. The high temperature zone is 4 mm from the central air jet for the high jet momentum flux case at Reynolds number 10446, corresponding to Φp = 0.9, as indicated by the thin arrow in Figure 5a. This is because, when the interference between the air and fuel jets becomes strong enough at high air Reynolds number, the fuel jets are sucked into the co-flowing inner zone formed near the air jet. This zone shifts to almost halfway between the central air jet and the fuel jets, at a radial distance of 6 mm, for the moderate Reynolds number of 7831 corresponding to Φp = 1.2, as shown in Figure 5b. It moves nearer to the fuel jets when the interference is weak: at the low air Reynolds number of 4702, corresponding to Φp = 2, the air and fuel jets develop separately, each behaving as a single jet, as shown in Figure 5c. As the flame height increases above Z ≥ 20 mm, a gradual heating up of the air core takes place. This heating up continues in the
upward direction, and the centerline flame temperature increases, as shown in Figures 5a, b and c, from a flame height of about Z = 40 mm. At this flame height, the maximum radial flame temperature is close to the air jet centerline.
The lower heat release rate (Figure 5) on the central-air side of the co-flowing jet, which lies to the left of the point of the highest temperatures, is due to the lower fuel concentration as the air port is approached, the lower mixture temperature, and the presence of more air there, which causes an additional decrease of the jet temperature. In fact, for a system of multiple jets, the flow field is very complicated.
The submerged fuel-ambient air jet
Meanwhile, the fuel jets, which emerge into the stagnant ambient air, form free submerged jets. The momentum efflux from the fuel ports causes a pressure drop around the fuel jets. This pressure drop can be evaluated by noting that the stagnant ambient air pressure away from the jets is p∞, that of the entrained air at the fuel jet emergence is p, and that the entrained ambient air, as it adjoins the fuel jets, moves at the fuel jet velocity by virtue of the no-slip condition; then:

p∞ − p = ½ ρair v²fuel    (3)

where ρair and vfuel are the air density and the fuel jet velocity, respectively. As a result of this pressure drop, the ambient air is drawn to the base of the submerged jets and is entrained with them due to the momentum exchange caused by turbulent friction at the interfacial surface between the submerged jets and the ambient air. The submerged jets also expand as they flow downstream, continually entraining ambient air. Due to this entrainment, the mass of the jets increases as the jets spread and flow downstream. Turbulent mixing and turbulent mass transfer at the interface of the submerged jet and the ambient air lead to the formation of another air-fuel mixture boundary layer, in which the concentrations of both species vary transversally and in the downstream direction. This boundary layer differs from the second flame front mentioned by Abramovich (1963), as each of them is formed by a different driving potential. Figure 6 presents an enlarged view of the velocity distribution in the initial region of the entrainment zone of the developing submerged air-fuel mixture boundary layer. Point (a) in this figure represents the air-side end of this boundary layer, while point (f) is its fuel-side end. In this case also, the dimensionless velocity distribution is given by a similarity solution. At the exit of the fuel jets, as the fuel-air mixture boundary layer develops, the dimensionless concentrations of both the air and the fuel in the submerged boundary layer mixture will also show a similarity solution analogous to that of the velocity in this boundary layer. The strength of the air-fuel mixture in this boundary layer depends upon the fuel jet momentum flux and the scale of turbulence there. In the present case, the rate of burning will be much lower than in the co-flowing boundary layer because the fuel jet velocity, and consequently its momentum flux, is low.
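Equation (3) can be evaluated numerically. The following is a minimal sketch; the air density and fuel jet velocity used are illustrative assumptions, not measured values from this work:

```python
def jet_pressure_drop(rho_air, v_fuel):
    """Pressure drop around the submerged fuel jets, Eq. (3):
    p_inf - p = 0.5 * rho_air * v_fuel**2."""
    return 0.5 * rho_air * v_fuel ** 2

# Illustrative values (assumed): ambient air density and a low
# fuel-jet exit velocity, consistent with the weak fuel jets described.
rho_air = 1.2   # kg/m^3
v_fuel = 2.0    # m/s
print(jet_pressure_drop(rho_air, v_fuel), "Pa")  # 2.4 Pa
```

The small magnitude of this pressure drop, relative to that of the central air jet, is what makes the fuel-jet contribution negligible in the momentum-flux comparison later in the paper.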
The temperature distribution in Figure 5 also shows that, despite the quenching effect of the entrained ambient air, the temperature at the center of the fuel jets, and also that on the ambient-air side of the fuel jets, is much higher than at the center of the air jet. As mentioned above, the rate of fuel burning at this flame front, and consequently the rate of heat release there, is much less than that of the co-flowing boundary layer. Therefore the temperature in this region is not as high as that midway between the fuel and the central air jets. Such a high temperature on the ambient-air side of the flame, which also extends some millimeters to the right of the fuel jet port into the ambient air region, is due to the existence of the flame front formed by the fuel-air mixture boundary layer of the submerged fuel-ambient air jet.
Formation of the flame fronts
In both the co-flowing and the submerged boundary layers, the steady-state concentration of the fuel is maximum at the fuel-side end of the boundary layers, that is, at point (f) (Figures 4 and 6), and it is minimum at the air-side end of the boundary layer, at point (a) in the same figures. As mentioned before, an analogy between the dimensionless concentrations of the air and the fuel in the fuel-air mixture exists in the initial entrainment region of the jet during the stage of boundary layer formation. The dimensionless distribution of the air concentration, as the boundary conditions at points (a) and (f) in Figures 4 and 6 imply, is nearly the inverse of that of the fuel, a mirror image of it.
This reveals that somewhere in the two co-flowing and submerged boundary layers of the fuel-air mixture, two circumferential zones having combustible mixtures will be formed; one of them is in the co-flowing boundary layer at the interfacial surface between the two co-flowing jets, and the second is in the submerged boundary layer. This means that at least two flame fronts will exist in such IDF flames. As a sufficiently heated combustible mixture in any of these boundary layers is formed, burning of the fuel-air mixture takes place there.
Each of the air-fuel mixture boundary layers consists of two regions: a preheat region, in which the reactants are heated by the turbulent eddies of the hot combustion products to almost the combustion temperature and where a negligible amount of the reactants is burned, and a much thinner reaction region, in which the bulk of the combustible mixture is burned. The latter zone is the flame front, of very small thickness.
Because of the presence of the center-to-center distance, ds, given by Equation 2, between the air and the fuel jets, the formation of the outer fuel-air boundary layer starts before that of the co-flowing fuel-air mixture boundary layer, regardless of its relative strength, which depends upon the fuel jet momentum flux. The time lag between them is very short, due to the narrow separating distance between the co-flowing jets and the short distance after which the two co-flowing jets merge as they immediately expand.
In the steady state, the initiation of combustion in this zone may precede that at the inner fuel-air mixture boundary layer of the co-flowing jets, if the fuel jet momentum flux is high enough to produce a combustible mixture in the submerged boundary layer, regardless of its strength, before that in the co-flowing boundary layer.
The two photographs shown in Figure 7 for the IDF entrainment zone, produced at air jet Reynolds numbers of 13341 and 17788 with a relatively high fuel jet Reynolds number of 1256 and primary equivalence ratios of φp = 2.34 and 1.78, clearly depict these flame fronts: an inner co-flowing flame front enclosing a cold air core, and the submerged flame front, which extends into the ambient air beyond the fuel jet exit ports. These flame fronts merge together at the flame neck, as shown in the figure. The postulation presented above regarding the formation of two flame fronts, one in the submerged boundary layer at the outer envelope of the fuel jets and the other in the co-flowing air-fuel mixture boundary layer at the central air-fuel interface, shows up in the measured temperature distribution given in Figure 5. This figure shows temperatures at the outer boundary layer of the flame near the flame base in the entrainment zone of about 300 to 550°C, where the submerged boundary layer of the jet is formed. A high-temperature zone between the central air port and the fuel jets also appears in this figure, where the temperature is about 700°C, while the temperature at the center of the port of the central air jet at the base of the flame, Z = 1.5 mm, is much lower than the flame temperatures at the two mentioned zones. This observation confirms the presence of the two flame fronts.
Effect of the number of fuel jets
Among the burner design parameters that affect the characteristics of inverse diffusion flames is the number of the fuel ports. As the number of the fuel jets is changed from Nf1 to a higher number Nf2, while the total fuel port area, the total mass flow rate of the fuel and the fuel jet velocity are all kept constant, the total fuel momentum and the fuel momentum flux remain the same. In this case, the total circumferential surface area of the fuel jets at their emergence from the fuel ports, per unit height of the flame, will increase as the number of the fuel ports increases.
In this case, if the interfacial surface area per unit height for the higher number of fuel ports is AfN2 and that for the lower number is AfN1, the ratio (AfN2/AfN1) will be higher than unity and is given by:

AfN2/AfN1 = √(Nf2/Nf1)

since, at constant total port area, the port diameter scales as √(Nf1/Nf2). The interfacial surface area between the fuel jets and the central air jet, as well as that between the fuel jets and the ambient air, at which the jets exchange momentum, heat and mass with each of the central and ambient air streams, therefore becomes larger as the number of fuel jets increases. As a consequence, the fuel-air mixture boundary layers that develop, as described before, at the fuel-central air interface and at the fuel-ambient air interface become larger. As a result, the combustible mixture zones form faster and over a larger area.
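This scaling follows from holding the total port area and jet velocity constant: the port diameter shrinks as the square root of the port-count ratio, so the circumferential area per unit height grows as the square root of the inverse ratio. A small sketch (the function names are ours, for illustration):

```python
import math

def port_diameter(d1, n1, n2):
    """New port diameter when n1 ports of diameter d1 are replaced by
    n2 ports at constant total flow area: n1*d1**2 = n2*d2**2."""
    return d1 * math.sqrt(n1 / n2)

def area_ratio(n1, n2):
    """A_fN2 / A_fN1: circumferential jet surface per unit height scales
    as N*d, and d scales as sqrt(n1/n2), giving sqrt(n2/n1)."""
    return math.sqrt(n2 / n1)

# Going from the 12-port to the 36-port burner on the same pitch circle:
print(round(area_ratio(12, 36), 3))  # 1.732, i.e. sqrt(3)
```

So tripling the number of ports at constant total area increases the momentum-exchanging surface by about 73%, consistent with the faster mixing reported for the 36-port burner.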
The inter-air-fuel jet surfaces at the jets' emergence, at which the discontinuity in the jet velocity exists as described before, also increase, giving rise to more interfacial instabilities of the flow. The increase of the region of flow instability leads to an increase in the turbulent eddies there, which, once formed, move downstream along the flame height and decay. The agitation caused by these eddies as they form, move and decay randomly enhances the turbulent momentum, heat and mass transport in the entrainment zone. The ultimate effect of such flow dynamics is a high rate of formation of well-mixed combustible fuel-air mixtures of higher burning rates.
For a particular burner, these effects take place in the IDFs at different scales depending upon the fuel and the air momentum fluxes, their ratio, the value of the pitch circle radii, the number of the fuel ports and the flow areas of the central air and fuel ports.
The influence of the number of the fuel ports on the flame color is noticed (Figures 8 and 9).
The flame colors produced by the higher numbers of fuel jets, 24 and 36, which have lower fuel Reynolds numbers of 266 and 218 respectively, shown in Figures 8 and 9 at two different equivalence ratios of 0.9 and 1.2, are a darker blue than those of the flames produced by the burner with fewer jets, B10N12, located on the same pitch circle radius of 10 mm, although the fuel jet Reynolds number of the 12-port burner is higher. This is a result of the better mixing and the rapid formation of combustible fuel-air mixtures of higher burning rates by the larger number of fuel ports, as discussed above.
The presented experiments showed that a large number of fuel jets issuing from smaller-diameter ports is more efficient than a lower number of fuel jets of larger diameters and higher Reynolds number located on the same pitch circle, with the fuel emerging at the same total flow rate and the same velocity.
The 12-port flame in Figure 8a is taller and thicker than the 24-port flame of Figure 8b, and both of them are longer and thicker than the flame produced by the 36-port burner. This also indicates that the larger number of fuel ports results in higher burning rates that consume the gaseous fuel faster, producing shorter and thinner flames.
Effect of each of the air and fuel jet momentum flux and their Reynolds numbers on the nature of the flame
If the central air jet momentum flux is quite high and the fuel jet momentum flux is low, which is the case in the present work, the produced flame will be an IDF having a flame neck, which is a characteristic feature of these flames. The flames shown in Figures 8 and 9 typify these flames. These figures represent the IDFs at high air Reynolds numbers obtained from the burners B10N12, B10N24 and B10N36, respectively. The air Reynolds number in these flames takes the high values of 10466 and 7831, while the fuel Reynolds numbers are low, equal to 377, 266 and 218, and the primary equivalence ratios for these flames are 0.9 and 1.2. The fuel jet outlet velocity and the fuel mass flow rate are the same, since the total fuel jet area of the flames in Figures 8 and 9 is the same. The flame color in the figures, except that of the left flame in Figure 9, is blue and sootless, indicating complete combustion. In these cases, at such a high air Reynolds number, the turbulent momentum and mass transfer cause the formation of a strong, well-mixed co-flowing fuel-air boundary layer of higher burning rate. In this case the produced flame is an IDF.
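The air and fuel Reynolds numbers quoted throughout are port-based. Assuming the standard circular-port form Re = 4ṁ/(πµd), which is consistent with the quantities named in the Figure 2 caption (mass flow rate, viscosity, port inner diameter), a small helper can relate flow rate and Reynolds number; the port diameter and viscosity below are illustrative assumptions, not values from this work:

```python
import math

def port_reynolds(m_dot, mu, d):
    """Reynolds number of a jet from a circular port of diameter d,
    assuming the standard pipe-flow form Re = 4*m_dot / (pi * mu * d)."""
    return 4.0 * m_dot / (math.pi * mu * d)

def mass_flow_for_re(re, mu, d):
    """Inverse: mass flow rate needed to reach a target Reynolds number."""
    return re * math.pi * mu * d / 4.0

# Illustrative (assumed) values: air viscosity 1.8e-5 Pa.s and a 10 mm
# central air port.
m = mass_flow_for_re(10466, 1.8e-5, 0.010)
print(f"{m * 1000:.2f} g/s of air for Re = 10466")
```

Such a helper makes it easy to see that, at fixed port geometry and fluid properties, Reynolds number and mass flow rate are directly proportional.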
This type of IDF continued to form as the central air mass flow, and consequently the air Reynolds number, was decreased while the primary equivalence ratio, φp, increased, until a low Reynolds number of 2081 was reached. In this process, the flame color changed continuously from bluish and soot-free at high air Reynolds number to a longer, yellow, sooty flame that lost the appearance of an IDF at the low air Reynolds number, or high primary equivalence ratio, φp = 4.5. Figures 10 and 11, obtained at Re_air = 3762 and 2681 and corresponding to primary equivalence ratios of 2.5 and 3.5 respectively, represent this mode of flame variation.
If the momentum of the two jets is low, a yellow, sooty, neckless flame, an NDF, is formed, like that produced by low-momentum free jet diffusion flames. The IDF in this case may retain, by definition, its name, but loses the shape, structure and characteristics of IDF flames and takes the appearance of an NDF. Figure 12 shows a schematic sketch of this case. The photos in this figure, obtained by gradually decreasing the central air jet mass flow rate until this flame was formed at a low air Reynolds number of 2081, a fuel Reynolds number of 218 and a correspondingly high primary equivalence ratio of φp = 4.5, clearly depict such an NDF. Such NDFs at these Reynolds number levels occurred at a pitch circle radius of 10 mm for all fuel jet numbers: 12, 24 and 36.
The flame neck
As the central air jet issues from its port at a high rate of momentum flux, as in the high Reynolds number range of the present work, a pressure drop in the zone around the air jet, superimposed upon that created by the fuel jets, will take place. This pressure drop is much higher than that caused by the fuel jets because of the high air jet momentum flux compared to that of the fuel jets.
The ratio of the air jet momentum flux to the momentum flux of the fuel jets in this work varies from 195 at the lowest air Reynolds number of 4702 to 964 at the highest air Reynolds number of 10466, as shown in Table 2. According to Equation 3, the pressure drop caused by the issuing central air jet is 964 times that created by the fuel jets at the highest air Reynolds number of 10466, and 195 times that of the fuel jets at the low air Reynolds number of 4702.
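At fixed port geometry and fluid properties, a jet's momentum flux scales with the square of its velocity, i.e. with Re². On that basis the two quoted ratios are mutually consistent to within rounding, as a quick check shows (a sketch; the scaling assumption is ours):

```python
def scale_momentum_ratio(ratio_ref, re_ref, re_new):
    """Scale a known air-to-fuel momentum-flux ratio from one air
    Reynolds number to another, assuming fixed fuel jet conditions and
    momentum flux proportional to Re**2."""
    return ratio_ref * (re_new / re_ref) ** 2

# Reference: ratio 195 at Re_air = 4702 (Table 2). Scaling to
# Re_air = 10466 recovers roughly the quoted value of 964.
print(round(scale_momentum_ratio(195, 4702, 10466)))
```

The scaled value comes out near 966, within about 0.2% of the tabulated 964, which supports the Re² interpretation of the momentum-flux ratio.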
Under these conditions a negative pressure gradient exists between the fuel jets and the central air jet; the pressure in the vicinity of the central air jet outlet is the lowest. As a result the fuel streams are deflected towards the central air jet, as represented by the warped streamlines of the fuel jets in Figure 13.
The fuel is therefore drafted to merge with the air jet, forming the entrainment zone, which ends with the inverted bowl-shaped flame base. A neck of least cross-sectional area is formed where the fuel and the air jets completely merge. The distance at which the two jets merge becomes shorter as the air jet momentum flux increases, and vice versa. During this process, mixing of the central air and the fuel takes place. The extent of this mixing depends upon the geometry of the burner and the jet dynamics. Beyond the neck zone the two jets combine to form a single plume. After merging at the neck, the flame expands transversally as it flows upward, attaining a larger diameter over part of the flame height.
Thereafter the flame size begins to diminish until it eventually disappears when the fuel is wholly consumed. The shape of the flame therefore takes the form shown in Figure 14.
Effect of the number of fuel ports, central air jet Reynolds number and primary equivalence ratio on the flame neck
The inverted bowl-shaped necked entrainment zone appears in the photographs of the flames at the high air Reynolds number of 10466, or primary equivalence ratio φp = 0.9 (Figure 8). The formation of the flame neck for the three burners B10N12, B10N24 and B10N36 at the high and medium air Reynolds numbers of this investigation, 9379 and 7831, for which the corresponding primary equivalence ratios are 1 and 1.2, is presented in Figure 15, while that formed at the lower Reynolds number of 4702 and a higher primary equivalence ratio, φp = 2, is shown in Figure 16.
The photographs of the flame entrainment zone given in Figures 15 and 16 show a blue flame in the bottom portion of the entrainment zone, followed by a yellow flame that extends to the neck. The effect of the number of fuel ports, as well as of the air Reynolds number and the associated primary equivalence ratio, φp, on the shape, size and color of the flame in the entrainment zone is demonstrated in Figures 15 and 16. The yellow color in the top portion of the photos in Figure 15, at the higher Reynolds number of 9379 and φp = 1, occupies a smaller area than the yellow area on the neck of the flames at the intermediate Reynolds number of 7831 and φp = 1.2 shown in Figure 16. The photos in Figures 15 and 16 demonstrate that, for the burner B10, the entrainment zone height decreases as the air Reynolds number increases. Stated differently, Figures 15 and 16 reveal that as the primary equivalence ratio, φp, increases, the entrainment zone height (the neck height) also increases.
On the other hand, Figure 17, which gives the variation of the entrainment zone height with the primary equivalence ratio, φp, for the different numbers of fuel ports 12, 24 and 36, indicates that the height of the entrainment zone increases with increasing primary equivalence ratio. This is due to the lower rate of momentum exchange at the lower air momentum flux, which results in a decrease of the rate of mixing and of the rate of formation of the combustible mixture in the air-fuel boundary layers.
From Figure 17 it can also be concluded that the height of the entrainment zone decreases as the number of fuel ports increases. This effect is due to the higher momentum exchange at the interfacial surface area of the fuel jets and the central air, discussed before. This higher momentum exchange brings the fuel jets tangent to the air jet within a shorter height.
On the other hand, if the air jet velocity is low and the fuel jet velocity is also low, the negative pressures developed around the air and the fuel jets will be of the same order of magnitude. The drafting of the fuel jets toward the central air jet, which takes place due to the existence of a negative pressure gradient in the direction of the central air jet, will not occur. Intermixing between the central air jet and the fuel jets, as well as between the fuel jets and the ambient air, will take place through the interfacial exchange of momentum and mass, but at a much lower level than if the velocity of either the fuel or the air were high. The flame neck ceases to form and the produced flames take the form of a normal diffusion flame. This case is represented by the flame given in Figure 12, which was produced by continuously decreasing the central air Reynolds number until this diffusion flame was obtained at a central air Reynolds number of 2081, a fuel Reynolds number of 218 and φp = 4.5, using the three burners B10N12, B10N24 and B10N36 at the same fuel flow rate.
The limiting case is reached when the central air flow ceases, Re_air = 0, and the resulting flame is categorized as a self- or naturally aspirated diffusion flame.
When the burner B10N36, which has 36 fuel ports, was used at the high primary equivalence ratio φp = 2, corresponding to the air Reynolds number 4702, the blue color occupied most of the entrainment zone, followed by a yellow color covering a small portion that extends to the neck, as shown in Figure 16. The blue color of the entrainment zone produced by the burner with the larger number of fuel ports, B10N36, occupies a larger portion of the flame neck compared to that of the burners B10N12 and B10N24 at the same primary equivalence ratio φp = 2. This is due to the fact that the larger circumferential surface area of the 36 fuel jets of the burner B10N36 in contact with the air allowed more local mixing and entrainment of the central air, as well as the ambient air, in the entrainment zone than in the case of the B10N12 and B10N24 burners, which have fewer fuel ports.
It is obvious that the burner parameters mentioned earlier, together with the momenta of the two jets and, of course, the primary equivalence ratio, are the decisive factors that define the nature and the characteristics of the flames from CAP burners.
The influence of these parameters cannot be studied collectively; in determining the effect of any one parameter, the others must be kept unchanged, except of course those directly coupled with the parameter under consideration, such as the change in the primary equivalence ratio, φp, whenever the air Reynolds number is changed.
Conclusions
1. All the flame characteristics are controlled by the burner geometry and the jet dynamics. For this reason the performance of a port-array IDF burner cannot be predicted from the performance of another burner unless the conditions of geometric and dynamic similarity are fulfilled.
2. A fuel-air mixture boundary layer is formed at the interfacial surface of the central air and fuel jets after a very short distance from the plane of emergence. In the downstream direction the two jets form a co-flowing jet along the jet path.
3. The fuel jets flowing in the surrounding ambient air form a submerged jet that leads to the formation of another air-fuel mixture boundary layer, called the submerged boundary layer.
4. Two flame fronts are formed in the entrainment zone due to the existence of the two fuel-air mixture boundary layers mentioned above.
5. The entrainment zone increases with decreasing air Reynolds number. This increase is quite small for the burner with the small center-to-center distance, S = 10 mm. The entrainment zone decreases, however, with increasing number of fuel ports.
6. As the number of fuel ports increases, the centerline flame temperature increases, provided that the total fuel flow area is kept constant.
Nomenclature
φp: Primary equivalence ratio, dimensionless; the ratio of the stoichiometric air-to-fuel ratio to the actual primary (central air jet) air-to-fuel ratio.
φo: Overall equivalence ratio, dimensionless; the ratio of the stoichiometric air-to-fuel ratio to the air-to-fuel ratio based on the sum of the central air jet flow rate and the flow rate of the entrained ambient air.
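The two definitions above translate directly into code. A minimal sketch; the stoichiometric ratio and flow rates used are illustrative assumptions, not values from this work:

```python
def phi_primary(af_stoich, m_air_central, m_fuel):
    """phi_p: stoichiometric A/F divided by the primary (central air jet) A/F."""
    return af_stoich / (m_air_central / m_fuel)

def phi_overall(af_stoich, m_air_central, m_air_entrained, m_fuel):
    """phi_o: stoichiometric A/F divided by the A/F based on the central
    plus entrained ambient air."""
    return af_stoich / ((m_air_central + m_air_entrained) / m_fuel)

# Assumed example: A/F_stoich = 17.2 (mass basis), 1 g/s of fuel,
# 19.1 g/s of central air, 10 g/s of entrained ambient air.
print(round(phi_primary(17.2, 19.1, 1.0), 2))   # 0.9
print(round(phi_overall(17.2, 19.1, 10.0, 1.0), 2))
```

Note that φo is always lower than φp for the same flame, since the entrained ambient air only adds to the air side of the ratio.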
Figure 1. Schematic of experimental set-up.
Figure 2. The burner. Here ṁ is the mass flow rate of the fluid, µ is the fluid viscosity measured at the ambient temperature, and d is the fluid exit port inner diameter; the fluid is either the fuel or the central air.
Figure 3.
Figure 4. The velocity distribution and the air and fuel concentrations, xa and xf, of the co-flowing boundary layer: a) initial pattern, b) pattern an instant later.
Figure 6. a) The submerged jet velocity and concentration boundary layer; b) the streamlines in a circular free jet.
Figure 7. Photos for burner B20N12 at a constant fuel flow rate of 0.083 L/s and two air flow rates, showing the two flame fronts and the cold core.
Figure 10. Effect of the number of fuel ports at Re_air = 3762 and φp = 2.5.
Figure 11. Effect of the number of fuel ports at Re_air = 2681 and φp = 3.5.
Figure 13. Streamlines in the neighborhood of the boundary layer of the initial region of the jet.
Figure 14.
Figure 15. Photos showing the entrainment zone of the IDF for three burners having pitch circle radius 10 mm and different numbers of fuel jets at φp = 1 and 1.2.
Figure 16.
Figure 17.
Table 1. Dimensions and symbols of the IDF nine burner heads.
Table 2. Conditions of experiments.
pH-Responsive Particle-Liquid Aggregates—Electrostatic Formation Kinetics
Liquid-particle aggregates were formed electrostatically using pH-responsive poly[2-(diethylamino)ethyl methacrylate] (PDEA)-coated polystyrene particles. This novel non-contact electrostatic method has been used to assess the particle stimulus-responsive wettability in detail. Video footage and fractal analysis were used in conjunction with a two-stage model to characterize the kinetics of transfer of particles to a water droplet surface, and internalization of particles by the droplet. While no stable liquid marbles were formed, metastable marbles were manufactured, whose duration of stability depended strongly on drop pH. Both transfer and internalization were markedly faster for droplets at low pH, where the particles were expected to be hydrophilic, than at high pH where they were expected to be hydrophobic. Increasing the driving electrical potential produced greater transfer and internalization times. Possible reasons for this are discussed.
INTRODUCTION
Our group first demonstrated the formation of liquid-particle agglomerates via an electrostatically driven process in 2013 (Liyanaarachchi et al., 2013). A liquid droplet was produced at the end of an electrically grounded stainless steel capillary above a bed of particles resting on a substrate to which a negative potential of several kilovolts was applied, causing the particles to jump from the bed to the pendent water droplet. These initial experiments produced metastable agglomerates consisting of a water drop filled with (hydrophilic) glass beads. The new electrostatic process was soon extended to hydrophobic particles, which remained embedded at the air-liquid interface instead of entering the droplet. These were genuine "liquid marbles", liquid drops encased in a shell of non-wetting particles, of the type first observed well over a decade ago (Aussillous and Quéré, 2001), whose remarkable physical properties (Aussillous and Quéré, 2006; McHale and Newton, 2015) rapidly recommended them for a variety of potential applications (Bormashenko, 2017; Oliveira et al., 2017), including gas sensors (Tian et al., 2010), bioreactors (Arbatan et al., 2012a,b), encapsulation media (Eshtiaghi et al., 2010; Ueno et al., 2014), pressure-sensitive adhesives (Fujii et al., 2016a) and materials delivery carriers (Paven et al., 2016; Kawashima et al., 2017).
The conventional method of forming liquid marbles consists of rolling a droplet on a powder bed. This direct contact method cannot be used to form metastable hydrophilic particle-liquid aggregates, since the liquid would simply soak into a bed incorporating hydrophilic particles.
When applied to hydrophobic particles, the new electrostatic method, which did not involve direct contact, was able to circumvent some of the physical limitations of direct-contact formation-for example, larger ratios of particle to drop size were achieved than has previously been considered possible (Eshtiaghi and Hapgood, 2012;Ireland et al., 2016). The electrostatic method has in addition been used to manufacture a new class of liquid marble complexes that include both hydrophobic and hydrophilic particles, resulting in core-shell structures, for a variety of applications, e.g., delivery and controlled release of pharmaceutical powders and water-efficient washing/collection of powder contaminants . Manufacture of even more complex layered structures may also be feasible.
In this context, stimulus-responsive materials, whose wettability can be "switched" by various external stimuli (pH, light, temperature, etc.), take on a special significance (Fujii et al., 2016b), as they have the potential to provide even more control over the formation of structured liquid marble complexes. One could envisage a pH-responsive powder being used to safely transfer a liquid through a given environment at high pH, to be released when it reaches a low-pH environment. The utility of these types of mechanisms in drug delivery, for example, is clear. A number of studies have investigated the formation and stability of liquid marbles incorporating stimulus-responsive materials (Fujii et al., 2011; Nakai et al., 2013; Yusa et al., 2014; Paven et al., 2016). In these cases, the marbles were formed by the conventional direct-contact method. Our group has recently attempted to transport stimulus-responsive particles to pendent water droplets by means of the electrostatic method described above in order to assess liquid marble or dispersion formation. The first stimulus-responsive material explored was polystyrene (PS) particles carrying pH-responsive poly[2-(diethylamino)ethyl methacrylate] (PDEA) colloidal stabilizer on their surfaces. PDEA is a weak polybase with a pKa of 7.3 that is soluble in aqueous media below pH ∼7 because of protonation of its tertiary amine groups (Bütün et al., 2001). At pH 8 or above, PDEA exhibits very low or zero charge density and hydrophobic character. It was confirmed that the powders obtained from pH 3.0 and pH 10.0 aqueous dispersions had hydrophilic and hydrophobic characters, respectively. The PDEA-PS powders prepared from pH 3.0 dispersions jumped to a pendent distilled water droplet to form an aqueous dispersion droplet. Conversely, the powders prepared from pH 10.0 dispersions did not transfer, so no liquid marble formed; this was attributed to the cohesive PDEA colloidal stabilizer, which resulted in larger grains.
This technique may therefore immediately provide a means of preparatively sorting PDEA-PS powders obtained from aqueous dispersions at different pH. A more detailed account of these behaviors, and a comparison between direct-contact (rolling) and electrostatic formation of PDEA-PS marbles, is provided in a recent article by our group (Kido et al., 2018).
One key advantage of the electrostatic aggregate formation technique is the ability to control the final product by changing the kinetics of the formation process via adjustments to the driving potential, drop-bed separation, and bed shape and structure. A pH-responsive particle provides an additional tool for controlling the electrostatic formation kinetics, as it allows the rate at which the particles penetrate the air-water interface to be altered. This paper thus investigates the kinetics of particle transfer and internalization by the droplet of PDEA-coated polystyrene (PDEA-PS) particles during electrostatic formation. Here we focus on PDEA-PS particles that were dried from a solution at pH 3, as these were consistently transported to the pendent droplet.
METHODOLOGY
Particle Synthesis and Experimental Method
The PDEA-PS particles were synthesized by dispersion polymerization using PDEA homopolymer as a colloidal stabilizer, in the same way as reported previously (Sekido et al., 2017). The PDEA-PS particles were nearly monodisperse, with a number-average diameter of 2.20 µm and a coefficient of variation of 2%. The PDEA loading was determined to be 2.66 wt% by elemental microanalysis, and XPS studies on the PDEA-PS particles determined the surface coverage by PDEA to be approximately 47%. These results indicate that the PDEA is mainly located at the surface of the PS particles. An aqueous dispersion of PDEA-PS particles with a pH value of 3.0 (40 g, 10 wt%, adjusted using aqueous HCl) was dried at 21 °C, 0.1 MPa and 42.8-87.4% RH in air. The dried cake-like white agglomerate obtained was ground into powder using a pestle and mortar.
A schematic of the experimental apparatus is shown in Figure 1. A bed of dried PDEA-PS particle powder, depth ∼1 mm, was supported by a 1 mm thick glass slide, which in turn rested on a stainless steel plate connected to a high voltage power supply. The metal plate was held at a constant negative potential relative to earth of 1.5, 2.0, or 2.5 kV, and was gradually raised at a rate of 50 µm·s−1 toward a pendent drop on the end of an earthed metal capillary syringe of 1.2 mm outer diameter. The drop liquid was either MilliQ water (pH 5.6), or an aqueous buffer at pH 3.0 (0.1 M potassium hydrogen phthalate/HCl) or pH 10 (NaHCO3/NaOH) (De Lloyd, 2000). The nominal drop volume was 5 µL, as dispensed by a syringe pump (Harvard Apparatus 11Plus), before the application of the electric field and loading with particles. When the separation between the particle bed and drop became sufficiently small (between ∼0.7 mm for 1.5 kV and ∼2.0 mm for 2.5 kV), the powder was transferred across the gap to the drop, before becoming internalized within the drop. A video camera was used to record the entire process from the start of particle transfer to the completion of particle internalization.
Preliminary Observations and Hypotheses
Preliminary electrostatic aggregation experiments with PDEA-PS particles revealed several interesting behaviors. The powder tended to jump to the drop not as individual particles, but as grains of substantial size containing many individual particles (Figure 2). For more information on the granular properties of this material, please refer to Kido et al. (2018). These grains initially attached to the air-water interface, before gradually being internalized by the droplet. Thus, the initially-smooth drop surface (a) took on a "jagged" appearance as it was encrusted with PDEA-PS grains (b), then gradually became smooth again once particle/grain transfer ceased (c-e). This cessation normally corresponded to depletion of that part of the particle bed that was subject to transfer to the drop, i.e., the section of the bed where the electrostatic force was sufficient to counterbalance gravity and cohesive or static friction forces (Ireland et al., 2015). Since the particles were all eventually internalized by the drop, no stable liquid marbles were formed. It can be argued, however, that metastable liquid marbles were manufactured. These might be viewed as exhibiting "delayed instability": starting as fully stable liquid marbles, but losing that stability at a rate that can in principle be controlled by pH. This property may be useful in applications where the rate of release of a liquid into the surrounding environment needs to depend on the pH.
Given these observations, the process is conceptualized in terms of two main kinetic parameters: the time-scales for transfer of grains from the bed to the surface of the drop, and for internalization of grains attached to the air-water interface. The precise mechanism of internalization is not known, and may have included wicking of water into the interstices of the multiparticle PDEA-PS grains, as well as engulfment of the entire grain. The rate of both processes was expected to depend on the wettability of the particle surface. Since the PDEA surface of the particles was hydrophilic at pH values below the pKa of 7.3 and hydrophobic at higher pH values, it was hypothesized that the internalization process would be more rapid at low pH values of the drop liquid than at high values. A long internalization time would presumably promote an accumulation of grains at the drop surface, and it was hypothesized that this would suppress transfer and attachment of additional grains to the drop surface. Lower drop pH values were therefore expected to result in more rapid transfer as well as more rapid internalization.
Given the evolution of the droplet silhouette shown in Figure 2, the characteristic internalization and transfer times were estimated from video footage of the process using a two-stage transfer-internalization model and a fractal-based analysis technique. These characteristic times were compared for different values of the driving potential and drop pH, to test the above hypotheses.
FIGURE 2 | Electrostatic transfer, followed by internalization, of PDEA-PS powder (dried from a solution at pH 3) into a water droplet, for a pH 5.6 droplet and a driving potential of 2.5 kV.
Data Analysis
The fractal dimension of each aggregate's silhouette outline, as it evolved over time, was used to characterize the transfer and internalization kinetics. It was selected as an appropriate measure because of its robustness and scale-independence (Rasband, 1990). The fractal dimension gives a measure of the degree of "convolutedness" of the aggregate outline (Falconer, 1990). A drop outline without particles, or with entirely internalized particles, was smooth, with fractal dimension close to 1. The PDEA-PS powder tended to transfer to the drop surface in the form of multiparticle grains rather than individual particles, and there was thus a substantial increase in the fractal dimension of the drop outline when non-wetted grains were present at its surface. The scale-independence of the fractal dimension was considered to more than compensate for the fact that there was no obvious exact correlation between the fractal dimension and the number of particles at the drop surface. Since we were chiefly interested in the rate constants for the process, not the actual numbers of particles, this was a secondary consideration. The fractal dimension was calculated using a standard box-counting algorithm, details of which are provided in the Supporting Information.
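The box-counting algorithm referred to above can be sketched as follows. This is a hypothetical minimal version (the paper's exact implementation is in its Supporting Information), assuming the silhouette outline is available as a set of (x, y) pixel coordinates; the choice of box sizes is illustrative.

```python
# Box-counting estimate of the fractal dimension of a binary outline.
# Minimal sketch: count occupied boxes at several box sizes and take the
# slope of log N(s) against log(1/s) by ordinary least squares.
import math

def box_count(points, box_size):
    """Number of boxes of side `box_size` needed to cover the points."""
    return len({(int(x // box_size), int(y // box_size)) for x, y in points})

def fractal_dimension(points, box_sizes=(1, 2, 4, 8, 16)):
    """Slope of log N(s) vs log(1/s), i.e. the box-counting dimension."""
    xs = [math.log(1.0 / s) for s in box_sizes]
    ys = [math.log(box_count(points, s)) for s in box_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a straight line of points has dimension close to 1,
# matching the smooth (bare) drop outline described in the text.
line = [(i, 0) for i in range(256)]
print(round(fractal_dimension(line), 2))  # → 1.0
```

A convoluted, grain-encrusted outline would yield a value above 1, consistent with the 1.1-1.3 range reported below.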
Model
To assist in interpretation of kinetic data, a simple model of the transfer to and internalization of particles by the drop was developed. The drop surface is conceptualized as possessing a number of sites, N_d, to which particles (or in this case, grains) can attach. These sites can either be directly on the air-liquid interface, or may represent adhesion of incoming particles/grains to other particles/grains. The section of the particle bed beneath the drop possesses N_b such sites. We let n_d and n_b be respectively the number of occupied sites on the drop surface and bed. It is assumed that the rate of change of the number of particles or grains in the bed is equal and opposite to the rate at which they jump to the drop, and that this is proportional to n_b. It is further assumed that only those particles/grains that jump to an unoccupied site on the drop surface are able to attach. The probability of this is proportional to the fraction of sites on the drop surface that are unoccupied, i.e., 1 − n_d/N_d. Thus

dn_b/dt = −(n_b/τ_t)(1 − n_d/N_d),   (1)

where τ_t is a time constant for particle transfer. In addition to this transfer process, which fills sites on the drop surface, particles/grains enter the interior of the drop, resulting in formerly occupied sites at the drop surface becoming unoccupied. The internalization rate is assumed to be proportional to the number of occupied sites at the drop surface. Thus, the rate of change of the number of occupied sites on the drop surface is given by

dn_d/dt = (n_b/τ_t)(1 − n_d/N_d) − n_d/τ_i,   (2)

where τ_i is the time constant for the internalization process. Since we are not able to measure absolute values of n_b, n_d, N_b, or N_d, but only relative values, we can reduce the number of model variables by introducing the following ratios:

ν = n_b/N_b,   (3)
µ = n_d/N_d,   (4)
η = N_b/N_d,   (5)

in terms of which Equations (1) and (2) take the simpler forms

dν/dt = −(ν/τ_t)(1 − µ),   (6)
dµ/dt = η(ν/τ_t)(1 − µ) − µ/τ_i,   (7)

with only three parameters (τ_t, τ_i, η) instead of four.
All of the sites on the bed are initially occupied, and those on the drop surface are initially unoccupied, so ν(0) = 1 and µ(0) = 0. When ν is reduced to zero, the region of the bed from which particles are able to jump is depleted (we see this physically as a region of bare substrate underneath the drop or aggregate).

Figure 3 shows a plot of the fractal dimension of the aggregate outline as a function of time for the transfer-internalization process of PDEA-PS particles to a droplet of pH 5.6 water under a 2.5 kV applied voltage. The increase in the fractal dimension during particle transfer and the subsequent decrease with internalization are clearly apparent. Note that the fractal dimension is between 1.1 and 1.3, as we would expect for an outline that is somewhat convoluted in two dimensions, but is not near being space-filling.
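The reduced model can be integrated numerically to reproduce the rise-then-decay behavior seen in the data. This sketch assumes the reduced equations dν/dt = −(ν/τ_t)(1 − µ) and dµ/dt = η(ν/τ_t)(1 − µ) − µ/τ_i with ν(0) = 1 and µ(0) = 0; the parameter values are illustrative, not fitted values from the paper.

```python
# Fixed-step RK4 integration of the reduced transfer-internalization model:
#   dnu/dt = -(nu/tau_t)*(1 - mu)                (bed depletion)
#   dmu/dt = eta*(nu/tau_t)*(1 - mu) - mu/tau_i  (drop-surface occupancy)
# with nu(0) = 1, mu(0) = 0.  Parameter values here are illustrative only.

def rhs(nu, mu, tau_t, tau_i, eta):
    dnu = -(nu / tau_t) * (1.0 - mu)
    dmu = eta * (nu / tau_t) * (1.0 - mu) - mu / tau_i
    return dnu, dmu

def integrate(tau_t=0.3, tau_i=0.5, eta=1.0, t_end=5.0, dt=1e-3):
    """Returns (times, nu history, mu history)."""
    nu, mu = 1.0, 0.0
    ts, nus, mus = [0.0], [nu], [mu]
    t = 0.0
    while t < t_end:
        k1 = rhs(nu, mu, tau_t, tau_i, eta)
        k2 = rhs(nu + 0.5*dt*k1[0], mu + 0.5*dt*k1[1], tau_t, tau_i, eta)
        k3 = rhs(nu + 0.5*dt*k2[0], mu + 0.5*dt*k2[1], tau_t, tau_i, eta)
        k4 = rhs(nu + dt*k3[0], mu + dt*k3[1], tau_t, tau_i, eta)
        nu += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        mu += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        ts.append(t); nus.append(nu); mus.append(mu)
    return ts, nus, mus

ts, nus, mus = integrate()
# mu (surface occupancy) rises during transfer, then decays once the bed
# is depleted, mirroring the roughening/smoothing of the drop silhouette.
print(max(mus) > mus[-1] > 0.0, nus[-1] < 0.05)
```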
Extraction of Parameters
The fractal dimension vs. time data (Figure 3) were then fitted to Equations (6) and (7). Since, as already mentioned, there was no exact correlation between µ and the fractal dimension d, a linear correlation of the form

d = Aµ + B

was used, where A and B are auxiliary constants corresponding, respectively, to the proportionality between µ and d and to the fractal dimension of the smooth drop outline. It was discovered that a fit in which τ_t, τ_i, η and the two auxiliary constants were adjusted simultaneously could produce one or more spurious solutions. This was prevented by first determining τ_i, A and B.

FIGURE 3 | Plot of fractal dimension of the aggregate outline during particle transfer and internalization, with drop pH 5.6 and driving potential 2.5 kV.
The time at which grains ceased jumping from the bed to the drop was estimated by inspecting the video footage. With no particle transfer, both sides of Equation (6) and the first term on the right-hand side of Equation (7) become zero, and the solution of (7) is simply an exponential decay with time constant τ_i. A curve of this form was fitted to the data after cessation of transfer (the section of the data in Figure 3 after c, ∼0.9 s) using total least squares regression, with τ_i, A and B as adjustable parameters, weighted equally for variation along the time and fractal dimension axes. Once τ_i was determined, τ_t and η were found using a total least-squares fit of a numerical solution of Equations (6) and (7) to all the data. Uncertainties in the parameters were determined at a confidence level of 95% using 400 Monte Carlo simulations per fit, following the method of Hu et al. (2015). The uncertainty in both time constants was of the order of 0.1 s, and that of the parameter η was also ∼0.1. The parameter A ranged from 0.03 to 0.037 for 1.5 kV experiments, 0.03 to 0.046 for 2.0 kV experiments, and 0.06 to 0.1 for 2.5 kV experiments, reflecting the different coverage patterns observed at different driving potentials. B, the 'baseline' fractal dimension for a bare drop, varied slightly from ∼1.135 at 1.5 kV to 1.15 at 2.5 kV, probably due to greater electrostatic elongation of the drop at higher driving potentials. Figure 4 shows the transfer and internalization time constants respectively as a function of the driving potential and the drop pH. It is clear from Figure 4A that the transfer process is slower for a drop pH of 10, compared to drops at lower pH values. Figure 4B indicates that the internalization time follows a similar trend with drop pH. The latter trend is consistent with our expectations: since the pKa of PDEA is 7.3, we would expect the particles to be wettable at pH 3 and 5.6, and nonwettable at pH 10.
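The first fitting stage can be illustrated with a simplified sketch: after transfer ceases, the model predicts d(t) = A·exp(−(t − t0)/τ_i) + B, so τ_i can be recovered from the logarithm of (d − B). Ordinary least squares on noise-free synthetic data is used here for brevity; the paper's actual procedure is a total least-squares fit with Monte Carlo uncertainty estimation, which this sketch does not reproduce.

```python
# Recovering tau_i from the post-transfer exponential decay of the fractal
# dimension.  Simplified: log(d - B) is linear in t with slope -1/tau_i,
# so an ordinary least-squares line fit suffices on clean data.
import math

def fit_tau_i(times, dims, B):
    """Slope of log(d - B) vs t gives -1/tau_i."""
    ys = [math.log(d - B) for d in dims]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic decay with tau_i = 0.4 s, amplitude A = 0.12, baseline B = 1.14
# (values of the same order as those reported in the text).
B, A, tau = 1.14, 0.12, 0.4
ts = [0.9 + 0.05 * i for i in range(20)]
ds = [B + A * math.exp(-(t - 0.9) / tau) for t in ts]
print(round(fit_tau_i(ts, ds, B), 3))  # → 0.4
```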
The increase in transfer time with drop pH was also expected, as slower internalization means that transferred particles vacate drop surface sites more slowly, and it was hypothesized that the rate of transfer would have an inverse relationship to the extent of occupation of the drop surface (recall Equations 6 and 7). Two different mechanisms are proposed for this hypothesized relationship. The first involves physical obstruction: a grain jumps from the bed but encounters a part of the drop surface already occupied by a non-wetted grain, and simply bounces off rather than attaching to the air-water interface. The second proposed mechanism involves an accumulation of net negative charge at the drop surface due to the presence of non-wetted charged grains, which have made only partial contact with the water and thus have not been fully neutralized. This accumulation of net negative charge at the drop surface would be expected in turn to decrease the electric field between the drop and the bed. The increase of internalization time with driving potential initially seems counter-intuitive. It is hypothesized that at larger driving potentials, multiple layers of grains are able to adhere to the drop. Only a fraction of the adherent grains (the innermost) is in direct contact with the liquid at any given time. In our simple model, the internalization rate is proportional to the number of occupied sites at the drop surface. No distinction is made between the time taken for grains in contact with the liquid to be internalized, and the time spent by adherent grains in the outer parts of the multilayer, waiting for inner grains to be internalized by the drop before making contact with the liquid themselves. The modeled internalization time is actually the sum of these two distinct processes, and will thus tend to be longer for multilayer coverage (as at higher driving potentials).
Figure 5 shows the relationship between the transfer and internalization time constants, for all driving potentials and pH values. The 2.5 kV experiments exhibit a larger transfer time constant for the same internalization time constant than those at lower driving potentials. The reasons for this are not yet clear. A clue may be provided by Figure 6, which shows the parameter η (recall Equation 5), representing the ratio of the total number of available sites for grains in the bed to that on the drop surface, as a function of pH. This suggests that in the 2.5 kV case, the number of available sites on the drop surface was relatively large for the number of grains able to jump to them, compared to the lower-potential cases, between which there is no significant difference. It seems reasonable that an increasing driving potential would increase the total number of available sites in both the bed and at the drop, but it is not clear why it would preferentially increase the number at the drop compared to those in the bed. This may be related to the shape of the electric field. For a larger driving potential, the electrostatic force is able to overcome cohesive forces in the bed and the particle/grain weight at a greater drop-bed separation. Thus, the geometry of the system is different during transfer at different driving potentials. On the other hand, pH appears to have only a weak effect, if any, on η. This is consistent with our model, which assumes that pH influences the rate at which occupied sites become vacant by internalization, not the actual number of those sites. Figure 7 shows a schematic of the electric field lines, and photographs of the corresponding stage of particle transfer, for a droplet at pH 5.6 and all three driving potentials (note that due to inertia and gravity, the particle trajectories differ somewhat from the field lines). 
For aggregate formation at lower driving potential values, the field lines and particle trajectories are concentrated under the lower part of the droplet, and originate from a much smaller area of the bed, with relatively direct particle trajectories. At higher driving potential values, the grains come from a much larger area of the bed and are transferred to the entire surface of the droplet, often via long, curving trajectories. It is hypothesized that this change in the morphology of the grain trajectories results in a larger relative increase in the number of available sites on the drop surface than it does the number of grains in the bed able to jump. Assessment of this hypothesis will require numerical modeling of the field shape and particle trajectories. We plan to explore the relationship between field morphology and the transfer kinetics in more detail in subsequent studies, using the COMSOL Multiphysics finite-element modeling environment (COMSOL Inc., Burlington, MA).

FIGURE 6 | Ratio of total number of available sites in the particle bed to those at the drop surface vs. pH, for all driving potential values.
CONCLUSIONS
Our group's novel electrostatic method for manufacturing liquid marbles and other types of liquid-particle agglomerates was applied to a stimulus-responsive material, polystyrene particles coated in pH-responsive poly[2-(diethylamino)ethyl methacrylate] (PDEA), whose surface was hydrophilic at low pH and hydrophobic at high pH. The formation of aggregates with these PDEA-PS particles was modeled as two coupled processes, namely transfer of particles from the bed to the droplet surface, and internalization of the particles into the droplet. Since all particles in these experiments were eventually internalized, stable liquid marbles were not formed; instead, metastable liquid marbles were created. The kinetics of the transfer/internalization process were characterized using fractal analysis of video footage to track the "roughening" and "smoothing" of the droplet silhouette as grains were transferred to it and then internalized. Consistent with expectations, the characteristic time for internalization was found to be markedly shorter when the pH of the drop was below the pKa of PDEA (i.e., when the particle surface was expected to be hydrophilic) than when it was above the pKa (when it was hydrophobic). Grain transfer was also found to be more rapid at low pH, since more rapid internalization made more surface sites available for grains to jump to. The variation of transfer and internalization times with driving potential was broadly as expected, although full explanation of some aspects will require detailed modeling of the field morphology.

FIGURE 7 | Schematic of electric field lines and photographs of particle trajectories for equivalent stages in the particle transfer process (at approximately the time of the peak fractal dimension) at different values of the driving potential (and hence drop-bed separation). The photographs are all for a drop at pH 5.6.
AUTHOR CONTRIBUTIONS
KK performed the experiments under supervision of the other authors. PI developed the model and performed the analyses. SF synthesized and characterized the particles. PI, SF, GW, and EW wrote the article.
"Materials Science"
] |
Experimental Demonstration of High-Sensitivity Underwater Optical Wireless Communication Based on Photocounting Receiver
In this paper, we propose a high-sensitivity long-reach underwater optical wireless communication (UOWC) system with an Mbps-scale data rate. Using a commercial blue light-emitting diode (LED) source, a photon-counting receiver, and return-to-zero on-off keying modulation, a receiver sensitivity of −70 dBm at the 7% FEC limit is achieved for a 5 Mbps intensity-modulation direct-detection UOWC system over a 10 m underwater channel. For 1 Mbps and 2 Mbps data rates, the receiver sensitivity is enhanced to −76 dBm and −74 dBm, respectively. We further investigate the system performance under different water conditions: the first type of seawater (c = 0.056 m−1), the second type (c = 0.151 m−1), and the third type (c = 0.398 m−1). The maximum distance of the 2 Mbps signal can be extended up to 100 m in the first type of seawater.
Introduction
As the area explored by humans expands, observation and utilization of the underwater world are growing increasingly important. Various underwater sensors, unmanned vehicles, and nodes are deployed underwater to transfer and collect information.
To build an underwater transmission link, both cable- and wireless-based methods are utilized. Cables or fibers can offer a stable communication link with high transmission speed, but they limit the mobility of the communication terminal over a long-reach link.
The traditional medium for underwater communication is sound. Acoustic waves attenuate relatively little in water, making them suitable for ultra-long-reach communication up to tens of kilometers. However, underwater acoustic communication is limited by high transmitted power, low data rates, and large latency [1]. Due to the skin effect, electromagnetic waves suffer severe attenuation when propagating in water, so it is hard to realize long-reach underwater communication with them. Studies have shown that the blue-green portion of the visible spectrum suffers far less attenuation from underwater absorption and scattering than other electromagnetic wavelengths [2]. Benefiting from the rich bandwidth resource of a laser diode (LD), a Gbps-scale underwater optical wireless communication (UOWC) system within tens of meters is feasible [3]. However, a strict tracking and alignment system is required after long-distance transmission due to the narrow beam and small divergence angle of the LD source. Moreover, most of the reported UOWC links are conducted in tap water with avalanche photodetectors (APDs) for optical signal detection, which may not be attractive or available for long-distance transmission scenarios requiring a large optical power budget and photon-scale detection, e.g., internal communications with Mbps data rates between autonomous underwater vehicles or underwater sensor nodes in dynamic underwater conditions [4]. Thus, a high-sensitivity detector combined with a large-coverage-area light source is indispensable for building a communication link that is reliable against unpredictable channel obstructions and the various conditions of the sea. Photomultiplier tubes (PMTs), possessing the capability of single-photon detection, are the most widespread vacuum electronic devices in every field of experimental study, including optical communication, biology, space research, and chemistry.
Compared with silicon photomultipliers, a PMT requires a high driving voltage; on the other hand, PMTs are less sensitive to temperature and have a lower noise level. Because PMTs are sensitive to background light and magnetic fields, they are particularly well suited to long-range UOWC links in deep-sea deployments, where both are minimal.
Before building a long-range experimental UOWC system, the underwater channel conditions need to be investigated to establish system parameters such as the optimal transmitted optical wavelength, modulation scheme, signal baud rate, and beam aperture. Because underwater data transmission using a light beam is challenging in the presence of strong water absorption and scattering, characterizing the underwater optical channel to obtain appropriate system parameters is of crucial importance for a high-reliability, high-quality UOWC link.
In this paper, we consider a comprehensive underwater channel model to simulate the properties of an underwater optical communication link, taking practical system parameters into account. Under the guidance of the simulation results, we propose and experimentally demonstrate a long-reach Mbps-scale UOWC scheme with high receiver sensitivity based on a light-emitting diode (LED) transmitter and a PMT receiver. The proposed system significantly relaxes the alignment requirement, especially after long-distance transmission. Bit-error-ratio (BER) performance for 1 Mbps, 2 Mbps, and 5 Mbps after 10 m transmission is experimentally investigated under different water turbidities with an adaptive decision threshold (DT), whereby the receiver adapts to the changing signal level. With added attenuation, the maximum link loss at an attenuation coefficient of 1.33 m−1 is up to 99 dB at λ = 448 nm. The achievable maximum distances for a 2 Mbps data rate in the first type of seawater (c = 0.056 m−1) are up to 100 m and 134 m at 1 W and 10 W transmitted electrical power, respectively.
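The attenuation part of such a link budget follows directly from the Beer-Lambert term exp(−cz): expressed in decibels it is 10·log10(e)·c·z ≈ 4.343·c·z. The sketch below evaluates this for two of the water types above; geometric spreading loss is deliberately excluded, so these figures are lower bounds on the total link loss.

```python
# Beer-Lambert water attenuation expressed in dB:
#   L_dB = 10 * log10(exp(c*z)) = 10*log10(e) * c * z  ≈  4.343 * c * z
# Geometric (beam-spreading) loss is NOT included here.
import math

def attenuation_db(c, z):
    """Water attenuation in dB for extinction coefficient c (1/m) over z (m)."""
    return 10.0 * math.log10(math.e) * c * z

# First type of seawater (c = 0.056 1/m) over 100 m:
print(round(attenuation_db(0.056, 100.0), 1))  # → 24.3
# Third type of seawater (c = 0.398 1/m) over 10 m:
print(round(attenuation_db(0.398, 10.0), 1))   # → 17.3
```

The gap between these values and the ~99 dB maximum link loss quoted above is mostly geometric spreading of the wide LED beam.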
Operation Principle
Compared with free-space atmospheric laser communication, the UOWC system faces some unique challenges.
(i) Spectrum for communication: blue or green wavelengths should be dedicated to the UOWC link due to the water absorption effect, rather than infrared wavelengths (C-band 1530-1565 nm and L-band 1565-1625 nm) for an atmospheric free-space link enabled by well-established fiber-optic technologies and optoelectronic devices and components.
(ii) Channel condition: affected by seawater, the underwater optical transmission channel is quite complicated. When the modulated light propagates through seawater, it suffers from absorption and scattering. Seawater absorption means that part of the photon energy launched into the seawater is converted into other forms of energy, such as thermal and chemical. Scattering refers to the interaction between light and seawater, which changes the optical transmission path. Both absorption and scattering cause the loss of optical signal energy at the receiver, resulting in a reduction in signal-to-noise ratio and communication distance. As illustrated in [5], the link loss for a realistic 10 m green-light UOWC system can vary from 6.6 dB to 95.5 dB due to dynamic underwater channel conditions from a clean ocean to turbid harbor seawater.
Due to the variability of underwater channels, a robust long-reach UOWC link must be designed against a link loss of roughly up to 100 dB. Meanwhile, the link must be able to tolerate a dynamically changing underwater channel without breaking off. Although linear detectors including APDs have shown their abilities to detect multi-Gbps optical signals transmitted by LD sources, their sensitivities are typically limited by thermal noise [6]. On the other hand, photon-counting detectors can achieve very high sensitivities with moderate data rates on the Mbps scale. In this paper, we propose a reliable long-reach UOWC scheme using an LED and a PMT, whose concept is illustrated in Figure 1. Due to the advantages of its large light beam, compact structure, low cost, and low power consumption, LEDs are viable candidates to provide a transmission data rate of several Mbps or even up to hundreds of Mbps for implementing an alignment-relaxed UOWC system. In the demonstration, a commercial LED transmitter is modulated by a predesigned return-to-zero on-off keying (RZ-OOK) signal with a half-power semi-angle of 1.25° [7]. With increasing transmission distance z, the receiving radius D_r at the detection area increases, which significantly relaxes the alignment requirement. To achieve high receiver sensitivity and long distance, a typical and practically implemented photon-counting receiver is used: a PMT combined with a pulse-holding circuit to detect photon-level signals. The received photoelectric current is characterized by a series of discrete rectangular pulses of certain width, whose number satisfies a Poisson distribution. In the demonstration, we propose a digital adaptive DT algorithm for signal recovery. The value of DT is adjusted as a function of the received signal level to achieve the minimum BER value.
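The adaptive-DT idea can be illustrated with a toy simulation: photoelectron counts per symbol slot are Poisson distributed, and the threshold is swept to minimize the BER on known data. The mean counts `lam0` and `lam1` below are illustrative assumptions, not measured values from the experiment.

```python
# Toy adaptive decision threshold for a photon-counting receiver.
# Counts per slot are Poisson; the DT is chosen to minimize BER on
# known training bits, mimicking adaptation to the received signal level.
import math, random

random.seed(1)

def poisson(lam):
    # Knuth's multiplicative method; adequate for small means.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

bits = [random.randint(0, 1) for _ in range(5000)]
lam0, lam1 = 0.4, 8.0            # assumed mean counts for "0" and "1" slots
counts = [poisson(lam1 if b else lam0) for b in bits]

def ber(threshold):
    errors = sum((n >= threshold) != bool(b) for b, n in zip(bits, counts))
    return errors / len(bits)

best_dt = min(range(1, 15), key=ber)   # sweep candidate thresholds
print(best_dt, ber(best_dt) < 0.02)
```

If the received signal level (hence `lam1`) drops, re-running the sweep yields a lower optimal threshold, which is the essence of the adaptive scheme.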
FIGURE 1 | Concept of the proposed LED/PMT-based UOWC link. θ is the divergence half-angle, z is the transmission distance, and D_r is the radius at the detection area.
LED Transmitter
In our experiment, a commercial low-cost LED with a peak wavelength of 448 nm was employed as the transmitter. The path loss of light caused by water absorption and scattering is governed by the Beer-Lambert law:

c = a + b,   L_ch = exp(−cz),

where η is the electrical-to-optical conversion efficiency of the LED, a and b represent the coefficients of absorption and scattering, respectively, c is the total extinction due to both effects, z is the underwater transmission distance, and L_ch is the channel loss. P_t and P_r are the transmitted electrical power and received optical power (ROP), respectively. The radiation pattern I(φ) of the LED obeys the Lambertian model, defined as

I(φ) = ((m_1 + 1)/(2π)) cos^(m_1)(φ),

where φ is the angle of irradiance, and φ = 0 is the maximum radiation power angle, i.e., the direct state. m_1 is the Lambertian emission order of the beam directivity, which is related to the half-power angle φ_1/2 of the LED by

m_1 = −ln 2 / ln(cos φ_1/2).

The optical power detected by the photon-counting receiver over the effective receiving area A_eff at distance z then follows from these quantities, as defined in [8].
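These relations can be evaluated numerically. The sketch below assumes the standard Lambertian-order formula m_1 = −ln(2)/ln(cos φ_1/2), the Lambertian intensity pattern, and a generic on-axis inverse-square receiver geometry; the paper's exact receiver-area expression (its Eq. involving A_eff) may differ, and the A_eff and power values are placeholders.

```python
# LED channel model under standard assumptions:
#   m1 = -ln(2)/ln(cos(phi_half))            (Lambertian order)
#   I(phi) = (m1+1)/(2*pi) * cos(phi)**m1    (radiant intensity pattern)
#   exp(-c*z)                                 (Beer-Lambert water loss)
# The inverse-square receiver geometry is a generic assumption, not the
# paper's exact expression.
import math

def lambertian_order(phi_half_deg):
    return -math.log(2) / math.log(math.cos(math.radians(phi_half_deg)))

def received_power(P_opt, phi_half_deg, A_eff, c, z, phi_deg=0.0):
    m1 = lambertian_order(phi_half_deg)
    I = (m1 + 1) / (2 * math.pi) * math.cos(math.radians(phi_deg)) ** m1
    return P_opt * I * (A_eff / z**2) * math.exp(-c * z)

m1 = lambertian_order(1.25)   # half-power semi-angle from the paper
print(round(m1))              # very large order -> narrow beam

# Received power falls off with both geometry and extinction
# (P_opt = 1 W and A_eff = 1e-4 m^2 are placeholder values):
p10 = received_power(1.0, 1.25, 1e-4, 0.056, 10.0)
p100 = received_power(1.0, 1.25, 1e-4, 0.056, 100.0)
print(p10 > p100 > 0.0)
```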
Underwater Channel
In an underwater environment, the transmitted light is greatly influenced by the optical properties of water. Underwater particles can cause energy attenuation and divergence of the beam. In this section, Kopelevich channel modeling is used as a volume scattering function (VSF) to investigate the extinction coefficient of natural water by simulation [9,10]. The specific form of this model is presented in [9].
Absorption coefficient a and scattering coefficient b denote the spectral absorption and scattering rate of unit interval, respectively. In this paper, we consider fulvic acid, humic acid and chlorophyll as the main absorption components of water [11,12], which can be expressed as follows: where λ indicates the light wavelength, and a ω (λ), a c (λ), a f (λ), and a h (λ) are the absorption coefficients caused by pure water, chlorophyll, fulvic acid, and humic acid, respectively. The variables of a 0 c , a 0 f , and a 0 h represent the chlorophyll, fulvic acid, and humic acid characteristic absorption coefficients, respectively [13][14][15]. The two constant parameters k f and k h are 0.0189 m −1 and 0.0111 m −1 . C c , C f , and C h indicate the concentrations of chlorophyll, fulvic acid, and humic acid in water (C 0 c = 1 mg/m 3 ). The values of C c are given in Table 1. C f and C h are expressed as follows: We adopt a small and large particle scattering model to get the scattering coefficient of different types of water, which is a weighted summation with a pure water scattering coefficient [16].
where b_ω(λ) indicates the scattering coefficient of pure water, b_s^0(λ) and b_l^0(λ) denote the scattering coefficients caused by small and large suspended particles, respectively [16,17], and C_s and C_l are the concentrations of both types of particles in water.
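A minimal sketch of how these weighted sums combine into the extinction coefficient c(λ) = a(λ) + b(λ). The functional forms follow Haltrin-style bio-optical models; the chlorophyll exponent and the per-nm interpretation of k_f and k_h are assumptions, and any numeric coefficients passed in would come from refs [13–17], which are not reproduced here.

```python
import math

# Decay constants quoted in the text for the fulvic/humic absorption terms
# (applied here to the wavelength in nm; units are an assumption).
K_F, K_H = 0.0189, 0.0111

def absorption(lam_nm, a_w, a0_c, a0_f, a0_h, c_c, c_f, c_h, c0_c=1.0):
    """a(lambda) = pure water + chlorophyll + fulvic acid + humic acid terms.
    The chlorophyll exponent 0.602 is the Haltrin-model value (assumption)."""
    return (a_w
            + a0_c * (c_c / c0_c) ** 0.602
            + a0_f * c_f * math.exp(-K_F * lam_nm)
            + a0_h * c_h * math.exp(-K_H * lam_nm))

def scattering(b_w, b0_s, b0_l, c_s, c_l):
    """b(lambda) = pure-water term plus small/large-particle contributions."""
    return b_w + c_s * b0_s + c_l * b0_l

def extinction(a, b):
    """c(lambda) is simply the sum of absorption and scattering."""
    return a + b
```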
The extinction coefficient c(λ) is the sum of the absorption coefficient and the scattering coefficient. The VSF is a very important parameter in underwater channel modeling. It indicates the ratio of the scattered intensity (within a solid angle ∆Ω centered on θ) to the total incident light intensity at a specific scattering angle θ. In our model, we adopt the Kopelevich model as the VSF. Compared with the traditional Henyey-Greenstein model, the Kopelevich model not only covers small and large particles but can also be applied more accurately to highly turbid water [9].
VSF for underwater application can be expressed by the combination of pure water, small particles, and large particles [16].
where p_R(θ), p_s(θ), and p_l(θ) indicate the probability density functions for pure water, small particles, and large particles, respectively. For the Kopelevich model, the total seawater scattering coefficient can be modeled as follows [9]: where C_s and C_l are the concentrations of small and large particles, respectively. We set a weight (unit energy) for each photon, so that the energy attenuation of the transmitted light beam is equivalent to the change in weight. We define four main parameters at the transmitter: the wavelength λ, the maximum half-divergence angle θ_max, the zenith angle θ, and the azimuth angle ϕ. Initially, each photon is launched into the water with the given maximum half-divergence angle θ_max and unit weight. The initial departure direction of the photon is determined by the random variables θ and ϕ, generated uniformly within [−θ_max, θ_max] for θ and [0, 2π] for ϕ. The direction vector of an emitted photon is (sin θ cos ϕ, sin θ sin ϕ, cos θ). After traveling a certain distance called the free path, an emitted photon may lose energy and change its transmission direction due to collision with particles in the underwater medium. Using a probability model, the free path can be expressed as follows [18]: where ξ is a random variable which obeys a uniform distribution within (0, 1]. It is assumed that the weights of an emitted photon before and after a collision are W_pre and W_post, which satisfy Equation (15) [18].
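The launch, free-path, and weight-update steps just described can be sketched as below. Since Equation (15) itself is not reproduced in the text, the weight rule here uses the standard Monte Carlo single-scattering-albedo choice b/(a + b), which is an assumption.

```python
import math, random

def free_path(c):
    """Random free path between collisions: s = -ln(xi) / c, xi ~ U(0, 1]."""
    xi = 1.0 - random.random()       # random() is [0, 1); shift to (0, 1]
    return -math.log(xi) / c

def launch_photon(theta_max):
    """Initial unit direction vector for a photon launched within the
    maximum half-divergence angle theta_max."""
    theta = random.uniform(-theta_max, theta_max)
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def scatter_weight(w_pre, a, b):
    """Weight after a collision (standard albedo rule; the paper's Equation
    (15) is not reproduced in the text)."""
    return w_pre * b / (a + b)
```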
Once scattering occurs, the transmission direction of emitted photons is changed. The new direction vector P 2 after collision is dependent on the old direction vector P 1 , scattering angle θ, and azimuth angle ϕ, as shown in Figure 2. Random variable ϕ satisfies a uniform distribution within [0, 2π].
For a single photon, the VSF can be considered as the probability density function of the scattering angle, and the method of generating the scattering angle differs for different VSFs. For the Kopelevich model, we use the acceptance-rejection sampling method to obtain the random scattering angle. According to the old transmission direction vector (ux_i, uy_i, uz_i), the scattering angle θ, and the azimuth angle ϕ, the transmission direction vector after scattering is represented by (ux_{i+1}, uy_{i+1}, uz_{i+1}) [19].
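The acceptance-rejection sampling and the direction update can be sketched as follows. Here `vsf` is any callable standing in for the Kopelevich phase function (not reproduced in the text), and the rotation formulas are the standard Monte Carlo photon-transport update referenced by [19].

```python
import math, random

def sample_theta(vsf, p_max=None, n_probe=1000):
    """Acceptance-rejection sampling of the scattering angle theta, treating
    the (normalized) VSF as the probability density of theta on [0, pi]."""
    if p_max is None:  # crude envelope: probe the density for its maximum
        p_max = max(vsf(math.pi * i / n_probe) for i in range(n_probe + 1))
    while True:
        theta = random.uniform(0.0, math.pi)
        if random.random() * p_max <= vsf(theta):
            return theta

def new_direction(u, theta, phi):
    """Rotate the old unit direction u = (ux, uy, uz) by scattering angle
    theta and azimuth phi (standard photon-transport update)."""
    ux, uy, uz = u
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    if abs(uz) > 0.99999:            # nearly vertical: avoid division by ~0
        return (st * cp, st * sp, math.copysign(ct, uz))
    d = math.sqrt(1.0 - uz * uz)
    return (st * (ux * uz * cp - uy * sp) / d + ux * ct,
            st * (uy * uz * cp + ux * sp) / d + uy * ct,
            -st * cp * d + uz * ct)
```

The update preserves the unit norm of the direction vector, and a scattering angle of zero leaves the direction unchanged.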
Photon-Counting Receiver
After several scattering events, a photon has a chance to be detected by the receiver. Since the solid angle ∆Ω of the photon scattering space is small enough, it can be assumed that the VSF within ∆Ω is constant. The variable p(θ) of the scattering direction satisfies By changing it into an integral over the solid angle, we get Thus, the reception probability of the emitted photon is Considering the conditional probability of the free path, the final reception probability becomes where r_r is the position of the receive window, and r_i is the position where the final scattering before detection happens. In our model, the threshold setting of the photon weight is 10⁻⁴, as shown in Table 2. Path loss and impulse response are crucial. We can calculate the path loss by summing all products of reception probability and received photon weight. For each scattering event, the position prior to scattering is available; thus, the entire path of the photon before detection is recorded. The channel response can be calculated so long as we count the received intensity in a given time slot. In summary, the flow chart of the Monte Carlo model is shown in Figure 3. The channel responses of different wavelengths in four types of water are shown in Figure 4. It can be seen from Figure 4a–d that the optimum transmission wavelength switches from 450 nm (blue) to 595 nm (red) when the water condition changes from pure to turbid harbor. Moreover, a clear multipath channel characteristic is observed due to heavy scattering, as illustrated in Figure 4d, which is consistent with the results in [17]. The theoretical analysis and impulse response results under different water conditions guide the design of the experimental system. We can select the optimal wavelength according to the water conditions to achieve the maximum data rate and the maximum transmission distance.
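As an illustration of the bookkeeping described above, the sketch below bins each detected photon's (reception probability × remaining weight) contribution by arrival time to form the impulse response, and sums the same contributions for the path loss. The refractive index and bin width are assumptions for illustration.

```python
# Speed of light in water, assuming a refractive index of ~1.33.
C_WATER = 3e8 / 1.33

def tally(events, bin_width, n_bins):
    """events: list of (path_length_m, reception_prob, weight) tuples, one per
    detected photon. Returns (impulse-response histogram, total path-loss sum)."""
    h = [0.0] * n_bins                       # impulse-response time bins
    total = 0.0                              # sum for the path loss
    for path_len, p_rx, w in events:
        t = path_len / C_WATER               # arrival time of this photon
        k = min(int(t / bin_width), n_bins - 1)
        contrib = p_rx * w
        h[k] += contrib
        total += contrib
    return h, total
```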
Figure 5 shows a schematic diagram of our experimental UOWC system using a blue LED source and PMT receiver (Hamamatsu, model CR315). An inclination angle of 5° is introduced to the transceiver, which causes huge attenuation, to build a non-line-of-sight link. All the signal processing modules are implemented offline in MATLAB. At the transmitter, a pseudo-random bit sequence (PRBS) is generated and then sampled by an arbitrary waveform generator (AWG) running at 10 MSa/s (1 Mbps), 20 MSa/s (2 Mbps), 50 MSa/s (5 Mbps), and 100 MSa/s (10 Mbps). Then, the baseband signals combined with a DC bias are injected into the LED. Compared with an LD, the LED-based transmitter needs neither strict alignment nor high emission power. A real-time oscilloscope is used to convert the analog signal into the digital domain. Simple digital signal processing (DSP) algorithms are applied at the receiving end, such as synchronization, decision, and BER calculation. The data length of each frame is 1151 bits, of which 127 bits are used for synchronization. We use multiple frames of information to increase the number of calculated bits. The number of effective bits used to calculate the BER was 20,718. To avoid synchronization problems, we increased the number of synchronization header bits.
Unlike the conventional waveform sampling amplitude demodulation method, the photoncounting pulse signals need to be judged. When the amplitude of the sampled pulse is above the decision threshold voltage (DTV) V D , one photon is counted. Final decisions on symbol "1" or "0" are made by the counted average values in each symbol. Thus, the BER value can be calculated according to the hard threshold n th . Some key parameters of the proposed UOWC system are summarized and listed in Table 3.
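The decision chain just described — pulse thresholding at the DTV V_D, per-symbol photon counting, and a hard count threshold n_th — can be sketched as follows; the threshold values in the test are placeholders, not the experiment's calibrated settings.

```python
def count_photons(samples, v_d):
    """Count one photon for each sampled pulse amplitude above the DTV V_D."""
    return sum(1 for v in samples if v > v_d)

def demodulate(symbol_windows, v_d, n_th):
    """Hard decision per symbol: compare the photon count in each symbol
    window against the hard threshold n_th."""
    return [1 if count_photons(win, v_d) >= n_th else 0
            for win in symbol_windows]

def ber(decided, reference):
    """Bit error rate between decided and reference bit sequences."""
    errors = sum(d != r for d, r in zip(decided, reference))
    return errors / len(reference)
```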
Experimental Setup and Parameters
Attenuation Coefficient Measurement
Water quality significantly impacts the BER performance. The PMT receiver is more sensitive to optical power than other light-sensitive devices such as an APD. Ambient light may annihilate signals. Thus, the experimental system should be thoroughly shaded with black nonreflective material. Our experimental channel was a 10 m long water tank with a volume of 3 m 3 . Light absorption and scattering in seawater are caused by inorganic salts and planktonic plants. Some previous studies have shown that a similar effect of aluminum hydroxide or magnesium hydroxide to seawater is observed on the light of particles [20]. In the experiment, we added different concentrations of aluminum hydroxide to pure water to simulate seawater with different degrees of turbidity, i.e., pure seawater, clean seawater, coastal seawater, and harbor seawater, characterized by the parameters of attenuation coefficients.
In the experiment, we could not directly measure the relationship between the attenuation coefficient and the aluminum hydroxide concentration due to the presence of an off-angle at the transmitter. A preliminary experiment was carried out using an LD with a very narrow divergence angle and a high-sensitivity optical power meter. Because of the reflection and absorption caused by the glass wall, we used Equation (21) to measure the relative attenuation coefficient. The results are shown in Figure 6a. We can see an approximately linear relationship between the aluminum hydroxide concentration and the attenuation coefficient. The parameter c is the measured attenuation coefficient, and c_0 is the attenuation coefficient of pure seawater with a value of 0.056 m⁻¹. The shaded tank was filled with pure water. Then, we added aluminum hydroxide powder to the water at a mass of 3 g each time and measured the ROP as P_c. Figure 6b shows the measured curve of the ROP as a function of the attenuation coefficient varying from 0.2 m⁻¹ to 1.3 m⁻¹ for different data rates. It can be seen from Figure 6b that the ROP was about −78 dBm for a 2 Mbps data rate at c = 1.3 m⁻¹, which means that a total loss of 99 dB was introduced (the launched optical power was 21 dBm). The values of ROP were calculated from the average number of experimentally counted photons according to Equation (22).
Measured BER Performance
In our experiment, we used a Hamamatsu PMT with a spectral response range from 300 nm to 650 nm as the receiver. The quantum efficiency of the PMT was 5%, and the typical dark count was 20 counts/sec. The number of photons counted in symbol "1" was contributed by the signal and the background light, while the photons counted in symbol "0" were caused by the background light and inter-symbol interference. An RZ code with a duty cycle of 0.7 was designed according to Equation (22), since the ROP can be maximized and a clock frequency component is included [21], where ξ is the quantum efficiency of PMT, h is Planck's constant, ν is the frequency of light, T b is the symbol duration, and n 1 and n 0 are the average numbers of photons contained in symbols "1" and "0".
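The conversion from an average photon count per symbol to ROP implied by these definitions can be sketched as below. The exact form of Equation (22) is not reproduced in the text, so this expression — P = n̄ h ν / (ξ T_b), converted to dBm — is an assumption built from the listed symbols.

```python
import math

H_PLANCK = 6.626e-34       # Planck's constant, J*s

def rop_dbm(n_avg, wavelength_m, t_b, xi):
    """ROP inferred from the average photon count n_avg per symbol of
    duration t_b, with PMT quantum efficiency xi (assumed form of Eq. (22))."""
    nu = 3e8 / wavelength_m            # optical frequency
    p_watt = n_avg * H_PLANCK * nu / (xi * t_b)
    return 10.0 * math.log10(p_watt / 1e-3)
```

With ~10 photons per 0.5 µs symbol at 450 nm and 5% quantum efficiency, this lands in the −70 dBm region discussed in the text.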
According to the measured results shown in Figure 7, when the number of received photons was less than 20, the measured data followed a relatively strict Poisson distribution since the PMT worked in the linear region. Upon increasing the number of photons to 40, the PMT was subjected to overexposure and worked in the nonlinear region, thus experiencing signal distortion [21]. In this condition, the distribution of the counted photons does not obey a strict Poisson distribution, as shown in Figure 7. The BER value can be calculated using Equation (23), where n_th is the hard-decision threshold [5]. We present the measured BER performance under different water conditions in Figure 8. As discussed before, when the number of received photons increases to around 20 (~−73 dBm), the number of received photons no longer obeys a Poisson distribution. In that case, the value of V_D should also be adjusted. In our experiment, the optimal values of V_D were obtained by minimizing the BER. As illustrated in Figure 6b, an ROP of −73 dBm corresponded to a 10 m underwater transmission with an attenuation coefficient of 0.8 m⁻¹. When the PMT worked in photon-counting mode (c > 0.8 m⁻¹), the number of photons in symbol "1" obeyed a strict Poisson distribution; thus, the DTV V_D was set to 2.5 mV. However, the measured BER performance worsened, especially for the 1 Mbps and 2 Mbps data rates, when the attenuation coefficient varied from 0.2 m⁻¹ to 0.8 m⁻¹ (saturation region of the PMT). With the adapted optimal value of V_D = 4.5 mV, error-free transmission at the 1 Mbps and 2 Mbps data rates was successfully achieved. The BER performance enhancement at the 5 Mbps data rate was not significant because, with an increasing signal baud rate, severe inter-symbol interference was introduced by the limited bandwidth of the LED.
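The hard-threshold BER for Poisson-distributed counts can be computed as below. Equation (23) is not reproduced in the text, so this is the standard expression for equiprobable OOK symbols — an assumption about its exact form.

```python
import math

def poisson_pmf(k, mean):
    """P(N = k) for a Poisson random variable with the given mean."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

def poisson_cdf(k, mean):
    """P(N <= k) for a Poisson random variable with the given mean."""
    return sum(poisson_pmf(i, mean) for i in range(k + 1))

def ber_poisson(n1, n0, n_th):
    """BER with hard threshold n_th: a miss is a count <= n_th given symbol
    '1' (mean n1); a false alarm is a count > n_th given symbol '0' (mean n0).
    Symbols are assumed equiprobable."""
    p_miss = poisson_cdf(n_th, n1)
    p_fa = 1.0 - poisson_cdf(n_th, n0)
    return 0.5 * (p_miss + p_fa)
```

Better separation between the two means drives the BER down, consistent with the sensitivity trends reported in Figure 8.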
Moreover, it can be concluded from Figure 8 that the receiver sensitivities of our proposed LED-UOWC system at the 1 Mbps, 2 Mbps, and 5 Mbps data rates were −76 dBm (1.08 m⁻¹), −74 dBm (0.92 m⁻¹), and −70 dBm (0.24 m⁻¹) at the 7% FEC limit of 3.8 × 10⁻³, respectively.
The Predicted Performance Based on the Proposed System
As illustrated in Figure 9, we further investigated the proposed system performance under the conditions of the first type of seawater (pure, c = 0.056 m⁻¹), the second type (clean, c = 0.151 m⁻¹), and the third type (coastal, c = 0.398 m⁻¹). According to the experimental results illustrated in Figure 4, the required ROP for 2 Mbps at the 7% FEC limit is −74 dBm. Using Equation (4) and the parameters in Table 1, the optical power distribution at the receiving plane within the receiver sensitivity of −74 dBm was established using the Lambertian model. Within receiving radii of 1.28 m, 0.62 m, and 0.29 m, the achievable distances were 83.5 m, 40.5 m, and 19.2 m for the first, second, and third types of seawater, respectively. The maximum transmission distances could be extended to 100 m, 46 m, and 21 m when the receiver was located at the center of the receiving plane, as depicted in Figure 10. With a transmitted electrical power of 10 W, the maximum distances were further increased to 134 m, 60 m, and 27 m. When c exceeded 0.92 m⁻¹ (ROP = −74 dBm), as shown in Figure 8, the BER performance of the 2 Mbps signal became worse than the 7% FEC limit. The calculated optical power based on Equation (22) was −74.37 dBm in this condition, which is consistent with the optical power distribution obtained by the Lambertian model, as shown in Figure 9. The experimental 2 Mbps data rate after 10 m could achieve a receiving area of π × 0.15² = 0.07 m². Thus, it is believed that our proposed long-reach UOWC system is capable of achieving an Mbps-scale data rate with an alignment-released configuration.
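The distance predictions above can be reproduced in spirit with a simple link-budget search: find the largest range at which the on-axis ROP still meets the receiver sensitivity. The aperture, Lambertian order, and geometry below are illustrative assumptions, not the paper's Table 1 values.

```python
import math

def on_axis_rop_dbm(p_t_dbm, c, z, m1=1.0, a_eff=1e-4):
    """On-axis received power: Lambertian spreading over z^2 plus exp(-c*z)
    water loss, expressed in dBm (illustrative parameters)."""
    gain = (m1 + 1.0) / (2.0 * math.pi) * a_eff / (z * z)
    loss_db = 10.0 * c * z * math.log10(math.e)   # exp(-c*z) expressed in dB
    return p_t_dbm + 10.0 * math.log10(gain) - loss_db

def max_distance(p_t_dbm, c, sensitivity_dbm, z_hi=500.0):
    """Bisection for the largest z with ROP >= receiver sensitivity (the ROP
    is monotonically decreasing in z, so bisection is valid)."""
    lo, hi = 0.01, z_hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if on_axis_rop_dbm(p_t_dbm, c, mid) >= sensitivity_dbm:
            lo = mid
        else:
            hi = mid
    return lo
```

As expected, the achievable distance shrinks sharply as the attenuation coefficient grows from pure to coastal seawater.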
Discussion
To build a long-range UOWC link or to propagate light through relatively turbid water, two factors need to be considered: (i) pointing and alignment, and (ii) multipath interference.
Pointing and Alignment
To maintain a reliable line-of-sight UOWC link using an LD source after long-distance transmission is very difficult, since the optical beam is quite narrow. In that case, pointing errors usually occur because of link misalignment. Using a beam spread function, the link misalignment model for a UOWC system can be expressed as follows [3]: where BSF(L, r) is the irradiance distribution at the receiver plane. Employing an LED source with a large beam size corresponds to a large receiving range; thus, the irradiance distribution at the receiver plane can be obtained more easily.
Multipath Interference
As illustrated in Figure 4d, a multipath interference effect is produced in an optically turbid harbor underwater channel after 8 m of transmission. For a given data rate, the effect of multipath interference eventually leads to time spreading and waveform distortion, thus degrading the BER performance due to inter-symbol interference. Thus, when designing a UOWC system, this issue should be taken into consideration. Fortunately, technologies such as channel equalization [22], adaptive optics, and spatial diversity [23] are capable of suppressing the interference.
Conclusions
In this paper, we demonstrated a high-sensitivity long-reach UOWC system using an LED and a PMT. An experiment was conducted to investigate the BER performance under different water turbidities. Several key factors were taken into consideration during the system design, such as symbol rates, symbol duty cycles, water conditions, PMT characteristics, and decision criteria. With the help of RZ-OOK modulation and a PMT receiver, we experimentally achieved receiver sensitivities of −76 dBm, −74 dBm, and −70 dBm for 1 Mbps, 2 Mbps, and 5 Mbps data rates over a 10 m underwater channel, respectively. A distance of more than 100 m is achievable for a 2 Mbps data rate in pure seawater at 1 W transmitted power.
Using a clicker question sequence to teach time-development in quantum mechanics
Research-validated clicker questions as instructional tools for formative assessment are relatively easy to implement and can provide effective scaffolding when developed and implemented in a sequence. We present findings from the implementation of a research-validated Clicker Question Sequence (CQS) on student understanding of the time-development of two-state quantum systems. This study was conducted in an advanced undergraduate quantum mechanics course. The effectiveness of the CQS was determined by evaluating students’ performance after traditional lecture-based instruction and comparing it to their performance after engaging with the CQS.
I. INTRODUCTION
The time-evolution of a quantum state is an important concept in quantum mechanics. Many fields of active research, including quantum computing, must contend with the dynamical behavior of quantum systems. Since it draws on prerequisite knowledge of quantum states and the Hamiltonian of the system, the concept can be challenging for students to grasp. At the advanced undergraduate level, time-evolution of a quantum state is introduced with a time-independent Hamiltonian Ĥ. The state as a function of time is then the solution to the time-dependent Schrödinger equation, i.e., iħ (∂/∂t)|Ψ(t)⟩ = Ĥ|Ψ(t)⟩, and is equivalent to applying the operator e^(−iĤt/ħ) to the initial state. Because the Hamiltonian governs the time-development of the state, the eigenstates of the Hamiltonian, i.e., the energy eigenstates or "stationary states," are special in that they simply acquire an overall time-dependent phase factor. For instance, given a Hamiltonian Ĥ = CŜ_z for a two-state system with a dimensionally-appropriate constant C, an initial state expressed in the energy eigenbasis as |Ψ(0)⟩ = a|↑⟩_z + b|↓⟩_z evolves to |Ψ(t)⟩ = a e^(−iCt/2)|↑⟩_z + b e^(iCt/2)|↓⟩_z. Here all notations are standard. If, however, the initial state is expressed in some other basis, one can obtain the state at time t by first re-expressing the initial state as a superposition of energy eigenstates before introducing the time-dependent phase factors to each term.
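The prescription in this paragraph — attach a phase e^(−iE_n t/ħ) to each energy-eigenbasis coefficient, converting to that basis first if necessary — can be sketched numerically. Here ħ = 1 for convenience, and the two-state labels are generic placeholders rather than the paper's notation.

```python
import cmath

def evolve(coeffs, energies, t, hbar=1.0):
    """Time-evolve a state written in the energy eigenbasis: each coefficient
    c_n becomes c_n * exp(-i E_n t / hbar)."""
    return [c * cmath.exp(-1j * e * t / hbar) for c, e in zip(coeffs, energies)]

def to_energy_basis(state, eigenvectors):
    """Project a state onto an orthonormal set of energy eigenvectors, so
    that evolve() can then be applied term by term."""
    return [sum(v[i].conjugate() * state[i] for i in range(len(state)))
            for v in eigenvectors]
```

Evolving a stationary state such as [1, 0] only multiplies it by an overall phase, so all probabilities stay fixed — exactly the special property the text highlights.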
To become proficient at determining the state at time t given an initial state in some basis, students must be adept at several different tasks. These include being able to recognize whether the given initial state is an eigenstate of the Hamiltonian; working in the energy eigenbasis and converting to this basis if the initial state is given in any other basis; and correctly applying the time-evolution operator. Students also must recognize that different energy eigenstates generally correspond to different eigenvalues. The convergence of all these challenges, as well as possible unfamiliarity with the meaning of the complex exponential itself, can place significant demands on students' cognitive resources. This may also obfuscate other consequences of the Hamiltonian playing a central role, e.g., the expectation value of any observable (that does not have any explicit time-dependence) does not depend on time in a stationary state.
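One consequence noted above — expectation values of time-independent observables are constant in a stationary state but generally oscillate in a superposition of energy eigenstates — can be checked directly. The 2×2 matrix and energies below are generic illustrative choices (ħ = 1), not values from the paper.

```python
import cmath

def expectation(state, op):
    """<psi|A|psi> for a 2-component complex state and a 2x2 Hermitian matrix."""
    ket = [op[i][0] * state[0] + op[i][1] * state[1] for i in range(2)]
    return sum(state[i].conjugate() * ket[i] for i in range(2)).real

def evolve(state, energies, t, hbar=1.0):
    """Attach the phase exp(-i E_n t / hbar) to each energy-basis coefficient."""
    return [c * cmath.exp(-1j * e * t / hbar) for c, e in zip(state, energies)]

# A spin-like observable written in the energy eigenbasis (hbar = 1 units).
SX = [[0.0, 0.5], [0.5, 0.0]]
```

For the stationary state [1, 0], ⟨S_x⟩ stays zero at all times; for an equal superposition it oscillates at the Bohr frequency (E_1 − E_2)/ħ.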
Prior research suggests that students in quantum mechanics courses often struggle with many common difficulties, including issues related to the time-development of a quantum state, but research-validated learning tools can effectively help students develop a robust knowledge structure. Quantum Interactive Learning Tutorials (QuILTs) have been developed, validated, and implemented on many topics in quantum mechanics, with encouraging results [19,38,39]. Similarly, clicker questions, first popularized by Mazur using his Peer Instruction method, are conceptual multiple-choice questions presented to a class for students to answer anonymously, individually first and then again after discussion with peers, and with immediate feedback. They have proven effective and are relatively easy to incorporate into a typical course, without the need to greatly restructure classroom activity or assignments [40,41]. When presented in sequences of validated questions that build on one another, they can systematically help students with a particular theme that they may be struggling with. Previously, such Clicker Question Sequences (CQS) have been developed, validated, and implemented on several key topics in quantum mechanics [42][43][44][45][46]. Here we discuss the development, validation, and implementation of a CQS focused on helping students learn the time-evolution of two-state quantum systems.
II. METHODOLOGY
The CQS targets upper-level students in junior-/senior-level quantum mechanics courses. The data presented here are from implementation in a mandatory junior-/senior-level course at a large research university, with sample size N = 29. To develop and validate this CQS, we took advantage of the learning objectives and goals of the QuILT on this topic that had previously been developed [38,39,45]. Taking inspiration from the validated pre- and post-tests intended for use with that QuILT, we made adjustments to questions to specifically address the time-development of two-state systems. Additional inspiration came from questions in other sequences, including those focused on time-development in the context of Larmor precession.
Additionally, we took advantage of much of the cognitive task analysis both from the expert and student perspectives (based upon interviews) and the scaffolding that had been incorporated in the aforementioned QuILT. We focused on condensing this material, to ensure that the CQS can be administered in class. To be strategic with regard to the available class time, we prioritized basic conceptual knowledge and specific consequences that students often find difficult, provided checkpoints at which instructors should discuss some broader themes related to the previous questions, and avoided burdensome calculations.
After we conceptualized the most important features of time-development of quantum states that students should know, we drafted questions and discussed them among ourselves many times to minimize unintended interpretations. We standardized terminology and sentence construction while simplifying both as much as possible to avoid causing cognitive overload for students. We also paid attention to the answer choices for each question. In some instances, after discussion amongst researchers, we revised the questions to make sure that students understood them unambiguously.
We aimed to address common stumbling blocks and emphasized key features that students may have missed in a typical lecture. The 13 questions in the CQS focused on four learning goals on the following topics: identifying the basic properties of the energy eigenstates or stationary states (CQS 1.X, 2 questions), transforming from an initial state to its time-evolved state (CQS 2.X, 5 questions), expressing a state in the energy eigenbasis before applying the time-evolution operator (CQS 3.X, 4 questions), and calculating the time-dependence of various observables' expectation values (CQS 4.X, 2 questions). We designed several questions specifically to address certain student difficulties that have previously been found [12,13,20]. Selected CQS questions, referenced in later sections, are reproduced below (answers in boldface, all notations being standard and familiar to students):
CQS 1.2 Consider a system with a Hamiltonian Ĥ = CŜ_x.
Which of the following initial states |ψ(t = 0)⟩ is a stationary state? I.
Choose all the correct statements about a system in the state |ψ(0)⟩.
Each measurement of a generic observable will return the same result, regardless of the time when the measurement is performed.
A. II only B. III only C. I and II only D. II and III only E. None of the above
CQS 3.1 Consider a system with a Hamiltonian Ĥ = CŜ_x.
Choose all the correct statements about a system in the state |ψ(0)⟩. I.
B. II only C. I and II only D. II and III only E. All of the above
CQS 3.2 Consider a system with a Hamiltonian Ĥ = CŜ_x.
Choose all of the following that are correct about a system in the state |ψ(0)⟩ = |+⟩ + |−⟩.
To find |ψ(t)⟩, we can write |ψ(0)⟩ as a linear superposition of energy eigenstates, and then attach a time-dependent phase factor with the appropriate energy to each term. A. I only B. III only C. I and II only D. I and III only E. All of the above
CQS 3.3 Consider a system with a Hamiltonian Ĥ = CŜ_x.
Choose the correct expression for the time-evolved state |ψ(t)⟩ given an initial state expressed in the energy eigenbasis. I. II. III. A. None of the above B. I only C. I and II only D. II and III only E. All of the above

Since the entire course was remote due to the COVID-19 pandemic, the CQS was administered during the online lectures as a Zoom poll while the instructor displayed the questions via the "Share Screen" function. The instructor allowed several minutes for students to vote before revealing the results, and some students had the opportunity to explain their responses before the instructor systematically discussed the different options. When a majority of students selected an option that involved alternative conceptions, the instructor would give a hint, ask students to vote again, and ask for volunteers to explain the reasoning behind their choices. In a typical classroom setting, students would have had easy access to one another to discuss their thinking in small groups, but this proved less feasible in the online instructional setting, where students or the instructor predominantly spoke to the whole class.
To determine the effectiveness of the CQS in helping students overcome these common difficulties, we developed and validated pre- and post-tests that had both questions taken directly from the CQS and other questions on topics covered in the CQS. The post-tests were a slightly modified version of the pre-tests, with some changes (e.g., eigenstates of one spin component being replaced by eigenstates of another) but otherwise remaining conceptually similar. Students were given the pre-test immediately following traditional lecture-based instruction on the topic. After administration of the CQS, which took place over the course of three lecture sessions, students were given the post-test. For both, they were given a 25-minute period at the end of the class session. Two researchers graded the pre-test and post-test, and after discussion converged on a rubric on which the inter-rater reliability was greater than 95%. Questions 3 and 4 were scored with 2 points split between answer and reasoning, and the remainder were all-or-nothing.
III. IN-CLASS IMPLEMENTATION RESULTS
In addition to examining the improvement from the pre-test to the post-test, we analyzed student performance on the clicker questions, including the attractiveness of the distractors. The pre-test and post-test results, as well as normalized gain [2] and effect sizes [47], are listed in Table I. The effect sizes for the six questions ranged from 0.45 to over 1, indicating conventionally medium to large effects.
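The two statistics reported in Table I are standard. As a sketch (the helper names are my own; Hake's normalized gain and Cohen's d with a pooled standard deviation are assumed, matching the conventions usually cited for [2] and [47]):

```python
from statistics import mean, stdev

def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain: fraction of the possible improvement realized."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

def cohens_d(pre_scores, post_scores) -> float:
    """Effect size: difference of means over the pooled standard deviation."""
    pooled_sd = ((stdev(pre_scores) ** 2 + stdev(post_scores) ** 2) / 2) ** 0.5
    return (mean(post_scores) - mean(pre_scores)) / pooled_sd

# Q1 from Table I: 28% correct on the pre-test, 72% on the post-test
print(round(normalized_gain(28, 72), 2))  # → 0.61
```

The Q1 numbers reproduce a gain of about 0.61, consistent with the "around 0.60" gains reported for the multiple-choice questions.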
Many of the common student difficulties were successfully addressed to varying degrees after students engaged with CQS, as follows.
Difficulties with general eigenstates vs. stationary states
The highest normalized gains were seen in the questions that probed students' knowledge of stationary states (Q1 and Q2). The energy eigenstates are stationary states, but students often remember "eigenstates" as stationary states without having recognized the importance of the "energy eigenstates" aspect. These difficulties could be exacerbated if students are shaky on the prerequisite linear algebra, without a clear grasp of what eigenstates and operators mathematically or conceptually are. Moreover, even if students are proficient with the linear algebra in the context of a math course, transferring that knowledge to the context of a quantum mechanics course can still be very challenging. As illustrated by CQS question 2.2, which also appeared as Q1 on the pre-test and post-test, students appeared to understand the distinction between generic eigenstates and energy eigenstates after the CQS. As seen in Table I, 28% of students answered this question correctly on the pre-test; 40% answered correctly during the CQS (not shown); and 72% answered correctly on the post-test, indicating substantial improvement. Additionally, on Q2, students also better recognized that a superposition of stationary states is not a stationary state, with a normalized gain of 0.64. With similar normalized gains and effect sizes, students also learned in Q5 that only the expectation value of energy does not vary with time in a non-stationary state.
Replacing the operator Ĥ with one eigenvalue
For CQS questions 2.2-2.5, students chose with substantial frequency an answer option that resulted in a single phase term involving energy instead of a sum of terms (distractor choice I in question 2.2 invokes this idea). This is at least partially due to students not being entirely comfortable with the notation or not understanding the role of the Hamiltonian. After CQS instruction, most students, when asked in the free-response Q3 on the post-test, correctly multiplied each energy eigenstate by a separate phase factor, with a normalized gain of 0.47 (see Table I).
Difficulties with change of basis
The CQS question 3.4, which asked students to change from the z-basis to the x-basis, had considerably lower performance than the preceding question 3.3, which had asked the reverse. This is at least partly due to students having less experience with the former transformation, as the latter is the predominant example used to introduce the idea of changing basis. The symmetry between the two cases may be obvious to more experienced problem solvers, but students needed the opportunity to reason through the basis change. Once students learned the importance of working in the appropriate basis, addressed in CQS questions 3.1-3.4, more students correctly answered the corresponding question (Q4) on the post-test, with normalized gain 0.51. We note that the performance on Q4, with an average of 71%, is a bit lower than that on Q3, 83%, as shown in Table I. Apart from whether the given state was already expressed in the appropriate basis, the two questions were identical. The lower performance on Q4 is likely due to forgetting the basis change or making a mistake in the process.
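For a spin-1/2 system the two transformations are formally symmetric. In the standard phase convention (a sketch, not reproduced from the CQS itself):

```latex
|\pm\rangle_x = \frac{1}{\sqrt{2}}\bigl(|+\rangle \pm |-\rangle\bigr),
\qquad
|\pm\rangle = \frac{1}{\sqrt{2}}\bigl(|+\rangle_x \pm |-\rangle_x\bigr),
```

so going from the z-basis to the x-basis uses exactly the same coefficients as the more familiar reverse direction, even though students typically practice only the latter.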
B. Difficulties that were less successfully addressed
Improvement on Q6 on the post-test was the weakest, as seen in Table I. The question appeared with small changes on the pre-test and post-test, and as CQS question 4.2. Although there was some improvement from the pre-test to the CQS question, evidence from the post-test suggests very little further improvement. Students must recognize that invariant probabilities of measurement outcomes imply static expectation values, though it appears that more scaffolding is needed to help students learn this concept. Moving forward, we would specifically include suggestions to encourage students to think of an expectation value as an average of a large number of measurements made on identically prepared systems. Students could also benefit from an additional discussion of Ehrenfest's theorem, giving them more tools with which to process these ideas [29].
C. Several examples of class discussion
A particular advantage of the CQS is that it provides opportunities for rich class discussions that can deepen student understanding. Following are examples of such discussions. Question 1.2 addressed the common incorrect belief that any superposition of stationary states is itself a stationary state.
Initially the correct answer was not even the most popular response. Without immediately giving a full explanation, the instructor noted that, since any state can be written as a superposition of stationary states, selecting this option would imply that every possible state is a stationary state. When the class was allowed to vote a second time, nearly 50% chose the correct answer.
On question 3.2, two students volunteered to explain how the time evolution of the state could not be simplified (expressed without the Hamiltonian operator) by remaining in the {|+⟩, |−⟩} basis, and could thus rule out option II. Question 4.2 asks about the time-dependence of the expectation value of an observable in a stationary state. Despite the instructor's hints, the distribution of answers remained nearly identical both times the polling was opened to students. While the students may not have been able to sufficiently parse the hints individually, it is likely that performance would have improved in a typical classroom setting, if students were given an opportunity to discuss the meaning of the hints and their consequences in small groups.
Opportunities to hold an overall class discussion about salient concepts such as these after students have voted are very important, but ensuring that instructors hold such discussions when they are recommended can be a challenge, especially because time is limited. We will continue to investigate ways to encourage such discussions via checkpoints between CQS questions, even in instances when the instructor may opt not to follow our suggestions verbatim.
IV. SUMMARY
Clicker question sequences can be effective when implemented alongside traditional classroom lectures. We developed, validated, and found encouraging results from implementation of a CQS on the topic of time-development in two-state systems. Post-test scores improved for every question following the administration of the CQS, with mostly uniform normalized gains of around 0.60 on the multiple-choice questions, and high performance on the open-ended questions that asked students to correctly apply the time-development concepts. Effect sizes throughout are conventionally large to medium: most were over 0.70. Students' performance was weakest on the questions on expectation values, but we believe that this can be improved through a more robust classroom discussion and more focus on this topic in the CQS itself.
We emphasize that this study was conducted in a remote learning context, and that these results may not transfer exactly to traditional classroom instruction contexts. We will investigate this further in the future.
Processing Topics from the Beneficial Cognitive Model in Partially and Over-Successful Persuasion Dialogues
A persuasion dialogue is a dialogue in which a conflict between agents with respect to their points of view arises at the beginning of the talk and the agents have the shared, global goal of resolving the conflict and at least one agent has the persuasive aim to convince the other party to accept an opposing point of view. I argue that the persuasive force of argument may have not only extreme values but also intermediate strength. That is, I wish to introduce two additional types of the effects of persuasion in addition to successful and unsuccessful ones (cf. Van Eemeren and Houtlosser in Argumentation 14(3):293–305, 2000; Advances in pragma-dialectics. Sic Sat, Amsterdam, 2002; Walton in A pragmatic theory of fallacy. University of Alabama Press, Tuscaloosa, 1995; Walton and Krabbe in Commitment in dialogue: basic concepts of interpersonal reasoning. State University of New York Press, Albany, New York, 1995). I propose a model which provides for modified versions of the standpoint of an agent needed in order to bring about two possible outcomes of a persuasion dialogue. These two outcomes I label partially-successful and over-successful. I call the potential, not yet verbalised, standpoint of an agent here the original topic t. Based on some aspects of relevance theory (Sperber and Wilson in Relevance: communication and cognition. Blackwell, Oxford, 1986; Wilson and Sperber in The handbook of pragmatics. Blackwell Publishing, Malden, 2006), I explain that the modified version of the original topic t is an implicature created from the original topic t and from a specific mental topic which belongs to, what I call the beneficial cognitive model (hence BCM). I define BCMi,t as a set of topics which are within the area of agent i’s interest of persuasion with respect to t.
Introduction
Actual communication practice is the point of departure for the model presented in this paper. The paper is rooted in the programme of the Polish School of Argumentation which is inspired by the pragma-linguistic and cognitive aspects (see Kopytko 2002; Cap 2010) of argument force in communication practice. This paper proposes to consider a specific type of dialogue called a persuasion dialogue in which two participants have opposing points of view on a certain issue. The notion of a point of view is defined here in pragma-dialectical terms and is described as ''a certain positive or negative position with respect to a proposition'' (van Eemeren 2001: 17). Participants in this specific type of dialogue act as proponent and opponent. A proponent of a particular point of view adopts a positive position with respect to a certain proposition. The opponent of the point of view challenges the positive position of the proponent or expresses a counter attitude to that position. If the opponent only questions the proponent's position, without defending a thesis of his own, then he becomes engaged in a non-mixed dispute. If the opponent expresses his own position with respect to a proposition, then he becomes involved in a mixed dispute (van Eemeren and Grootendorst 1992: 17). In this article, the proponent and the opponent are named agent i and agent i' respectively.
I define a persuasion dialogue as a dialogue in which a conflict between agents with respect to their points of view arises at the beginning of the talk and the agents have the shared, global goal of resolving the conflict. Furthermore, as part of this definition, at least one agent has the persuasive aim to convince the other party to accept an opposing point of view (cf. Walton 1995; Walton and Krabbe 1995). I claim that an agent has a persuasive aim if he is interested only in such an outcome of a dialogue in which his position wins. My perspective on persuasion relies on its socio-psychological definition which treats it as ''a successful intentional effort at influencing another's mental state through communication'' (O'Keefe 2002: 5).
The main aim of this paper is to introduce a supplementary model which distinguishes types of persuasion effects in addition to the ones discussed in the pragma-dialectical critical discussion (van Eemeren and Houtlosser 2002; van Eemeren 2009) and the Waltonian persuasion dialogue (1995; Walton and Krabbe 1995). Both of those approaches allow the analyst to identify two types of effects: fully successful persuasion and fully unsuccessful persuasion. They propose systems describing the course of a dialogue in which the standpoint of an agent is introduced in advance and is not changed during the dialogue. The term ''standpoint'' is considered here in pragma-dialectical terms. ''Standpoint'' is defined as ''(…) individual expression of someone's subjective opinion (…), a public statement put forward for acceptance by a listener or reader who is assumed not to share the speaker or writer's point of view'' (Houtlosser 2001: 31).
The model I propose here provides for modified versions of the standpoint of an agent during one dialogue and introduces two additional types of effects of persuasion: partially successful dialogue and over-successful dialogue (cf. Budzynska and Debowska 2010). 1 The normative reason for adding these two nuances of success is to give the proponent his due after he has partially or excessively made his case. I claim that a certain mental conception in the mind of an agent, which I call in this paper the original topic t, might become his standpoint when publicly expressed in a persuasion dialogue. 2 The secondary aim of the paper is to see how a certain modified version of the original topic t is generated in the mind of an agent and why that version of topic t has a decisive function in describing partially successful persuasion and over-successful persuasion. Based on some aspects of relevance theory (Sperber and Wilson 1986; Wilson 1994, 2000; Wilson and Sperber 2006), I explain that the modified version of the original topic t is an implicature created from the original topic t and from a specific mental topic which belongs to, what I call, the beneficial cognitive model (hence BCM). 3 I define BCMi,t as a set of ''beneficial'' topics with respect to t which are within the area of agent i's interest of persuasion.
The paper is structured as follows. Section 2 shows how the persuasive aim is considered in the models of the pragma-dialectical critical discussion and Waltonian persuasion dialogue, and what type of a criterion is provided by those models to evaluate the persuasive force of argument in terms of its successfulness. The paper also seeks to show that the criterion of acceptance or rejection of an agent's point of view by the other party does not apply to the assessment of dialogues in which types of persuasion other than those fully successful or fully unsuccessful are intuitively recognised. Section 3 elaborates on the notion of topic t and introduces the notion of the set of other topics Ti,t = {t1,…,tn} which helps agent i resolve the difference between his and the other party's point of view. Using the aspects of relevance theory, Sect. 3 shows how the set of other topics Ti,t = {t1,…,tn} is activated in the mind of agent i during a dialogue. Section 4 explains that the BCM is a subset of Ti,t, but involves only the topics which help agent i realise his persuasive intention. Section 5 shows that only the topics from BCMi,t might help to generate the modified version of topic t needed for obtaining partially-successful and over-successful persuasion dialogues. Finally, Sect. 6 discusses some selected features of partially-successful and over-successful persuasion dialogues.
Persuasive Aim in the Standard Models
The representatives of the pragma-dialectical school of argumentation (van Eemeren and Houtlosser 2002; van Eemeren 2009) and Walton (1995; Walton and Krabbe 1995) discuss the persuasive aim 4 of an agent and provide a criterion for its successful achievement. In the pragma-dialectical advanced model of a critical discussion, the persuasive aim is discussed in relation to the dialectical aim. Van Eemeren and Houtlosser (2000, 2002; van Eemeren 2009; see also Debowska et al. 2009: 122-123) introduce the concept of 'strategic manoeuvring' when discussing the employment of reasonable argumentation achieved by maintaining a balance between the simultaneous pursuit of the persuasive and dialectical aim. Pragma-dialecticians indicate that disputants may simultaneously pursue the persuasive aim of making the strongest case and the dialectical aim of the resolution of the difference of opinion. In pragma-dialectics, a persuasive aim is concerned with the intention of an individual agent to have his own point of view accepted. As van Eemeren and Houtlosser (2002: 15) emphasise, ''rhetorical considerations [in a critical discussion] relate to the contextual adjustment of argumentation to the people that are to be convinced''.
1 In (Budzynska and Debowska 2010), I discuss over-success in the case of dialogues with conflict resolution using the notions of 'degree of importance' and 'degree of acceptance'.
2 Clearly, a mental conception of topic t cannot be evaluated as true or false or acceptable or unacceptable because it functions only as a proposition under consideration. However, when it becomes verbalised in the form of a statement it might be evaluated in this way.
3 See (Budzynska and Witek 2014) for another example of a pragmatic approach to argumentation. While my approach relies on relevance theory, theirs relies on the theory of speech acts.
Walton (1995) and Walton and Krabbe (1995) discuss the persuasive aim of an agent within the model of a persuasion dialogue and relate it to the notion of commitments (cf. Kacprzak and Yaskorska, this issue). They indicate that the individual aim of each agent in this type of dialogue is ''to persuade others to take over its point of view'' (Walton and Krabbe 1995: 68). Walton (1995: 18-19) treats a critical discussion as a subspecies of a persuasion dialogue. As Walton (1995: 100) indicates, ''the critical discussion is a much more specific and precisely regulated type of dialogue [than a persuasion dialogue] that has all kinds of specific rules defining what a participant may or may not do at any given stage [of the dialogue]''. The Waltonian model centres on the commitments of the other party. Commitments are said to be ascribed to propositions when an agent publicly declares them as his beliefs, attitudes, intentions, plans, preferences, etc. (cf. Walton 1995; Searle 1970; Hamblin 1970; Katriel and Dascal 1989; van Eemeren and Grootendorst 1984, 2004; van Eemeren and Houtlosser 2004). Walton (1995) emphasises that an agent realises his persuasive aim by trying to determine what will successfully persuade the other party by tracking the other party's commitments.
In both the pragma-dialectical model of a critical discussion and the Waltonian model of a persuasion dialogue, the acceptance or rejection of an agent's point of view by the other party is a criterion for deciding whether persuasion has been successful or unsuccessful. If the opponent changes his point of view or his stance towards the proponent's thesis at the end of the dialogue, then persuasion is evaluated as successful for the proponent. If at the end of the dialogue the opponent does not change his point of view or his stance towards the proponent's thesis, then persuasion is evaluated as unsuccessful for the proponent.
Below I present two brief examples, dialogue 1 and 2, to show that the criterion provided by the standard models permits the identification of fully successful persuasion and fully unsuccessful persuasion but not the nuances of partially successful and over-successful persuasion. 5 Since dialogue 1 and 2 6 are to serve as quick ways of showing what I mean by partial and over-success, there is not too much elaboration of their argumentative content.
4 See (Castelfranchi and Paglieri 2007) for the cognitive processing of a goal in which beliefs and desires are perceived as pre-stages.
Consider dialogue 1 in which partially successful persuasion is intuitively observed:
John_1 Please lend me a 100 euro note.
Ann_2 No, I can't. I have got only a 50 euro note.
John_3 OK, in that case I can take 50 euros.
Ann_4 OK.
Intuitively, partially successful persuasion has been achieved in dialogue 1 because John has not persuaded Ann to lend him 100 euros, but he will get part of the amount he wanted. The proposition which John defends in move John_1 can be reconstructed as ''You should give me a 100 euro note''. Move John_1 can also be read as a full argument: ''You should give me a 100 euro note, because I need 100 euros and you possibly have a 100 euro note''. The proposition defended by John in move John_3 is ''You should give me a 50 euro note''. Turn 3 can also be read as a full argument: ''You should give me a 50 euro note, because I need 50 euros and you have a 50 euro note.'' Relying on standard models of conflict resolution we cannot, however, describe this type of dialogue as partially successful persuasion. Pragma-dialecticians would probably reconstruct the dialogue as a multiple dispute in which two standpoints are defended and one of these (''I think you should give me a 100 euro note'') is abandoned at some point, and the second one (i.e. ''I think you should give me a 50 euro note'') is won by John. From the Waltonian perspective, the dialogue might be considered a situation where John comes up with a new standpoint (i.e. ''I think you should give me a 50 euro note'') at some point in the dialogue, and instead of retracting the earlier thesis (i.e. ''You should give me a 100 euro note''), he starts defending the new thesis.
Even if only one standpoint were considered by pragma-dialecticians and Walton (i.e. ''I think you should give me a 100 euro note'') in the reconstruction of dialogue 1, then still the pragma-dialectical and Waltonian criterion of achieving success by convincing the other party to accept the opposing point of view would not be fulfilled. Ann has not been persuaded to give John a 100 euro note. Since John has not achieved his original aim of borrowing a 100 euro note, the dialogue would be evaluated as a fully unsuccessful persuasion. Still, it is not true that John has gained nothing.
5 Numerical representation of the degrees of achieving success has already been discussed by Budzynska and Kacprzak (2008, 2011), but this paper focuses on non-numerical representation.
6 Looking at the surface structure of dialogue 1 and 2, it is possible to point out in those dialogues the features of a negotiation dialogue and the features of a persuasion dialogue. Relying exclusively on the surface structure of dialogue 1 and 2 it is, therefore, possible to analyse those dialogues in terms of Walton's (1995) mixed dialogues. My aim at this point is, however, to use those dialogues as simple cases for showing what I mean by partial and over-success. Additionally, it should also be noticed that no conflict of interest is present in those dialogues; therefore, they could not be reconstructed as pure cases of a negotiation dialogue. It is the conflict of opinion which allows us to reconstruct those dialogues as a persuasion dialogue.
Intuitively, over-successful persuasion has been achieved in dialogue 2 because John has achieved more than he has wanted, in the sense that he convinced Ann of his initial thesis and, additionally, he convinced her of a thesis that seems to be closely related to his initial one. The proposition which John defends in move John_1 can be reconstructed as ''You should give me a 100 euro note''.
Move John_1 can also be read as a full argument: ''You should give me a 100 euro note, because I need 100 euros and you possibly have a 100 euro note''. The proposition defended by John in move John_3 is ''You should give me a 200 euro note''. Turn 3 can also be read as a full argument: ''You should give me a 200 euro note, because I need 200 euros and you have a 200 euro note.'' Using the standard models, we cannot, however, conclude that John has obtained more than he has expected. Again, pragma-dialecticians would probably reconstruct the dialogue as a multiple dispute in which two standpoints are defended and one of these (''I think you should give me a 100 euro note'') is abandoned at some point, and the second one (i.e. ''I think you should give me a 200 euro note'') is won by John. From the Waltonian perspective, it might be recognised as a situation where John comes up with a new standpoint (i.e. ''I think you should give me a 200 euro note'') at some point in the dialogue, and instead of retracting the earlier thesis (i.e. ''You should give me a 100 euro note''), he starts defending the new thesis. Even if only one standpoint were considered by pragma-dialecticians and Walton (i.e. ''I think you should give me a 100 euro note'') in the reconstruction of dialogue 2, then we could only evaluate whether or not Ann has been convinced to accept this initial topic, i.e., that she should give John 100 euros. In such a reconstruction, the criterion of achieving full success would be fulfilled and therefore the persuasion would be evaluated as fully successful rather than over-successful, even though John has gained more money than he has asked for.
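The four outcome types distinguished here can be contrasted with a toy comparison of what the proponent originally asked for and what he finally obtained. This is purely illustrative: the numeric threshold comparison and the function name are my own assumptions, not part of the formal model developed in this paper.

```python
def persuasion_outcome(target: float, obtained: float) -> str:
    """Classify a persuasion dialogue by comparing the proponent's
    original aim (target) with what the other party finally granted.
    Illustrative only: real standpoints are rarely one-dimensional."""
    if obtained == 0:
        return "unsuccessful"
    if obtained < target:
        return "partially successful"
    if obtained == target:
        return "fully successful"
    return "over-successful"

# Dialogue 1: John asks for 100 euros, Ann lends 50
print(persuasion_outcome(100, 50))   # → partially successful
# Dialogue 2: John asks for 100 euros and ends up with 200
print(persuasion_outcome(100, 200))  # → over-successful
```

The standard models collapse the middle two rows of this classification into ''unsuccessful'' and the last into ''fully successful'', which is exactly the limitation the BCM model is meant to address.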
Notions of Topic, Manifestness and Implicature
In this article, I define the term 'topic' as a mental conception in the mind of agent i or agent i' which might become any statement, e.g., an assertion, a question, a standpoint or an argument, when it is publicly expressed. Therefore, the term 'topic' does not relate here to the Aristotelian notion of topos (Aristotle 1955, 1959). I introduce here a distinction between topic t and other topics (i.e. t1, t2, t3, t4,…, tn) which might be activated in the mind of agent i during a persuasion dialogue (see e.g. Kacprzak and Yaskorska (2014, this issue) for the formal way of describing dialogues). Topic t relates throughout the paper to a potential standpoint. As mentioned in the introduction, the term 'standpoint' is considered here in pragma-dialectical terms. The term 'potential' is applied here to the description of the notion of standpoint because it does not refer to a verbalised notion. In other words, the term 'potential' means that topic t does not become a real standpoint until certain commitments are ascribed to it. Thus, topic t is considered a potential, not a real, standpoint until certain preferences or attitudes with reference to it are publicly declared by agent i.
It is assumed in the article that a certain mental topic from the set Ti,t needs to be activated in the mind of agent i to contribute to generating a modified version of topic t. Relying on elements of relevance theory introduced by Sperber and Wilson (1986; see also Wilson 1994, 2000; Sperber 2006; cf. Yus 2006; Walaszewska and Piskorska 2012), I describe below two stages needed for the activation of a certain mental topic in the mind of agent i. Two notions from relevance theory are used for the description of the stages: cognitive environment and manifestness. Subsequently, it is explained how an implicature is created in the mind of agent i after the activation of the topic from the set Ti,t.
The first stage relates to the expression of an utterance. Agent i needs to hear an utterance to activate a certain mental topic in his mind. According to relevance theory, every utterance communicates certain facts and assumptions. After hearing an utterance, agent i adds the facts and assumptions from the utterance to his cognitive environment. The relevance-theoretic notion of the cognitive environment pertains to the set of those facts and assumptions which the hearer possessed before the dialogue started and those which have become available to him during the dialogue. The cognitive environment involving the set of old facts and assumptions is treated as the integrated context which helps agent i better understand the new information. Particular pieces of information belonging to the cognitive environment of agent i are manifest to him in different degrees; e.g., some may be manifest to a degree weaker than being known or assumed. An advocate of relevance theory, Carston (2002: 378), defines the manifestness of an assumption to an individual as ''the degree to which an individual is capable of mentally representing an assumption and holding it as true or probably true at a given moment''. Agent i must accept a new assumption as true or probably true to adopt it and add it to his cognitive environment as a manifest assumption.
The second stage refers to the process of activation of a certain topic from the set Ti,t = {t1,…, tn} by a manifest assumption. Topics from the set Ti,t = {t1,…, tn} are assumed to be part of the cognitive environment and therefore to have a manifest status as well. Manifestness of a topic from the set Ti,t is defined here as the degree to which an agent i is capable of mentally representing the topic and accepting it as true or probably true. I argue that topic t and topics t1, t2, t3, t4,…, tn from the set Ti,t are already present in the cognitive environment of agent i before a dialogue starts or become part of it during the dialogue. The cognitive environment of agent i consists not only of the topics he is aware of, i.e. topics which he knows are advantageous to him because they help him resolve the conflict, but also of the topics he might become aware of during the dialogue (for example, when raised by the opposing agent i') if his cognitive abilities allow for it. 7 If the manifest assumption added to the cognitive environment is identical or similar to a certain manifest topic from the cognitive environment, then the manifest assumption activates the certain manifest topic in the mind of agent i.
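The two stages described above can be sketched in code. This is a toy model under loose assumptions: the function names, the word-overlap notion of "similarity", and the example strings are all illustrative, not anything proposed in the paper.

```python
def add_manifest_assumption(cognitive_env, assumption, accepted_as_true):
    """Stage 1: the assumption joins the cognitive environment only if the
    agent accepts it as true or probably true."""
    if accepted_as_true:
        cognitive_env.add(assumption)
    return cognitive_env

def activate_topic(assumption, topics):
    """Stage 2: the assumption activates the topic it is most similar to.
    Similarity is crudely modelled here as shared words (threshold: 2)."""
    words = set(assumption.split())
    best = max(topics, key=lambda t: len(words & set(t.split())))
    return best if len(words & set(best.split())) >= 2 else None

# Dialogue 1, John's side: hearing Ann's move creates a manifest assumption,
# which then activates the matching topic from T_{i,t}.
ce = {"John needs 100 euros"}                     # prior cognitive environment
ce = add_manifest_assumption(ce, "Ann will give 50 euros", True)
topics = ["obtain 50 euros", "obtain 100 euros"]  # part of T_{i,t}
activated = activate_topic("Ann will give 50 euros", topics)
```

On this crude similarity measure, the assumption about 50 euros shares two words with "obtain 50 euros" but only one with "obtain 100 euros", so the 50-euro topic is the one activated.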
In a persuasion dialogue, the first, most accessible interpretation of the expressed utterance, called an implicature, is created in the mind of agent i. This interpretation comes about through two processes: (1) the creation of a specific manifest assumption in the mind of agent i and (2) the activation of a topic from the set Ti,t in the mind of agent i. The fact that agent i is aware of his standpoint also contributes to the emergence of the implicature from the expressed utterance. Sperber and Wilson (1986: 194-195) define an implicature 8 as ''a contextual assumption or implication which a speaker, intending her utterance to be manifestly relevant, manifestly intended to make manifest to the hearer''. An important aspect of the process of drawing an implicature in a persuasion dialogue is that it helps agent i realise whether an expressed utterance works to his persuasive advantage. Section 4 shows which activated topics from the set Ti,t help agent i realise his persuasive intention.
Beneficial Cognitive Model (BCM) as Part of a Cognitive Environment
As explained in Sect. 3, the set Ti,t is a set of topics which are advantageous for agent i since they help agent i resolve the difference of opinion. In other words, the set of topics Ti,t = {t1,…,tn} consists of topics which help agent i resolve the conflict no matter whether he has a persuasive, collaborative or any other individual aim. I argue that the BCM is part of the set Ti,t. I define BCMi,t as a set of beneficial topics which help agent i resolve the conflict of opinion but only in his favour. In other words, topics from BCMi,t help agent i fulfil only his persuasive aim. The cognitive environment (i.e. CE) is thus a broader conception than the set Ti,t, and the set Ti,t is a broader conception than BCMi,t, i.e. BCMi,t ⊂ Ti,t ⊂ CE. 7 The set Ti,t = {t1,…, tn} considered in this paper might appear to be close to what van Eemeren et al. (1993) call disagreement space. Van Eemeren et al. (1993) discuss disagreement space by means of Searle's correctness conditions for speech acts. In contrast, the set Ti,t = {t1,…, tn} is not in any way concerned with the propriety of speech acts in a persuasion dialogue. The set Ti,t = {t1,…, tn} refers to a set of mental topics in the mind of a proponent which might become any speech acts when publicly expressed but need to belong to the proponent's interest of conflict resolution. 8 The Gricean view and relevance theory differ in their approaches to the number of stages needed for the recognition of an implicature. Grice (1975, 1989) presents a two-stage approach to implicature recognition: according to him, only after the literal meaning is decoded in the mind of the hearer of an utterance is the implicature communicated by the speaker recognised. Relevance theory rejects the Gricean view and proposes to perceive the process of implicature recognition as one stage.
According to relevance theory's Cognitive Principle of Relevance, ''human cognition tends to be geared to maximization of relevance'' (Wilson and Sperber 2006: 610). The principle states that a hearer attempts to maximize the relevance of an expressed utterance and thus considers it in a way which involves the least processing effort from him. Therefore, a hearer of an utterance arrives at its intended meaning ('an implicature') without prior processing of its literal meaning. Not all topics from a Ti,t are thus equally satisfying for agent i in terms of his persuasive aim. I propose to call the topics belonging only to the BCMi,t and therefore involving the satisfying, advantageous, salient and essential points for agent i in terms of his persuasive wants and desires prototype topics and the topics which are not advantageous for agent i in terms of his persuasive wants and desires radial topics. The set of topics Ti,t = {t1,…,tn} consists thus of prototype and radial topics. BCMi,t includes only prototype topics. The inspiration for the use of the terms 'prototype' and 'radial' comes from Lakoff (1987). The topics from a BCM are not, however, in any way concerned with Lakoff's idea of categorization of concepts having some universal features. Topics are prototype or radial in terms of individual gains of agent i during a persuasion dialogue involving a conflict of opinions.
Defining Partially and Over-Successful Persuasion Dialogues
In this section I propose definitions of a partially successful persuasion dialogue and an over-successful persuasion dialogue. The definitions are provided below:

Partially successful persuasion dialogue for topic t and agent i: a persuasion dialogue in which agent i and the opposing agent i' do not agree on the original topic t but agree on the version of topic t which is logically implied by the original topic t or pragmatically implicated by the implicit warrant of agent i's argument, and which is generated by the implicature arising from both t (i.e. the original standpoint of agent i) and the activated prototype topic from BCMi,t.

Over-successful persuasion dialogue for topic t and agent i: a persuasion dialogue in which agent i and the opposing agent i' agree both on the original standpoint and on the version of topic t generated by the implicature arising from both t (i.e. the original standpoint of agent i) and the activated prototype topic from BCMi,t. The version of topic t (the new standpoint) logically implies the truth of the original standpoint.
In the first definition, the pragmatic implication refers to the opponent's acceptance of the implicit warrant of the proponent's argument. The version of topic t discussed in the definitions is treated as a qualified standpoint because in the course of a dialogue it becomes a variation (a modified version) of the original standpoint on which the agents agree. In the case of a partially successful persuasion dialogue and an over-successful persuasion dialogue, the qualified standpoint is generated by an implicature arising from the original standpoint of agent i and a prototype topic from the BCM of agent i. In partially successful persuasion only the qualified standpoint needs to be accepted. In over-successful persuasion, both the original and the qualified standpoint need to be accepted.
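Read side by side, the two definitions reduce to a decision rule on which standpoints the opponent ends up accepting. The sketch below is illustrative; the flag names and the "fully successful" and "unsuccessful" labels for the remaining cases are assumptions filling in the standard outcomes, not part of the definitions.

```python
def classify(original_accepted, qualified_accepted):
    """Classify a persuasion dialogue for agent i by which standpoints
    the opponent accepts: the original topic t and/or its qualified version."""
    if original_accepted and qualified_accepted:
        return "over-successful"       # both original and qualified standpoint accepted
    if qualified_accepted:
        return "partially successful"  # only the qualified (modified) standpoint accepted
    if original_accepted:
        return "fully successful"      # standard full success on the original topic t
    return "unsuccessful"

# Dialogue 1: Ann accepts only the 50-euro version of John's request.
# Dialogue 2: Ann accepts the 200-euro version, which logically implies
# the original 100-euro request, so both standpoints count as accepted.
assert classify(False, True) == "partially successful"
assert classify(True, True) == "over-successful"
```

The ordering of the checks matters: over-success is tested first because it strictly contains the acceptance condition of partial success.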
In Sect. 2 it was indicated that dialogue 1 should be intuitively evaluated as a partially successful persuasion and dialogue 2 as an over-successful persuasion. Below it is shown how the new definitions provide for partially and over-successful persuasion. Consider dialogue 1 again, in which partially successful persuasion can be intuitively recognised. Let's say that ''Please lend me a 100 euro note'' is a verbal manifestation of topic t. In this dialogue, topic t becomes a standpoint in move John_1 because the positive attitude to borrowing a 100 euro note from Ann is publicly expressed. If the definition of partially successful persuasion is to apply in this case, John and Ann need to agree on the version of topic t generated by the implicature arising from the original standpoint of John and from a prototype topic from the BCM of John. The prototype topic needs to be activated in the mind of John by his manifest assumption. According to relevance theory, after move Ann_2 John creates in his mind the manifest assumption that Ann is willing to give up 50 euros to him. This manifest assumption is added to the cognitive environment of John and related to the BCM of John. If the manifest assumption agrees with a certain prototype topic from the BCM of John, then the prototype topic is activated. Assume that obtaining only 50 euros is a satisfying alternative for John, which means that it is a certain prototype topic belonging to the BCM of John. The manifest assumption that Ann is willing to give up 50 euros to John and the prototype topic that John wants to obtain 50 euros coincide with each other. The prototype topic activated by the manifest assumption is added to the information from the cognitive environment about the content of the standpoint of John and, in this way, the implicature is created in the mind of John that Ann should lend John a 50 euro note. The implicature is a version of the original topic t.
The agreement of both agents on the version of topic t is achieved in move Ann_4 and the persuasion is thus partially successful for John. 9 John will borrow less money but he will still get part of the amount he originally wanted. Let us now consider dialogue 2 in which over-successful persuasion is achieved by agent i: John_1 Please lend me 100 euros. Ann_2 No I can't. I have got only a 200 euro note. John_3 OK, in that case I can take 200 euros. Ann_4 OK.
Let's say that ''Please lend me a 100 euro note'' is a verbal manifestation of topic t. In this dialogue, topic t also becomes a standpoint in move John_1 because the positive attitude to borrowing 100 euros from Ann is publicly expressed. According to the definition of over-successful persuasion, John and Ann need to agree on the original standpoint and on the version of topic t generated by the implicature arising from the original standpoint of John and a prototype topic from the BCM of John. The prototype topic needs to be activated in the mind of John by his manifest assumption. According to relevance theory, after move Ann_2 John creates in his mind the manifest assumption that Ann wants to give him 200 euros. This manifest assumption is added to the cognitive environment of John and related to the BCM of John. If the manifest assumption agrees with a certain prototype topic from the BCM of John, then the prototype topic is activated. Assume that obtaining 200 euros is more satisfying for John than obtaining 100 euros. This means that the 200 euro acquisition is the certain prototype topic belonging to the BCM of John. 9 I agree with Paglieri and Castelfranchi (2010) that non-argumentative, extra-dialogical goals contribute to the final outcome of argumentation. Dialogical goals can only be a means for achieving extra-dialogical goals. For example, agent i might want agent i' to accept his standpoint ''I think you should give me a 100 euro note'' to achieve an extra-dialogical goal of having another opportunity to meet agent i' when giving him back the money. Successfulness of dialogical goals should thus be considered with reference to successfulness of extra-dialogical goals. This paper is to be perceived as a first step towards a more comprehensive view of the successfulness of an agent in a dialogue. Therefore, the focus here is on dialogical goals.
The manifest assumption that Ann wants to give John 200 euros and the prototype topic that John wants to obtain 200 euros concur. The activated prototype topic is added to the information from the cognitive environment about the content of the standpoint of John and, in this way, the implicature is created in the mind of John that Ann should lend him a 200 euro note. The implicature is a version of original topic t. The agreement of both agents on the original standpoint and the version of topic t is achieved in move Ann_4 since John will obtain what he originally wanted plus some extra money. Thus, the persuasion is over-successful.
As specified in the definition of partially successful persuasion, the direction-change of topic t might rest on logical implication or pragmatic implication. The pragmatic implication refers in the definition to the opponent's acceptance of the implicit warrant of the proponent's argument. In dialogue 1, the implicit warrant of the proponent's argument is accepted by the opponent. The implicit warrant of the reconstructed argument ''You should give me a 100 euro note, because I need 100 euros and you have a 100 euro note'' has the form ''If Ann has a particular amount of money, then (within reasonable bounds) she should lend it to John, if he needs it''. In dialogue 1, John has been able to employ this warrant successfully when advancing his second argument about the 50 euro note.
Consider dialogue 3 of partially successful persuasion which serves as an example explaining a direction-change of topic t based only on pragmatic implication: John_1 Let's go to the cinema. Ann_2 No, I don't feel like going to the cinema. But going to the theatre brings you closer to the culture as well. John_3 OK, let's go to the theatre. Ann_4 OK.
Assume that ''Let's go to the cinema'' is a verbal manifestation of topic t. In this dialogue, topic t of John becomes a standpoint in move John_1 because the positive attitude to going to the cinema is publicly expressed. Move John_1 can also be read as expressive of a full argument: ''We should go to the cinema because it will bring me closer to the culture''. The implicit warrant of the reconstructed argument has the form ''If a cultural place brings me closer to the culture, then we should go to a cultural place''. Ann, in move Ann_2, rejects John's proposal to go to the cinema and expresses an argument in favour of going to the theatre. Simultaneously, in move Ann_2 she refers to the implicit warrant of John's argument in move 1. To meet the definition of partially successful persuasion, John and Ann need to agree on the version of topic t generated by the implicature arising from the original standpoint of John and from a prototype topic from the BCM of John. Assume that going to the theatre is a satisfying and allowable alternative for John, which means that it is a certain prototype topic belonging to the BCM of John. The implicature arising from the original standpoint and move Ann_2 is that Ann and John should go to the theatre. The implicature is a version of the original topic t. The agreement of both agents on the version of topic t is achieved in move Ann_4, where Ann also accepts the implicit warrant of John's argument. The persuasion is thus partially successful.
Some Features of Partial and Over-Successful Persuasion Dialogues
In this section I will discuss two selected features of the partial and over-successful persuasion dialogues. The first one is concerned with the activation of a mental topic in the mind of agent i. The activation of a radial topic by a manifest assumption cannot contribute to partial and over-successful persuasion for agent i. The characteristic feature is thus the fact that only the activation of a prototype topic in the mind of agent i might lead to those types of effects. Thus, in the case of over-successful persuasion, obtaining more than agent i has expected does not simply mean that over-success has been achieved by agent i. Consider dialogue 2 again in which a prototype topic is assumed to be activated in the mind of John: Observe that only if obtaining 200 euros is a prototype topic from the BCM of John (not a radial topic which belongs only to set Tjohn,t) will the implicature arising from move Ann_2 contribute to the over-successful outcome of the dialogue. If obtaining 200 euros was not a satisfying alternative for John (e.g., it would be a problem for him to carry in his wallet more than 100 euros), then it would be treated as a radial topic which would help John and Ann resolve the difference of opinion but not in John's favour. John's persuasive intention would not be realised.
The activated prototype topic needs to rely on the original topic t and can change depending on a given dialogue. Assume that we have two dialogues to which the same Ti,t of agent i involving the same topics {t1,…, t10} pertains. Depending on the content of topic t and the nature of the conflict, agent i will not consider the same topics equally satisfying in these two verbal exchanges. In one verbal exchange, topics {t1, t2, t3} can be considered prototypical by agent i and other topics can be evaluated as radial. In a different verbal exchange, topics {t2, t10} can be considered prototypical by agent i and other topics can be treated as radial. Thus, in the first verbal exchange the implicature needs to emerge from topics {t1} or {t2} or {t3}, and in the second verbal exchange from either topic {t2} or topic {t10}, to produce a version of topic t needed for partial and over-success.
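The dialogue-dependent partition just described can be illustrated with plain sets. The topic labels t1,…,t10 and the two example partitions come from the text; the helper function and its name are illustrative.

```python
T = {f"t{k}" for k in range(1, 11)}   # T_{i,t} = {t1, ..., t10}
bcm_dialogue_A = {"t1", "t2", "t3"}   # prototype topics in the first exchange
bcm_dialogue_B = {"t2", "t10"}        # prototype topics in the second exchange

# The BCM for a given dialogue is always a subset of T_{i,t}.
assert bcm_dialogue_A <= T and bcm_dialogue_B <= T

def can_yield_success(activated_topic, bcm):
    """Only an activated prototype topic can produce the version of
    topic t needed for partial or over-success."""
    return activated_topic in bcm

# t2 is prototypical in both exchanges; t10 only in the second,
# so in the first exchange t10 is radial and cannot yield success.
assert can_yield_success("t2", bcm_dialogue_A)
assert can_yield_success("t10", bcm_dialogue_B)
assert not can_yield_success("t10", bcm_dialogue_A)
```

The radial topics of each exchange are simply the set differences, e.g. `T - bcm_dialogue_A`.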
The second feature pertains to the particular move in a dialogue which leads to the activation of a prototype topic. The activated prototype topic subsequently contributes to the generation of a version of topic t through an implicature. The significant characteristic of this move is that it need not be expressed by the opponent of agent i: the implicature might also arise after the prototype topic is activated by a move of the proponent of the original standpoint.
In dialogue 3 in the previous section, the prototype topic has been activated by a move of the opponent. In contrast, in dialogue 4 below, the prototype topic is activated by move John_3 of the proponent of the original standpoint: Dialogue 4 John_1 I have said many times that I want a nuclear power station to be built in Poland. Ann_2 I definitely don't want to have it in Poland. The government is considering building a nuclear power station in Żarnowiec or Choczewo. These are highly populated areas. If a nuclear power station exploded there, then it would pose a serious threat to the land on which the people live. John_3 But there are restricted areas in Poland far away from the populated areas. Ann_4 OK, so let's build the nuclear power station in these non-populated, desolate areas in Poland.
Let's say that ''I want a nuclear power station to be built in Poland'' is a verbal manifestation of topic t. In this dialogue, topic t of John becomes a standpoint in move John_1 because the positive attitude to building a nuclear power station in Poland is publicly expressed. In move Ann_2, Ann expresses her negative opinion of topic t and presents an argument justifying her point. Assume that building a nuclear power station in a non-populated area is an allowable alternative for John to building the nuclear power station anywhere in Poland, and is therefore a certain prototype topic belonging to the BCM of John. Assume that move John_3 of the proponent of the original standpoint leads to the activation of the prototype topic through the manifest fact that there are restricted areas in Poland far away from the populated areas. Thus, in this dialogue, the move of the proponent of the original standpoint, not the move of the opponent as was the case in dialogues 1-3, leads to the activation of the prototype topic in the proponent's mind.
Conclusion
The main aim of this paper was to offer a supplementary model which provides for two types of effects of persuasion: the partially successful dialogue and the over-successful dialogue. The proposed model introduces the notion of a modified version of the original topic t, which helps to define these two types of effects of the persuasive force of argument. It has been indicated in what way the three relevance-theoretic notions of cognitive environment, manifest assumption and implicature contribute to the explanation of the processes of the generation of the modified version of topic t. I have introduced what I have called the BCM. It comprises prototype topics which need to be activated by a manifest assumption to produce an implicature acting as the modified version of topic t. BCMi,t has been defined as a mental model belonging to the cognitive environment of agent i which consists of a set of prototype topics which help agent i resolve the conflict of opinion but only in his favour. It has been shown that the activation of a prototype topic from a BCMi,t by a move of agent i or his opponent provides for the discussion of the direction-change of topic t based on logical or pragmatic implication.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
"Linguistics"
] |
Beam Coupling Impedance Contribution of Flange Aperture Gaps: a Numerical Study for Elettra 2.0
The accurate analysis of any possible source of beam instability is mandatory for the design of a new particle accelerator, especially for high-current and ultra-low emittance synchrotrons. In the specific case of instabilities driven by the coupling between the charged particle beam and the electromagnetic field excited by the beam itself, the corresponding effect is estimated through the beam coupling impedance. The modeling of this effect is essential to perform a rigorous evaluation of the coupling impedance budget able to account for all devices present in the entire machine. To deal with this problem, this paper focuses on the estimation of the contribution of the joints lying between the different vacuum chamber sections, by performing a comparative numerical analysis that takes into account different aperture gaps between the flanges. The results point out the criticality of many small-impedance contributions that, added together, must be lower than a predefined impedance threshold.
Introduction
Operating for users since 1994, the existing third-generation Italian synchrotron radiation facility Elettra [1] is going to be replaced by Elettra 2.0 [2,3], an ultra-low emittance light source able to provide ultra-high-brilliance, coherent photon beams. To ensure that the performance of Elettra 2.0 is not affected by potential sources of beam instability, it is necessary to thoroughly examine the electromagnetic interaction between the circulating beam and its surrounding environment. This interaction can be evaluated through the wake field in the time domain and the beam coupling impedance in the frequency domain [4]. It is important to keep the overall machine impedance below a predetermined threshold to prevent any possible source of beam instability. This work focuses on estimating the contribution of the joints located between different sections of the vacuum chamber. This problem has also been addressed in other contexts, such as CERN-SPS, where RF contacts have been used [5,6], or PSI-SLS2, where zero-gap flanges have been chosen [7]. This paper describes a comparative numerical analysis of the impedance of two types of vacuum flanges, taking into account different gap thicknesses between them. The obtained results are exploited to discuss the impact of the different impedance contributions in the forthcoming development of Elettra 2.0.
Flange models
Two different types of flanges are considered in this paper.The first one is a Spigot Flange Lip (SFL) type, while the second one is a Spigot Flange Planar (SFP) type.
Mechanical model
The mechanical drawings of the flanges under evaluation are detailed in Figure 1. The main difference between the SFL and SFP types resides in the geometry of the resulting gap that separates the opposite sides of the vacuum joint: in the SFP case, the gap's volume is reduced with respect to the SFL's one. This is the result of the different geometry of the transition between the gasket housing and the rhomboidal vacuum chamber.
Electromagnetic model
A simplified electromagnetic (EM) model has been derived from the mechanical one to simulate the interaction between the charged particle beam and its surrounding environment by considering the short vacuum pipes, the opposite-facing flanges and the gasket. To simplify the structure, only the surfaces, volumes and materials interacting with the EM field of the charged particle beam have been taken into account. The correspondence between the mechanical and EM models is summarized in Table 1.
The basic EM models of the SFP and SFL are shown in Figure 2, where the gasket and the flanges are assumed to be of the same material, which is treated as the background material in the model. The gap G and the cavity depth C of the parasitic cavities formed by the opposite sides of the flanges are also illustrated. A direct comparison between the two shapes shows that the parasitic cavity volume of the SFL is larger than that of the SFP.
Electromagnetic simulation
Two sets of EM simulations are carried out resorting to CST Particle Studio by Dassault Systemes Simulia [8]. The first set aims to evaluate and compare the longitudinal impedance of the two types of flanges assuming the nominal geometries, while the second set focuses on the evaluation of the effects determined by the constructive tolerances and the parameter variations.
The transverse symmetry of both types of flanges allows one to exploit the symmetry with respect to the Y−Z and X−Z planes for the boundary conditions, thus enabling the simulation of just one-fourth of the actual EM structure. Moreover, in order to find a reasonable trade-off between the number of mesh cells, the convergence of the results, and the computational time, a suitable number of preliminary simulations is carried out by varying the mesh density. The large aspect ratio of the flanges under investigation is also implicitly taken into account by keeping the number of mesh cells oversized.
Flanges nominal dimensions
The nominal dimensions of the flanges are: The relativistic exciting Gaussian beam has bunch length σ = 4 mm in order to obtain an impedance estimation up to 25 GHz. The lossy metal considered as background is AISI 316L stainless steel, with an electric conductivity σ_316L = 1.35e6 S/m at room temperature. In order to evaluate the longitudinal impedance of both SFL and SFP, the exciting beam and the wakefield integration path are set on the longitudinal z-axis of the simulated structures, and the wake potentials are calculated by the Wakefield solver. In Figure 3 the SFL (red trace) and SFP (green trace) wake potentials appear overlapped.
A first qualitative comparison between the wake potential lengths and initial amplitudes, considered together with the shapes of the parasitic cavities (as depicted in Figure 2), suggests that the SFL cavity has a higher energy storage capability with respect to the SFP one. Performing some numerical analyses, both the broadband and narrowband (resonant) impedance contributions can be estimated, thus enabling a quantitative comparison between the SFL and SFP flange behavior in the frequency domain. Each narrowband impedance contribution is characterized by its resonant frequency f_r, its shunt resistance R_s (i.e. the amplitude of the real part of the complex impedance at the resonant frequency), and its quality factor Q. These values are summarized in Table 2 for the main longitudinal resonant mode of the investigated flanges.
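A narrowband contribution characterized by f_r, R_s and Q is conventionally described by the parallel-RLC resonator impedance model. The sketch below uses that standard model; the R_s and Q values are placeholders (the actual Table 2 entries are not reproduced in the text), while the resonant frequency matches the 4.8793 GHz gap resonance quoted later in the paper.

```python
def resonator_impedance(f, f_r, R_s, Q):
    """Standard resonator model: Z(f) = R_s / (1 + j*Q*(f/f_r - f_r/f)).
    At f = f_r the impedance is purely real and equals the shunt resistance R_s."""
    return R_s / (1 + 1j * Q * (f / f_r - f_r / f))

f_r = 4.8793e9          # Hz, resonant frequency from the text
R_s, Q = 100.0, 1000.0  # placeholder shunt resistance (ohm) and quality factor

# At resonance, Re(Z) peaks at R_s; slightly off resonance it drops sharply
# for a high-Q (narrowband) mode.
z_on = resonator_impedance(f_r, f_r, R_s, Q)
z_off = resonator_impedance(1.01 * f_r, f_r, R_s, Q)
```

With these placeholder values, `z_on` equals R_s exactly, while at 1% detuning the real part falls below 1% of R_s, illustrating why a high-Q mode appears as a narrow peak in the Re(Z) spectrum.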
The longitudinal analysis is then completed by calculating the normalized longitudinal impedances Z/n [9] (see Figure 4), where n = f/f_rev is the mode number, with f_rev denoting the revolution frequency of the accelerator. The wake loss factors (WLFs) for SFL and SFP are 4.83e-02 V/pC and 1.31e-02 V/pC, respectively. A comparison between the real parts of Z/n shows that the SFL value is almost 100 times higher than the SFP one, while the ratio of the WLFs is about 3.69.
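The quoted figures can be cross-checked directly. The WLF values come from the text; the revolution frequency f_rev used to illustrate the mode-number normalization is an assumed placeholder, since the Elettra 2.0 value is not given here.

```python
# WLF values quoted in the text (V/pC).
wlf_sfl = 4.83e-2
wlf_sfp = 1.31e-2
ratio = wlf_sfl / wlf_sfp          # should reproduce the quoted ~3.69

f_rev = 1.157e6                    # Hz, placeholder revolution frequency (assumed)

def mode_number(f):
    """Mode number n = f / f_rev used in the Z/n normalization."""
    return f / f_rev
```

Evaluating the ratio gives about 3.69, consistent with the text; the normalization simply rescales each frequency by the revolution frequency, so n = 1 at f = f_rev.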
Mechanical tolerances and parametric simulations
Starting from the previously presented EM analysis of the SFP nominal model, the variations of the longitudinal impedance for different geometric tolerances can now be evaluated. Assuming that only one parameter varies at a time, we can estimate the effects introduced by the manufacturing and assembly tolerances. The considered parameter variations and the corresponding effects can be listed and discussed as follows.
• The expansion of the gap G from 0.1 mm to 0.4 mm in steps of 0.1 mm determines an increase of both the main and the secondary peak amplitude of the real part of the longitudinal impedance, with a frequency shift toward higher values (Figure 5). The WLF increases too (Table 3).
• The increase of the gasket inner radius from 19.6 mm to 20.0 mm determines a growth of both the main and the secondary peak amplitude of the real part of the longitudinal impedance, with a frequency shift toward lower values. The WLF remains constant.
• The increase of the longitudinal length from 10 mm to 70 mm does not produce appreciable modifications of the real and imaginary parts of the longitudinal impedance. This is because the resonant field is trapped in the gap: its frequency (4.8793 GHz) is below the cutoff frequency of the vacuum pipe (7 GHz), so it does not couple to the pipe length.
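The trapped-mode argument in the last bullet amounts to a single comparison, sketched here with the two frequencies quoted in the text; the function name is illustrative.

```python
def is_trapped(f_resonance, f_cutoff):
    """A resonance below the pipe cutoff cannot propagate away along the pipe,
    so it stays localized at the gap and the pipe length does not matter."""
    return f_resonance < f_cutoff

f_res = 4.8793e9   # Hz, gap resonance frequency from the text
f_cut = 7e9        # Hz, vacuum-pipe cutoff frequency from the text
```

Since 4.8793 GHz < 7 GHz, the mode is trapped, which is why lengthening the pipe from 10 mm to 70 mm leaves the impedance essentially unchanged.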
Conclusion
The longitudinal normalized impedance and the wake loss factor are useful to provide an effective description of the EM interaction between the charged particle beam and its surroundings. Our simulations show that the normalized longitudinal impedance of the SFP flange type is one hundred times lower than that of the SFL one, thus suggesting the opportunity of avoiding the installation of the SFL type. Furthermore, thanks to the results of the parametric analysis on the SFP type, we have shown the importance of matching the geometric tolerance limit values for both the gap and the gasket radius. It is worth mentioning that the real part of the impedance is also related to RF heating, which could represent a serious issue, both in terms of cooling and of the extra RF power that the accelerating cavities have to deliver to the beam. In the near future, the longitudinal impedance, and consequently the RF heating, of the SFP-based vacuum joints could be further reduced by acting on the beam-flange coupling by: • optimizing the cavity geometry (lowering Q); • shielding the cavity aperture (RF fingers for surface currents).
Figure 1. Flanges mechanical drawings. On the left, the SFL type (2) and, on the right, the SFP type (3). The gasket (1) and the rhomboidal vacuum pipe (4) are also shown.
Figure 2. Electromagnetic model of the two flanges: 3D longitudinal cut views. The SFL type (left) and the SFP type (right).
Figure 5. Parametric dependence of Re(Z/n) on G.
Table 1. Correspondence between the mechanical and electromagnetic models.
* background: lossy metal; input and output apertures: open boundaries.
Table 2. R_s, Q and Re(Z/n) comparison between the SFL and SFP dominant resonances.
Table 3. Wake loss factor for varying gap G.
Generation flow in field theory and strings
Nontrivial strong dynamics often leads to the appearance of chiral composites. In phenomenological applications, these can either play the role of Standard Model particles or lift chiral exotics by partnering with them in mass terms. As a consequence, the RG flow may change the effective number of chiral generations, a phenomenon we call generation flow. We provide explicit constructions of globally consistent string models exhibiting generation flow. Since such constructions were misclassified in the traditional model searches, our results imply that more care than usually appreciated has to be taken when scanning string compactifications for realistic models.
Introduction
One of the curious features of the Standard Model (SM) of particle physics is the repetition of families. That is, the matter content of the SM comprises three copies of fermions carrying identical SM gauge quantum numbers. While the number of generations is generally arbitrary in field theoretic extensions of the SM, such as a Grand Unified Theory (GUT), in string theory it can be thought of as a prediction of any specific model or compactification. Hence, the number of generations is often used as one of the first selection filters applied in a search for promising string models. It is the purpose of this study to point out that non-perturbative field theoretic dynamics may modify the number of effective generations in the process of renormalization group (RG) flow. Thus, some additional care is required when counting the number of generations in candidates for ultraviolet (UV) completions of the SM, in particular in string models.
In this paper, we will concentrate on supersymmetric models both because it is convenient in the context of string model building and because the relevant non-perturbative dynamics is under qualitative and often quantitative control in such theories. As shown by Seiberg [1], non-perturbative effects can have a dramatic impact on gauge theories. In particular, due to confinement and duality, the degrees of freedom appropriate for describing infrared (IR) physics often differ considerably from the UV degrees of freedom. Throughout this paper, aiming at preserving the chirality of the SM (or its GUT completion), we consider confinement without chiral symmetry breaking (so-called s-confinement [2,3]). Since the low-energy degrees of freedom in these models are composites of the elementary fields, they usually transform in different representations of the unbroken global symmetry. When a subgroup of such global symmetry is identified with a GUT or the SM gauge group, a new, composite, chiral generation may emerge in the IR or, alternatively, an existing chiral generation may become massive. The first of these phenomena was initially used in [4,5] to construct realistic extensions of the minimal supersymmetric Standard Model with some of the third generation quarks and Higgs bosons arising as composites of strong dynamics. In this approach, which we will refer to as the Nelson-Strassler (NS) mechanism, the RG flow leads to the appearance of light chiral composites in the IR thus increasing the effective number of chiral generations. The NS mechanism may be modified in several fairly obvious ways. For example, some of the composites may acquire masses by mixing with elementary chiral fields, modifying the spectrum of light fields in the IR in nontrivial ways. When all of the composites acquire mass, the model is in the second regime which attracted attention more recently [6]. We will refer to the second phenomenon as the Razamat-Tong (RT) mechanism. 
Here all of the composites of strong dynamics acquire masses by partnering with elementary degrees of freedom and thus reduce the number of effective generations in the IR. As we will argue, these two mechanisms can be continuously connected by introducing mass terms for vector-like elementary fields, which are allowed to mix with the composites. When masses of vector-like fields are small while the mixing between elementary fields and composites is of order one, the theory flows to the RT limit where all the light fields are elementary. On the other hand, in the limit of large mass the vector-like elementary fields decouple, leaving massless composites behind. In this case, the theory flows to the NS limit where some light fields are composites. By varying the mass terms, one can interpolate between the two limits, and for intermediate values of the mass term some IR degrees of freedom will be partially composite. Furthermore, one has freedom to decouple any number of composites. In general, however, non-perturbative dynamics affects RG flow and modifies the effective number of chiral generations in the IR. We will refer to these phenomena as generation flow.
It is then natural to ask whether generation flow can occur in scenarios where the number of generations is predicted from other data. This is particularly relevant for string model building (cf. e.g. [7] for a review), where one obtains the SM generations from string compactifications. We will argue that generation flow indeed occurs in some globally consistent string models. In these constructions, the true number of generations in the IR description can differ from the tree-level value that one obtains at the compactification scale. Hence, a search for 3-generation models in string theory has to go beyond the tree-level analysis.
This paper is organized as follows. In Section 2, we review the RT mechanism of gapped chiral fermions. In Section 3, we construct models exhibiting generation flow towards a 3-generation theory with (a GUT completion of) the SM gauge group in the IR. Our first example is a 4 → 3 model based on the RT mechanism where all the IR degrees of freedom are elementary. We then construct a generalization of the 4 → 3 model where some of the third generation fields are composite. We point out that our construction is analogous to the NS mechanism [4,5]. This motivates us to build a 2 → 3 model with an upward generation flow. Furthermore, we discuss the stability of the chirally symmetric vacua in s-confining models under the deformations which induce generation flow. While such deformations may generally destabilize the vacua by non-perturbative dynamics (see [8] for a more detailed discussion), we argue that the chirally symmetric vacua survive in our models. In Section 4, we collect evidence for the existence of string models exhibiting generation flow by presenting explicit examples. Finally, Section 5 contains our conclusions.
s-confinement and gapped chiral fermions
We begin by briefly reviewing the dynamics of the supersymmetric gapped fermion models introduced in [6]. In the following we will take the approach of [8] to building models of chiral gapped fermions. This approach starts with SUSY QCD models that exhibit confinement without chiral symmetry breaking on a smooth moduli space [1]. For our purposes it is convenient to restrict attention to s-confinement in SU(2) s SUSY QCD with six chiral doublet superfields and thus an SU(6) chiral global symmetry. We review the dynamics of this model in Subsection 2.1. In Subsection 2.2, we discuss the deformation of the SUSY QCD required to arrive at the mass gap models of [6].
s-confining SU(2) s model
The model outlined above possesses SU(6) × SU(2) s symmetry, where SU(6) is a chiral global symmetry while SU(2) s is a strongly interacting s-confining gauge group. For future convenience we assign the quark superfields to the (6,2) representation of the symmetry group. The theory possesses a set of classical D-flat directions which can be parameterized either in terms of squark vacuum expectation values (VEVs) or in terms of gauge-invariant mesons, classically defined as M_ij ∼ Q_i Q_j/Λ, where we suppressed the contraction of SU(2) s color indices and the dynamical scale Λ of the quantum theory is introduced on dimensional grounds. The mesons M transform in the conjugate antisymmetric representation, the 15-bar, of the global SU(6) symmetry. However, since quark VEVs satisfy a set of algebraic identities, not all meson VEVs are independent. These classical constraints imply a set of relations between the mesons. One may implement these constraints in the composite description of the theory by postulating a dynamical superpotential, Equation (2). The moduli space parameterized by the mesons M together with the superpotential (2) coincides with the classical moduli space of the theory parameterized by quark VEVs satisfying the D-flatness conditions. It was shown in [1] that the classical moduli space of vacua remains unmodified quantum mechanically and the IR physics is described in terms of weakly interacting mesons with the superpotential (2). While the chiral global symmetry of this model is broken at a generic point on the moduli space, it remains unbroken at the origin, where the theory exhibits confinement without chiral symmetry breaking. This is precisely the vacuum we are interested in.
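For concreteness, the standard s-confinement data for SU(2) with six doublets can be written out as follows; this is a sketch using the text's normalization (conventions, in particular the Pfaffian prefactor, vary between references):

```latex
% mesons built from the six doublets Q_i, i = 1,...,6:
M_{ij} \sim \frac{Q_i Q_j}{\Lambda}\,, \qquad M \in \overline{\mathbf{15}} \text{ of } SU(6)\,,
% classical constraints among the meson VEVs (rank condition, \partial\,\mathrm{Pf}\,M/\partial M = 0):
\epsilon^{ijklmn} M_{ij} M_{kl} = 0\,,
% implemented in the confined description by the dynamical superpotential
W_{\mathrm{dyn}} \propto \mathrm{Pf}\, M
   = \tfrac{1}{48}\,\epsilon^{ijklmn} M_{ij} M_{kl} M_{mn}\,.
```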
Mass gap model
For phenomenological purposes we are interested in gauging the SU(6) global symmetry of the s-confining model discussed in the previous subsection (more precisely, we are interested in gauging a subgroup of SU(6), such as a GUT SU(5) or the SM group SU(3) × SU(2) × U(1)). To this end, one must introduce a set of spectator fields charged under SU(6) but not SU(2) s (so that the s-confining dynamics remains unaffected) to ensure cancellation of the cubic SU(6) anomaly. This can be achieved, for example, by introducing spectators that transform in representations of SU(6) conjugate to those of the elementary fields, i.e. by adding two spectators with quantum numbers given by (6-bar,1). Alternatively, one can introduce a single spectator S in an SU(6) representation conjugate to the one of the mesons, i.e. transforming as (15,1). In the former case, the theory remains chiral both in the UV and in the IR. This is because SU(2) s is not yet confined in the UV and the matter fields transform in chiral representations of the full SU(6) × SU(2) s symmetry, while the representations of the IR degrees of freedom are chiral under SU(6). In the latter case, however, the chiral properties of the model change as the theory flows from the UV to the IR. While the UV theory is clearly chiral, the IR degrees of freedom, the mesons M and spectators S, transform in conjugate representations and thus form a single vector-like representation. By choosing to cancel anomalies with the spectator S in the antisymmetric representation, we will be able to construct a model that flows from a gapless, chiral phase in the UV to a gapped phase in the IR.
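The anomaly bookkeeping behind both spectator choices is a short arithmetic check, sketched below with the standard cubic anomaly coefficients A(fund) = 1 and A(antisym) = N − 4 (the helper function, labels, and sign convention are ours, purely illustrative):

```python
# Cubic SU(N) anomaly bookkeeping for N = 6, using standard group-theory
# values: A(fundamental) = 1, A(antisymmetric) = N - 4. Conjugate irreps
# contribute with opposite sign.
N = 6

def anomaly(rep, conjugate=False, multiplicity=1):
    """Cubic anomaly contribution of `multiplicity` copies of an SU(N) irrep."""
    coeff = {"fund": 1, "antisym": N - 4}[rep]
    return multiplicity * (-coeff if conjugate else coeff)

# UV quarks (6, 2) of SU(6) x SU(2)_s: two SU(6) fundamentals
quarks = anomaly("fund", multiplicity=2)

# Option 1: two spectators in the anti-fundamental, (6-bar, 1)
spectators_fund = anomaly("fund", conjugate=True, multiplicity=2)

# Option 2: a single spectator in the conjugate antisymmetric irrep
spectator_antisym = anomaly("antisym", conjugate=True)

print(quarks + spectators_fund)     # 0: both options cancel the quark anomaly
print(quarks + spectator_antisym)   # 0
```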
Since the matter content in the IR is non-chiral, a mass term, SM, is allowed in the IR superpotential. In terms of the UV degrees of freedom, this mass term corresponds to a marginal operator, SQ^2. Thus, we deform the s-confining model by the tree-level superpotential (3), where the numerical coefficient c represents both an arbitrary Yukawa coupling y of the UV theory and the fact that the mass scale generated by confinement is not directly calculable. At this point one might be tempted to conclude that a mass gap develops in the chirally symmetric vacuum at the origin, while the rest of the moduli space is lifted by the equations of motion for S and M. However, while ultimately correct, this conclusion is somewhat premature. Indeed, while lifting the SU(2) s D-flat directions, the deformation (3) introduces new classical flat directions, those parameterized by the SU(2) s singlets S. Since any VEV for S would break the chiral symmetry, it is important to verify that the non-perturbative dynamical superpotential (2) does not destabilize these directions. A careful analysis [8] of the full superpotential in (2) and (3) demonstrates that the SU(2) s dynamics generates an effective superpotential for the gauge singlets S, stabilizing them at the origin. While referring the reader to [8] for the full analysis, we present a simple argument here. Consider the theory at large S, where all quark superfields become heavy. In this region of the moduli space the low-energy physics is described by a pure super-Yang-Mills (SYM) SU(2) s theory whose dynamical scale is set by S. The dynamics of the low-energy SYM in turn generates a gaugino condensate, implying the existence of an effective superpotential for S. It is easy to see that this superpotential stabilizes S near the origin. The main lesson we learn from this example is that the RG flow may change the chiral properties of the theory and, in particular, may change the number of chiral generations.
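The scale matching behind this argument can be sketched as follows (schematic, with O(1) factors dropped; the careful treatment is in [8]):

```latex
% At large S the deformation c\,S Q^2 makes all six doublets heavy,
% leaving pure SU(2)_s SYM. Matching the holomorphic scales gives
\Lambda_L^{6} \sim y^{3}\,\mathrm{Pf}(S)\,\Lambda^{3}\,,
% and gaugino condensation in the low-energy SYM generates
W_{\mathrm{eff}} \sim \Lambda_L^{3}
   \sim \bigl(y^{3}\,\mathrm{Pf}(S)\,\Lambda^{3}\bigr)^{1/2}\,.
% Since W_eff grows as S^{3/2}, its derivative ~ S^{1/2} vanishes only at
% the origin: the singlets sit at S = 0 and the chiral symmetry is preserved.
```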
Here we define a chiral generation as a field transforming in an antisymmetric representation of the chiral symmetry, accompanied by an appropriate number of fields in an antifundamental representation as required by the anomaly cancellation conditions. The net number of generations is then given by the difference between the number of fields in the antisymmetric representation and in the conjugate antisymmetric representation, ν = n − n-bar. For example, in our example with SU(6) chiral symmetry the number of generations is the number of 15's minus the number of 15-bars. This definition is chosen such that it can be used throughout this study, and it coincides with what one calls a generation in SU(5) GUTs. From the SU(6) perspective, our UV model is a one-generation model containing an antisymmetric, 15, and two antifundamentals, 6-bar, of SU(6). On the other hand, the IR theory has no massless chiral superfields even though the chiral symmetry remains unbroken.
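This counting rule amounts to a tiny piece of bookkeeping; a minimal sketch in which the spectrum encoding and labels are ours, purely illustrative:

```python
# Net number of chiral generations: nu = n(antisym) - n(conjugate antisym),
# here for the SU(6) example with 15 / 15-bar encoded as strings.
def net_generations(spectrum):
    """spectrum: list of (irrep label, multiplicity) pairs."""
    n = sum(m for rep, m in spectrum if rep == "15")
    nbar = sum(m for rep, m in spectrum if rep == "15bar")
    return n - nbar

# UV: one antisymmetric 15 plus two antifundamentals -> one generation
uv = [("15", 1), ("6bar", 2)]
# IR: mesons (15-bar) and spectator (15) pair up and become massive -> nothing light
ir = []

print(net_generations(uv))  # 1
print(net_generations(ir))  # 0
```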
While the construction of [6] decreases the number of chiral generations in the IR, we will show in the following section that non-perturbative dynamics may also lead to an increase in the number of chiral generations. As we will see, the existence of generation flow offers immense opportunities for model building both in field theory (Section 3) and string theory (Section 4).
Generation flows in GUTs
The supersymmetric gapped fermion model reviewed in the previous section is based on an SU(2) s s-confining theory with SU(6) global symmetry. Generalizations to s-confining Sp(2N) with SU(2N + 4) global symmetry are straightforward [6]. However, for phenomenological purposes one is interested in similar models with SU(5) or SU(3) × SU(2) × U(1) global symmetry, which can then be identified with the GUT or the SM gauge group. As shown in [6], this can be achieved simply by considering the model of Section 2.2 and identifying the GUT or SM gauge group with an appropriate subgroup of SU(6).
For example, one can construct a one-generation SU(5) × SU(2) s theory which behaves as a pure SU(5) SYM theory in the IR. The tree-level superpotential (3) and the dynamical superpotential (2) can easily be written in the SU(5) language. One can verify that the UV description corresponds to a one-generation model complemented by a single vector-like flavor in a fundamental representation. As we learned in Section 2, the s-confining dynamics leads to a unique ground state with an unbroken chiral symmetry and no light matter fields.
We are now ready to generalize the mass gap construction of RT [6] to obtain models where the number of chiral generations is changed through renormalization group flow but remains nonzero both in the UV and the IR. As we will see shortly, the RG flow may lead both to an increase and a decrease in the effective number of chiral generations. The latter can be achieved in two ways. In the first approach, as in the model of Section 2, some of the chiral elementary fields acquire masses by partnering with the chiral composites generated by confining dynamics. As a result, all the massless degrees of freedom in the IR are elementary fields of the theory. Just like in the model of Section 2, the chirally symmetric vacuum is a unique ground state of this theory. The second approach is reminiscent of the construction first introduced in [4,5]. In this approach, some of the massless fields in the IR are composites even as other composites may become massive. Generically, models in this class retain the quantum moduli space and only one vacuum on this moduli space is chirally symmetric. Since IR degrees of freedom, including the massless composites, are to be identified with the SM multiplets, the motion along this moduli space is equivalent to motion along D-flat directions of a GUT or the SM. Note that the mechanism utilized in the second approach may also lead to an increase in the effective number of generations.
4 → 3 generation flow
We can now detail our general observations by building an explicit model of downward generation flow. Let us start with a more straightforward example, where the number of chiral generations decreases in the IR while all the composites are heavy. In particular, we construct a 4 → 3 model, i.e. a model containing 4 generations in the UV and 3 generations in the IR. The matter fields of the model and their quantum numbers are presented in Table 1a. Note that this matter content comprises the fields appearing in (5) complemented by three chiral flavors of SU(5), i.e. three copies of T ⊕ F-bar. Thus, this is a four-generation model. It is easy to see that the SU(2) s dynamics is not affected by the introduction of the additional chiral multiplets as long as one linear combination of the T_i's has the Yukawa coupling with F′ and φ that is implied by the superpotential (3). Indeed, at low energies the SU(2) s charged fields confine into T-bar ∼ F′F′/Λ and F-bar ∼ F′φ/Λ. The transformation properties of the IR degrees of freedom are given in Table 1b. Finally, in the IR the superpotential (3) behaves like a mass term pairing the composites F-bar and T-bar with F and one copy of T, respectively. Repeating the analysis of Section 2.2 one concludes that the classical flat directions parameterized by F and T are stabilized non-perturbatively.
Let us consider a generalization, noting that the symmetries of the model allow a mass term for the vector-like pair F ⊕ F-bar. With this mass term, the full UV superpotential is augmented by m F F-bar. Note that the additional mass term and unequal Yukawa couplings, y_1 ≠ y_2, explicitly break the SU(6) symmetry. Neither F nor F-bar is charged under SU(2) s, thus the confined spectrum of the model (Table 1b) does not change. In the IR, the superpotential becomes a sum whose first term is the s-confining superpotential of Equation (2). A simple analysis shows that in the presence of the mass term the model possesses a quantum moduli space. While at a generic point on the moduli space the chiral SU(5) symmetry is broken, the s-confining vacuum where one generation acquires a mass survives at F = F-bar = 0. This leaves three light generations, two made up entirely of elementary fields and another in which the 5-bar is a linear combination of the elementary F-bar and the composite 5-bar. This lays out two interesting limits. In the limit m → 0, the light generations are entirely composed of elementary fields, F-bar = 0, and the chirally symmetric vacuum is stabilized as in Section 2.2. We refer to this as the RT limit because all composite fields decouple. In the limit m → ∞, one of the three light generations has a composite 5-bar. We refer to this limit as the NS limit due to the appearance of light composite fields. At finite mass, there is a flat direction which can be parameterized by F-bar. For the purposes of phenomenology, F-bar would play the role of a SM multiplet; motion along the moduli space of this model corresponds to motion along D-flat directions of a GUT (or the SM).
Table 2: Summary of the SU(5) × SU(2) s quantum numbers of the chiral superfield content of the 2 → 3 model.
2 → 3 generation flow
The NS limit of the model discussed above resulted in a theory with a composite 5-bar, while the number of 10's (i.e. the number of generations) was smaller in the IR. On the other hand, the original models of [4,5] had a composite 10 in the IR, thus increasing the number of generations. That construction can be interpreted as an upward generation flow. Let us discuss a variation of that model where the starting point of the RG flow contains two chiral generations while the end point in the IR has three chiral generations, i.e. a 2 → 3 model.
Once again we consider a model with the symmetry group SU(5) × SU(2) s, whose matter content and charges are given in Table 2a. The tree-level superpotential is written in terms of the UV degrees of freedom. When the non-perturbative dynamics is included, one obtains the IR superpotential, where T ∼ F′F′/Λ and F ∼ F′φ/Λ. It is convenient to analyze the behavior of this superpotential by going along a flat direction parameterized by F. Without loss of generality we can assume that the VEV of F lives in a single component, say F_5. At large VEV, the global symmetry is broken from SU(5) to SU(4), and one pair of doublets, the one corresponding to the F_5 meson, becomes heavy and can be integrated out. Along this flat direction the superpotential reduces to a Pfaffian over the light mesons, where the prime on the Pfaffian indicates that it is taken only over the light mesons comprising a 6-plet of the remaining SU(4) symmetry. Note that at this stage F_5 is not a dynamical field, since it is a meson made out of heavy doublets. At the same time, the F_5 VEV remains arbitrary, albeit related to the T VEVs by the F_5 equation of motion. Upon careful inspection of (11) and (12), one notices that they correspond to the superpotential and one of the equations of motion of a four-doublet theory with a deformed moduli space, a dynamical scale Λ_L^6 = F_5 Λ^5, and the meson F_5 playing the role of a Lagrange multiplier. We see that for each nonvanishing value of F_5 the effective theory possesses a quantum deformed moduli space, i.e. it exhibits confinement with chiral symmetry breaking. Furthermore, the scale of chiral symmetry breaking is parameterized by F_5. While the effective description in terms of the four-doublet theory is only valid at large F_5, the solution of the F_5 equation of motion is valid everywhere on the quantum moduli space, up to an SU(5) symmetry transformation. In particular, the chirally symmetric vacuum Pf′ T = F_5 = 0 belongs to the quantum moduli space.
Note that the models introduced in this section differ in their quantum moduli spaces and their low-energy spectra. In the RT limit of the 4 → 3 model, there is a unique, s-confining vacuum. All composite degrees of freedom become massive via the RT mechanism, and there are three light generations made out of the elementary fields. In the 2 → 3 model and the NS limit of the 4 → 3 model, there remains a quantum moduli space of vacua, parameterized by the VEV of F or F-bar respectively, which includes the chirally symmetric vacuum. In the 2 → 3 model, one of the three light generations contains a composite 10, while at finite mass the 4 → 3 model has a 5-bar which is partially composite and partially elementary.
In the following sections, we will show how these models can arise naturally in string model building, providing examples of phenomenologically viable string models which would have previously been ruled out by the tree-level analysis of the models.
What does generation flow mean for string models?
In string phenomenology, one tries to connect string theory to the real world (cf. e.g. [7]). In practice, this often amounts to searching for a string compactification which reproduces the SM in its low-energy limit. When constructing a string model, one chooses a framework, such as one of the perturbative string theories, and compactifies it down to four dimensions. The step of compactification consists of making an assumption on the geometry of the compact dimensions (in principle one must also show that the emerging setup is stable, i.e. that the string moduli describing the size and shape of the compact space are stabilized). However, attempts to build realistic models often fail already at an earlier stage because the zero-modes do not comprise the SM matter. This could mean that one has chiral exotics, or just not the right number of generations. It is the latter possibility where generation flow, as discussed in Section 3, can be important (see footnote 9). In practice, when determining the number of generations, one looks at the tree-level predictions. However, as discussed in Sections 2 and 3, the number of generations obtained this way may differ from the true number of chiral generations in the low-energy effective theory (see footnote 10). It is therefore interesting to study the question to which extent models of the type discussed earlier can be obtained from string theory.
It is not the purpose of the present paper to construct a fully realistic model exhibiting generation flow. Rather, we will collect evidence for the existence of such models. To keep our discussion simple, and in order to relate our findings to Section 3, we will look for SU(5) models rather than models with SM gauge group. However, we expect that the results carry over to models with the SM gauge group after compactification.
Model scan
In what follows, we focus on orbifold compactifications of the (E8 × E8′) heterotic string [13,14], which can be efficiently constructed with the orbifolder [15]. We will collect evidence for the existence of globally consistent string compactifications that have either two or four generations of SM matter at tree level, but in fact have three generations in their low-energy effective description. That is, we will present evidence for the existence of stringy versions of the 4 → 3 and 2 → 3 models discussed in Section 3.
Footnote 9: It is conceivable that more generally chiral exotics can be removed along the lines of Section 2 (cf. [9] for an example). It will be interesting to work out the detailed conditions for this to happen.
Footnote 10: It is known that chirality-changing phase transitions can occur in string compactifications [10-12]. In this work we focus on generation flow that can be understood in terms of field-theoretic supersymmetric gauge dynamics with an s-confining SU(2) s as in Sections 2 and 3. It will be interesting to see whether there is a deeper relation between these phenomena.
The orbifolder allows us to compute a 4D model from certain input data, which comprise the geometry of the orbifold and the so-called gauge embedding. The latter essentially describes how the geometric operations of the 6D space-like compact dimensions act on the E8 × E8′ lattice. This determines not only the residual gauge symmetry of the model but also the spectrum. In more detail, the orbifolder provides us with the continuous and discrete gauge symmetries after compactification as well as the chiral spectrum of the model. By using the orbifolder, we obtained a large sample of supersymmetric heterotic orbifold models with the following properties:
• orbifold geometry Z2 × Z4 (1,1) (see [16] for the notation, and [17] for details of the geometry);
• 4D gauge group G_4D ⊃ SU(5) × SU(2) s (where we labeled the second factor "s" to indicate that this SU(2) plays the same role as in our earlier discussion in Sections 2 and 3);
• the SU(5) and SU(2) s gauge groups emerge each from a different E8 factor of the original heterotic string;
• a net number of n SU(5) GUT generations, with no representation (10, 2) but at least one representation (5, 2) or (5-bar, 2);
• at least one "flavon" field transforming as (1, 2); other fields of this type could in principle be decoupled from low energies;
• a (large) number of SU(5) × SU(2) s singlets;
• additional non-Abelian gauge factors under which the SU(5) charged fields are singlets; and
• additional U(1) factors which can be broken along D-flat directions without breaking SU(5) × SU(2) s.
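A hypothetical post-processing filter illustrating these selection criteria might look as follows; the spectrum encoding, function name, and example entries are ours and do not reflect the orbifolder's actual output format:

```python
# Illustrative tree-level filter over a spectrum encoded as a dict mapping
# (SU(5) irrep, SU(2)_s irrep) -> multiplicity. Bars are spelled "bar".
def passes_scan(spectrum, n_generations):
    net_gen = spectrum.get(("10", "1"), 0) - spectrum.get(("10bar", "1"), 0)
    has_10_doublet = spectrum.get(("10", "2"), 0) > 0        # excluded by the scan
    has_5_doublet = (spectrum.get(("5", "2"), 0)
                     + spectrum.get(("5bar", "2"), 0)) > 0   # required
    has_flavon = spectrum.get(("1", "2"), 0) > 0             # at least one (1, 2)
    return (net_gen == n_generations and not has_10_doublet
            and has_5_doublet and has_flavon)

# A caricature of a tree-level 2-generation spectrum (made-up multiplicities):
example = {("10", "1"): 2, ("5bar", "1"): 4, ("5", "2"): 1, ("1", "2"): 1}
print(passes_scan(example, n_generations=2))  # True
print(passes_scan(example, n_generations=3))  # False
```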
Our scan yielded several models in which s-confinement can change the number of chiral representations.
Models
Rather than providing the reader with an extensive survey, we focus on two sample models defined in the Appendix. In more detail, we discuss
• a 4 → 3 model (cf. Table 3a) in which the 4th chiral generation acquires a mass and decouples through the strong SU(2) s dynamics, and
• a 2 → 3 model (cf. Table 3b) in which the 3rd chiral generation emerges from states that are vector-like under SU(5) through a variant of the RT effect, in which a chiral 10 ⊕ 5 arises as a composite of (5, 2) ⊕ (1, 2) ⊕ 2(5, 1).
Both models have the virtue that the SU(5) and SU(2) s factors come from different E8's. Consequently, SU(2) s can naturally be more strongly coupled than SU(5) (cf. e.g. [18]).
A stringy 4 → 3 model
The model defined by the parameters provided in Equation (14) results in the 4D gauge group G_4D = SU(5) × SU(2) s × [SU(2)^5 × U(1)^6]. The gauge factors in the brackets can be broken along D-flat directions. Since the Lagrange density is invariant under complexified gauge transformations, we can infer that nontrivial solutions to the F-term equations preserve supersymmetry [19,20]. We are then left with G_unbroken = SU(5) × SU(2) s. Before discussing the 4 → 3 properties of this model, let us comment on the possibility of breaking SU(2) s along D-flat directions. In this case, we obtain a vacuum with 4 generations of an SU(5) GUT, i.e. 4 copies of 10 ⊕ 5-bar, while the other states are now vector-like and pick up masses proportional to the VEVs of the SU(5) singlets that are switched on. According to the usual string phenomenology practices, we would thus label this model an unrealistic 4-generation model, not worth being considered further.
On the other hand, if we leave SU(2) s unbroken, in a generic vacuum we obtain in an intermediate step a model with 4 copies of (10, 1), 2 copies of (5-bar, 1), a (5-bar, 2) and a (1, 2). Since string selection rules do not forbid the corresponding couplings, the other states of Table 3a acquire masses proportional to the VEVs of the SU(5) × SU(2) s singlets. Conceivably, there also exist special string vacua that allow for an extra massless vector-like pair (5, 1) ⊕ (5-bar, 1). This brings us to either of the 4 → 3 models discussed in Section 3 and summarized in Table 1a. As we have seen there, due to the SU(2) s strong dynamics, (5-bar, 2) and (1, 2) condense together to build a 5-bar, and condensates of (5-bar, 2) yield an SU(5) antigeneration 10-bar.
Since there are no string selection rules prohibiting the couplings, we thus expect this antigeneration to pair up with a linear combination of the 4 generations, and we are left with a 3-generation model at low energies.
An important condition for the strong SU(2) s dynamics to play out as described is that SU(2) s is much more strongly coupled than SU(5). Since these two gauge factors originate from different E 8 's, it is plausible that this happens [18,21,22]. However, a detailed computation of the string thresholds is beyond the scope of this study.
A stringy 2 → 3 model
The model defined by the parameters provided in Equation (15) results in the 4D gauge group G_4D = SU(5) × SU(2) s × [SU(2)^2 × U(1)^9]. As in the previous model, the gauge factors in brackets can be spontaneously broken along D-flat directions while preserving supersymmetry. The corresponding massless spectrum after compactification is summarized in Table 3b, where we only display the quantum numbers with respect to SU(5) × SU(2) s. After switching on the VEVs of SU(5) × SU(2) s singlets, we are left with 2 copies of (10, 1), 4 copies of (5-bar, 1), and one instance each of (5, 2) and (1, 2), reproducing the spectrum of the 2 → 3 model presented in Table 2a.
If we also break SU(2) s along D-flat directions, we obtain a vacuum with an SU(5) GUT symmetry and two generations of 10 ⊕ 5̄. In the traditional approach, we would thus label the model an unrealistic 2-generation model that is to be discarded.
However, this conclusion changes if we look at vacua where SU(2) s confines. In this case, according to our discussion of the 2 → 3 model in Section 3, we can obtain a third generation from SU(2) s strong dynamics. In particular, the (5, 2) builds a condensate that behaves as the 10-plet of a third generation of an SU(5) GUT. This means that this model admits 3-generation vacua and cannot be ruled out immediately.
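The bookkeeping behind generation flow is simple arithmetic; the following sketch counts net chiral SU(5) generations before and after confinement for the two models above (the labels follow the text, while the counting function itself is just illustrative):

```python
# Net chiral SU(5) generations, counted by the 10-plets (for an anomaly-free
# spectrum the 5bar count matches). Labels follow the models in the text.
def net_generations(n10, n10bar):
    return n10 - n10bar

# 4 -> 3 model: 4 copies of (10, 1) at tree level; SU(2)_s condensates of the
# (5bar, 2) supply one 10bar antigeneration, which pairs up with one linear
# combination of the four 10's.
tree_4 = net_generations(n10=4, n10bar=0)
ir_4 = net_generations(n10=4, n10bar=1)

# 2 -> 3 model: 2 copies of (10, 1) at tree level; the (5, 2) condensate
# behaves as one additional 10-plet.
tree_2 = net_generations(n10=2, n10bar=0)
ir_2 = net_generations(n10=2 + 1, n10bar=0)

print(tree_4, ir_4)  # 4 3
print(tree_2, ir_2)  # 2 3
```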
Discussion
The examples discussed in this section represent evidence for the existence of globally consistent string models with generation flow. In order to keep the discussion simple, we have focused on SU(5) models. However, we expect that qualitatively similar models with the SM gauge symmetry and matter content at low energies exist. We have verified that one can break extra gauge factors and decouple exotics by switching on VEVs along D-flat directions. We are thus guaranteed [19,20] that there are supersymmetric configurations that have the features we describe. While we did verify that there are no symmetries prohibiting the required couplings, we did not compute their coefficients, nor did we explicitly verify that all directions/moduli are stabilized.
Our findings lead to the following picture. In string models, one can readily count the net number of generations at the tree-level. However, some models may have vacua where the true number of chiral generations differs from the tree-level prediction. This means that model scans in the past may have missed interesting, possibly realistic models. It will be interesting to study such constructions in more detail.
As a side remark, let us note that the matter content as well as the gauge and continuous symmetries of the RT-like model discussed in Section 2 fit into a 27-plet of E 6 . This is evident from the branching (cf. e.g. [23])

27 → (6̄, 2) ⊕ (15, 1)   (13c)
   → (5̄, 2)₁ ⊕ (1, 2)₋₅ ⊕ (10, 1)₋₂ ⊕ (5, 1)₄ .   (13d)

That is, while the representation content of the model may at first sight look a bit peculiar, it turns out to fit into a single chiral representation of an exceptional group. In fact, E 6 is the only exceptional group admitting complex representations, and the 27-plet is its smallest nontrivial representation. From this perspective it is not too surprising that variants of this model can be obtained from string theory. Note, however, that in the models which we presented, SU(5) and SU(2) s stem from different E 8 groups, which favors the possibility that SU(2) s becomes strongly coupled while SU(5) does not.
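The branching quoted above can be checked mechanically; the short sketch below verifies that the pieces on the SU(5) × SU(2) × U(1) level add up to the dimension of the 27 and that the U(1) generator is traceless, as it must be for a generator embedded in E 6:

```python
# Each entry: (dim of SU(5) rep, dim of SU(2) rep, U(1) charge), reading off
# the four pieces of the 27 from the branching in the text.
pieces = [(5, 2, 1), (1, 2, -5), (10, 1, -2), (5, 1, 4)]

total_dim = sum(d5 * d2 for d5, d2, _ in pieces)
trace_u1 = sum(d5 * d2 * q for d5, d2, q in pieces)

print(total_dim)  # 27: the dimensions add up to dim(27)
print(trace_u1)   # 0: the U(1) generator is traceless
```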
Summary
We have studied the effects of non-perturbative s-confining dynamics on the effective number of chiral generations in supersymmetric models of particle physics. We emphasized that this number can flow either upward or downward because confinement may result in the appearance of chiral composites. In turn, these composites may either serve as new light chiral generations or lift existing chiral generations by partnering with other chiral fields in mass terms. We referred to these phenomena as generation flow. Our focus was on 4 → 3 and 2 → 3 generation flow, such that in the IR there are three generations of (a GUT completion of) the SM. We analyzed the non-perturbative dynamics and verified that in our models the s-confining vacuum is not destabilized by the non-perturbative dynamics driving the generation flow. We stress that this conclusion is model dependent.
As we have shown, there is strong evidence that generation flow arises in globally consistent string compactifications. In particular, we have constructed explicit 4 → 3 and 2 → 3 models resulting from orbifold compactifications of the heterotic string. Therefore, more care than previously appreciated has to be taken when scanning for realistic string models. There can be models which appear to yield an unrealistic number of generations but are saved by generation flow. Furthermore, the strong dynamics that reduces the number of generations may be exploited to decouple chiral exotics of string models. Hence, the phenomenological viability of string compactifications with such exotics should be further investigated.
A Orbifold model definitions
In the bosonic formulation, a Z 2 × Z 4 (1,1) heterotic orbifold compactification is defined by the shifts V 1 and V 2 of order 2 and 4, respectively, as well as six discrete Wilson lines W a , a = 1, . . . , 6, of order 2. These Wilson lines are restricted to satisfy W 1 = W 2 and W 5 = W 6 to be compatible with the Z 2 × Z 4 point group of the compactification. These parameters can be used as input in the orbifolder [15] to obtain the corresponding massless spectrum and compute the superpotential of the associated low-energy effective field theory.
CellProfiler Analyst: interactive data exploration, analysis and classification of large biological image sets
Abstract Summary: CellProfiler Analyst allows the exploration and visualization of image-based data, together with the classification of complex biological phenotypes, via an interactive user interface designed for biologists and data scientists. CellProfiler Analyst 2.0, completely rewritten in Python, builds on these features and adds enhanced supervised machine learning capabilities (Classifier), as well as visualization tools for overviewing an experiment (Plate Viewer and Image Gallery). Availability and Implementation: CellProfiler Analyst 2.0 is free and open source, available at http://www.cellprofiler.org and from GitHub (https://github.com/CellProfiler/CellProfiler-Analyst) under the BSD license. It is available as a packaged application for Mac OS X and Microsoft Windows and can be compiled for Linux. We implemented an automatic build process that supports nightly updates and regular release cycles for the software. Contact: <EMAIL_ADDRESS>. Supplementary information: Supplementary data are available at Bioinformatics online.
Introduction
CellProfiler Analyst is open-source software for biological image-based classification, data exploration and visualization with an interactive user interface designed for biologists and data scientists. Using data from feature-extraction software such as CellProfiler (Kamentsky et al., 2011), CellProfiler Analyst offers easy-to-use tools for exploration and mining of image data, which is being generated in ever-increasing amounts, particularly in high-content screens (HCS). Its tools can help identify complex and subtle phenotypes, improve quality control and provide single-cell and population-level information from experiments. Some distinctive and critical features of CellProfiler Analyst are its user-friendly object-based machine learning interface, its ability to handle the tremendous scale of HCS experiments (millions of cell images), its gating capabilities that allow observing relationships among different data displays, and its exploration tools, which enable interactively viewing connections between cell-level and well-level data, and among raw images, processed/segmented images, extracted features and sample metadata.
Compared to other commonly-cited open-source biological image classification software like Ilastik (Sommer et al., 2011), CellCognition (Held et al., 2010) and WND-CHARM (Orlov et al., 2008), CellProfiler Analyst has the advantage of containing companion visualization tools, being suitable for high-throughput datasets, having multiple classifier options, and allowing both cell and field-of-view classification. Advanced Cell Classifier (Horvath et al., 2011) shares many of the classification features of CellProfiler Analyst, but it lacks HCS data exploration and visualization tools. Compared to command-line-based data exploration software like cellHTS (Boutros et al., 2006) and imageHTS (Pau et al., 2013) and the web tool web CellHTS2 (Pelz et al., 2010), CellProfiler Analyst provides interactive object classification and image viewing. Several other software tools (e.g. the HCDC set of modules for KNIME (Berthold et al., 2009)) are no longer available/maintained. Here, we present major improvements to CellProfiler Analyst. Since its original publication (Jones et al., 2008), CellProfiler Analyst has been rewritten in Python (vs. its original language, Java) with significant enhancements. While keeping the original functionality allowing researchers to visualize data through histograms, scatter plots and density plots and to explore and score phenotypes by sequential gating, the key new features include:
• multiple machine learning algorithms that can be trained to identify multiple phenotypes in single cells or whole fields of view, by simple drag and drop
• more efficient handling of large-scale, high-dimensional data
• a gallery view to explore images in an experiment, and cells in individual images, and
• a plate layout view to explore aggregated cell measurements or image thumbnails for single or multiple plates.
2 New features in CellProfiler Analyst 2.0

Classifier: CellProfiler Analyst 1.0 allowed researchers to train a single classifier (Gentle Boosting) to recognize a single phenotype (two-class) in individual cell images (rather than whole fields-of-view) (Jones et al., 2009). In CellProfiler Analyst 2.0 (Fig. 1), Classifier can perform cell- and field-of-view-level classification of multiple phenotypes (multi-class) using popular models like Random Forest, SVM and AdaBoost from the high-performance machine learning library scikit-learn (Pedregosa et al., 2011), which yields a 200-fold improvement in speed (Supplementary Data 1). First, cell- or whole-image samples from the experiment are fetched and sorted by drag and drop into researcher-defined classes, making up the annotated training set. Fetching can be random, based on filters, based on per-class predictions of an already-trained classifier, or based on active learning. The new active learning option speeds annotation by presenting uncertain cases. In addition, researchers can view full images of each sample and drag and drop cells from the image for annotation. Next, a classifier is trained on this set. After training on the annotated set, a model's performance can be evaluated by cross validation in the form of a confusion matrix and precision, recall and F1 score per class. The model can then be used to quantify cell phenotypes or whole-image phenotypes.

Image Gallery: CellProfiler Analyst 2.0 offers a convenient new Image Gallery tool (Fig. 1A), in addition to the existing visualization/exploration tools with standard plotting and gating capabilities in version 1.0 (Jones et al., 2008). Image Gallery provides a convenient grid view allowing an overview of images. A variety of options are provided to filter images based on experiment-specific metadata, e.g. gene name, compound treatments, etc. Multiple filters can be combined to refine the search.
Images can be displayed as a custom-sized thumbnail or in full resolution, and the color assigned to each channel in the image can be customized to highlight structures of interest. Individual segmented cells can be viewed for each image, and can be dragged and dropped into the Classifier window.
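The Classifier's train/evaluate cycle — fit a scikit-learn model on an annotated per-cell feature table, then report a cross-validated confusion matrix and per-class precision/recall/F1 — can be sketched stand-alone as follows; the synthetic features and labels here merely stand in for a real CellProfiler measurement table:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import cross_val_predict

# Hypothetical annotated training set: rows are cells, columns stand in for
# CellProfiler-style morphological features; labels are researcher-defined
# phenotype classes (multi-class).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 3, size=300)   # three phenotype classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Cross-validated predictions yield the confusion matrix and the per-class
# precision/recall/F1 reported after training.
y_pred = cross_val_predict(clf, X, y, cv=5)
print(confusion_matrix(y, y_pred))
print(classification_report(y, y_pred))

# The fitted model then scores every cell in the experiment.
clf.fit(X, y)
phenotype_counts = np.bincount(clf.predict(X), minlength=3)
```

Any other scikit-learn classifier (SVM, AdaBoost, gradient boosting) can be dropped in for the random forest without changing the surrounding evaluation code, which mirrors how the tool exposes multiple model choices behind one interface.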
Plate Viewer: Many large-scale imaging experiments take place in multi-well plate format. Researchers are often interested in seeing their data overlaid on this format, to check for systematic sample quality issues, or to see results from controls placed in particular locations, at a glance. The Plate Viewer tool (Fig. 1C) displays aggregated and/or filtered measurements (according to customizable color maps) or a thumbnail image for each well. Automatically imported annotations can be viewed, and individual annotations can be manually added or deleted for each well.
Additional features: Additional features added to CellProfiler Analyst vs. version 1.0 have been described elsewhere, such as Tracer, a tool that complements the object tracking functionality of CellProfiler, including visualization and editing of tracks (Bray and Carpenter, 2015), as well as workspaces for saving progress and display settings across sessions (Bray et al., 2012). The website, manual and tutorials have been redesigned and updated to the new version.
Future directions
The redesigned CellProfiler Analyst contains useful classification and visualization features in an interactive interface that facilitates data analysis and exploration of biological images. Its code base forms a solid foundation for integrating new classifiers into the tool, potentially including deep learning architectures. We also intend to integrate methods for constructing per-sample 'profiles' from raw morphological measurements to support morphological profiling applications (Caicedo et al., 2016; Bray et al., 2016).
Financial Markets, Banking and the Design of Monetary Policy: A Stable Baseline Scenario
A baseline integration of commercial banks into the disequilibrium framework with behavioral traders of Charpe et al. (2011, 2012) is presented. At the core of the analysis is the impact the banking sector exerts on the interaction of real and financial markets. Potentially destabilizing feedback channels in the presence of imperfect macroeconomic portfolio adjustment and heterogeneous expectations are investigated. Given the possible financial market instability, various policy instruments have to be applied in order to guarantee viable dynamics in the highly interconnected macroeconomy. Among those are open market operations reacting to the state-of-confidence in the economy and Tobin-type capital gain taxes. The need for policy intervention is even more striking, as the banking sector is modeled in a rather stability enhancing way, fulfilling its fundamental tasks of term transformation of savings and credit granting without engaging in investment activities itself.
Introduction
Monetary and fiscal policy measures have been applied in order to avert the financial market collapse of 2008 and counteract the global recession, which has still not been fully overcome. The financial crisis started in the U.S. housing market, was amplified by the bankruptcies of large banks and was transmitted, finally, to the real economy. Obviously, it is the interconnectedness of real and financial sectors that makes the working of the whole economy so vulnerable to crashes in one part. We do not claim to capture the recent crisis in its specific lines in our model, but we want to stress that crisis phenomena (even large ones) might not be unique and extremely rare events in capitalist economies, as long as the complex interactions of (macro-)markets remain unfettered.
We deliver a basic framework that allows for a unique and attracting steady state despite its high dimension. The model ends up being eight-dimensional, which implies that the dynamics easily become non-trivial. The strategy to guarantee analytical tractability is to set up the model step by step and infer stability properties from the added eigenvalues.
Since the focus of the paper is on the financial side, we model, in addition to basic assets like bonds and equities, credit relations in detail. Therefore, a baseline integration of commercial banks into the behavioral Keynesian disequilibrium framework of Charpe et al. [1,2] will be conducted; note that we will only use the deterministic part here to set up the framework, though stochastic shocks might easily be reintroduced.¹ Potentially destabilizing feedback channels of advanced macroeconomies shall be detected and investigated throughout the paper. After the sources of instability are identified, we will design appropriate policy instruments, taking into account the causal structure of the economy. This procedure enables us to display endogenous crisis mechanisms and to highlight conditions (parameter relations) that restrict the occurrence of dynamic instability.
In the model of the paper, we consider, on the one hand, the interaction of asset markets with real economic activity and, on the other hand, the interaction of real activity with the credit channel, here based on commercial banking controlled by the central bank through its money supply policy. Secondary asset markets and real activity are linked via asset-price-based demand effects and output-dependent profitability results. Tobin's q will be used here as a measure of confidence in the economy, as in Blanchard [4]. This state of confidence matters, then, for consumption and investment decisions, which ultimately drive aggregate demand.
The potential for asset market instability is shown via the coupling of a dynamic Tobinian portfolio approach with the interaction of heterogeneous agents in this market. Brunnermeier [5] reports that the existence of different types of agents in asset markets, implying heterogeneous expectation formation, is perceived to be one of the main sources of bubbles and, thus, instability in financial markets. Therefore, we will use expectation formation schemes of the chartist-fundamentalist variety, as advocated by Menkhoff et al. [6] or De Grauwe and Grimaldi [7].
Since we aim to show the potential fragility of the whole economy and not only of certain parts, a bundle of instruments will be needed for an effective cure. Minsky [8,9], especially, developed many ideas of how to stabilize an unstable economy. In our model, Tobin-type capital gain taxation coupled with volatility-reducing open market operations of the central bank on asset markets is capable of making the interaction of goods market results with financial markets a stable one, while additional countercyclical money supply rules can make the credit supply a countercyclical one. Taken together, they create a real-financial interaction that allows for attracting steady states despite the high-dimensional nature of this interaction, due to the facts that gross substitutability makes financial markets in principle stable ones (if chartist behavior does not dominate the outcomes on these markets, in particular if supported by countercyclical open market operations on these markets), while the implied credit supply counteracts booms through credit reductions and busts through credit expansions.

¹ The paper understands itself as being part of a larger attempt to develop a Dynamic Stochastic General Disequilibrium (DSGD) approach to macroeconomics. See also Charpe et al. [2] and Flaschel et al. [3].
The need for policy intervention is all the more striking, as the banking sector is modeled in a rather stability-enhancing way, fulfilling its fundamental tasks of term transformation of savings and credit granting without engaging in investment activities itself. This implies that a strict separation between commercial and investment banks is an additional necessary condition for the stable configuration obtained in the end.
Tobinian Asset Price Dynamics and the Multiplier
In the Keynesian modeling framework of Charpe et al. [1], the interaction of real and financial markets via several potentially destabilizing feedback channels has been investigated. In the following, we start from this framework and integrate a banking sector with the help of credit relationships.
The financial side is described by a Tobin portfolio structure along the lines of Tobin [10]. The array of financial assets contains equities E, long-term bonds B^l and money represented by short-term bonds B, which are issued by the central bank.
The expectation formation process for capital gains on financial markets is driven by two kinds of agents, namely chartists (showing speculative behavior by making use of a simple adaptive mechanism to forecast price evolution) and fundamentalists (who expect the convergence of capital gains back to their steady-state positions).
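As an illustration of the two expectation rules, consider the following toy discrete-time sketch (made-up parameters, not the paper's continuous-time laws of motion): chartists adapt toward realized capital gains, fundamentalists expect reversion to the steady-state value, and the market average weights the two groups:

```python
# Toy sketch of chartist vs. fundamentalist expectations. pi_c: chartist
# expectation, adapted toward realized capital gains; pi_f: fundamentalist
# expectation, reverting to the steady-state capital gain pi0.
beta_c, beta_f = 0.5, 0.8      # adaptation and reversion speeds (illustrative)
weight_chartists = 0.4         # chartists' weight in the market average
pi_c, pi_f, pi0 = 0.0, 0.0, 0.0

realized = [0.05, 0.04, 0.01, -0.02, 0.0, 0.0, 0.0]   # capital-gain path
for g in realized:
    pi_c += beta_c * (g - pi_c)       # extrapolative/adaptive update
    pi_f += beta_f * (pi0 - pi_f)     # convergence back to the steady state
    pi_avg = weight_chartists * pi_c + (1 - weight_chartists) * pi_f

# Once the shock dies out, the average expectation relaxes back toward pi0.
print(round(pi_avg, 4))  # 0.0
```

A larger chartist weight makes the average expectation track the realized bubble path more closely, which is the destabilizing channel the paper emphasizes.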
At this modeling stage, diverse fiscal and monetary policies are needed in order to stabilize such unstable macroeconomies. One particular policy instrument is the taxation of capital gains. In this paper, the Tobin tax income of the central bank is made explicit, contrary to the former approach. The tax revenues are simply transferred into the government sector. When combined with an additionally stabilizing open market policy Ṁ that buys (sells) the respective assets if the corresponding asset markets are weak (strong), the magnitude of equities and long-term bonds available for private trading becomes endogenous, though overall stocks remain exogenously given. We do not yet consider the issue of new equities by firms or of long-term bonds by the government, i.e., there is no asset accumulation taking place so far. In this respect, we still ignore the budget equations of firms, the government (and also of the households), due to the simple dynamic multiplier approach we shall be using for the description of the dynamics of the real part of the model.
Complexity on the real side is reduced to a minimum, but the financial structure is extensively modeled. The financial assets are imperfect substitutes, and only a fraction, α, of current stock disequilibria enters the markets for bonds or equities in the form of supply or demand. This is due to the assumption that adjustment costs are implicitly present. Moreover, capital gain expectations are imperfect in the model. This can be justified on empirical grounds: in reality, the gathering of information is quite costly, and information processing capabilities are limited. Attaining perfect foresight (the deterministic correspondence to rational expectations) is out of reach for at least part of the agents acting in financial markets. There might even be a rationale for chartist expectations in the presence of knowledge of the fundamental positions of the economy. Some agents could perceive that riding the bubble is a favorable strategy, as long as they assume to be smarter than others and exit the market before a potential burst.
In the following, the time derivative of a variable x is denoted by ẋ, its growth rate by x̂, and by f_x the first derivative of a function f(•) with respect to x. Goods price inflation is not considered, and the corresponding price level is normalized to one. Only the equity price, p_e, and the price of long-term bonds, p_b, are assumed to be variable. Y denotes output, Ā autonomous expenditure, r the profit rate, π^e expected capital gains, r^e_e the expected rate of return on equities and r^e_b the expected rate of return on bonds.
The core dynamical system of Charpe et al. [1], slightly extended, is given by Equations (1)–(3) (with Ē/(pK) = 1 in Tobin's average q for expositional simplicity, i.e., q = p_e), with W^n_h := M + D + p_b B^l_h + p_e E_h: nominal wealth of households equals money, deposits, the value of long-term bonds and the value of equities.
For further details, see also the accounts shown below. The block of Equations (1)–(3) shows the impact of asset markets on real economic activity, as well as the impact of the profitability r = Π/(pK), r_{p_e} = Π/(p_e Ē) of firms on the dynamics of financial markets, where there is also a pronounced self-referencing dynamics at work.
The first law of motion is just the textbook multiplier dynamics, based on Tobin's q = p_e, measuring the state of confidence driving the economy, as far as the feedback from financial markets to the real markets is concerned. The second and third laws of motion show the excess demand pressures, α_e(•), α_b(•), on the respective asset markets, which lead to asset price adjustments with speeds β_e, β_b, but in the end to no change in the stocks actually held in the private sector. Note also that the shown excess demands for equities and bonds must be balanced (here, implicitly) by a corresponding excess supply of money M₂ = M + D (and vice versa). Note further that the stock demand functions, f, are characterized by the gross substitute property, i.e., the demand for the respective asset depends positively on its own rate of return, r^e_x, and negatively on the other one's rate of return (x = e, b).
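The excess-demand-driven price adjustment under the gross substitute property can be illustrated with a toy tatonnement; all functional forms and parameter values below are illustrative stand-ins, not the paper's actual demand functions:

```python
# Toy tatonnement for two assets under the gross-substitute property:
# demand for each asset rises in its own expected return and falls in the
# other's, and prices adjust in proportion to excess demand.
def returns(p_e, p_b):
    # expected returns fall as the own price rises
    return 0.05 / p_e, 0.04 / p_b

def excess_demands(p_e, p_b, wealth=100.0, supply=50.0):
    r_e, r_b = returns(p_e, p_b)
    share_e = 0.3 + 2.0 * r_e - 1.0 * r_b   # gross substitutes
    share_b = 0.3 + 2.0 * r_b - 1.0 * r_e
    ed_e = share_e * wealth / p_e - supply   # stock excess demand
    ed_b = share_b * wealth / p_b - supply
    return ed_e, ed_b

p_e, p_b, beta = 1.0, 1.0, 0.002   # beta plays the role of beta_x * alpha_x
for _ in range(5000):
    ed_e, ed_b = excess_demands(p_e, p_b)
    p_e += beta * ed_e               # price moves with excess demand
    p_b += beta * ed_b

# Both markets clear in the limit: excess demands vanish at positive prices.
ed_e, ed_b = excess_demands(p_e, p_b)
```

With gross substitutes and cautious adjustment speeds the process settles at market-clearing prices, mirroring the stabilizing role this property plays in the propositions that follow.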
Stability Propositions
At the heart of our investigation into the interaction of aggregate demand, stock market performance and credit relationships is the question of economic stability. A very convenient way to assess the stability characteristics of dynamic systems is eigenvalue analysis of the constituting dynamic equations for the state variables. Whenever all eigenvalues of the characteristic polynomial show negative real parts, local convergence can be asserted for the respective laws of motion. Associated with deviations from negativity of the real parts are instability and saddle-point outcomes, where even the latter has to be considered an instability situation from our point of view.³ Whether the eigenvalues of low-dimensional systems have negative real parts can be evaluated by means of the Routh-Hurwitz conditions, which make use of the Jacobian matrices. The Jacobian matrices contain the partial derivatives evaluated at the steady state. From these, one has to determine the trace and determinant in order to check the fulfillment of the theorems.⁴ All proofs are given directly in the text, except those establishing the core system of real-financial interactions without a banking sector, which can be found in the Appendix.
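The eigenvalue and Routh-Hurwitz checks described above are easy to mechanize; a small illustration with a hypothetical 2×2 Jacobian (the numbers are not taken from the paper):

```python
import numpy as np

def locally_stable(J):
    """Local asymptotic stability: every eigenvalue has a negative real part."""
    return bool(np.all(np.real(np.linalg.eigvals(J)) < 0))

# Illustrative 2x2 Jacobian. For a 2D system the Routh-Hurwitz conditions
# reduce to: trace < 0 and determinant > 0.
J = np.array([[-1.0,  0.5],
              [ 0.3, -0.8]])

tr, det = np.trace(J), np.linalg.det(J)
routh_hurwitz_2d = (tr < 0) and (det > 0)

print(locally_stable(J), routh_hurwitz_2d)  # True True
```

For the higher-dimensional systems of the paper the same eigenvalue check applies directly, while the Routh-Hurwitz conditions involve further minors of the Jacobian beyond trace and determinant.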
We get from the above the following proposition for the stability of the asset markets when capital gain expectations are static.
Proposition 1: Stable Financial Markets Interaction
Assume that capital gain expectations are static. Then the asset price dynamics converge to the current asset market equilibrium for all adjustment speeds of the asset prices p_e, p_b.
The proof of Proposition 1 is presented in the Appendix and is based on Charpe et al. [1]. Since the dynamic multiplier of the real side is stable on its own, we can state that, in sum, the real and the financial markets, when considered in isolation (and with sufficiently tranquil capital gain expectations), are both stable. The next step, therefore, is to investigate what happens when they interact as a full 3D dynamical system. In the case of static chartist capital gain expectations, we obtain the following proposition.

Proposition 2

Assume that the parameter β_y is sufficiently large and the parameter β_e sufficiently small. Assume, moreover, that the parameter a_y is sufficiently close to one (but smaller than one). Then the 3D dynamics is locally asymptotically stable around its steady-state position.
³ A very extensive discussion of this point can be found in Chiarella et al. [11], as well as in Chiarella, Flaschel and Semmler [12].
This proposition and its proof (modified from Charpe et al. [1]; see the Appendix) show, however, that the coupling of two stable, but partial, processes need not provide a stable interaction of the two. The stability proposition is restricted with respect to several parameters and their possible values. In general, stability cannot be ascertained for the whole parameter space. Moreover, until now, the working of the financial markets has been relatively tranquil compared to the stage when capital gain expectations are made dynamic. These capital gain expectations will be a serious source of instability if the weight of chartists in the average market expectations is high.
Contrasting these propositions with the case of neoclassical perfectness with regard to substitution and expectations delivers, basically, the Blanchard [4] model and its stability implications. Perfectness demands β_e, β_b = ∞ and α_e, α_b = 1, which means perfect substitution of assets and myopic perfect foresight of the capital gains evolution. The 3D core system then collapses into a two-dimensional one in Y and q. The system is unstable, but exhibits a saddle path. The neoclassical treatment of such a stability situation requires the usage of the jump-variable technique of Sargent and Wallace [14]. It is assumed that the economy always jumps to the converging trajectory. The possibility of an unstable economy is ruled out by assumption. An in-depth demonstration and critique of this technique can be found in Chiarella et al. [11].
Commercial Banking and Central Bank Behavior
This model of the private sector of the economy is now augmented by a detailed description of the banking sector and the policy actions of the central bank. In particular, the central bank's income from the required Tobin capital gain taxation is dealt with in detail, and the credit channel of the economy will be introduced explicitly. First, the stock and flow accounts of the central bank (Tables 1 and 2) are presented in order to capture its full activities.
Table 1. The balance sheet of the central bank.

Assets | Liabilities
Treasury bonds (perpetuities): p_b (B̄^l − B^l_h) | Money: M
Equities: p_e (Ē − E_h) |

The balance sheet simply states that the central bank can hold treasury bonds issued by the government or equities issued by firms. Since the stock of these assets is assumed to be fixed for the time being, the difference between total stocks and the central bank's holdings must be private holdings.
The flow account⁵ shows on the right-hand side the resources that accrue to the central bank, namely dividends on its stock holdings (r, the rate of profit of firms) and interest on its long-term bond holdings, the taxes that are obtained by capital gain taxation, and the changes in its inventory of equities and long-term bonds through its open market operations Ṁ. These changes are the uses (if positive) of the issue of new money, Ṁ, and are reported on the left-hand side of its income account again.⁶ Against this background, we have to consider now the two laws of motion for the households' holdings of equities and long-term bonds, i.e., of the endogenous variables E_h, B^l_h.⁷ These imply corresponding laws of motion for private equity and long-term bond holdings. We only observe here that these two laws of motion do not endanger the stability of the 3D baseline structure, at least when operated in a sufficiently cautious way, since the implied 4D system exhibits a positive determinant and the implied 5D dynamics, when the second law is added, a negative one, so that parameter changes with respect to c_{p_e} and c_{p_b}, from zero to small values, will add negative eigenvalues to the already existing three eigenvalues with negative real parts.
Next, we introduce commercial banks into the model and enlarge therewith the assumed financial structure of money M, long-term bonds B^l and equities E by deposits D and loans L. Commercial banks are here conceived as firms that hold saving deposits D from households and transform them into credit to firms, based on reserve requirements R = δ_r D. Banks are borrowing short and lending long, which means that they provide for the term transformation of savings.⁸ This is their only proper function; i.e., they, in particular, do not trade on financial markets in an active way, but only passively adjust their interest rate on deposits, i_d, such that the flow account shown below is balanced with respect to the new loan supply, L, they intend to provide.⁹ We assume, as in the multiplier approach to money holdings M₂ = M + D, that households are hoarding (in secured deposit boxes) as "cash" a certain amount M = δ_h D of central bank money, assumed to be proportional to the deposits, D, they have accumulated; i.e., they do not deposit all their money holdings, M₂, into the credit circuit operated by commercial banks. The supply of money by the central bank is then characterized by

M̄ = M + R = δ_h D + δ_r D = (δ_h + δ_r) D

as the relationship between the money supplied by the central bank and the deposits held in the household sector. The total amount of money, M₂, held in the household sector is, moreover, by definition

M₂ = M + D = (1 + δ_h) D.

This gives, as usual, the money multiplier formula

M₂ = ((1 + δ_h)/(δ_h + δ_r)) (M + R)

between the money supply concept, M₂, and the money supply, M + R, issued by the central bank. Based on this multiplier formula, we can now redefine the nominal wealth of households in terms of the four assets M, D, B^l_h, E_h that households now possess:

W^n_h = M + D + p_b B^l_h + p_e E_h.

We assume here that the interest rate on savings deposits, i_d, only concerns the allocation of M₂ into cash and savings deposits and, thus, the cash management process of the households, M = δ_h(i_d) D, while the allocation of financial assets between liquid assets M₂ and bonds and equities is driven solely by the rates of return on the risky assets. Moreover, we could also assume that the reserve rate is a given magnitude, controlled (fixed) by the central bank, which thereby influences the supply of new credit, a scenario that does impact the model in its present formulation.

⁵ r E_c = Π E_c/Ē; all dividends are paid out as profits.
⁶ We assume that the central bank surplus is transferred into the government sector for expositional simplicity, i.e., it affects the government budget constraint (GBR).
⁷ Note that we do not yet integrate the new issue of equities by firms and of new bonds issued by the government.
⁸ This is the basic and fundamental task attributed to banks. On this, see, for example, Gorton and Winton [15] or Freixas and Rochet [16]. The latter, moreover, deliver rigorous microeconomic justifications for the existence of financial intermediaries.
⁹ We abstract from labor input and equipment in the banking sector here.
Commercial banks adjust the loan rate, i_l, in order to bring credit supply in line with credit demand. The credit multiplier can then be easily calculated, providing the expression:¹⁰ Based on the total supply of high-powered money, M + R = δ_h D + δ_r D, and the shown demand for it, as well as the subsequent derivation of the multiplier formulae for money and credit, the banking sector exhibits the following balance sheet and flow account (Tables 3 and 4).¹¹ Table 3. The balance sheet of the commercial banks (private ownership).
Assets | Liabilities

The maturity transformation function of banks is reflected in the balance sheet. Banks collect private savings, which are checking deposits, and transform them into loans that possess a longer maturity horizon. The creation of loans is constrained by the reserve obligation in the form of a certain fraction of deposits, δ_r D, not to be granted as loans.¹² The relationship between high-powered money, M + R, and the volume of loans, L, gives rise to the following law of motion for the loans of commercial banks to firms: This shows that this specific integration of commercial banks into our model adds one further law of motion (for loans) to the ones already considered.
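The money and credit multipliers implied by the cash and reserve ratios can be illustrated with a small sketch (parameter values are arbitrary): M = δ_h D and R = δ_r D give high-powered money M + R = (δ_h + δ_r)D, while M_2 = M + D = (1 + δ_h)D, so M_2 = [(1 + δ_h)/(δ_h + δ_r)](M + R); similarly, L = D − R = [(1 − δ_r)/(δ_h + δ_r)](M + R).

```python
# Hedged sketch: money and credit multipliers implied by the text's
# reserve and cash-holding ratios. Parameter values are illustrative.

def multipliers(delta_h, delta_r):
    """Return (money multiplier M2/H, credit multiplier L/H) for cash
    ratio M = delta_h * D and reserve ratio R = delta_r * D, with
    high-powered money H = M + R."""
    money = (1.0 + delta_h) / (delta_h + delta_r)
    credit = (1.0 - delta_r) / (delta_h + delta_r)
    return money, credit

# Consistency check against the definitions, for a given deposit stock D:
delta_h, delta_r, D = 0.15, 0.05, 100.0
M, R = delta_h * D, delta_r * D          # cash and required reserves
H = M + R                                # high-powered money issued by the CB
M2 = M + D                               # money held by the household sector
L = D - R                                # loanable funds of commercial banks

m_mult, c_mult = multipliers(delta_h, delta_r)
assert abs(M2 - m_mult * H) < 1e-9
assert abs(L - c_mult * H) < 1e-9
print(m_mult, c_mult)  # 5.75 and 4.75 for these illustrative ratios
```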
Moreover, we now postulate as a law of motion for the output of firms, on the basis of an extended aggregate demand schedule (now based on the excess debt level, L − L_o, transmitted through the firms' investment demand schedule, as well as the state of confidence measure q): if we assume that the private sector considers the steady-state loan volume as normal and deviations from it as impacting investment and the economy in a negative way. This can be justified by the assumption that firms take into account their stock of debt in their investment decisions. High leverage levels expose firms to the risk of insolvency; low leverage states induce expansions of investment projects. The incorporation of the credit channel now makes the dynamical system an eight-dimensional one.
Credit and Credit-Dependent Multiplier Dynamics
We have already described the banking activity as a transformational one: private savings are channeled into loans and enable investment activities. Banks serve as financial intermediaries that allow for term conversion. The link between the real side and the credit channel implies the interaction of the output multiplier with the loan multiplier and should first be investigated in isolation from the overall dynamics.
Adding the debt dynamics to the financial markets gives, by means of the money supply rule of the central bank, the two further laws of motion: Concerning the first law of motion, we have zero-root hysteresis in the evolution of the state variable, L, while the output dynamic, Ẏ, adds a stable dynamic multiplier process to the financial sector of the economy. Moreover, a monetary policy which increases credit in busts and decreases its volume in booms should also contribute to the stability of the overall real-financial market interaction. We now augment the dynamics of the money supply, however, as follows: We here assume that the time rate of change of the money supply (and, thus, credit) is also influenced by the state of the business cycle in a countercyclical way and that there is also a negative feedback of the level of the money supply on its current rate of change. We assume that this change in the money supply is deducted from the central bank gains that are distributed to the government. The output-debt subdynamics of the model then read: and are, in themselves, asymptotically stable if the propensity to spend is less than one (as is usually assumed): the trace is negative and the determinant is positive, which implies two eigenvalues with negative real parts according to the Routh-Hurwitz conditions.
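The stability claim for the output-debt subdynamics — negative trace plus positive determinant implies eigenvalues with negative real parts for a planar system — can be checked on an assumed numerical stand-in for the 2×2 Jacobian that respects the stated sign pattern:

```python
# Illustrative check of the Routh-Hurwitz claim for the 2D output-debt
# subdynamics: with trace(J) < 0 and det(J) > 0, both eigenvalues of a
# 2x2 Jacobian have negative real parts. The entries below are assumed
# stand-ins for the paper's symbolic Jacobian, respecting its sign
# pattern (own effects negative, cross effects negative but dominated).
import numpy as np

J = np.array([[-0.5, -0.1],    # dYdot/dY < 0 (a_y < 1), dYdot/dL < 0
              [-0.3, -0.4]])   # dLdot/dY < 0 (countercyclical), dLdot/dL < 0

trace, det = np.trace(J), np.linalg.det(J)
eigs = np.linalg.eigvals(J)
print(trace < 0, det > 0, all(e.real < 0 for e in eigs))  # True True True
```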
Financial Markets, Accelerating Capital Gain Expectations and Tobin-Type Taxes
The picture of the fully interacting eight-dimensional system is still not complete. For the moment, we concentrate on the dynamics of financial markets and the impact of monetary policy on these dynamics, keeping the real sector and credit fixed at their steady-state values. The output and loan dynamics are thereby ignored in the following, as are the dynamics of asset accumulation through investment and the corresponding issue of equities, as well as the government budget constraint and the corresponding issue of new government bonds.
We thus first study the financial markets in isolation. We now add expectations schemes to them, insofar as static expected capital gains were employed before. Endogenizing capital gain expectations, we distinguish between fundamentalists (f) and chartists (c) and assume for the former that they expect capital gains to converge back, with speeds β_{π_ef}, β_{π_bf}, to their steady-state position, which is zero. Chartists, by contrast (for analytical simplicity), make use of a simple adaptive mechanism to forecast the evolution of capital gains, π_e in the equity market and π_b in the market for long-term bonds. Market expectations are then an average of fundamentalist and chartist expectations.
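A minimal discrete-time sketch of this two-type scheme (functional forms and parameter names are illustrative assumptions, not the paper's exact specification):

```python
# A minimal sketch of the chartist-fundamentalist expectation scheme:
# fundamentalists expect capital gains pi to revert to their zero steady
# state, chartists update adaptively toward the currently observed capital
# gain, and the market expectation is an average of the two. Parameter
# names (beta_pef, beta_pec, weight) are illustrative assumptions.

def step(pi_f, pi_c, observed_gain, beta_pef=0.5, beta_pec=0.8,
         weight=0.5, dt=0.1):
    """One Euler step of the two expectation processes; returns updated
    fundamentalist/chartist expectations and the market average."""
    pi_f += dt * beta_pef * (0.0 - pi_f)             # revert to steady state
    pi_c += dt * beta_pec * (observed_gain - pi_c)   # adaptive scheme
    return pi_f, pi_c, weight * pi_f + (1.0 - weight) * pi_c

pi_f = pi_c = 0.0
for _ in range(200):                     # constant observed gain of 2%
    pi_f, pi_c, pi_market = step(pi_f, pi_c, 0.02)
print(round(pi_c, 4), round(pi_f, 4))    # chartists converge to 0.02,
                                         # fundamentalists stay at 0.0
```

With a persistent observed gain, the chartist component tracks it while the fundamentalist component anchors the market average back toward zero — the tension the stability propositions below exploit.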
The justification for this scheme is two-fold: many empirical studies argue in favor of this kind of expectation mechanism to explain agents' behavior on financial or FX markets, and it remains insightful despite its relatively simple structure. De Grauwe and Grimaldi [7] employ this kind of scheme to characterize the behavior of agents on the foreign exchange market, and Brunnermeier [5] shows how bubbles can evolve in a market when this agent constellation is underlying.¹³ We stress here that these simple expectation formation mechanisms are chosen to make the dynamics analytically tractable. They can, of course, be replaced by much more refined forward- and backward-looking expectation rules when the model is treated numerically. However, we do not expect this to change the results in a significant way if these learning mechanisms are built in the spirit of the ones we introduce and employ below.
The incorporation of imperfect expectation schemes finally gives rise to the following set of equations: with Note that we, in addition, have the law of motion: which feeds back into the rest of the dynamics through the definition of private wealth. This law of motion, when added to the above dynamics, does not, however, alter them very much, since it gives rise to a zero root and, thus, to zero-root hysteresis in the money supply, M, solely. Nominal money supply and its steady-state value, and, thus, private wealth, are therefore path-dependent state variables in this version of the model. The remaining steady-state values of the above dynamics are, of course, simply given by: For the financial markets subdynamics, the following propositions are obtained.

Footnote 13: See Charpe et al. [2] for endogenizing the population weights of each type of agent and also Proaño [19] for the incorporation of heterogeneous expectations in a two-country model along the lines of the disequilibrium approach to macroeconomics also pursued in this paper. For empirical evidence on the chartist-fundamentalist framework in explaining expectational heterogeneity, see Menkhoff et al. [6].
Proposition 3: Gross substitutes, stabilizing expectations and absence of monetary policy
Assume that output Y is fixed at its steady-state value. Then, the 4D dynamics (16), (17), (20), (21) of asset prices is asymptotically stable around its steady-state position if capital gain expectations are dominated by fundamentalists to a sufficient degree (which can be enforced by choosing the Tobin tax parameters sufficiently high).
Proof: The proposition is a consequence of the inherently stabilizing Tobinian gross substitute assumption for equities and long-term bonds; see Flaschel et al. [18].
This shows that the financial core of the model can work in a proper way with respect to local stability under conditions that favor fundamentalist behavior in the asset markets, a prerequisite that cannot easily be regarded as given per se without policy intervention, as it is by no means clear that fundamentalist expectations will dominate chartist ones. At least the gross substitution characteristic facilitates the requirements for convergence, since the negative dependence of demand for one asset on the rate of return of the other excludes explosiveness from this source of interaction.
Proposition 4: Gross substitutes, static expectations and monetary policy
Assume that output Y is fixed at its steady-state value. Then, the 4D dynamics (16)-(19) of asset prices, with one or two policy rules switched on, is asymptotically stable around the steady-state position for all choices of the policy parameters in the laws of motion (20) and (21).
Proof: Consider the countercyclical equity policy rule of the central bank. Then, it is easy to show that the resulting 3D Jacobian of the considered subdynamics fulfills: a_1 = −trace J is positive; the positive 2D upper principal minor is increased by −J_13 J_31; and a_3 = −det J is positive and equal to J_22 J_13 J_31, so that all Routh-Hurwitz polynomial coefficients are positive. The expression a_1 a_2 − a_3, finally, is positive, since a_3 = J_22 J_13 J_31 is contained in a_1 a_2. The same applies to the bond-oriented monetary policy.
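The Routh-Hurwitz test used in this proof can be stated generically for any 3D Jacobian. The following sketch encodes the conditions a_1, a_2, a_3 > 0 and a_1 a_2 − a_3 > 0 and cross-checks them against the eigenvalues; the numerical Jacobian is an assumed illustration, not the paper's symbolic one.

```python
# Generic Routh-Hurwitz check for a 3D system: a1 = -trace(J), a2 = sum
# of the 2x2 principal minors, a3 = -det(J); local asymptotic stability
# requires a1, a2, a3 > 0 and a1*a2 - a3 > 0.
import numpy as np

def routh_hurwitz_3d(J):
    a1 = -np.trace(J)
    minors = [np.linalg.det(J[np.ix_(idx, idx)])
              for idx in ([0, 1], [0, 2], [1, 2])]
    a2 = sum(minors)
    a3 = -np.linalg.det(J)
    return a1 > 0 and a2 > 0 and a3 > 0 and a1 * a2 - a3 > 0

# Assumed, diagonally dominant stable Jacobian for illustration:
J = np.array([[-1.0, 0.2, 0.1],
              [0.1, -0.8, 0.2],
              [0.3, 0.1, -0.9]])
stable = routh_hurwitz_3d(J)
# Cross-check against the eigenvalues directly:
assert stable == all(e.real < 0 for e in np.linalg.eigvals(J))
print(stable)  # True
```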
The combination of the two policies leads to a positive 4D determinant (obtained by several appropriate row operations), so that the stability results are preserved for small positive policy parameters, since the determinant is the product of the four eigenvalues and three of them already have negative real parts. However, the final Routh-Hurwitz condition: leads to lengthy expressions, where the dominance of the positive terms over the negative terms is not as obvious as in the discussed 3D case. Therefore, Proposition 4 provides the foundation for the impact of monetary policy. The potentially beneficial effects of the countercyclical policy rules of the central bank for the real sector of the economy do not lead to instability in its financial part. The interaction of asset demand and asset supply works smoothly when regarded in isolation.
Proposition 5: Tobin-type transaction costs or capital gains taxation
The full 6D dynamics is asymptotically stable around the steady-state position of its state variables if the Tobin-type capital gain taxation parameters are chosen sufficiently high or if Tobin-type transaction costs on financial markets make the parameters α_e, α_b sufficiently small. Proof: In the limit case where transactions or capital gains are driven to zero, we get: as further laws of motion, which makes the 6D dynamics, in a trivial way, locally asymptotically stable. This proposition thus demands, as does a large literature starting with Tobin [20], Tobin-type taxation rules concerning financial market transactions or sufficiently high taxes on capital gains.¹⁴ Inserting the various equations of the model into each other, as far as the structure of financial markets is concerned, gives, however: These equations show that the trace of the Jacobian can be made positive (by means of its fifth and sixth component) if, for example, the parameter combinations: are made sufficiently large through an appropriate choice of the β_{π_e}-adjustment speeds. These speeds of adjustment must in any case be dampened through monetary policy measures in order to avoid the emergence of asset market bubbles that can endanger the stability of the economy. The trace of J becomes positive only after the system has already lost its stability by way of a Hopf bifurcation. The trace = 0 condition therefore supplies only an upper bound for the parameter region where the system can be expected to be stable.
Financial Markets, Credit and Output Dynamics: A Stable Baseline Scenario
The full system describes an economy that is complete with regard to all basic financial instruments from a macro perspective. Therefore, the full interaction of the financial side (including credit and the capital gains acceleration process) with the real side (including a Tobinian investment accelerator) can now be investigated.
For the full dynamics with L as the seventh and Y as the eighth state variable, we have as signs of the Jacobian at the steady state: By appropriate row operations, one can remove the influence of the first six state variables from the last two rows, i.e., in the full 8D case, the 2D subdeterminant can be isolated in its row representation from the first six state variables and the corresponding subdeterminant. The 8D determinant is, therefore, positive.
Positive feedback loops, leading to instability via the a_2 > 0 Routh-Hurwitz stability condition, are J_18 J_81, J_55 J_88, J_66 J_88. They appear to imply instability if β_y is made sufficiently large (and a_y is sufficiently close to one). The system is therefore generally not stable, even if the 6D and the 2D systems are stable when isolated from each other. This is basically due to the fact that economic activity depends positively on Tobin's q and Tobin's q positively on economic activity via the rate of profit. The dynamics is, however, 8D stable if β_y is chosen sufficiently small, since the zero eigenvalue must become negative when β_y is made positive (the full determinant is positive).
There is, therefore, only limited scope for making the full 8D dynamics asymptotically stable through the policy instruments of the central bank (Tobin taxes, open market operations, and countercyclical money supply actions) if the positive feedbacks J_18 J_81, J_55 J_88, and J_66 J_88 become sufficiently large.
If asset markets are booming, the money supply is decreased by the selling of financial assets through the central bank (and vice versa).This, in particular, should moderate the volatility in the financial markets and, thus, contribute to the overall stability of the interaction of the real with the financial sector.
Assume that the 6D system characterizing the financial markets is stable and that capital gain taxes are sufficiently large for this purpose, as well as for a reduction of the entry J_58. Assume, finally, that dividends are taxed such that r′(Y) becomes sufficiently small. In the limit, the sign structure of the Jacobian is then characterized by: which implies the stability of the full dynamics, since |λI − J| can then be decomposed into the financial and the real part of the model. The eigenvalue structure of the full 8D system is identical to the eigenvalue structure that results when the 2D and 6D subsystems are regarded consecutively. The entry J_18 J_81 then no longer causes problems (also if chosen sufficiently small), because it only appears in the determinant and not in a subdeterminant.
Without limiting the results obtained beforehand, a credit expansion effect could be added to the aggregate demand schedule. Obviously, L̇ should feed back into the output dynamics. The expression for the dynamic multiplier would then alter to: since loan increases immediately generate the same amount of aggregate demand. Regarded in isolation, this would add another source of instability, but embedded into the full dynamics, its destabilizing potential vanishes. The slightly modified Jacobian looks like: and now shows an ambiguity concerning the signs of J_81 and J_88, as well as a negative entry in J_82. In the limit case, J_81 cancels out again, as does J_82. The positive influence of output on its own rate of change is caused by the monetary policy that cares about the state of the business cycle via γ_y Y. For a stable working of the whole economy, it is just necessary to conduct this policy carefully enough in order not to fully counteract the stable dynamic multiplier.
Conclusions and Outlook
A framework with all basic financial markets (including credit relationships) from a macroeconomic perspective was presented. Sources of instability were highlighted, and remedies to overcome these fragilities were proposed. Financial-market-oriented open market policies, augmented by a countercyclical term and Tobin-type capital gain taxation, do the job of stabilizing an economy characterized by a Tobinian financial structure and heterogeneous expectation formation. Summing up, we therefore find that the financial markets must be regulated and also handled by open market policy with care in order to ensure the stability of the real-financial market interaction in the considered economy. Of course, these conditions are only sufficient, and thus not necessary, for such stability assertions. The provided stable baseline model might serve as a reference framework and point of departure for the discussion of further topics of banking and asset markets in a macroeconomic context.
A. Appendix

Proof of Proposition 1:
The matrix of partial derivatives of the two considered laws of motion is given by: The trace of this matrix is obviously negative, while, for the determinant, we obtain the expression: and thus find that the negative entries in the diagonal dominate the positive entries in the off-diagonal. This implies that the determinant of J must be positive and thus proves the validity of the Routh-Hurwitz stability conditions for such a planar dynamical system.
Proof of Proposition 2:
The Jacobian of the full 3D system at the steady state is given by: The trace of J is obviously negative, and the principal minor of order two: is positive according to the above proposition, as is the principal minor J_2. Additionally, for the remaining principal minor of order two, we get:

\det\begin{pmatrix} \beta_y(a_y-1) & \beta_y a_q \\ \beta_e\alpha_e f_{e1}\, r'/(q W^n_h) & \beta_e\alpha_e\,[-f_{e1}\, r/(q^2 W^n_h) + (f_e-1)\bar{E}] \end{pmatrix} = \beta_y\beta_e\alpha_e \frac{f_{e1}}{q W^n_h} \begin{vmatrix} a_y-1 & a_q \\ r' & -r/q \end{vmatrix} + \beta_y\beta_e\alpha_e (a_y-1)(f_e-1)\bar{E},

which is positive if the speed parameter β_e is chosen sufficiently small. Note, however, that the Routh-Hurwitz conditions only demand that the sum of the principal minors of order two be positive, which is a much weaker condition than the one just stated.
For the determinant of the Jacobian J, one gets from the above: In order to get a negative determinant, we therefore have to show that the determinant: is positive, in addition to the already assumed positivity of the minor J_3. The last expression shows that this holds, for example, if the marginal propensity to purchase goods, a_y ∈ (0, 1), is sufficiently close to one.
Uses: Equity demand p_e Ė_c = c_{p_e}(p_e^o − p_e)E_h; Bond demand p_b Ḃ_c^l = c_{p_b}(p_b^o − p_b)B_h^l; CB surplus (→ government budget constraint (GBR)): rE_c + B_c^l + τ_e ṗ_e E_h + τ_b ṗ_b B_h^l
Resources: OMP Ṁ = c_{p_e}(p_e^o − p_e)E_h + c_{p_b}(p_b^o − p_b)B_h^l; Tobin capital gains taxes τ_e ṗ_e E_h; Tobin capital gains taxes τ_b ṗ_b B_h^l; Dividends and interest rE_c + B_c^l
J = \begin{pmatrix}
\beta_y(a_y-1) & \beta_y a_q & 0 \\
\beta_e\alpha_e f_{e1}\, r'/(q W^n_h) & \beta_e\alpha_e[-f_{e1}\, r/(q^2 W^n_h) + (f_e-1)\bar{E}] & \beta_e\alpha_e[-f_{e2}/(p_b^2 W^n_h) + f_e \bar{B}^l] \\
\beta_b\alpha_b f_{b^l 1}\, r'/(q W^n_h) & \beta_b\alpha_b[-f_{b^l 1}\, r/(q^2 W^n_h) + f_{b^l} \bar{E}] & \beta_b\alpha_b[-f_{b^l 2}/(p_b^2 W^n_h) + (f_{b^l}-1)\bar{B}^l]
\end{pmatrix}
Table 2. The monetary policy (flows) of the central bank.
Table 4. The flow account of the commercial banks (private ownership).
Planet-Disk Interaction revisited
We present results of our investigations of planet–disk interaction in protoplanetary disks. For the hydrodynamic simulations we use a second-order semi-discrete total variation diminishing (TVD) scheme for systems of hyperbolic conservation laws on curvilinear grids. Our previously used method conserves momentum in two-dimensional systems with rotational symmetry. Additionally, we modified our simulation techniques to conserve inertial angular momentum even in two-dimensional rotating polar coordinate systems. The basic numerical practices are outlined briefly. In addition, we present the results of a common planet–disk interaction setup.
EPJ Web of Conferences 46, 07006 (2013). DOI: 10.1051/epjconf/20134607006. © Owned by the authors, published by EDP Sciences, 2013. This is an Open Access article distributed under the terms of the Creative Commons Attribution License 2.0.
2D advection solver FOSITE
FOSITE ([1], fosite.sf.net) implements second-order semi-discrete central schemes for systems of hyperbolic conservation laws on curvilinear grids. Let (ξ, η, ϕ) be the coordinates of such a grid with the geometrical scaling factors h_ξ, h_η, h_ϕ. It is convenient to define the new spatial differential operators

D_ξ = (1/√g) ∂/∂ξ (h_η h_ϕ ·),  D_η = (1/√g) ∂/∂η (h_ξ h_ϕ ·),

using √g = h_ξ h_η h_ϕ. Systems of hyperbolic conservation laws on curvilinear grids can then be described by

∂_t u + D_ξ F(u) + D_η G(u) = S(u).

The numerical scheme used by FOSITE generalizes the two-dimensional central-upwind schemes developed by [2]. Geometrical source terms of vectorial conservation laws are formulated in a general prescription for various orthogonal curvilinear coordinates. In contrast to many other astrophysical software packages used for planet-disk interaction simulations (e.g. FARGO by [3], RODEO by [4] and others in [5], as well as RAPID by [6]), FOSITE solves the Euler equations in one step without depending on techniques such as operator splitting.
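The combination of a limited (TVD) reconstruction with a semi-discrete central-upwind flux can be illustrated by a minimal 1D sketch. This is a generic illustration of the scheme family of [2] applied to linear advection, not FOSITE code, and all parameter values are arbitrary.

```python
# Minimal 1D sketch of the ingredients FOSITE combines: a second-order
# semi-discrete central-upwind scheme with a minmod (TVD) limiter, here
# for linear advection f(u) = a*u on a periodic grid.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def rhs(u, a, dx):
    # limited slopes and interface reconstructions at i+1/2
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uL = u + 0.5 * du                  # left state at interface i+1/2
    uR = np.roll(u - 0.5 * du, -1)     # right state at interface i+1/2
    # for f(u) = a*u the local speed is |a|; central-upwind flux
    F = 0.5 * (a * uL + a * uR) - 0.5 * abs(a) * (uR - uL)
    return -(F - np.roll(F, 1)) / dx   # conservative flux differencing

# advect a smooth bump once around a periodic domain, SSP-RK2 (Heun)
N, a = 200, 1.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx, dt = 1.0 / N, 0.4 / N              # CFL number 0.4
u = np.exp(-100.0 * (x - 0.5) ** 2)
u0 = u.copy()
for _ in range(int(round(1.0 / (a * dt)))):
    k1 = rhs(u, a, dx)
    k2 = rhs(u + dt * k1, a, dx)
    u = u + 0.5 * dt * (k1 + k2)
print(np.abs(u).max() <= np.abs(u0).max() + 1e-12)  # TVD: no new maxima
```

The conservative flux differencing guarantees that the discrete total mass is preserved to machine precision, which is the 1D analogue of the conservation properties discussed above.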
In conclusion, we summarize the most important features of FOSITE:
• finite volume scheme for hyperbolic conservation laws
• semi-discrete: second order in space, up to fifth order in time
• total variation diminishing with a variety of flux limiters
• arbitrary orthogonal curvilinear grids
• upwind: accounts for propagation of information
• Fortran 95, object-oriented design, GPL
• parallelised using MPI, vectorized for NEC SX-8/9
• outputs: ASCII, gnuplot, VTK, netCDF, HDF5, binary
• integrated Python-based plotting framework
Inertial angular momentum transport
If exact conservation of angular momentum in the inertial frame is of great interest, it is possible to reformulate the Euler equations for the transport of inertial angular momentum l. This includes the Coriolis and centrifugal forces in the rotating frame of reference with angular velocity Ω: Here h_η is the geometrical scaling factor along the second coordinate, e.g. for polar coordinates h_η = r. This leads to a new system of hyperbolic conservation laws, which implies exact conservation of mass and inertial angular momentum. Using the locally isothermal approximation, which yields p = ρ c_s² for the pressure, we derive:
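A minimal sketch of the underlying change of variables, assuming the inertial specific angular momentum takes the form l = h_η(v_φ + Ω h_η), i.e., l = r(v_φ + Ω r) in polar coordinates; the helper names are illustrative:

```python
# Hedged sketch of the change of variables behind inertial angular
# momentum transport: in a frame rotating with angular velocity Omega,
# with scale factor h_eta (= r in polar coordinates), the inertial
# specific angular momentum is assumed here to be
#   l = h_eta * (v_phi + Omega * h_eta).
# Transporting l instead of the rotating-frame momentum removes the
# Coriolis/centrifugal source terms from the azimuthal equation.

def to_inertial_l(r, v_phi_rot, omega):
    """Inertial specific angular momentum from rotating-frame velocity."""
    return r * (v_phi_rot + omega * r)

def to_rotating_v(r, l, omega):
    """Rotating-frame azimuthal velocity recovered from l."""
    return l / r - omega * r

# A Keplerian test ring: inertial v_phi = sqrt(GM/r); pick GM = 1.
GM, omega, r = 1.0, 0.5, 2.0
v_inertial = (GM / r) ** 0.5
v_rot = v_inertial - omega * r             # velocity seen in rotating frame
l = to_inertial_l(r, v_rot, omega)
assert abs(l - r * v_inertial) < 1e-12     # l is frame independent
assert abs(to_rotating_v(r, l, omega) - v_rot) < 1e-12   # round trip
print(l)  # equals r * sqrt(GM/r) = sqrt(GM * r)
```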
Planet-disk interactions
We validate FOSITE with the planet-disk interaction standard simulations as proposed by [5]. First we compare a standard Jupiter-mass simulation using inertial angular momentum transport to our previous method. The figures display the surface densities and the radial density profiles after 30 and 100 planet orbits. While the depth of the gap is similar in both simulations, there is a large difference at the gap edges in the radial density plot. Furthermore, the Lagrange points L4 and L5 can only be seen in the simulation with inertial angular momentum transport, since the other method is considerably more diffusive.
Secondly, we compare results using the standard resolution of 128×384 to high-resolution 512×1536 simulations. Generally, these simulations agree quite well. However, the high-resolution simulation shows different behavior regarding the Lagrange points, which vanish much faster in the standard simulations. In the high-resolution simulation, the outer edge of the gap is dominated by a large vortex that rotates with a lower angular velocity than the protoplanet.
Conclusions
We have shown that exact inertial angular momentum conservation can be achieved without relying on techniques like operator splitting. FOSITE's results agree well with the standard simulations. In the high-resolution runs we observe a large vortex at the outer gap edge.
HTLV-1 uveitis
Human T cell lymphotropic virus type 1 (HTLV-1) is the first retrovirus described as a causative agent of human disease. Following adult T cell leukemia/lymphoma and HTLV-1-associated myelopathy/tropical spastic paraparesis, HTLV-1 uveitis (HU) has been established as a distinct clinical entity caused by HTLV-1, based on seroepidemiological, clinical, and virological studies. HU is one of the most common causes of uveitis in endemic areas of Japan and can be a problematic clinical entity all over the world. HU presents with a sudden onset of floaters and foggy vision and is classified as an intermediate uveitis. Analysis of infiltrating cells in eyes with HU revealed that the majority were CD3+ T cells, but not malignant or leukemic cells, based on their T cell receptor usage. HTLV-1 proviral DNA, HTLV-1 protein, and viral particles were detected in infiltrating cells in eyes with HU. HTLV-1-infected CD4+ T cell clones established from infiltrating cells in eyes with HU produced large amounts of various inflammatory cytokines, such as IL-1, IL-6, IL-8, TNF-α, and interferon-γ. Taken together, HU is considered to be caused by inflammatory cytokines produced by HTLV-1-infected CD4+ T cells that accumulate in the eye; therefore, topical and/or oral corticosteroid treatment is effective for the intraocular inflammation in patients with HU. Further investigation is needed to establish a specific treatment for HU.
INTRODUCTION
Retroviruses were first described in the 1970s (Temin and Baltimore, 1972), but their causal relationship with human diseases was not identified until the early 1980s, when human T cell lymphotropic virus type 1 (HTLV-1) was identified as an etiologic agent of adult T cell leukemia/lymphoma (ATL; Poiesz et al., 1980; Hinuma et al., 1981; Yoshida et al., 1984). After the discovery of the link between HTLV-1 and ATL, HTLV-1 was also found to be a causal agent of HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP; Gessain et al., 1985; Osame et al., 1986) and HTLV-1 uveitis (HU; Mochizuki et al., 1992a,b,c).
HTLV-1 uveitis, the third clinical entity of HTLV-1 infection, was established by a series of studies in the highly endemic area of southern Kyushu, Japan. Clinical case reports from this area suggested possible associations of HTLV-1 carriage with various ocular manifestations (Ohba et al., 1989). In the 1990s, the first evidence indicating the causative implication of HTLV-1 in uveitis was reported by Mochizuki and colleagues. They presented clinical and laboratory data comprising seroepidemiology, clinical features, detection of proviral DNA and mRNA of HTLV-1 from ocular tissues, and detection of viral particles from T cell clones (TCC) derived from the aqueous humor of a patient (Mochizuki et al., 1992a,b). Since then, it has been well established that uveitis is significantly related to HTLV-1. Here, we review the historical findings that contributed to the establishment of the HU entity and recent advancements that deepen our understanding of HU.
SEROEPIDEMIOLOGY
HTLV-1 infection is known to have a unique geographic distribution and is prevalent in Japan, Melanesia, the Caribbean Islands, Central America, South America, and Central Africa. It is estimated that 20 million people carry the virus worldwide (Watanabe, 2011). The virus is etiologically linked with HU, which is one of the most common causes of uveitis in the endemic areas of Japan and can be a problematic clinical entity all over the world (Takahashi et al., 2000; Merle et al., 2002; Pinheiro et al., 2006; Miyanaga et al., 2009). Uveitis is a sight-threatening inflammatory disorder affecting the intraocular tissues (Forrester, 1991) and is the third leading cause of blindness in developed countries. The etiology of uveitis is categorized as infectious or non-infectious and varies depending on the genetic background of the population and the prevalence of the pathogenic agents in the area. Clinically, the etiology of approximately 30% of cases cannot be defined even when careful examinations are performed. A survey comparing the etiologies of uveitis in different areas of Japan demonstrated that the proportion of undefined etiologies was particularly high in southern Kyushu as compared to northern Kyushu and Tokyo. Seroepidemiological comparison studies (Mochizuki et al., 1992a,b; Shirao et al., 1993) in these highly endemic and non-endemic areas revealed that the HTLV-1 seroprevalence in patients with idiopathic uveitis was significantly higher than that in the following two control groups: patients with etiology-defined uveitis and patients with non-uveitic ocular diseases (Figure 1). This was the first clue suggesting that HTLV-1 infection is significantly related to uveitis. Uveitis is now recognized as a distinct clinical entity related to HTLV-1 and is designated as HU.
The seroprevalence of HTLV-1 in the general Japanese population is known to have decreased after serological screening tests of HTLV-1 in blood donors started in 1987, as blood transfusion and breastfeeding from mother to child are major routes of viral transmission (Iwanaga et al., 2009). A recent survey (Miyanaga et al., 2009) in the HTLV-1 endemic region revealed that the most common clinical entity was still HU, followed by Vogt-Koyanagi-Harada disease, sarcoidosis, and others. However, new cases of HU clearly decreased with time, while the prevalence of Vogt-Koyanagi-Harada disease and sarcoidosis has not changed much in the last two decades. The age distribution of HTLV-1 seroprevalence in all patients with uveitis including HU and in patients with uveitis excluding HU showed that HTLV-1 seroprevalence increased with age in both groups (Takahashi et al., 2000; Miyanaga et al., 2009). As for sex, higher prevalence rates were found in women, especially after 40 years of age. HTLV-1 is known to be transmitted by infected lymphocytes in sperm, and this may contribute to the higher prevalence of the disease in women than in men (Takahashi et al., 2000; Miyanaga et al., 2009). As for the prevalence of HU in different parts of the world, the prevalences of HU in Martinique (Merle et al., 2002) and Brazil (Rathsam-Pinheiro et al., 2009) are lower than that in Japan (Yamamoto et al., 1999; Pinheiro et al., 2006). In general, as migration to metropolitan areas is on the rise, the number of HTLV-1 carriers in metropolitan areas (for example, Tokyo) is significantly increasing (Uchimaru et al., 2008), although the number of carriers is still the highest in the endemic areas. In light of this evidence, the number of patients with HU is expected to increase in metropolitan areas. Therefore, careful examination concerning HU is needed for the diagnosis of uveitis.
CLINICAL MANIFESTATIONS
A recent report indicated that ocular disturbances may be the first manifestations of HTLV-1 infection to come to clinical attention, in addition to neurologic and rheumatologic signs and symptoms (Poetker et al., 2011). Therefore, all patients presenting for an initial diagnosis should be strictly screened for ocular symptoms. The major symptoms of HU at initial presentation are sudden onset of floaters, foggy vision, and blurred vision. Other symptoms are pain/burning, itching, and foreign body sensation. These symptoms appear in all geographic regions according to studies in Japan, Brazil, and Martinique (Merle et al., 2002; Pinheiro et al., 2006). Regarding the anatomic diagnosis of uveitis according to the criteria of the International Uveitis Study Group, most patients had an intermediate degree of uveitis with moderate or heavy vitreous opacities (fine cells and lacework-like membranous opacities). The vitreous opacities were the most impressive findings and were accompanied by mild iritis and mild retinal vasculitis, but no uveoretinal lesions. The ocular inflammation of HU was unilateral or bilateral (Merle et al., 2002; Pinheiro et al., 2006). An association between HU and Graves' disease has been reported; HU occurs after the onset of Graves' disease in all cases (Yamaguchi et al., 1994). The most recent study (Miyanaga et al., 2009) reported a similar incidence of HU after Graves' disease to that reported by Yamaguchi et al. (1994). Only a few cases of HU develop into HAM/TSP, but no literature has reported that ATL develops in patients with HU during their clinical course. Further patient-tracking research is ongoing to determine whether HU is a risk factor for the development of ATL or HAM/TSP.
DIAGNOSIS
Considering seroepidemiological and clinical studies, the diagnosis of HU should be based on seropositivity for HTLV-1 with no systemic evidence of HTLV-1-related diseases (such as ATL or HAM/TSP) and exclusion of other uveitis entities with defined causes. Therefore, all clinical entities of uveitis with defined causes should be excluded by careful ophthalmic and systemic examinations. Patients with HU should not have ophthalmic and systemic symptoms that are compatible with other types of uveitis such as Behçet's disease, Vogt-Koyanagi-Harada syndrome, and sarcoidosis.
PATHOGENESIS
Eye research has progressed significantly in accordance with the development of modern molecular biological technology, such as the polymerase chain reaction and flow cytometry. Many fundamental findings have been obtained in the study of HU pathogenesis. The cells floating in the anterior chamber of the eye with HU consisted of lymphocytes with a small proportion of macrophages. No malignant cells or leukemic cells were detected in the aqueous humor of the patients with HU. The majority of infiltrating cells in the aqueous humor of patients with HU were CD3+ T cells (Ono et al., 1997). Analysis by polymerase chain reaction of ocular-infiltrating cells revealed that HTLV-1 proviral DNA was detected in almost all patients with HU. However, proviral DNA was not detected in patients with uveitis of other defined etiology who were seropositive for HTLV-1. These data suggest that HTLV-1-infected cells are present at the local site of HU (Ono et al., 1997). Furthermore, expression of viral mRNA was detected by reverse transcriptase-polymerase chain reaction from the inflammatory cells in the aqueous humor. More direct evidence of HTLV-1 in the pathogenesis of HU has been provided by using TCC derived from intraocular tissues of eyes with HU. Proviral DNA of HTLV-1 was identified in TCC from the ocular fluid. Immunohistochemical staining showed that HTLV-1 env and gag proteins were detectable in HTLV-1 provirus-positive TCC. Furthermore, electron microscopic observation of the TCC identified HTLV-1 virus particles, the mean diameter of which was 102 nm. Most HTLV-1-infected TCC had a CD3+ CD4+ CD8− phenotype and had polyclonal TCRα usage. The HTLV-1-infected TCC produced significant amounts of IL-1α, IL-2, IL-3, IL-6, IL-8, IL-10, TNF-α, IFN-γ, and GM-CSF, which are potent cytokines capable of inducing immune reactions and inflammation at the intraocular tissue level.
These data suggest that cytokine production by HTLV-1-infected T cells in intraocular tissues is responsible for intraocular inflammation, i.e., uveitis (Figure 2). In addition to this molecular biological/immunological evidence, virological research supported the pathogenicity of HTLV-1 in the eye with the following three pieces of evidence: (1) the HTLV-1 proviral load in patients with HU is significantly higher than that in asymptomatic carriers without uveitis (Ono et al., 1995); (2) the proviral load in peripheral blood mononuclear cells correlates with the intensity of intraocular inflammation (Ono et al., 1998); and (3) the proviral load in the eyes of patients with HU is significantly higher than that present in peripheral blood mononuclear cells (Ono et al., 1997). Serologic data showed that the antibody level against HTLV-1 in patients with HU was similar to that in asymptomatic carriers of HTLV-1, but was lower than that in patients with HTLV-1-associated myelopathy (Mochizuki et al., 1992b). Antibody to the virus in the aqueous humor was also detected in all tested samples from patients with HU. Flow cytometry analysis indicated that the CD4 fraction was elevated and the CD8 fraction was decreased in peripheral lymphocytes from patients with HU, thereby elevating the CD4/8 ratio in the HU group as compared with the seronegative group. Furthermore, the CD25 fraction of T lymphocytes expressing interleukin 2 receptors was significantly elevated in patients with HU. The serum levels of soluble interleukin 2 receptors (sIL2R or sCD25) were also significantly higher in patients with HU than in seronegative healthy controls. Taken together, these laboratory data suggest that an immune-mediated mechanism, particularly involving CD4+ T cells, plays a critical role in the pathogenesis of HU.
THERAPY
Immunopathogenesis studies of HU showed that the majority of ocular-infiltrating cells are inflammatory cells, not malignant cells. Also, a series of studies showed that HU is caused by inflammatory cytokines produced by HTLV-1-infected CD4+ T cells that accumulate significantly in the eyes of the patients. Furthermore, the addition of corticosteroids to the culture medium suppressed the cytokine production. Therefore, corticosteroid treatment is effective for the intraocular inflammation of patients with HU because it suppresses the cytokine production of HTLV-1-infected CD4+ T cells. Clinical management should be performed according to the degree of ocular inflammation. HU with a mild degree of ocular inflammation can be managed by topical non-corticosteroidal or corticosteroidal anti-inflammatory drugs. A sub-Tenon's injection of corticosteroids may be used when patients have moderate inflammatory activity in the vitreous cavity. If the vitreous inflammatory activity and the retinal vasculitis are severe, oral corticosteroids should be given, but the long-term administration of a systemic corticosteroid should be avoided. In most cases, intraocular inflammation improves markedly with these therapies and complete remission is achieved. The visual prognosis for cases of HU is generally good with these corticosteroid treatments, although approximately 60% of patients experience recurrences of uveitis.
CONCLUSION
We reviewed the seroepidemiological, clinical, molecular biological, and virological studies that established the HU entity and clarified the immunopathogenesis and the clinical management of HU. Corticosteroid is the only effective treatment for HU to suppress the cytokines produced by infiltrating HTLV-1-infected cells; however, it is unknown whether long-term corticosteroid treatment adversely affects patients with HU. Many mechanisms in HU remain unclear, such as how HTLV-1-infected CD4+ cells break down the ocular blood barrier and why the vitreous humor is the major site of inflammation (Figure 2). We may be able to find more effective treatments if we can understand the mechanism of HU in more detail. Recent studies have shown new insights into HTLV-1 infection and pathogenesis by pursuing the molecular functions of HTLV-1 basic leucine zipper factor and Tax (Yasunaga and Matsuoka, 2011). However, few studies have been conducted to apply these new findings to HU research. Further investigation is needed to establish a specific treatment for HU. HU results from HTLV-1 infection; therefore, the most important means of preventing this disease is spreading knowledge about HTLV-1.
GREEN ECONOMY: CONTENT AND METHODOLOGICAL APPROACHES
The existing economic development model needs to fit into the sustainable development framework due to the continuing depletion of natural resources and persistent disproportions in economic growth. Therefore, a new concept was introduced, the "green economy", which emphasises improving the population's quality of life while minimising the use of resources and preserving nature for subsequent generations. However, discussions about the green economy measurement methodology continue. Based on the literature analysis, the authors clarified approaches towards the concept under consideration. They developed a novel approach to a green economy in the context of the basic principles of sustainable development.
Introduction
A series of global forums in the second part of the 20th century and the beginning of the 21st century devoted to sustainable development stimulated scientific interest. Sustainable development as a concept burst into scientific considerations of a broad spectrum of disciplines in the late 1980s due to the publication of the report "Our Common Future" in 1987. The report summarised the achievements and failures of humanity in the 20th century, identifying sustainable development as a possible way of improving the existing situation (Brundtland, 1987).
What the Brundtland Report defined as "Our Common Future" received an institutional framework with the adoption of the Millennium Development Goals (MDGs) in 2000 and, more importantly, the Sustainable Development Goals (SDGs) set by the United Nations General Assembly in 2015, developed as a result of the Rio+20 conference (the United Nations Conference on Sustainable Development, UNCSD) held in 2012. Two agenda items for Rio+20 were: "Green Economy in the Context of Sustainable Development and Poverty Eradication" and "International Framework for Sustainable Development". As we see, the term green economy was used in the context of sustainable development. The paper's authors adopt the same approach and focus on the green economy content and its measurement in the context of sustainable development.
As mentioned above, moving towards a green economy has become a strategic policy agenda for sustainable development. A green economy recognises that the goal of sustainable development is improving the quality of human life within the constraints of the environment, which include combating global climate change, energy insecurity, and ecological scarcity. However, a green economy must be focused on more than eliminating environmental problems and scarcity. It must also address the concerns of sustainable development with intergenerational equity and eradicating poverty (UNEP, 2011).
The European Union has contributed significantly to the activities of international structures related to sustainable development. The EU countries have hosted most of the decisive environmental forums. The European Commission finds the green economy to be more than a sum of existing commitments. It has the potential to introduce a new development paradigm and a new business model in which growth, development, and the natural environment are deemed mutually supportive. Increasing resource efficiency, promoting sustainable consumption and production, preventing climate change, protecting biodiversity, combating desertification, reducing pollution, and managing natural resources and ecosystems in a responsible manner are necessities and a simultaneous driving force ensuring the transition to a green economy (Ryszawska, 2013; Kasztelan, 2021). Bogovic et al. (2020, p.1) "conclude that transitioning towards a green economy, i.e., implementing specific green economy policies, can push sustainable development in the EU while simultaneously contributing to the implementation of the strategic goals of the European Green Deal".
In line with the commitment to develop a green economy, the EU emphasises attaining the Sustainable Development Goals (SDGs). The EU made a positive and constructive contribution to the development of the 2030 Agenda, is committed to implementing the SDGs in all policies and encourages EU countries to do the same (European Commission, 2022).
Against this background, it is notable that the analysis of existing literature has demonstrated that only a few scholars have conducted research dedicated to assessing the performance of the green economy in the European Union, especially in the context of sustainable development and the SDGs. Such a state of affairs is discordant with the ambitious goals and political actions of the European Union in terms of the green economy.
A wide range of modern scientists worldwide is engaged in research on the theoretical and methodological basis of the green economy. Alcalde-Calonge et al. (2022, p.1) note that "the literature on the topic has grown from 12 scientific articles published in 2008 to 2355 in 2020, which represents an almost two hundredfold increase in around a decade". The fact that most natural resources are non-renewable, a significant increase in environmental damage, and the growth of the world population highlight the need to develop a green economy that promotes environmentally sustainable investments (Bergius et al., 2020).
With many countries striving to improve resource efficiency, introduce environmentally friendly production methods, combat climate change, etc., it is clear that the concept of a green economy remains high on the agenda, especially taking into account the high energy prices the world economy has faced recently.
Promoting a more resource-efficient, greener and more competitive economy was the priority for the EU under the "European Green Deal", a plan to achieve carbon neutrality by 2050 outlined by the European Commission in December 2019 (European Commission, 2020). Regions and many separate states also retain an interest in promoting greener economies on a national level. Decision-makers in most European countries have acknowledged this imperative, which lies at the core of common EU policies, and implement it at the country level through so-called National Energy and Climate Plans. These plans provide, among other things, targets for the decommissioning of those technologies that have a more profound impact on our carbon footprint and for the development of new renewable capacity.
Nevertheless, it must be admitted that a particular gap in the research on the green economy and its connection with the concept of sustainable development still exists: the scientific problem of measuring a country's or region's progress towards a greener economy. Even though several models exist, as will be shown further in this study, they seem to cover only some of the spheres related to the issue.
Evolution of green economy research in the context of sustainable development
The meanings that different authors put into "green economy", along with accompanying definitions such as "green technologies", "eco-innovation", "green innovation", and "green growth", differ slightly. Fulai (2010) claimed that the green economy was typically understood as an economic system that was compatible with the natural environment, was environmentally friendly, ecological, and, for many groups, also socially just. Others, such as the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP), define green growth as a policy focus that emphasises "environmentally sustainable economic progress to foster low-carbon, socially inclusive development" (Greening the economy, 2011, p.3).
The definitions of "green growth" provided by the OECD take a broad approach, promoting economic growth while reducing pollution and greenhouse gas emissions, minimising waste and inefficient use of natural resources, and conserving biodiversity (OECD, 2017).
Green growth can be defined as economic growth focusing on environmentally sustainable (safe for the environment) and socially inclusive development. The essential features of such growth are that it does not harm the environment and does not secure better economic prospects for contemporaries at the expense of future generations.
To reach green growth without hampering economic prospects (particularly the growth of GDP), humanity utilises green innovations, meaning the creation or implementation of new or modified processes, practices, systems and products which benefit the environment and contribute to environmental sustainability.
According to Swart and Groot (2020), the term "green economy" emphasises a friendly attitude toward the natural environment. Chavula et al. (2022) agree that a low-carbon, resource-efficient, and socially inclusive economy is referred to as green.
Other scientists pay special attention to the well-being aspect, claiming that "the green economy is an alternative vision for growth and development; one that can generate economic development and improvements in people's lives in ways consistent with also advancing environmental and social wellbeing" (Söderholm, 2020, p.1).
For Chen et al. (2006, p.332), green innovation "is hardware or software innovation that is related to green products or processes, including the innovation in technologies that are involved in energy-saving, pollution-prevention, waste recycling, green product designs, or corporate environmental management". Kemp and Pearson (2007, p.7) associate green innovations with "the production, assimilation or exploitation of a product, production process, service or management or business method that is novel to the organisation (developing or adopting it) and which results, throughout its life cycle, in a reduction of environmental risk, pollution and other negative impacts of resources use (including energy use) compared to relevant alternatives".
Green innovations, according to Oltra and Saint Jean (2009, p.567), "are innovations that consist of new or modified processes, practices, systems and products which benefit the environment and contribute to environmental sustainability". Nuryakin et al. (2022, p.1) established "the mediating role of green product innovation and green product competitiveness advantage on green marketing performance". Leal-Millán et al. (2017) claimed that green innovations contribute to creating products, services or processes while optimising the use of natural resources to improve human well-being and can also contribute to sustainable development.
Scientists claim that green innovations produce positive spillovers in both the introduction and diffusion stages. They are intrinsically riskier and more uncertain than other investments because they involve technologies that are in the initial stage of their development and therefore suffer from the existence of increasing returns (from knowledge, competencies, and infrastructure) in established, carbon-intensive technologies; finally, the evolution of and frequent changes in environmental regulation make the profitability of eco-innovative projects uncertain (Cecere et al., 2020; Andersén, 2021). As a result, green stocks may be very volatile in their market performance (Rybalkin, 2022). As has already been stated, green innovations, irrespective of the economic sector in which they are introduced, are one of the main tools to facilitate green growth, which in turn is conducive to the green economy.
Similarly to green innovations, eco-innovations embrace "the introduction of any new or significantly improved product (good or service), process, organisational change or marketing solution that reduces the use of natural resources (including materials, energy, water and land) and decreases the release of harmful substances across the whole lifecycle" (Sobczak et al., 2022, p.1). Eco-innovations include product, process, and organisational eco-innovations (Eco-Innovation Observatory, 2012).
Eco-innovation is the creation or implementation of new or significantly improved products (goods and services), processes, marketing methods, organisational structures and institutional arrangements which, with or without intent, lead to environmental improvements compared to relevant alternatives.
Thus, the analysis performed by the authors of the present study shows that the green economy is an economic system that is compatible with the natural environment, environmentally friendly, ecological, and, for many groups, also socially just; it can be regarded as the final goal of green growth.
How are the concepts of "green economy", "green growth", and "green innovation" related to the notion of "sustainable development"? Sustainable development ensures economic growth, which makes it possible to harmonise human-nature relations and safeguard the environment for present and future generations (Vertakova et al., 2017).
Ryszawska (2015) identifies sustainable development as social, economic and political development that preserves the natural balance and environmental access for future generations.
The concept of sustainable development is usually considered from two perspectives. In a narrow sense, the focus is mainly on its ecological component, while in a broad sense, sustainable development is interpreted as a process that denotes a new type of civilisational functioning. Therefore, sustainable development can be seen as an objective requirement of our time (Medvedkina, 2020; Khan, 2021). Having emerged with the Blueprint for a Green Economy prepared for the UK's Department of the Environment, the concept attracted the particular interest of researchers in the aftermath of the 2008-2009 global financial crisis, which made it apparent to decision-makers that studying this phenomenon was inevitable, since there was an urgent need to shift the existing economic model and find new ways of elaborating a new green economy paradigm. Fulai (2010), Oliinyk (2020), and Trushkina (2022), for instance, articulate the relationships between the notion of a green economy and other related concepts such as a low-carbon economy, a circular economy, sustainable consumption and production (SCP), green growth, sustainable development, the Millennium Development Goals (MDGs), etc. A green economy can improve the growth of a country's economy while at the same time achieving sustainability goals (Alsmadi et al., 2022).
Some authors have analysed the relationship between a green economy, green growth and sustainable development. The analysis performed by Ryszawska (2017) is of particular value because it both compares and analyses several definitions of green economy and green growth provided by UNEP (2011), OECD (2011), etc.
Kasztelan (2017) concluded that the co-existence of the trio "green economy, green growth, sustainable development" is reasonable due to the complementary and synergistic nature of the correlations between these concepts. The author argues that the restructuring of the economy towards so-called "green" solutions (green economy), based on the assumptions of the strategy of green growth, is the primary condition for entering the path of sustainable development. In the economic dimension, the green economy and green growth have to enable an overall increase in welfare; in the social dimension, this will translate into improvement in life quality, while in the environmental dimension, they will contribute to reducing pressure on the environment and improving the effectiveness with which natural capital is utilised (Kasztelan, 2017). The primary assumption of a green economy or green growth is not to replace the concept of sustainable development, but the conviction that achieving sustainable development should be based on an adequately oriented economy. Building a green economy based on the assumptions of the strategy of green growth must become an integral element of economic policy on the way towards sustainable development. Finally, Kasztelan (2017) proposes the following definition of green growth: economic growth which contributes to the rational utilisation of natural capital, prevents and reduces pollution, and creates chances to improve overall social welfare by building a green economy, finally making it possible to enter the path towards sustainable development. Such treatment allows the author to emphasise the integrity of the trio: green growth, green economy, sustainable development. Taking the abovementioned findings into account, the definition of the green economy (to put it in the context of sustainable development) should be enlarged: the green economy is based on sustainable development principles and lays the basis for SD.
These spheres are education (new or modified processes, assimilation, etc.), economy (products, goods, services, corporate management, business methods, energy use, etc.), politics (organisational structures, energy security, a just system, etc.) and environment (reduction of environmental risk and pollution; pollution prevention, waste recycling, biodiversity, etc.). These findings point to the five-sphere model, the Quintuple Helix.
The "Quintuple Helix" model of sustainable development is based on the quality management of development, restoring balance with nature and preserving Earth's biological diversity. Moreover, it can solve existing problems by applying knowledge and know-how, as it focuses on the social (public) exchange and transfer of knowledge within the subsystems of a particular nation state (Barth, 2011; Arsova et al., 2021). The innovative Quintuple Helix model explains how knowledge, innovations, and the (natural) environment are interrelated (Carayannis and Campbell, 2010; Barth, 2011; Carayannis et al., 2021; Cai, 2022). The Quintuple Helix model is both interdisciplinary and transdisciplinary: the complexity of the five-spiral framework implies that a complete analytical understanding of all spirals requires the continuous involvement of the entire disciplinary spectrum, ranging from the natural sciences (due to the presence of natural environment factors) to the social sciences and humanities, to promote and visualise the system of collaboration between knowledge, know-how and innovations for more sustainable development (Carayannis et al., 2010; Kholiavko et al., 2021). The first subsystem of the Quintuple Helix is the education system, where the necessary human capital is formed. The second subsystem, the economic one, concentrates on economic capital (e.g., entrepreneurship, machines, food, technologies and money). The third subsystem is the political one, comprising political and legal capital (e.g., ideas, laws, plans, policies, etc.). The fourth subsystem unites two forms of capital: social capital and information capital. The fifth subsystem, the environment, is crucial for sustainable development, as it provides people with natural capital (e.g., resources, plants, animal diversity, etc.).
To combine all the findings of the present chapter, it is necessary to work out a definition of the green economy that fits both the context of the Quintuple Helix model and sustainable development. In line with such requirements, the study's authors propose that a green economy should be defined in the following way: the green economy is an economic system based on sustainable development principles, laying the basis for SD. It ensures economic growth while being compatible with the natural environment and environmentally friendly. It is socially just for many groups, comprehends the implementation of specific policy instruments targeted at the environment, and disseminates its ideas through the education system.
The Green Growth Index comprises 25 to 30 indicators that characterise four main groups: environmental and resource efficiency of the economy (carbon and energy efficiency; resource efficiency: materials, nutrients, water; multifactor productivity), the natural asset base (renewable resources: water, forests, fisheries resources; non-renewable stocks: minerals; biodiversity and ecosystems), the environmental aspects of quality of life (environmental conditions and risks, ecosystem services and environmental benefits), and the economic opportunities and policy instruments that determine green growth (technology and innovation, environmental goods and services, international financial flows, prices and transfers, skills and training, regulations and management approaches). In addition, indicators reflecting the socio-economic context and characteristics of growth (economic growth and economic structure, productivity and trade, labour markets, education and income, as well as socio-demographic characteristics) have been identified. The proposed set of indicators is not yet final; each country can adapt the set to national circumstances (OECD, 2014). Economic indicators characterising a significant part of the Stage 3 indicator are crucial in the Green Economy approach. Investing in green activities will lead to capital accumulation and job creation while stimulating economic growth through more sustainable production and consumption.

The construction of the Green Economy Index by Bożena Ryszawska (Ryszawska, 2015, 2017) began with an overview of the definitions of a green economy presented in selected strategic documents. The measurement of a green economy covers the assessment of the environmental condition, the pressure exerted on the environment by human activity, and the policies pursued by governments which support actions in favour of a green economy (Ryszawska, 2015, p.45). The Global Green Economy Index (Global Green Economy Index, 2014) includes four
four subcomponents: Environment and natural capital; Market and investment; Efficiency sectors; Leadership and climate change. Thirty-two underlying indicators and datasets define the performance index of the 2014 GGEI. Table 5 presents the general structure of these four main dimensions and their associated subcomponents (Global Green Economy Index, 2014, p. 8). The Green GDP Index (Stjepanović et al., 2019) considers three methodological approaches to calculating an environmentally adjusted domestic product: 1) consideration of the reduction of natural capital; 2) additionally taking into account the degradation of the environment due to the accumulation of pollutants and waste, since they affect both economic activity and natural capital; 3) a further deduction of the costs of combating environmental degradation, as these adjusted accounts should show defensive costs depending on their impact on natural capital. Stjepanović, Tomić and Škare (2019) proposed an alternative approach to sustainability and green growth, which represents a crucial step towards transforming global economic thinking by ensuring an applicable methodology and correct information for the assessment of economic progress. "By following their work and keeping a common Green GDP accounting framework (a quantitative position), we have applied a general methodological algorithm that is suitable for the assessment of and comparison between different countries, as well as other surveys" (Stjepanović et al., 2019, p. 6).
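The three methodological approaches can be read as successive deductions from conventional GDP. A minimal sketch of that bookkeeping follows; all figures and variable names are hypothetical placeholders, not values taken from Stjepanović et al. (2019).

```python
# Illustrative Green GDP calculation following the three successive
# adjustment steps described above. All figures are hypothetical.

def green_gdp(gdp, natural_capital_depletion, degradation_cost, defensive_expenditure):
    """Deduct the three environmental adjustments from conventional GDP."""
    step1 = gdp - natural_capital_depletion   # 1) reduction of natural capital
    step2 = step1 - degradation_cost          # 2) accumulation of pollutants and waste
    step3 = step2 - defensive_expenditure     # 3) costs of combating degradation
    return step3

# Hypothetical country, values in billions of a common currency
print(green_gdp(gdp=500.0,
                natural_capital_depletion=12.0,
                degradation_cost=8.5,
                defensive_expenditure=4.0))  # -> 475.5
```

Each step corresponds to one of the three approaches, so intermediate values can also be reported separately when only the first one or two adjustments are applied.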
Measuring progress towards a green economy
The Environmental Performance Index in 2020 evaluates only Environmental Health (40%) and Ecosystem Vitality (60%) (Wendling et al., 2020). The 2020 EPI framework organises 32 indicators into 11 issue categories and two policy objectives, with weights shown at each level as a percentage of the total score (Wendling et al., 2020). The 2022 EPI framework organises 40 indicators into 11 issue categories and three policy objectives, with weights shown at each level as a percentage of the total score. The Environmental Performance Index in 2022 evaluates Environmental Health (20%), Climate Change (38%) and Ecosystem Vitality (42%) (The Environmental Performance Index, 2022). Other authors also believe that the green economy represents a catalyser for sustainable development in its three dimensions (economic, social and environmental), aiming to improve human well-being and social equity and reduce ecological risks (Chaaben et al., 2022).
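At the top level, such an index is a weighted average over policy objectives. The sketch below applies the 2022 objective weights quoted above; the per-objective scores themselves are invented purely for illustration.

```python
# Weighted aggregation of the 2022 EPI policy objectives. The weights come
# from the text above; the objective scores are hypothetical.

EPI_2022_WEIGHTS = {
    "Environmental Health": 0.20,
    "Climate Change": 0.38,
    "Ecosystem Vitality": 0.42,
}

def epi_score(objective_scores, weights=EPI_2022_WEIGHTS):
    """Weighted average of 0-100 objective scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(objective_scores[k] * w for k, w in weights.items())

# Hypothetical country scoring 70 / 55 / 60 on the three objectives
print(round(epi_score({"Environmental Health": 70.0,
                       "Climate Change": 55.0,
                       "Ecosystem Vitality": 60.0}), 1))  # -> 60.1
```

The same pattern cascades downward in the published framework: issue-category scores are weighted averages of indicators, and objective scores are weighted averages of issue categories.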
The Greenness of Stimulus Index 2021 (Greenness of Stimulus Index, 2021, p. 20) is constructed by combining the flow of stimulus into five key sectors with an indicator of each sector's environmental impact, the latter accounting for both historical trends and specific measures taken under the country's stimulus. The five sectors are chosen for their historical impact on climate and environment: agriculture, energy, industry, waste and transport. The overall GSI is an indicator of the total fiscal spending in response to COVID-19, categorised as having either a positive or negative environmental impact. The final index for each country is an average of sectoral impact, normalised to a scale of -1 to 1 (Greenness of Stimulus Index, 2021, p. 20). The Greenness of Stimulus Index 2021 (Greenness of Stimulus Index, 2021) covers the areas of "Natural environment", "Educational subsystem", "Economic subsystem" and "Political subsystem". However, it does not consider the social aspect at all. The "Natural environment subsystem" may include Nature-Based Solutions, Conservation and wildlife protection programmes, Subsidies for environmentally harmful activities, Environmentally harmful infrastructure investments, and Environmentally related bailouts without green strings; the "Educational subsystem" may include Green R&D subsidies; the "Economic subsystem", Subsidies or tax reductions for environmentally harmful products, Green infrastructure investments, and Subsidies or tax reductions for green products; the "Political subsystem", Deregulation of environmental standards.
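A simplified reading of this construction is a stimulus-weighted average of sectoral impact scores kept on the [-1, 1] scale. The published Vivid Economics methodology is more involved, and every number below is hypothetical.

```python
# Stimulus-weighted average of sectoral environmental-impact scores,
# kept on the [-1, 1] scale. A simplified sketch of the GSI construction;
# all spending figures and impact scores are hypothetical.

SECTORS = ["agriculture", "energy", "industry", "waste", "transport"]

def gsi(stimulus, impact):
    """stimulus: sector -> spending; impact: sector -> score in [-1, 1]."""
    total = sum(stimulus[s] for s in SECTORS)
    index = sum(stimulus[s] / total * impact[s] for s in SECTORS)
    return max(-1.0, min(1.0, index))  # keep the index on the [-1, 1] scale

spend = {"agriculture": 10, "energy": 40, "industry": 20, "waste": 5, "transport": 25}
score = {"agriculture": 0.2, "energy": -0.5, "industry": -0.2, "waste": 0.4,
         "transport": 0.1}
print(round(gsi(spend, score), 3))  # -> -0.175
```

A negative result, as here, corresponds to a stimulus package whose spending is tilted towards environmentally harmful sectors.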
The EEPSE Green Economy Index is consistent with the "Quintuple Helix" model of sustainable development (Rybalkin, 2022). Its "Natural environment subsystem" may include the state of the natural environment. The role of an educational factor in the green economy has long been acknowledged. As early as the Brundtland Report (Brundtland, 1987) there was an appeal, among other things, to educational institutions and the scientific community, which had played indispensable roles in creating public awareness and political change in the past. It was suggested that they would play a crucial part in putting the world onto sustainable development paths. It is also essential that knowledge has been widely recommended as a critical resource to support innovativeness and hence green economy research (Leal-Millán et al., 2017). Indeed, the knowledge base, after effective supply chain networking, becomes vital for enhancing the green economy (ibid.).
Education can supply the job market with new specialists for the green economy and retrain some existing specialists. As the 'green' spheres in the job market develop, the demand for specialists in new professions, known as 'green collars', grows too. Specialists in the rapidly evolving energy efficiency policy and savings could be an example of such 'green collars' (Arnett et al., 2009). The role of the economic subsystem can hardly be overestimated due to the importance of the business environment and of the activities taken by companies, which have to play a proactive role in averting the global climate crisis. Green development has become a strategic issue for firms seeking to achieve environmental improvement and profitability while actively responding to growing environmental pressures and demands.
Still, being concerned about the potential loss of assets due to environmental damage, major asset owners are starting to stimulate the companies in their portfolios to address climate change. This trend is economically justified, since the long-term returns of the world's largest investors are threatened by climate change. The same tendency is observed in the European Union itself, which is the object of the present research. At the beginning of 2020, sustainable European funds held €668 bn of assets, up 58% from 2018. Helping to propel the growth is an increase in new products, with 360 sustainable funds launched in the year, bringing the total number across Europe to 2,405. Some 50 sustainable funds established in 2019 had a specific climate-oriented mandate (Black, 2020).
As the clean-energy industry, which can be seen as the core of the economic subsystem described above, gains momentum, governments and public bodies are waking up to climate change. Politicians worldwide, particularly in Europe, are squaring up to ecological challenges by backing green-infrastructure plans. As early as the Brundtland Report, it was highlighted that sustainable development is not a fixed state of harmony but rather a process of change in which the exploitation of resources, the direction of investments, the orientation of technological development, and institutional change are made consistent with future as well as present needs.
[…] Painful choices must be made (Brundtland, 1987). Thus, sustainable development must rest on political will, prior approval procedures for investment and technology choice, foreign trade incentives and all components of development policy.
The role of politics and the state in promoting a green economy is underlined by the fact that the transition towards sustainable development needs to be publicly funded, at least partially, because of the (presently) weak competitiveness of clean technologies compared to conventional alternatives and the uncertain effectiveness of regulation and other public policy mechanisms (Cecere et al., 2020). As mentioned in the Brundtland Report, sustainable development requires changes in values and attitudes towards environment and development; indeed, towards society and work at home, on farms and in factories (Brundtland, 1987).
Such ideas were inherited by the Global Compact, an international initiative launched in July 2000 by United Nations Secretary-General Annan, bringing companies together with UN agencies, labour and civil society to support ten principles of sustainable development (United Nations, 2006). These standards address respect for human rights as set out in the major international instruments, avoidance of complicity in human rights abuses, freedom of employees to associate and engage in collective bargaining, elimination of forced labour and child labour, non-discrimination, a precautionary approach to environmental harm, promotion of environmental responsibility, development and spread of environmentally sound technology, and avoidance of corrupt practices (United Nations, 2022). Thus, forming environmentally responsible behaviour models for the population and business is essential. This will reduce both unsustainable production and negative environmental impacts; the rest results from the inhabitants' social and ecological activity (Vertakova et al., 2017).
The last, but not least, subsystem of the new index should be the natural environment. Several factors underline its importance. Paragraph 53 of the Brundtland Report points out that species diversity is necessary for the normal functioning of ecosystems and the biosphere. The genetic material in wild species contributes billions of dollars yearly to the world economy in the form of improved crop species, new drugs and medicines, and raw materials for industry. But utility aside, there are also moral, ethical, cultural, aesthetic, and purely scientific reasons for conserving wild beings. Paragraph 54 states that the priority is establishing the problem of disappearing species and threatened ecosystems on political agendas as a significant economic and resource issue. Sustainable development requires views of human needs and well-being that incorporate such non-economic variables as education and health enjoyed for their own sake, clean air and water, and the protection of natural beauty.
Even though all the models mentioned above contribute to progress towards sustainable development, many indices still fail to reflect all the components of SD: societal, economic, political, educational and environmental. Even though particular indices (such as the OECD indicator, the Greenness of Stimulus Index (Vivid Economics, 2021) and the Green Economy Index by Ryszawska (2015)) seem to be the most comprehensive and inclusive, they still miss certain aspects of sustainable development: the societal one in the first two cases and the educational one in the third. Against this background, it can be concluded that the integrated indicator EEPSE Green Economy Index most accurately characterises the green economy in the context of sustainable development, its principles and components.
Conclusions
The content of the categories "green economy", "green technologies", "eco-innovation", "green innovation" and "green growth" confirms the growing interest in the green economy, suggesting potential directions of development towards the establishment of a consistent set of indicators, since the critical problem at this point lies in the lack of their homogeneity. Each organisation employs its own set of indicators, frequently based on quite divergent definitions.
The analysis of scientific literature within the present research allowed us to identify the characteristic features of the green economy and its relationship with the concept of sustainable development. In line with that, the authors' interpretation of the concept of "green economy" was given: the green economy is an economic system based on sustainable development principles, laying the basis for SD. It ensures economic growth while being compatible with the natural environment and environmentally friendly. It is comprehensible to many social groups, encompasses the implementation of specific policy instruments targeted at the environment, and disseminates its ideas through the education system. Moreover, different models and indices dealing with the green economy were analysed through the prism of the newly developed definition.
Discussions about the green economy usually take place in the context of the concept of sustainable development. There is a perception in the information space that these concepts are identical; many articles in this field of knowledge make this point explicitly and implicitly. However, it would be a mistake to consider them synonyms.
As a tool for sustainable development, the green economy indices reflect certain aspects of sustainable development that are most important, according to the authors of the indices. Thus, the structure of these indices varies and depends on the concept of sustainable development adopted by the authors. Some indices reflect only the area of the natural environment, others the economic area or the economic, social and political areas, etc. Consequently, the structure of the indices depends on the authors' approach to sustainable development.
Measuring Progress towards a Green Economy (United Nations Environment Programme, 2012) distinguishes indicators at different stages of green economy policies, consisting of three groups: indicators for environmental issues and targets (initial stages), indicators for policy interventions (intermediary stages), and indicators for policy impacts on well-being and equity (final stages).
Figure 2. Structure of the EEPSE Green Economy Index. Source: authors.
Table 1. The first stage indicator: main areas and associated indicators for environmental issues and targets.
Table 2. The second stage indicator. Source: United Nations Environment Programme, 2012, p. 17.
Table 3. The third stage indicator.
It is interesting to consider the structure of other indicators found in the literature. The types of systems for these indicators are illustrated below, using specific indicators as examples.
Table 4. Areas and indicators for the synthetic Green Economy Index.
Table 5. The performance index of the 2014 GGEI.
Table 7.
The Greenness of Stimulus Index 2021: summary of negative policy archetypes.
Evolution of ocular defects in infant macaques following in utero Zika virus infection
by stop solution (KPL). Optical densities were detected at 450 nm (PerkinElmer, Victor). Half-maximal effective dilution (ED50) values were calculated with the sigmoidal dose-response (variable slope) curve fit in Prism 7 (GraphPad), which uses a least squares fit. The positive control was plasma from a ZIKV-infected monkey at 6 weeks after infection, and the negative control was plasma from an uninfected monkey. Samples with an ED50 below the limit of detection of 50 were plotted at the limit. Ophthalmic examination. For detailed ophthalmic examinations, animals were sedated with ketamine hydrochloride, midazolam, and dexmedetomidine, followed by pupillary dilation with phenylephrine (Paragon Biotech), tropicamide (Bausch & Lomb), and cyclopentolate (Akorn). Ophthalmic evaluations were conducted by portable slit lamp biomicroscopy (SL-7E, Topcon) of the anterior segment and by indirect ophthalmoscopy (Heine) of the retinal fundus by a board-certified ophthalmologist and retinal specialist. IOP was measured by rebound tonometry (TonoVet, Icare). External photographs of the anterior segment were captured using a digital camera (Rebel T3, Canon).
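The ED50 readout described above relies on Prism's variable-slope sigmoidal fit. As a rough stand-in, a half-maximal dilution can also be estimated by log-linear interpolation of the dilution series; the sketch below is a simplification, and the data points are hypothetical.

```python
import math

# ED50 estimated by log-linear interpolation of an ELISA dilution series at
# half-maximal OD. A simplified stand-in for the Prism variable-slope
# sigmoidal fit described above; the data points are hypothetical.

def ed50(dilutions, ods):
    """dilutions: reciprocal serum dilutions (ascending); ods: matching ODs."""
    half_max = (max(ods) + min(ods)) / 2.0
    pairs = list(zip(dilutions, ods))
    for (d1, o1), (d2, o2) in zip(pairs, pairs[1:]):
        if (o1 - half_max) * (o2 - half_max) <= 0:  # half-max crossed here
            frac = (half_max - o1) / (o2 - o1)
            # interpolate on a log10 dilution axis, as in a titration curve
            return 10 ** (math.log10(d1) + frac * (math.log10(d2) - math.log10(d1)))
    raise ValueError("half-maximal OD not bracketed by the dilution series")

dils = [50, 200, 800, 3200, 12800]  # hypothetical 4-fold series from 1:50
ods = [2.0, 1.8, 1.1, 0.4, 0.1]     # OD falls as the serum is diluted out
print(round(ed50(dils, ods)))
```

Because the series starts at 1:50, a sample whose interpolated ED50 falls below 50 would be reported at the detection limit, matching the plotting convention in the methods.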
A-scan ultrasonography (Sonomed PacScan 300A+) was performed for measuring ocular biometry. Multimodal ocular imaging and analysis. Color fundus photography was performed using the CF-1 Retinal Camera (Canon) with a 50-degree wide-angle lens. NIR, FAF, FA, and SD-OCT were performed using the Spectralis HRA+OCT system (Heidelberg Engineering), using a 30-degree or 55-degree objective for NIR, FAF, and FA imaging and the 30-degree objective for SD-OCT (52). Confocal scanning laser ophthalmoscopy was used to capture 30 × 30–degree NIR, FAF, and FA images using an excitation light of 820 nm for NIR and 488 nm for blue-peak FAF and FA imaging (53). For FA, animals were injected with 7.7 mg/kg fluorescein sodium (Akorn) by i.v. route, and serial images were captured up to 15 minutes after dye injection. SD-OCT was performed using a 20 × 20–degree volume scan and a 30 × 5–degree raster scan protocol, centered on the fovea and in the areas of chorioretinal colobomas, with progression mode using retinal vessel tracking enabled, where possible, to reliably image the same area for longitudinal imaging sessions. All retinal measurements were made using the Heidelberg Explorer software (version 1.9.13.0, Heidelberg Engineering), which has been used in prior studies and calibrated for both humans (54–56) and macaques (57–59). Chorioretinal lesion diameters were measured from the widest horizontal dimension of each lesion on NIR imaging. Disc-to-fovea distance was measured from the visual center of the optic disc to the center of the foveal pit based on combined NIR and SD-OCT images. Semiautomated segmentation of the chorioretinal layers was performed by the Heidelberg Explorer software, followed by manual adjustment of the segmentation lines by a masked grader, including the nerve fiber layer, GCL, inner plexiform layer, inner nuclear layer, outer plexiform layer, ONL, photoreceptor inner and outer segments, and RPE.
Average retinal layer thicknesses were measured from the nasal quadrant of the 1–3 mm ring of the Early Treatment Diabetic Retinopathy Study grid (60) for consistency between animals. Necropsy and tissue collection for histopathology. Animals were euthanized with an overdose of pentobarbital, followed by immediate collection of a specimen of spleen and inguinal lymph node (preserved in RNALater for RT-PCR) and upper body perfusion with 4% paraformaldehyde for optimal preservation of brains and eyes for histological analysis.
Introduction
Zika virus (ZIKV) is a mosquito-transmitted flavivirus that was first isolated from a rhesus macaque in the Zika Forest of Uganda in 1947. ZIKV received worldwide recognition when a surge of congenital birth defects occurred closely after a ZIKV outbreak in Brazil in 2015. The rapid expansion of the outbreak in the Americas led to its declaration by the World Health Organization as a public health emergency in 2016 (1). The spectrum of fetal and neonatal anomalies, including microcephaly, ocular defects, musculoskeletal contractures, and neurologic deficits, combined with a diagnosis of prenatal ZIKV infection, together constitute congenital Zika syndrome (CZS). The predilection for CNS abnormalities in CZS is explained by the tropism of ZIKV for neural progenitor cells (2). Studies indicate this may be due to the binding of the viral RNA genome to the RNA-binding protein Musashi-1 that is involved in neurodevelopment and is highly expressed in these precursor neurons (3). Multiple strategies enable ZIKV to evade host innate immune responses to allow spread to the placenta of the mother, and to traverse both the blood-cerebrospinal fluid barrier in the choroid plexus and blood-brain barrier of the fetus (4). Because many infants whose mothers are ZIKV-infected during pregnancy are born without microcephaly or detectable viral RNA in fluids, but may develop neurologic problems later in life, long-term monitoring of these young children is essential (5–7).
Congenital Zika syndrome (CZS) is associated with microcephaly and various neurological, musculoskeletal, and ocular abnormalities, but the long-term pathogenesis and postnatal progression of ocular defects in infants are not well characterized. Rhesus macaques are superior to rodents as models of CZS because they are natural hosts of the virus and share similar immune and ocular characteristics, including blood-retinal barrier characteristics and the unique presence of a macula. Using a previously described model of CZS, we infected pregnant rhesus macaques with Zika virus (ZIKV) during the late first trimester and characterized postnatal ocular development and evolution of ocular defects in 2 infant macaques over 2 years. We found that one of them exhibited colobomatous chorioretinal atrophic lesions with macular and vascular dragging as well as retinal thinning caused by loss of retinal ganglion neuron and photoreceptor layers. Despite these congenital ocular malformations, axial elongation and retinal development in these infants progressed at normal rates compared with healthy animals. The ZIKV-exposed infants displayed a rapid loss of ZIKV-specific antibodies, suggesting the absence of viral replication after birth, and did not show any behavioral or neurological defects postnatally. Our findings suggest that ZIKV infection during early pregnancy can impact fetal retinal development and cause congenital ocular anomalies but does not appear to affect postnatal ocular growth.
A unique feature of CZS is the high frequency of ocular malformations, particularly in patients with microcephaly (8–17). Ocular findings in these infants primarily impact posterior segment structures such
as the retina, choroid, and optic nerve, including chorioretinal atrophy, torpedo maculopathy, retinal vessel tortuosity, peripapillary atrophy, and optic disc hypoplasia (8, 9, 15, 18–20). Central retinal thinning has also been observed on in vivo imaging using spectral domain-optical coherence tomography (SD-OCT), particularly in the ganglion cell layer (GCL) that consists of axonal projections from the eye to the brain (21). Other eye findings include anterior segment abnormalities, such as iris coloboma, lens subluxation, cataract, and glaucoma, as well as neuro-ophthalmic deficits, such as oculomotor dysfunction and loss of pupillary response (9, 22–24). Although acquired ZIKV infection can cause intraocular inflammation such as conjunctivitis, iridocyclitis, and posterior uveitis (13, 25, 26), these findings have not been reported in congenital cases. Maternal symptoms of ZIKV during the first trimester of pregnancy are associated with a higher frequency of ocular abnormalities, which, similar to other birth defects in CZS, may result from the significant neural cell proliferation and differentiation occurring during this critical period (8, 18). ZIKV can bypass the developing blood-retinal barrier by infecting neural progenitor cells, and it can also infect other cell types located in the inner and outer blood-retinal barrier, including retinal vascular endothelial cells, pericytes, Müller glia, and retinal pigment epithelium (RPE) (27, 28).
Because rodents are not reservoir hosts for ZIKV, modeling CZS is limited by the inability of ZIKV to replicate efficiently in pregnant outbred mice, often requiring the use of immunodeficient mice. Moreover, ocular anatomy and development in mice differ significantly from those of humans, particularly due to the absence of a cone-rich macula that enables high-acuity daytime vision, which uniquely exists in primates (29). ZIKV infection of pregnant rhesus macaques is a highly relevant animal model of CZS because it recapitulates many features of human ZIKV infection and CZS, including time course of viremia, maternal neutralizing antibody responses, rates of vertical transmission, and development of placental and fetal neurologic abnormalities and fetal loss, although microcephaly has not been observed in these animals (30–38). This model has also been used successfully to demonstrate the efficacy of antiviral interventions, such as active and passive immunization strategies, in reducing transplacental transmission and the harmful effects of in utero infection (39, 40).
Prior reports of ocular findings in fetal macaques born to ZIKV-infected pregnant animals included choroidal colobomas, retinal dysplasia, and possible anterior segment dysgenesis. However, these features were described based only on postmortem histology from fetuses that either underwent fetal demise due to preterm premature rupture of membranes on gestational day (GD) 95 (41) or were collected at the end of gestation (34, 35). To our knowledge, no studies have examined CZS-related ocular pathology in postnatal infant macaques, employed live imaging, or monitored the progression of such infants over prolonged periods of time after birth. In this study, we used a previously developed pregnancy macaque model of CZS that was designed to reliably induce fetal infection at defined times, by inoculating the pregnant macaques by both the i.v. and intra-amniotic (IA) routes (31), and describe the evolution of ocular findings in 2 infant macaques over 2 years after birth.
Results
ZIKV infection of pregnant macaques and prolonged ZIKV presence in amniotic fluid. The 2 infants described in this manuscript were part of a study in which 6 pregnant macaques were each inoculated once, between GDs 42 and 53 (corresponding to the first trimester of pregnancy) with 2000 PFUs of 2 ZIKV strains isolated from 2015 outbreaks, by both the i.v. and IA routes (Figure 1A). Of the 6 animals, 4 animals (inoculated between GDs 42 and 51) experienced early fetal loss (n = 3) or stillbirth (n = 1), the findings of which have been previously described (30). By contrast, the 2 other pregnant dams (dam no. 1 and dam no. 2, inoculated on GD 51 or GD 53, respectively) had no clinical symptoms and each gave birth to a female infant (infant no. 1 and no. 2, respectively) by natural delivery on GDs 168 and 171, respectively. Both these dams had patterns of high-peak plasma viremia at 5 or 6 log10 vRNA copies per mL and prolonged detection of viral RNA in amniotic fluid samples that decreased toward the end of pregnancy in a pattern similar to the 4 dams that lost their fetus or infants (Figure 1, B and C) and to historical data of animals inoculated by these same routes (31). The 2 fetuses that survived showed normal fetal growth and no evidence of microcephaly, as determined by frequent ultrasound monitoring of biparietal diameter (values were within or above the mean ± 2 SD range of uninfected fetuses; data not shown).
Postnatal course and ZIKV-specific antibody detection in exposed infant macaques. Upon delivery, both infants looked visibly normal, had normal birth weight (460-500 g, infant no. 1 and infant no. 2, respectively) relative to newborn macaques born at the same facility, and were dam reared. They were housed with their mothers until approximately 17 months of age and then pair housed together until time of euthanasia at approximately 2 years of age. Throughout that time, both animals had normal weight gain (Figure 1D). Both juvenile macaques were tested on a panel of behavioral tests to index differences in affective reactivity and cognition and showed no obvious abnormal behavior compared with 2 age-matched, dam-reared, and weaned juvenile macaques (Bliss-Moreau et al., unpublished observations).
Figure 1. (A) Pregnant macaques were inoculated by both i.v. and intra-amniotic routes between GDs 42 and 53, followed by frequent monitoring. Whereas 4 dams had fetal loss or stillbirth, the other 2 animals delivered infants that were dam-reared, subsequently weaned and then housed together until they were euthanized at approximately 2 years of age. The patterns of viral RNA levels in plasma (B) and amniotic fluid (C) of the pregnant dams that delivered live infants were similar to those for animals whose fetuses died and reflect prolonged virus replication. The dotted lines show the limit of detection. (D) The 2 ZIKV-exposed infants had normal weight gain. Green dots indicate historical control data (15,585 data points collected from n = 284 female animals over the first 2 years of life). (E) Anti-ZIKV antibodies in plasma of dams and infants measured by whole-virion ELISA, showing rapid loss of ZIKV IgG in congenitally exposed infants after birth and gradual decline of IgG in ZIKV-infected dams. Magnitude of ZIKV-specific IgG is expressed as the log of ED50. ZIKV, Zika virus; GDs, gestational days; ED50, 50% of maximal effective dilution.
Blood samples were collected regularly (≥15 time points) from the 2 dams and their 2 infants between 2 days after delivery until euthanasia. None of the maternal and infant plasma samples had detectable ZIKV RNA. In addition, several CSF and urine samples and spleen and lymph node specimens collected from both infants at the time of necropsy were tested, and none had detectable viral RNA.
ZIKV-specific IgG antibodies were measured via whole-virion ELISA. Concentrations of anti-ZIKV IgG in the neonatal macaques shortly after delivery were similar to titers in their mothers (titers of 1:1,649 to 1:10,389), suggesting passive transplacental transfer of maternal IgG into the infants' circulation (Figure 1E). ZIKV-specific IgG in infants then declined with a half-life of approximately 1.9 weeks, which is similar to the half-life previously described for passively acquired rhesus macaque antibodies in infant macaques (42), and became undetectable (titer less than 1:50) by 6 months of age. Although we cannot exclude that the infants may have made their own antibody response in utero, which may have been masked by high titers of maternal antibodies at birth, the observation that the infants did not show persistent anti-ZIKV antibodies in plasma after birth suggests that there was no virus replication in these infants after birth, since persistent infection would have led to an increase in ZIKV-specific IgG antibody titers.
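The reported decline is consistent with simple first-order decay. Plugging the ~1.9-week half-life and the birth titers quoted above into a quick sketch (an illustration, not the authors' analysis) recovers the timescale on which titers fall below the 1:50 detection limit:

```python
import math

# First-order decay of passively acquired maternal IgG. The half-life
# (~1.9 weeks) and titers come from the text above; the calculation itself
# is an illustrative sketch, not the authors' model.

def weeks_to_reach(titer0, titer_limit, half_life_weeks=1.9):
    """Weeks for an antibody titer to decay from titer0 down to titer_limit."""
    return half_life_weeks * math.log2(titer0 / titer_limit)

# Highest birth titer quoted above (1:10,389) down to the 1:50 detection limit
print(round(weeks_to_reach(10389, 50), 1))  # ~14.6 weeks
```

That is roughly 3.5 months even from the highest birth titer, consistent with the observation that ZIKV-specific IgG became undetectable by 6 months of age.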
Maternal IgG concentrations were higher and persistent as expected. However, for one of the dams (no. 1), antibody levels became undetectable by 72 weeks after delivery, suggesting limited durability of B cell memory responses after early containment of viremia (Figure 1E).
Ocular biometry in ZIKV-exposed infant macaques. Serial ophthalmic examination of the 2 congenitally ZIKV-exposed infants showed no overt evidence of anterior segment abnormalities on slit lamp biomicroscopy (Figure 2A). Intraocular pressures (IOPs) remained within normal range throughout the study but were slightly above average when compared with normal, age-matched control animals (Figure 2B), although IOP values varied between individuals. Both infants showed normal rates of axial elongation compared with control eyes and published data (43), based on axial lengths measured from ultrasound A-scans at 28 and 82 weeks of age (Figure 2C). However, both eyes of infant no. 1 showed a slight reduction in anterior chamber depth (-0.28 mm and -0.46 mm) and an increase in lens thickness (+0.56 mm and +0.62 mm) between their first and second year of life, based on A-scan biometry, in contrast to the other ZIKV-infected infant (no. 2) and control eyes (Figure 2, D and E), which showed the opposite trend but varied between individual animals. The vitreous chamber elongated in both infants, similar to healthy eyes (Figure 2F). Thus, although the overall axial growth of the ZIKV-infected infant eyes appeared normal, 1 of the 2 animals demonstrated anterior chamber shallowing and lens thickening that did not follow normal postnatal ocular development.
Chorioretinal lesions in a ZIKV-exposed infant macaque. Fundic examination of infant no. 1 by indirect biomicroscopy demonstrated a large, colobomatous chorioretinal atrophic lesion in the superotemporal mid-periphery of the right eye and 2 similar but smaller areas of chorioretinal atrophy nasal and superior to the optic disc of the left eye ( Figure 3A). Multimodal imaging demonstrated a lack of choroidal vascular pattern on near infrared (NIR) imaging and absence of RPE-derived fundus autofluorescence (FAF) within these lesions. At the same time, fluorescein angiography (FA) showed staining of the lesion borders without dye leakage, indicating the absence of any neovascular or exudative features ( Figure 3A). Live cross-sectional imaging of these lesions using SD-OCT revealed near-complete atrophy of retinal and choroidal layers, with some thin, residual retinal tissues in areas of retinal vessels overlying the scleral wall resembling the typical intercalary membrane seen in chorioretinal colobomas ( Figure 3B). Posterior pole examination of infant no. 1 also revealed a crescent-shaped peripapillary atrophy in the right eye, along the same meridian as the large chorioretinal lesion, with superotemporal dragging of the macula and superior retinal vascular arcade ( Figure 3C, top left). The macular region of the left eye of infant no. 1 and both eyes of infant no. 2 appeared similar to healthy eyes, with the exception of a small, yellowish spot in the temporal macula of the right eye of infant no. 2, which was not seen on NIR or FAF imaging, suggesting that the spot did not affect the choroid or RPE and is likely nonspecific.
Serial measurements of the 3 chorioretinal atrophy lesions in infant no. 1 showed no noticeable change in lesion diameter during the study period ( Figure 4A). However, the disc-to-fovea distance was noticeably longer in the right eye compared with the left eye, eyes of infant no. 2, or control eyes ( Figure 4B and Figure 5C). Examination of individual retinal layers showed that most of the retinal thinning was a result of reduction in the GCL and outer nuclear layer (ONL), which consist of the cell bodies of retinal ganglion neurons and photoreceptors, respectively ( Figure 5D). The thinning of these retinal layers was more pronounced in the right eye of infant no. 1, which exhibited the large chorioretinal coloboma, peripapillary atrophy, and macular dragging, and less severe in the left eye, which exhibited smaller colobomas and no macular distortion ( Figure 5D). The other ZIKV-infected infant no. 2 did not show noticeable thinning in most retinal layers, except for the ONL, which was slightly reduced compared with healthy control eyes ( Figure 5D).
Ocular histopathology in ZIKV-exposed infant macaques. Both animals were euthanized at approximately 2 years of age. The histology findings within the brain noted in the previous fetal studies (31) (changes in ependymal lining) were not observed in these infants, although minimal mineralization was observed in infant no. 2.
Macroscopic examination of the 2 eyes of infant no. 1 confirmed the presence of the chorioretinal colobomas ( Figure 6, A and B), where histological analysis revealed disorganization and thinning of all retinal and choroidal layers. The retina in this area was reduced to a thin layer of dysplastic neuropil with scant glial cells, and the choroidal stroma was reduced to thin linear bundles of pigmented fibrous connective tissue that blend with the dysplastic retina ( Figure 6, C-F). Neither eye of infant no. 2 demonstrated any pathologic histologic findings, including chorioretinal lesions or thinning of chorioretinal layers. Macroscopic pathology and histology of other major organ systems, including spleen, lymph nodes, lung, heart, jejunum, liver, kidney, spinal cord, and middle ear, did not reveal any lesions associated with ZIKV.
Discussion
CZS is a devastating cause of congenital ocular malformations resulting from maternal ZIKV infection during pregnancy (8,9,11). However, the disease is poorly modeled in mice because rodents are not natural hosts of ZIKV and lack ocular anatomic features, such as the macula, which are unique to primate species. In this study, we employed a well-characterized model of CZS by infecting pregnant rhesus monkeys with ZIKV during the late first trimester and provided a detailed characterization of postnatal ocular development in 2 ZIKV-exposed infants over 2 years. We found that one of these animals exhibited large chorioretinal colobomas in both eyes, with macular dragging, peripapillary atrophy, and retinal thinning caused by loss of retinal ganglion neuron and photoreceptor layers that were more pronounced in the right eye of this animal. Despite the presence of these congenital ocular malformations, axial elongation and retinal development in ZIKV-infected infants appeared to follow normal postnatal maturation trajectories. The evolution of these ocular findings, along with the normal weight gain, absence of behavioral deficits, and loss of ZIKV-specific IgG after birth, suggests that active ZIKV infection and development of ocular defects occurred primarily in utero, with no indication of viral replication based on the absence of viral RNA in blood and CSF from infants and no continued impact on ocular development postnatally.
In our study, the ZIKV-exposed infant macaques exhibited congenital ocular anomalies in the absence of microcephaly or apparent neurological or behavioral deficits. This is similar to human CZS, in which ocular abnormalities have also been identified in patients with normal head circumference (10, 11), highlighting the need for eye screening among at-risk infants. In humans, the majority of ocular anomalies in CZS affect posterior segment structures, such as chorioretinal atrophy, torpedo maculopathy, and peripapillary atrophy (8, 9, 15, 18-20). These fundus findings are clinically descriptive but do not ascribe the pathologic cause of these types of lesions, which may occur as a result of trauma, infection, inflammation, or developmental defect, as in a chorioretinal coloboma. The pathogenesis of ocular abnormalities in CZS remains incompletely understood. Early studies identified ZIKV throughout the visual system, including the retina, optic chiasm, suprachiasmatic and lateral geniculate nuclei, and superior colliculus, which led to the hypothesis that ZIKV may be transmitted across the CNS through axonal transport (45). This is supported by SD-OCT retinal imaging of CZS infants that showed prominent thinning of the GCL, which consists of cell bodies of retinal ganglion neurons that send axon projections to the brain (21). However, additional studies also showed that ZIKV can effectively infect Müller glia as well as retinal vascular endothelium and RPE that respectively line the inner and outer blood-retinal barriers in mice, suggesting that circulating ZIKV can bypass these barriers to directly infect retinal tissues (28,46). 
Our study supports this latter hypothesis based on (a) the presence of multiple ocular anomalies in the absence of neurological findings; (b) prominent thinning of ONL in addition to GCL, indicating loss of photoreceptors, which do not project directly to the CNS; and (c) the constellation of macular dragging and peripapillary atrophy along the same meridian as the chorioretinal atrophy. These data suggest that these lesions are congenital colobomas potentially caused by infection of retinal progenitor cells during retinal development in utero rather than atrophic scars left by a ZIKV-related chorioretinitis. In fact, experimental models suggest that ZIKV does not directly infect photoreceptors (46,47), and the ONL layers in the right eye of infant no. 1 increased with age, suggesting that the ONL thinning may result from mechanical distortion from the large coloboma rather than photoreceptor degeneration. By characterizing the postnatal evolution of ocular abnormalities in ZIKV-exposed infants over 2 years, our study provides additional insight into the relationship between the ocular defects and in utero ZIKV exposure in CZS. In previous studies that employed a similar model of combined i.v. and IA inoculation of pregnant macaques, when fetuses died in utero or were euthanized at the end of gestation or immediately after birth, the animals displayed diffuse viral tropism with the highest ZIKV RNA concentration found in neural, lymphoid, and cardiopulmonary systems, even though virus could not be found in cord blood plasma (31). In our study, the prolonged detection of viral RNA in amniotic fluid indicates the presence of viral replication in the fetal-placental compartment, which gradually declined toward the end of gestation, possibly due to increased transplacental transfer of maternal antibodies (48). 
Postnatally, the absence of viral RNA and gradual loss of antibodies in the infants support the lack of ongoing virus replication and thus insufficient antigen exposure to induce and sustain antibody responses. In our study of postnatal ocular development, despite the presence of congenital chorioretinal colobomas and retinal thinning in one infant, both axial length ( Figure 2C) and total retinal layer thickness ( Figure 5C) of both ZIKV-infected infants showed normal growth compared with healthy controls. Over the 2 years, the size of the chorioretinal colobomas remained unchanged ( Figure 4A), and neither the retinal ganglion neurons in the GCL nor the photoreceptors in the ONL underwent further degeneration ( Figure 5D). Thus, the in utero ZIKV infection appeared to be self-limited, and the ocular insult to the fetus occurred mostly during the early stages after infection. The absence of detectable viral RNA and loss of antibodies have been described in a human infant with ocular defects and CZS (19). This further highlights the difficulty of determining the long-term impact of CZS: for example, in a child who presents to the clinic with neurological or ocular abnormalities but without a known history of ZIKV infection during pregnancy or detectable ZIKV or antibody, it may be difficult to establish a causal relationship between the defect and in utero ZIKV exposure (49,50). To date, few studies have longitudinally followed the progression of congenital ocular anomalies in human infants with CZS. Using a well-established macaque model of CZS, our study showed that despite the presence of chorioretinal colobomas and retinal thinning at birth, postnatal ocular and retinal development appear to follow a normal growth trajectory over the first 2 years of life, without evidence of active viral replication or further deterioration of ocular defects. 
Although we noted a slight anterior chamber shallowing and lens thickening in the animal with ocular pathology, we did not observe any visible anterior segment abnormalities, IOP elevation, or visual behavioral deficits. Long-term human studies in children with CZS could provide additional insight into the risk of glaucoma or cataracts in this pediatric population. Importantly, despite the stability of the ocular defects observed in this study, continued ophthalmic monitoring of suspected patients with CZS remains paramount to minimize the risks of amblyopia or long-term visual or neurological impairment.
Methods
Animals and care. The adult female rhesus macaques (Macaca mulatta) in the study were born and raised in the conventional (not specific pathogen-free) breeding colony at the California National Primate Research Center (CNPRC). None of the animals were positive for type D retrovirus, SIV, or simian lymphocyte tropic virus type 1. All animals had prior successful pregnancies (range 2-6). For time-mated breeding, the female macaques were monitored for their reproductive cycles, and at the time of optimal receptiveness, they were temporarily housed with reproductively viable males. Pregnancy was confirmed via ultrasound. Gestational ages were determined from the menstrual cycle of the dam and the fetus length at initial ultrasound compared with growth data in the CNPRC rhesus macaque colony. Fetal health and viability were rechecked via ultrasound immediately before the first ZIKV inoculation and regularly thereafter. The 2 infants described in this report were born naturally by vaginal delivery on GD 168 (infant no. 1) or GD 171 (infant no. 2). Infants were reared by and lived with their mothers until they were approximately 17 months of age. At that time point the infants were then housed together until they were euthanized at 23-24 months of age. For control animals, ocular biometry and SD-OCT data from 10 age-matched rhesus macaques (mean age 62.5 ± 32.6 weeks, 6 males and 4 females) were randomly identified from the same colony and found to have no ocular abnormalities.
Macaques were housed indoor in stainless steel cages (Lab Product Inc.), the sizing of which was scaled to the size of each animal, as per national standards, and were exposed to a 12-hour light/dark cycle, 64°F-84°F, and 30%-70% room humidity. Animals had free access to water and received commercial chow (high-protein diet; Ralston Purina Co.), fresh produce 2 times per week, and forage (pea and oat mix) daily.
Virus inoculations. A combination of 2 virus isolates was used to inoculate the pregnant animals; these included a 2015 Puerto Rico isolate (PRVABC-59; GenBank, KU501215) and a 2015 Brazil isolate (strain Zika virus/H.sapiens-tc/BRA/2015/Brazil_SPH2015; GenBank, KU321639.1), which were used earlier in pregnant and nonpregnant animals (31,40,51). The use of 2 strains was intended to mimic a hyperendemic area where different variants may circulate. Aliquots of both virus stocks were kept frozen in liquid nitrogen, and new vials were thawed shortly before each inoculation. The inoculum was adjusted to 2000 PFUs (1000 PFUs of each strain) in 1 mL of RPMI-1640 medium, then kept on wet ice. Each pregnant animal was inoculated by both i.v. and IA routes, each route with 1 mL (2000 PFUs of the mixture). Whereas the normal gestation of rhesus macaques is 165 days, inoculations occurred between GDs 42 and 53, corresponding to the first trimester of human gestation. The 2 pregnant dams described in detail in this report were inoculated on estimated GD 51 (animal no. 1) or GD 53 (animal no. 2).
Sample collection and clinical monitoring. Macaques were evaluated twice daily for clinical signs of disease, including poor appetence, stool quality, dehydration, diarrhea, and inactivity. When necessary, macaques were immobilized with ketamine hydrochloride (Parke-Davis), injected intramuscularly at 10 mg/kg after overnight fasting. Animals were sedated on days 0 (time of first virus inoculation; ~GD 30), 2, 3, 5, and 7 and then weekly for sample collection and ultrasound monitoring of fetal health. Fetal measurements were collected as previously described (40). After delivery, the animals were bled initially every few weeks, then less frequently. Blood was anticoagulated with EDTA and collected by venipuncture at every time point for complete blood counts (with differential count), and a separate aliquot of blood was centrifuged for 10 minutes at 800g to separate plasma from cells. The plasma was spun an additional 10 minutes at 800g to further remove cells, and aliquots were immediately frozen at -80°C.
Ultrasound-guided amniocentesis was conducted starting on day 7 after inoculation and then at all time points listed above according to methods described earlier (40). Amniotic fluid was spun to remove cellular debris, and the supernatant was aliquoted and immediately cryopreserved at -80°C for viral RNA assays.
Isolation and quantitation of viral RNA from fluids and tissues for determination of infection status. ZIKV RNA was isolated from samples and measured in triplicate by qRT-PCR according to methods previously described (40). Depending on the volume available, the limit of detection (LOD) for plasma and amniotic fluid ranged from 1 to 2.6 log10 viral RNA copies per mL of fluid; because the average LOD was 1.4 log10 viral RNA copies/mL, this value was used as the LOD for graphing Figure 1. For tissue, the LOD ranged from 3.2 to 3.5 log10 viral RNA copies/g tissue.
Detection of ZIKV-specific binding IgG in macaque plasma. ZIKV-specific binding IgG was detected using a whole virion ELISA previously described (48). Briefly, high-binding 96-well ELISA plates (Greiner) were coated with 40 ng/well of 4G2 antibody (clone D1-4G2-4-15) in carbonate buffer (pH 9.6) overnight at 4°C. Plates were blocked in Tris-buffered saline containing 0.05% Tween-20 and 5% normal goat serum for 1 hour at 37°C, followed by an incubation with ZIKV (PRVABC59 strain from BEI). Rhesus plasma was tested at a 1:12.5 starting dilution in 8 serial 4-fold dilutions, incubating for 1 hour at 37°C. HRP-conjugated goat anti-human IgG (monkey adsorbed; Southern Biotech, 2049-05) was used at a 1:2500 dilution, followed by the addition of SureBlue Reserve TMB substrate and then stop solution (KPL). Optical densities were detected at 450 nm (PerkinElmer, Victor). Half-maximal effective dilution (ED50) values were calculated with the sigmoidal dose-response (variable slope) curve fit in Prism 7 (GraphPad), which uses a least squares fit. The positive control was plasma from a ZIKV-infected monkey at 6 weeks after infection, and the negative control was plasma from an uninfected monkey. Samples with an ED50 below the limit of detection of 50 were plotted at the limit.
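The ED50 readout above can be illustrated with a simplified log-linear interpolation between the two dilutions that bracket half-maximal OD. This is a sketch only, not the variable-slope sigmoidal fit used in Prism, and the dilution series and OD values below are hypothetical:

```python
import math

def ed50_interpolate(dilutions, ods):
    """Estimate the reciprocal dilution at half-maximal OD by log-linear
    interpolation between the two bracketing points.
    `dilutions` are reciprocal dilutions (e.g. 12.5 means 1:12.5),
    ordered from most to least concentrated; `ods` are the matching ODs."""
    half_max = (max(ods) + min(ods)) / 2.0
    pairs = list(zip(dilutions, ods))
    for (d1, o1), (d2, o2) in zip(pairs, pairs[1:]):
        if o1 >= half_max >= o2:  # half-max is crossed between these points
            frac = (o1 - half_max) / (o1 - o2)
            return 10 ** (math.log10(d1) + frac * (math.log10(d2) - math.log10(d1)))
    return None  # curve never crosses half-max

# Hypothetical 4-fold series starting at 1:12.5, ODs falling with dilution
dils = [12.5 * 4**i for i in range(8)]
ods = [2.0, 1.9, 1.6, 1.0, 0.5, 0.25, 0.15, 0.1]
print(ed50_interpolate(dils, ods))
```

With these made-up ODs, half-maximal signal falls between the 1:200 and 1:800 dilutions, so the interpolated ED50 lands between those two values; a full four-parameter logistic fit would be used in practice.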
Ophthalmic examination. For detailed ophthalmic examinations, animals were sedated with ketamine hydrochloride, midazolam, and dexmedetomidine, followed by pupillary dilation with phenylephrine (Paragon Biotech), tropicamide (Bausch & Lomb), and cyclopentolate (Akorn). Ophthalmic evaluations were conducted by portable slit lamp biomicroscopy (SL-7E, Topcon) of the anterior segment and by indirect ophthalmoscopy (Heine) of the retinal fundus by a board-certified ophthalmologist and retinal specialist. IOP was measured by rebound tonometry (TonoVet, Icare). External photographs of the anterior segment were captured using a digital camera (Rebel T3, Canon). A-scan ultrasonography (Sonomed PacScan 300A+) was performed for measuring ocular biometry.
Multimodal ocular imaging and analysis. Color fundus photography was performed using the CF-1 Retinal Camera (Canon) with a 50-degree wide-angle lens. NIR, FAF, FA, and SD-OCT were performed using the Spectralis HRA+OCT system (Heidelberg Engineering), using a 30-degree or 55-degree objective for NIR, FAF, and FA imaging and the 30-degree objective for SD-OCT (52). Confocal scanning laser ophthalmoscopy was used to capture 30 × 30-degree NIR, FAF, and FA images using an excitation light of 820 nm for NIR and 488 nm for blue-peak FAF and FA imaging (53). For FA, animals were injected with 7.7 mg/kg fluorescein sodium (Akorn) by i.v. route, and serial images captured up to 15 minutes after dye injection. SD-OCT was performed using a 20 × 20-degree volume scan and a 30 × 5-degree raster scan protocol, centered on the fovea and in the areas of chorioretinal colobomas, with progression mode using retinal vessel tracking enabled, where possible, to reliably image the same area for longitudinal imaging sessions. All retinal measurements were made using the Heidelberg Explorer software (version 1.9.13.0, Heidelberg Engineering), which has been used in prior studies and calibrated for both humans (54-56) and macaques (57-59). Chorioretinal lesion diameters were measured from the widest horizontal dimension of each lesion on NIR imaging. Disc-to-fovea distance was measured from the visual center of the optic disc to the center of the foveal pit based on combined NIR and SD-OCT images. Semiautomated segmentation of the chorioretinal layers was performed by the Heidelberg Explorer software, followed by manual adjustment of the segmentation lines by a masked grader, including the nerve fiber layer, GCL, inner plexiform layer, inner nuclear layer, outer plexiform layer, ONL, photoreceptor inner and outer segments, and RPE. 
Average retinal layer thicknesses were measured from the nasal quadrant of the 1-3 mm ring of the Early Treatment of Diabetic Retinopathy Study grid (60) for consistency between animals.
Necropsy and tissue collection for histopathology. Animals were euthanized with an overdose of pentobarbital, followed by immediate collection of a specimen of spleen and inguinal lymph node (preserved in RNALater for RT-PCR) and upper body perfusion with 4% paraformaldehyde for optimal preservation of brains and eyes for histological analysis. Brains and eyes were collected immediately. The right hemisphere and both eyes were fixed further in 4% paraformaldehyde; the left hemisphere and other tissues were preserved in 10% neutral buffered formalin, routinely paraffin-embedded; and sections were stained with H&E and evaluated by board-certified anatomic pathologists. Histological sections were imaged using a ×40 objective lens on a Virtual Slide Microscope (VS120-S6-W, Olympus).
Statistics. Graphing and statistical analysis were performed with Prism 9 (GraphPad). P values of less than 0.05 were considered significant.
Study approval. Research was carried out at the CNPRC, which is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care International. All studies using rhesus macaques (Macaca mulatta) followed the guidelines of the Association for Research in Vision and Ophthalmology Statement for the Use of Animals in Ophthalmic and Vision Research, complied with the Guide for the Care and Use of Laboratory Animals (National Academies Press, 2011), and were approved by the University of California, Davis, Institutional Animal Care and Use Committee.
Author contributions
KKAVR conceived and designed the project. GY, MIC, AR, RIK, JW, JU, AS, EEB, EBM, WG, HW, and TS acquired the data. GY, AR, RIK, AS, EBM, WG, HW, TS, SP, AA, and LLC analyzed the data. GY and KKAVR drafted the manuscript, and all authors critically revised and edited the manuscript. GY, SMT, and KKAVR also provided administrative support.
Application of Four-Point EGSOR Iteration with Nonlocal Arithmetic Mean Discretization Scheme for Solving Burger’s Equation
The main objective of this study is to examine the efficiency of a block iterative method, namely the Four-Point Explicit Group Successive Over Relaxation (4EGSOR) iterative method. The nonlinear Burger's equation is solved through the application of the nonlocal arithmetic mean discretization (AMD) scheme to form a linear system. Next, to scrutinize the efficiency of 4EGSOR against the Gauss-Seidel (GS) and Successive Over Relaxation (SOR) iterative methods, numerical experiments for four proposed problems are considered. Referring to the numerical results obtained, we conclude that 4EGSOR is superior to the GS and SOR iterative methods in terms of number of iterations and execution time.
Introduction
In recent years, the Burger's equation has been known as one of the most important nonlinear equations. This equation can be found in applied mathematics, physics and engineering. Many researchers have proposed various methods for solving Burger's equation, such as the Cole-Hopf transformation [1], the standard explicit method [2], the weak Galerkin finite element method [3] and the Adomian decomposition method [4].
In this paper, we focus on the application of the nonlocal arithmetic mean discretization scheme to obtain the linear system of Burger's equation. Since the resulting linear system is sparse and large scale, iterative methods from the point and block families are used as linear solvers. To scrutinize the efficiency of the 4EGSOR iterative method with the nonlocal AMD scheme, four examples of Burger's problems are solved using the block iterative methods.
In order to get the approximate solution, let us consider the general form of the nonlinear Burger's equation

∂v/∂t + v ∂v/∂x = ν ∂²v/∂x², (1)

where v ∂v/∂x is the nonlinear term and ν is the viscosity coefficient.
This paper is organized as follows. In the first section, we give the introduction to Burger's equation. In section 2, we state the formulation of the nonlocal AMD scheme used to form a linear system. Next, the formulation of the proposed iterative methods for solving the linear system is shown. Then, in section 4, we introduce four examples of problem (1) and present the numerical results. Finally, in the last section we give the conclusion of this paper.
Formulation of Nonlocal AMD Scheme
To start this section, problem (1) can be rewritten as (2). To discretize problem (2), we consider the following general formulations of the nonlocal AMD scheme at any time level j + 1, which are given as follows [5] in equations (3) and (4). By considering equations (3) and (4), the expression in (6) represents the nonlinear term of problem (2). Then, we need to use the nonlocal AMD scheme to form a system of linear equations. By substituting equation (4) into equation (6), we get the new expression (7). Both equations (5) and (7) can be represented in general form as (8) by considering any group of two neighboring node points.
Formulation of EGSOR Method
In the previous section, referring to the linear system (9), it is obvious that its coefficient matrix is sparse and large scale. According to previous studies, Evans [6] proposed the Explicit Group iterative method to solve sparse linear systems. Hence, this paper attempts to scrutinize the effectiveness of the 4EGSOR method. Figure 1 shows the implementation of the 4EGSOR iterative method; as can be seen in this figure, the last three node points are treated as an ungrouped case [7]. The general formulation of the EG iterative method can be shown as (10). By determining the inverse of the matrix in equation (10), the formulation of the 4-Point Explicit Group Gauss-Seidel (4EGGS) iterative method can be shown as (11). By adding the weighted parameter ω into equation (11), the general form of the 4-Point EGSOR iterative method can be shown as (12). The implementation is iterated until convergence: (iv) if the convergence test is satisfied, proceed to the next step; or else repeat step (iii); (v) stop.
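The relaxation idea behind SOR (and, by extension, the grouped 4EGSOR sweep) can be illustrated with a minimal point-SOR solver for a generic tridiagonal system; this is an illustrative sketch of the relaxation mechanism, not the paper's 4EGSOR formulation:

```python
def sor_tridiagonal(a, b, c, rhs, omega=1.0, tol=1e-10, max_iter=10000):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c) by point-SOR; omega=1 recovers Gauss-Seidel."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        err = 0.0
        for i in range(n):
            s = rhs[i]
            if i > 0:
                s -= a[i] * x[i - 1]
            if i < n - 1:
                s -= c[i] * x[i + 1]
            x_new = (1 - omega) * x[i] + omega * s / b[i]
            err = max(err, abs(x_new - x[i]))
            x[i] = x_new
        if err < tol:
            return x, it
    return x, max_iter

# Model problem: -u'' = f discretized on 20 interior points (h = 1)
n = 20
a = [-1.0] * n; b = [2.0] * n; c = [-1.0] * n
rhs = [1.0] * n
_, iters_gs = sor_tridiagonal(a, b, c, rhs, omega=1.0)
_, iters_sor = sor_tridiagonal(a, b, c, rhs, omega=1.5)
print(iters_sor < iters_gs)  # → True: over-relaxation needs fewer sweeps
```

For this model problem, over-relaxation (ω between 1 and 2) converges in noticeably fewer sweeps than Gauss-Seidel, which mirrors the iteration-count comparison reported in the paper's Table 1; the grouped 4EG variant further accelerates the sweep by updating four neighboring points simultaneously.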
Numerical Examples
For the numerical comparison to testify the efficiency of the 4EGSOR iterative method, four examples are proposed. Three aspects are measured: number of iterations, execution time (in seconds) and maximum absolute error. The numerical results obtained are presented in Table 1.
The exact solution of problem (13) is specified by (14).
Example 2 [9]
Let the initial value problem be defined as in (15). The exact solution of problem (15) can be stated as

v(x, t) = 2x / (1 + 2t). (16)
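Assuming the reconstructed closed form v(x, t) = 2x/(1 + 2t), a quick finite-difference check confirms that it satisfies the Burger's equation v_t + v v_x = ν v_xx (the diffusion term vanishes because v is linear in x):

```python
def v(x, t):
    # candidate exact solution of Example 2
    return 2.0 * x / (1.0 + 2.0 * t)

def burgers_residual(x, t, nu=1.0, h=1e-4):
    """Residual of v_t + v*v_x - nu*v_xx using central finite differences;
    should be ~0 for an exact solution."""
    v_t = (v(x, t + h) - v(x, t - h)) / (2 * h)
    v_x = (v(x + h, t) - v(x - h, t)) / (2 * h)
    v_xx = (v(x + h, t) - 2 * v(x, t) + v(x - h, t)) / h**2
    return v_t + v(x, t) * v_x - nu * v_xx

print(abs(burgers_residual(0.7, 0.3)) < 1e-6)  # → True
```

Analytically, v_t = -4x/(1 + 2t)² and v v_x = 4x/(1 + 2t)² cancel exactly, while v_xx = 0, so the residual is zero up to finite-difference error.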
Example 3 [10]
Let the initial value problem be defined as in (17), with v(x, 0) = sin x for 0 ≤ x ≤ 2π and t ≥ 0. The exact solution of problem (17) is shown as (18).
Table 2 shows the reduction percentage of the SOR and 4EGSOR iterative methods relative to the GS iterative method. Referring to Table 1, the number of iterations of the SOR iterative method decreased by approximately … and …, respectively, compared to the GS iterative method. Hence, it required less execution time than the GS iterative method by … and ….
Meanwhile, for the 4EGSOR iterative method, the number of iterations decreased by approximately … and …, respectively. Thus, the 4EGSOR iterative method also required less execution time than the GS iterative method by … and …. Referring to the reduction percentages in Table 2, it can be concluded that the 4EGSOR iterative method is superior to the GS and SOR iterative methods.
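The reduction percentages tabulated in Table 2 are computed relative to the GS baseline; a minimal sketch of the arithmetic, using hypothetical iteration counts rather than the paper's data:

```python
def reduction_percent(baseline, improved):
    """Percentage reduction of `improved` relative to `baseline`."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical iteration counts for one mesh size (not the paper's data)
gs_iters, sor_iters, egsor_iters = 1200, 340, 150
print(round(reduction_percent(gs_iters, sor_iters), 2))    # → 71.67
print(round(reduction_percent(gs_iters, egsor_iters), 2))  # → 87.5
```

The same formula applies to execution times; a larger percentage means the method does proportionally less work than the GS baseline.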
Conclusion
In this study, the nonlocal AMD scheme is successfully applied to solve the nonlinear Burger's equation. The nonlinear equation is then transformed into a linear system. The numerical results obtained show that the 4EGSOR iterative method is more efficient than the GS and SOR iterative methods in terms of number of iterations and execution time. For future research, other block iterative methods such as the EDGAOR method can be considered for solving Burger's equation.
A Magnetic Sensor with Amorphous Wire
Using a FeCoSiB amorphous wire and a coil wrapped around it, we have developed a sensitive magnetic sensor. When a 5 mm long amorphous wire with a diameter of 0.1 mm was used, the magnetic field noise spectrum of the sensor was about 30 pT/√Hz above 30 Hz. To demonstrate the sensitivity and the spatial resolution, the magnetic field of a Japanese one-thousand-yen banknote was scanned with the magnetic sensor.
Introduction
Many kinds of highly sensitive magnetic field sensors have been developed. Among them, the inductive coil sensor [1] is one of the most commonly used types. A highly sensitive inductive coil sensor with a noise level around 50 fT/√Hz at 10 kHz was fabricated using amorphous ribbon (Metglas 2714AF) with a length of 150 mm, a cross section of 5 × 5 mm², and a coil of 10,000 wound turns [2]. However, inductive coil sensors cannot measure DC magnetic fields, and it is difficult to obtain low noise levels at low frequency with small inductive coil sensors.
A fluxgate magnetometer can measure the DC magnetic field [3]. It consists of three coils wound around a ferromagnetic core: an AC excitation winding, a detection winding that indicates the zero field condition, and a DC bias coil that creates and maintains the zero field. The use of modern materials for magnetic cores has improved the sensitivity of fluxgate magnetometers to about several pT/√Hz [4], but the operation frequencies of fluxgate magnetometers are normally low, which limits their measuring bandwidth.
GMR sensors, AMR sensors and MI sensors normally have electrical connections to the sensing parts, which are not convenient in some applications, such as the construction of magnetic microscopes, where a small distance of several micrometers between the sensor and the sample is needed. In this paper, we will describe a small, simple, sensitive magnetic field sensor using (Fe 0.06 Co 0.94 ) 72.5 Si 2.5 B 15 (FeCoSiB) amorphous wire with a coil wrapped around it.
Analysis of the Magnetic Sensor
Figure 1 shows the configuration of the magnetic sensor, which is composed of a coil and a FeCoSiB amorphous wire. An AC current and a DC current flow in the coil to produce the AC modulation magnetic field and the DC bias magnetic field. The capacitors C1, C2 and the inductor L are used to isolate the DC current or AC current. When a proper DC bias field is applied to the sensor, due to the nonlinearity of the B-H curve of the amorphous wire, the amplitude of the AC voltage V AC changes with the external field. This is the principle of the magnetic sensor. The inductance of the coil in Figure 1 can be estimated by the following formula [17]:

L = k μ0 μre N² A / l

where L is the inductance of the coil; l is the length of the coil; N is the number of turns of the coil; A is the cross-sectional area of the amorphous wire; μ0 is the vacuum magnetic permeability; μre is the effective relative magnetic permeability, which is related to the permeability of the amorphous wire, the diameter of the amorphous wire and the diameter of the coil; and k is a constant factor determined by the geometry of the coil. For angular frequency ω, the impedance Z of the coil can be expressed as:

Z = jωL

If a single-frequency current source I AC = I e^{jωt} flows in the coil, the voltage V AC across the coil can be expressed as:

V AC = Z I AC = jωL I e^{jωt}

where I is the amplitude of I AC and V = ωLI is the amplitude of V AC. Due to the nonlinearity of the M-H curve of the FeCoSiB amorphous wire, the effective relative permeability of the amorphous wire μre = ∂B/∂H changes with the external magnetic field, so the amplitude of V AC also changes with the external field H. Figure 2 shows the block diagram of the driving circuit of the magnetic sensor. A 5 mm-long FeCoSiB amorphous wire with a diameter of 100 μm is used, which is made by UNITIKA Ltd. (Nagoya, Japan) using the water quenched spinning method. For this FeCoSiB amorphous wire with the ratio of Fe (0.06) and Co (0.94), the magnetostriction value S is close to zero [18], and the B-H curve is steep with small hysteresis [19]. The saturation magnetization of the wire is about 0.81 T with a relative permeability at zero magnetic field of about 2000. The saturation magnetic field is about 300 A/m. The wrapped coil is a 30-turn single layer coil. The diameter D of the coil is about 0.6 mm. The signal generator is used to supply a sine wave current. In our experiments, a 1 MHz sine wave current is used and the amplitude is about 20 mA. A DC current is used to produce the bias DC magnetic field, which is necessary to achieve the best operation of the magnetic sensor. The AC voltage across the coil is amplified by a preamplifier. 
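Plugging the stated dimensions (5 mm coil length, 30 turns, 0.1 mm wire diameter, 1 MHz drive at 20 mA) into the inductance formula gives a rough order-of-magnitude estimate of the coil voltage; the effective permeability μre and geometry factor k below are assumed illustrative values, not figures from the paper:

```python
import math

# Estimate coil inductance L = k * mu0 * mu_re * N**2 * A / l
# using the dimensions given in the text; mu_re and k are assumed.
mu0 = 4 * math.pi * 1e-7        # vacuum permeability (H/m)
N = 30                          # coil turns
l = 5e-3                        # coil length (m)
d_wire = 0.1e-3                 # amorphous wire diameter (m)
A = math.pi * (d_wire / 2)**2   # wire cross-sectional area (m^2)
mu_re = 500.0                   # assumed effective relative permeability
k = 1.0                         # assumed geometry factor

L = k * mu0 * mu_re * N**2 * A / l
f = 1e6                         # 1 MHz drive frequency
I = 20e-3                       # 20 mA drive current amplitude
V = 2 * math.pi * f * L * I     # AC voltage amplitude, V = omega * L * I

print(f"L = {L*1e9:.1f} nH, V = {V*1e3:.2f} mV")
```

With these assumed parameters the coil inductance comes out below 1 μH and the 1 MHz drive produces an AC voltage on the order of 100 mV, which is then modulated by field-dependent changes in μre.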
A demodulator is used to obtain the amplitude of the AC voltage. After the demodulator, an amplifier is used and the output DC voltage is adjusted to zero when there is no external magnetic field. The signal V_OUT corresponds to the external magnetic field. Figure 3 shows how the output voltage changes with the external magnetic field when the bias DC magnetic field is about 2.5 Gauss. For external magnetic fields between −2 Gauss and +2 Gauss, the magnetic field response is nearly linear, with a sensitivity of about 0.9 V/Gauss. To measure the magnetic field noise spectrum, we first measured the noise spectrum of the output voltage of the GMI sensor using a spectrum analyzer, then divided it by the sensitivity of 0.9 V/Gauss obtained from Figure 3. Figure 4 shows the magnetic field noise spectrum of the magnetic sensor measured in a 1 mm-thick single-layer permalloy shielding box. The peaks are the 50 Hz interference and its harmonics. The white magnetic field noise floor is about 30 pT/√Hz.
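The noise-floor conversion described above (voltage noise spectrum divided by the measured sensitivity) can be sketched as follows; the voltage-noise input value is illustrative, chosen so that the result lands near the reported 30 pT/√Hz floor.

```python
# Convert an output-voltage noise density into a magnetic field noise
# density using the measured sensitivity of 0.9 V/Gauss from Figure 3.
SENSITIVITY_V_PER_GAUSS = 0.9
GAUSS_TO_TESLA = 1e-4

def field_noise(voltage_noise_v_per_sqrt_hz):
    """Return field noise in T/sqrt(Hz) given voltage noise in V/sqrt(Hz)."""
    noise_gauss = voltage_noise_v_per_sqrt_hz / SENSITIVITY_V_PER_GAUSS
    return noise_gauss * GAUSS_TO_TESLA

# An illustrative 2.7e-7 V/sqrt(Hz) voltage noise floor corresponds to
# a field noise of about 30 pT/sqrt(Hz).
noise_tesla = field_noise(2.7e-7)
```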
Application and Discussion
Due to the small diameter of the amorphous wire, this magnetic sensor can be used to construct a magnetic microscope. To demonstrate the sensitivity and the spatial resolution of the sensor, we measured the magnetic field produced by a Japanese thousand-yen bill, which is printed with magnetic ink. The bill was put on an X-Y stage and scanned with steps of 0.1 mm. The lift-off between the bill and the sensor was about 0.1 mm. The measurement was done in an unshielded environment. Figure 5 shows the scanning result; the number 1000 is clearly observed. Because the magnetic properties of FeCoSiB amorphous wire change with temperature, variations in the environmental temperature cause low-frequency drift of the sensor. In our scanning measurements of the thousand-yen bill and in eddy current testing using the sensor, this influence was small because the measuring time was short. In the future, we will develop a bridge-type magnetic sensor to reduce the influence of environmental temperature variations.
Conclusions
A simple, small, highly sensitive magnetic sensor with FeCoSiB amorphous wire was developed. This sensor can be used for magnetic microscopy and eddy current nondestructive evaluation.
A Novel Ferroptosis-related LncRNA Prognostic Signature for Colorectal Cancer by Bioinformatics Analysis
Background: Recently, extensive studies have increasingly confirmed the role of ferroptosis in cancer treatment. The current study aims to construct a robust ferroptosis-related lncRNA signature prediction model for colorectal cancer (CRC) patients by bioinformatics analysis. Methods: The transcriptome data were abstracted from The Cancer Genome Atlas (TCGA). Differentially expressed lncRNAs were screened by comparing 568 CRC tissues with 44 adjacent non-CRC tissues. Univariate Cox regression, lasso regression, and multivariate Cox regression were conducted to design a ferroptosis-related lncRNA signature. This signature's prognostic value was verified by the log-rank test of the Kaplan-Meier curve and the area under the curve (AUC) of the receiver operating characteristic (ROC) in the train set, test set, and entire set. Furthermore, univariate and multivariate Cox regression were used to analyze its independent prognostic ability. The relationship between the ferroptosis-linked lncRNAs' expression and clinical variables was demonstrated by the Wilcoxon rank-sum test and Kruskal-Wallis test. Gene set enrichment analysis (GSEA) was performed to identify signaling pathways it may involve. Results: 2541 differentially expressed lncRNAs were screened, of which 439 are ferroptosis-related lncRNAs. A seven ferroptosis-related lncRNAs (AC005550.2, LINC02381, AL137782.1, C2orf27A, AC156455.1, AL354993.2, AC008760.1) prognostic signature was constructed, validated, and evaluated. The prognosis of the high-risk group is markedly worse than that of the low-risk group in the train set, test set, and entire set. The AUC of the ROC predicting three-year survival in the train set, test set, and entire set was 0.796, 0.715, and 0.758, respectively. Moreover, the designed molecular signature was found to be an independent prognostic variable. Compared to clinical variables, this signature's ROC curves demonstrated the second largest AUC value (0.737). The expression of these
Background
In 2018, a total of 1.8 million new patients with colorectal cancer (CRC) and 881,000 CRC-related deaths were reported, accounting for about one in ten newly diagnosed cancer cases and cancer-related deaths.
Hence, CRC is ranked as the third most prevalent cancer but the second leading cause of cancer-related mortality [1]. Despite recent advances in the genetic and molecular characterization of tumors, the 5-year survival rate of early CRC exceeds 90%, whereas that of metastatic colorectal cancer is below 14% [2]. Therefore, investigating promising prognostic signatures along with potential targets is an essential step toward improving these outcomes.
Ferroptosis is a type of cell death that is characterized by high production of lipid ROS (L-ROS) as a result of inactivation of cellular glutathione (GSH)-dependent antioxidant defenses. This form of cell death is iron-dependent and differs from apoptosis, classic necrosis, and other forms of cell death [3,4]. Ferroptosis has been associated with the initiation of multiple diseases, including kidney injury, blood circulation diseases, conditions of the nervous system, and ischemia-reperfusion injury. It is therefore being investigated as a potential prognostic marker for various diseases [5]. Scholars have suggested that ferroptosis may be an adaptive strategy for eliminating cancerous cells and hence preventing cancer development in situations of infection, cellular stress, and nutrient deficiency [6]. Previous research has reported that some inducers, such as RSL3 [7], β-elemene [8], resibufogenin [9], andrographis [10], bromelain [11], IMCA [12], talaroconvolutin A (TalaA) [13], ACADSB [14], erastin [15], dichloroacetate [16], and B. etnensis Raf. extract [17], suppressed the progression of CRC via inducing ferroptosis. Hence, it is essential to discover ferroptosis-linked biomarkers that can be applied as valuable early diagnostic as well as prognostic indicators for CRC.
Long non-coding RNAs (lncRNAs) are a class of non-coding RNAs more than 200 nucleotides long that have apparently little or no protein-coding ability [18]. LncRNAs regulate critical biological functions related to cell growth and survival, allosteric regulation of enzyme activities, chromatin modifications, and genomic imprinting [19]. Besides, a mounting number of studies have chronicled that lncRNAs affect cancer progression and predict dismal prognosis in diverse cancer types by modulating ferroptosis. For example, p53-related lncRNA (P53RRA) promotes apoptosis and ferroptosis of cancerous cells by activating the p53 pathway [20]. LncRNA GABPB1-AS1 regulates the status of oxidative stress in the context of erastin-triggered ferroptosis in HepG2 hepatocellular carcinoma cells [21]. LncRNA-linc00336 suppresses ferroptosis in lung cancer tissues by acting as a competing endogenous RNA [22]. Linc00618 accelerates ferroptosis via inhibiting vincristine (VCR) and lymphoid-specific helicase (LSH)/SLC7A11 in leukemia [23]. In non-small cell lung cancer cells, lncRNA-MT1DP enriched on folate-modified liposomes promotes erastin-triggered ferroptosis by modulating the miR-365a-3p/NRF2 axis [24]. Hence, it is critical to explore the pivotal lncRNAs closely linked to ferroptosis along with prognosis in CRC.
This study is the first to propose a predictive model of lncRNAs related to ferroptosis genes in tumors. Therefore, we postulated that ferroptosis-linked lncRNAs could be valuable prognostic biomarkers for CRC patients. Herein, we explored the expression of lncRNAs in CRC from The Cancer Genome Atlas (TCGA) and identified ferroptosis-associated lncRNAs with prognostic potential. We constructed and verified a seven ferroptosis-correlated lncRNA biosignature with the ability to estimate the survival prognosis of CRC patients.
Methods
Data download and processing
(Table 1). Patients with no follow-up time or with follow-up time shorter than 30 days were not enrolled in the study.
Furthermore, we identified ferroptosis-related lncRNAs by correlation analysis between the lncRNA expression levels and the ferroptosis genes, based on the criteria of P < 0.001 and |correlation coefficient| > 0.3.
Development, verification, and assessment of the prognostic biosignature
We utilized the "caret" package of R language version 4.0.1 to randomly classify the entire data set (Additional file 1) with FRlncRNAs expression profiles into two sets (train set (Additional file 2) and test set (Additional file 3)), and conducted univariate Cox regression for FRlncRNAs in the train set (P < 0.05). Lasso regression analysis was utilized to minimize overfitting, using the "glmnet" package [26] (P < 0.05). Afterward, multivariate Cox regression was employed to develop the optimal prognostic risk model, leveraging the "coxph" and "direction = both" functions of the R "survival" package [27] (P < 0.05). Then, the prognostic lncRNA signature's risk score constituting multiple lncRNAs was developed by summing the product of each lncRNA's expression with its corresponding coefficient. Additionally, the proportional hazards assumption was tested in the Cox model. The risk score formula derived in the training set was then applied to the test set as well as the entire set for validation.
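The risk-score construction described above (sum over signature lncRNAs of expression times Cox coefficient, then a median split) can be sketched as follows; the coefficients and expression values are hypothetical stand-ins for the fitted multivariate Cox coefficients of the signature lncRNAs.

```python
import statistics

# Hypothetical Cox coefficients for three illustrative signature lncRNAs.
coefficients = {"lncRNA_A": 0.42, "lncRNA_B": -0.31, "lncRNA_C": 0.18}

def risk_score(expression):
    # risk score = sum(expression * coefficient) over the signature lncRNAs
    return sum(coefficients[g] * expression[g] for g in coefficients)

# Toy expression profiles for three patients.
patients = {
    "P1": {"lncRNA_A": 2.1, "lncRNA_B": 0.5, "lncRNA_C": 1.0},
    "P2": {"lncRNA_A": 0.3, "lncRNA_B": 2.2, "lncRNA_C": 0.1},
    "P3": {"lncRNA_A": 1.5, "lncRNA_B": 1.0, "lncRNA_C": 0.8},
}
scores = {p: risk_score(e) for p, e in patients.items()}
median = statistics.median(scores.values())
# Patients above the median score form the high-risk group.
groups = {p: ("high" if s > median else "low") for p, s in scores.items()}
```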
This model was employed to explore each patient's survival prognosis by the Kaplan-Meier curve along with the log-rank test, splitting patients at the median risk score into a low-risk group and a high-risk group in the train set, test set, and entire set. The lncRNA signature's predictive power was explored by computing the three-year AUC of the ROC curve with the "survivalROC" package [28].
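For readers without the R survival packages, the Kaplan-Meier estimate used for the group comparison can be sketched from scratch; the follow-up times and event flags below are toy data, and this minimal estimator is an illustration rather than the authors' implementation.

```python
# Minimal Kaplan-Meier estimator: at each event time t, the survival
# probability is multiplied by (1 - deaths_at_t / number_at_risk_at_t).
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = death, 0 = censored.
    Returns (time, survival probability) pairs at each death time."""
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_risk = sum(1 for tt, e in data if tt >= t)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        i += sum(1 for tt, _ in data if tt == t)  # skip all entries at time t
    return curve

# Toy cohort: deaths at 100, 200, 400; censored at 200 and 300.
curve = kaplan_meier([100, 200, 200, 300, 400], [1, 1, 0, 0, 1])
```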
To further enhance the prognostic signature's credibility, we conducted a stratified survival prognostic analysis on gender, age, clinical stage, postoperative tumor status, CEA levels, perineural invasion, vascular invasion, mismatch repair (MMR) status, and gene mutation status (KRAS, BRAF).
Independent prognostic value of the lncRNA signature
Multivariate Cox regression and univariate Cox regression analyses were conducted to analyze the independent prognostic ability of the lncRNA signature in the train set (Additional file 4), test set (Additional file 5), and entire set (Additional file 6). The clinical parameters included age, gender, clinical stage, T stage, lymph node status, and distant metastasis. Besides, the ROC curve was employed to explore whether the lncRNA biosignature has better predictive power than the clinical variables. The "rms" package was employed to construct the nomogram according to the multivariate Cox regression result (P < 0.05). To further investigate whether the ferroptosis-associated lncRNAs are involved in CRC development, we explored the relationship of the ferroptosis-linked lncRNAs' expression with clinical variables using the Wilcoxon rank-sum test and Kruskal-Wallis test.
GSEA analysis of the lncRNA signature.
Gene set enrichment analysis (GSEA 4.1.0), downloaded from https://www.gseamsigdb.org/gsea/index.jsp, was employed to identify the biological functions of the prediction model [29]. Based on the median risk score of the lncRNA signature in 568 tumor samples, we divided them into low- and high-risk groups for KEGG analysis with GSEA. The enriched signaling cascades in each phenotype were assessed using the normalized enrichment score (NES), the nominal (NOM) P-value, and the false discovery rate (FDR). FDR < 25% and NOM P-value < 5% served as the inclusion criteria.
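The enrichment statistic behind GSEA can be illustrated with the unweighted running-sum form of the enrichment score: walk down the ranked gene list, step up when a gene belongs to the set and down otherwise, and take the maximum deviation. This is a simplified sketch (GSEA proper weights hits by the ranking metric), and the gene names are toy data.

```python
# Unweighted running-sum enrichment score over a ranked gene list.
def enrichment_score(ranked_genes, gene_set):
    hits = sum(1 for g in ranked_genes if g in gene_set)
    misses = len(ranked_genes) - hits
    up, down = 1 / hits, 1 / misses       # step sizes chosen to sum to zero
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += up if g in gene_set else -down
        if abs(running) > abs(best):
            best = running                 # maximum deviation from zero
    return best

# A gene set concentrated at the top of the ranking scores highly.
es = enrichment_score(["g1", "g2", "g3", "g4", "g5", "g6"], {"g1", "g2"})
```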
Statistical Analysis
R software version 4.0.3 and associated packages were employed to conduct data analyses. All statistical tests were two-sided. P < 0.05 signified statistical significance.
Screening of ferroptosis-related lncRNAs in CRC
Comparing CRC tissues with adjacent non-CRC tissues, 2541 differentially expressed lncRNAs were found, of which 1805 were up-regulated and 736 down-regulated (Additional file 7). The correlation analysis between 259 ferroptosis-related genes and the differentially expressed lncRNAs showed that 439 were ferroptosis-related lncRNAs (FRlncRNAs) (Additional file 8).
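The FRlncRNA screen can be sketched as a Pearson-correlation filter over lncRNA expression against a ferroptosis gene; the paper's additional P < 0.001 requirement is omitted here for brevity, and the expression vectors are toy data.

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy expression across five samples of one ferroptosis gene and two lncRNAs.
ferro_gene = [1.0, 2.0, 3.0, 4.0, 5.0]
lncrnas = {
    "lnc1": [1.1, 2.2, 2.9, 4.1, 5.2],  # tracks the gene -> kept
    "lnc2": [3.0, 1.0, 4.0, 1.0, 3.0],  # uncorrelated -> dropped
}
# Keep lncRNAs whose |r| with the ferroptosis gene exceeds 0.3.
kept = [name for name, expr in lncrnas.items()
        if abs(pearson(ferro_gene, expr)) > 0.3]
```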
Construction, validation, and evaluation of a seven ferroptosis-related lncRNAs prognostic signature
The entire set (N = 506) with expression data for 439 FRlncRNAs was randomized into the test set (N = 252) and train set (N = 254). In the univariate Cox regression assessment, 22 FRlncRNAs modulated the overall survival of the patients in the train set (Fig. 1a). Lasso regression was used for further analysis to eliminate overfitting, and the 16 lncRNAs obtained were used for the subsequent multivariate Cox regression analysis (Fig. 1b-d). According to the median value of the risk score, the Kaplan-Meier curves demonstrate that the high-risk group has a remarkably worse overall survival (OS) than the low-risk group in the train set (P = 2.899E-06), test set (P = 5.314E-03), and entire set (P = 1.1E-06) (Fig. 2a-c). In the train set, the three-year OS for patients in the high- and low-risk groups was 60.6% and 90.5%, respectively; in the test set, 63.9% and 90.1%; and in the entire set, 60.6% and 90.5%. The AUC of the three-year time-dependent ROC for the seven-lncRNA biosignature reached 0.796, 0.715, and 0.758 in the train set, test set, and entire set, respectively (Fig. 2d-f), demonstrating the good performance of the model in estimating CRC patients' OS. The mortality rate was higher in patients with high risk scores relative to those with low risk scores in the three sets (Fig. 2g-i). In the cluster heat map, the expression of six signature lncRNAs (AC005550.2, LINC02381, C2orf27A, AC156455.1, AL354993.2, AC008760.1) was lower in the low-risk group than in the high-risk group, while AL137782.1 showed the opposite pattern (Fig. 2j-l).
It is worth noting that high expression of AC156455.1 and AL354993.2 within this lncRNA signature is also associated with worse OS than low expression (Fig. 3). The associations of the seven lncRNAs with ferroptosis genes are shown in Fig. 4. In addition, we stratified by various clinical factors (clinical stage, gender, age, CEA levels, MMR status, postoperative tumor status, perineural invasion, vascular invasion, KRAS mutation, BRAF mutation) and applied the prognostic model to OS detection (Fig. 5). The results show that the signature has good predictive significance for CRC patients in most stratification factors; some results were not satisfactory (P > 0.05), which might be due to insufficient samples in those strata.
Independent prognostic analysis of the seven ferroptosis-associated lncRNAs signature and its correlation with clinical variables.
Based on the stratification of clinical variables, the correlation between the lncRNAs and clinical variables shows that LINC02381's expression is related to T stage, lymph node status, clinical stage, KRAS mutation, BRAF mutation, and perineural invasion. C2orf27A's expression is associated with T stage, lymph node status, clinical stage, KRAS mutation, and MMR status. AC156455.1's expression is correlated with lymph node status. AL354993.2's expression is connected to distant metastasis, lymph node status, clinical stage, and KRAS mutation. AC008760.1's expression is related to lymph node status, distant metastasis, clinical stage, and KRAS mutation. AL137782.1's expression is linked to KRAS mutation. The lncRNA signature's risk score is associated with T stage, lymph node status, distant metastasis, clinical stage, and KRAS mutation (Fig. 7).
Functional enrichment analysis of the seven ferroptosis-related lncRNAs signature.
GSEA analysis was used to discover potential biological functions of the seven ferroptosis-associated lncRNAs signature in CRC (Fig. 8). The results showed that three signaling pathways (KEGG_HEDGEHOG_SIGNALING_PATHWAY, KEGG_ARACHIDONIC_ACID_METABOLISM, KEGG_ALPHA_LINOLENIC_ACID_METABOLISM) are obviously enriched in the high-risk group, and three signaling cascades (KEGG_FRUCTOSE_AND_MANNOSE_METABOLISM, KEGG_PENTOSE_PHOSPHATE_PATHWAY, KEGG_CITRATE_CYCLE_TCA_CYCLE) were enriched in the low-risk group, using c2.cp.kegg.v7.2.symbols.gmt. These results suggest that this signature model may influence CRC progression and prognosis mainly through metabolism-related pathways.
Discussion
CRC is a common and aggressive cancer with poor survival and prognosis, mainly due to its propensity to metastasize to the liver and lung [30]. Given that there are no accurate and sensitive markers to predict the prognosis of CRC patients, it is crucial to investigate and develop more specific biomarkers to improve patients' survival. Although current treatment methods have made great advancements, the prognosis is still very poor. Ferroptosis differs from other types of cell death biochemically and morphologically, and has been shown to regulate cancer development [3]. More and more reports have documented that lncRNAs play a very important role in gene expression and regulation in tumors [19,31]. In addition, many lncRNAs influence the progression of CRC by regulating ferroptosis. However, no prognostic model of ferroptosis-related lncRNAs has previously been reported. Although two genetic prognostic models of ferroptosis have been reported in hepatocellular carcinoma [32] and glioma [33], our study is the first to report a ferroptosis-related lncRNA prognostic model in CRC.
In the present study, we downloaded ferroptosis genes from FerrDb and used the R language and its associated packages to find differentially expressed lncRNAs related to ferroptosis (FRlncRNAs). We randomly grouped all the patients into the train set and the test set, then established a seven ferroptosis-related lncRNAs signature model (AC005550.2, LINC02381, AL137782.1, C2orf27A, AC156455.1, AL354993.2, AC008760.1) through univariate Cox regression, lasso regression, and multivariate Cox regression in the train set. At the same time, the biosignature was verified in the test set as well as the entire set. On the basis of the median risk score, the Kaplan-Meier curves revealed that the high-risk group had an evidently worse overall survival relative to the low-risk group in all three data sets. Among the lncRNAs of the signature, some studies have shown that LINC02381 is related to immune genes [43] and autophagy genes [44] in colon adenocarcinoma. Interestingly, our research shows that this lncRNA is also related to ferroptosis, which is worthy of in-depth consideration. In addition, the study by Jafarzadeh et al. revealed that LINC02381 might suppress human CRC tumorigenesis partly by regulating the PI3K signaling pathway [45]. Meanwhile, LINC02381 inhibits gastric cancer progression and metastasis through regulating the Wnt signaling pathway [46]. However, LINC02381 functions as a cancer-promoting gene to promote cell migration and viability by regulating miR-133b/RhoA in cervical cancer [47].
AC008760.1 was reported to be related to autophagy, and Li et al. constructed an autophagy-related lncRNA prognosis model in CRC [48]. The remaining lncRNAs have not been reported in previous studies and are worthy of further research.
Our study found that the expression of these lncRNAs and the constructed prognostic signature were closely related to the patients' clinical stage, distant metastasis, lymph node status, T stage, MMR status, BRAF mutation, KRAS mutation, and perineural invasion, especially MMR status, BRAF mutation, and KRAS mutation. These features have important guiding significance for patients' medication. This raises the question of whether, and how, these lncRNAs regulate these variables. There have been many studies of ferroptosis in the drug resistance of tumor patients [49,50]. The current study demonstrated the prognostic significance of these ferroptosis-related lncRNAs and their signature in CRC. Therefore, we have reason to believe that these lncRNAs are worthy of in-depth research in tumor resistance mechanisms.
Our current study also has some limitations. First, we used data from the TCGA database as the starting point of the research; although the model has been internally verified, further verification in external data is still needed. Second, the TCGA cohort is mainly white (75%), and whether the model fits other populations needs further verification. Third, the analysis of the lncRNA expression of the model and the KEGG functional enrichment analysis by GSEA require further cell function experiments.
Declarations
Ethics approval and consent to participate
LncRNA and mRNA sequencing profiles were obtained from the TCGA data portal, which is a publicly available dataset. Therefore, no ethics approval was needed.
Amyloid beta-protein fibrillogenesis. Structure and biological activity of protofibrillar intermediates.
Size Exclusion Chromatography (SEC) System—A Superdex 75 HR 10/30 column (Amersham Pharmacia Biotech, Piscataway, NJ) was attached either to a Waters 650 Advanced Protein Purification system, consisting of a Waters 650 controller and pump, a Rheodyne 9125 injector, a Waters 484 tunable absorbance detector, and a Waters 745 data module, or to a Beckman 110B solvent delivery system module 406 and System Gold detector module 166.
Cell-mediated Reduction of 2,5-Diphenyltetrazolium Bromide
are particularly important. Genetic studies of AD have shown that mutations in the gene encoding the precursor of Aβ (the amyloid β-protein precursor (APP) gene) (3)(4)(5)(6), or in genes that regulate the proteolytic processing of APP (7-9), cause AD. The phenotypic effects of these mutations show remarkable consistency: they all result in excessive production of Aβ or in an increased Aβ(1-42)/Aβ ratio, facilitating amyloid deposition (10,11). In addition, specific haplotypes and mutations in genes involved in the extracellular transport or cleavage of Aβ are risk factors for AD (12,13). In vitro and in vivo studies of Aβ toxicity indicate that fibrillar Aβ can directly kill neurons or initiate a cascade of events leading to neuronal cell death (14-16). For this reason, therapeutic strategies targeting Aβ fibrillogenesis are being pursued actively (17)(18)(19)(20). Unfortunately, key areas of Aβ fibrillogenesis are poorly understood. In particular, the three-dimensional structure and organization of fibril subunits are unknown, as are the steps involved in assembly of nascent, monomeric Aβ first into nuclei, then into higher order oligomers and polymers. Identification of structural intermediates in the fibrillogenesis process and elucidation of the thermodynamics of the associated conformational changes in, and assembly of, Aβ will facilitate identification of therapeutic targets.
Rigorous biophysical studies of fibrillogenesis require well characterized, homogeneous starting peptide preparations, free of pre-existing fibrillar material, particulates, or other types of fibril seeds. In prior studies, synthetic Aβ has been dissolved in water or in organic solvents, then diluted directly into buffer for use (21)(22)(23)(24). It has been demonstrated that when synthetic Aβ peptides are resuspended at neutral pH they contain a heterogeneous mixture of different sized species (25,26). In some cases, attempts to physically "de-seed" stock peptide solutions have been made (21). However, in most studies, either no precautions were taken or filtration through 0.2-μm filters, incapable of removing anything other than large aggregates, was used. The use of these solutions complicates data interpretation and precludes the study of the earliest phases of fibrillogenesis in vitro. We recently demonstrated that size exclusion chromatography (SEC) can be used to prepare homogeneous populations of Aβ, termed low molecular weight Aβ (LMW Aβ), which are composed of monomeric or dimeric Aβ molecules (26). Using these preparations to study Aβ fibrillogenesis, we discovered and reported the initial characterization of a new fibrillogenesis intermediate, the amyloid protofibril (26). This intermediate was also described independently by Harper et al. (22). Protofibrils are short, flexible fibrils, generally 4-10 nm in diameter and up to 200 nm in length, as measured by negative staining and electron microscopy. Protofibrils appear transiently during Aβ fibrillogenesis (26,27). Evidence suggests that protofibrils are precursors of the longer, more rigid, amyloid-type fibrils typically produced in vitro using synthetic peptides (22,26). If an analogous fibril maturation mechanism operates in vivo, the protofibril stage could be an important therapeutic focus. This may, in fact, be the case, as soluble oligomeric forms of Aβ have been isolated from human AD brain (28).
We report here results of studies which significantly extend our knowledge of protofibril morphology, the kinetics and equilibria of protofibril formation and disappearance, the secondary structure of protofibrils and their LMW Aβ precursors, and the biological activity of protofibrils. Our findings suggest that in developing therapies targeting Aβ toxicity, consideration must be given not only to the effects of mature, amyloid-type fibrils, but also to those of protofibrils and, potentially, protofibril precursors.
EXPERIMENTAL PROCEDURES
Chemicals and Reagents-Chemicals were obtained from Sigma and were of the highest purity available. Water was double-distilled and deionized using a Milli-Q system (Millipore Corp., Bedford, MA). Tissue culture components were obtained from Life Technologies, Inc. (Grand Island, NY).
Peptides—Aβ was synthesized and purified in our laboratory as described (26). Peptide mass, purity, and quantity were determined by a combination of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, analytical high performance liquid chromatography, and quantitative amino acid analysis (AAA). Purified peptides were aliquoted, lyophilized, and stored at −20 °C until used. Aβ was also obtained from Bachem (Torrance, CA) and Quality Controlled Biochemicals (Hopkinton, MA). Estimates of peptide content were provided by each manufacturer.
Iodinated Aβ
Isolation of Low Molecular Weight Aβ (LMW Aβ)—In this work, the term "low molecular weight Aβ" (LMW Aβ) signifies an Aβ species which elutes from a SEC column as a single peak and has a hydrodynamic radius consistent with that of either an extended monomer or a compact dimer (determined by quasielastic light scattering spectroscopy (QLS) to be 1-2 nm) (26). To isolate LMW Aβ, Aβ(1-40) was dissolved at a concentration of 2 mg/ml in dimethyl sulfoxide and sonicated in a Branson 1200 ultrasonic water bath for 10 min, after which 200 μl of this solution were injected into the SEC column. The column was eluted with 0.05 M Tris-HCl, pH 7.4, containing 0.02% (w/v) sodium azide, at a flow rate of 0.5 ml/min. Peptides were detected by UV absorbance at 254 nm, and 350-μl fractions were collected during elution of the LMW Aβ peak. Pre-dissolution of Aβ in either dimethyl sulfoxide or buffer gave essentially the same results with respect to SEC and subsequent QLS and circular dichroism spectroscopy (CD) analysis, but dimethyl sulfoxide treatment significantly increased the recovery of peptide.
Isolation of Protofibrils—Protofibrils were prepared essentially as described (26). Briefly, 400 μg of Aβ were dissolved in 100 μl of water, diluted with an equal volume of 0.2 M Tris-HCl, pH 7.4, containing 0.04% (w/v) sodium azide, then incubated at room temperature for 40-60 h. The yield of protofibrils varied among different peptide lots, but a 1-2-day incubation period generally yielded equivalent amounts of protofibrils and LMW Aβ. Following incubation, the solution was centrifuged at 16,000 × g (measured at tube bottom) for 5 min, then ~160 μl of the supernate were fractionated by SEC, as described above. This procedure yields a symmetric peak in the void volume of the column (Mr > 30,000 for dextrans) which contained protofibrils, and a peak of LMW Aβ in the included volume (26). Electron microscopic examination of the assemblies in the void peak has revealed small globular structures ~5 nm in diameter and rods with lengths up to ~200 nm. Based on a 4-5-nm diameter rod and a linear density of Aβ molecules of 0.8 nm⁻¹ (29), the molecular masses of these assemblies would range from ~25 to 900 kDa.
Electron Microscopy—Samples were prepared for electron microscopy (EM) using both negative contrast and rotary shadowing techniques. Preparation of samples for negative contrast was performed as described (26). Briefly, sample was applied to a carbon-coated Formvar grid, fixed with a solution of glutaraldehyde, then stained with uranyl acetate. Samples were observed using a JEOL 1200 EX transmission electron microscope. For rotary shadowing, casts of samples were prepared essentially as described (30). 100-μl aliquots of protofibril fractions were first diluted in 5 mM imidazole, 50 mM NaCl, to ~1 ml and then diluted with 2 volumes of freshly distilled glycerol. The resulting solution was sprayed onto newly cleaved mica sheets and rotary shadowed using a Denton vacuum evaporator and a platinum source such that an ~1-nm-thick sheet of platinum was deposited on the mica. Following this treatment, a thin carbon film was deposited on top of the platinum. The replica was floated off on water, picked up with a 400-mesh copper grid, and examined using a JEOL 100 CX transmission electron microscope.
Dialysis of Radiolabeled LMW Aβ and Protofibrils—400 μg of Aβ were dissolved in 20 μl of dimethyl sulfoxide, to which were added 10 μl of ¹²⁵I-Aβ. This mixture was then diluted with 70 μl of water and 100 μl of 0.2 M Tris-HCl, pH 7.4, containing 0.04% (w/v) sodium azide, and incubated at room temperature for 48-60 h. Following incubation, the solution was centrifuged at 16,000 × g for 5 min and 160 μl of supernate fractionated by SEC, as described above. 200-μl aliquots of the LMW Aβ and protofibril fractions were placed in 1-ml sterile Spectra/Por CE DispoDialyzers (Spectrum Scientific, Laguna Hills, CA) and dialyzed with gentle stirring at room temperature versus 20 ml of 0.05 M Tris-HCl, pH 7.4, containing 0.02% (w/v) sodium azide. In addition, other aliquots of the SEC fractions were used for negative contrast EM, AAA, and scintillation counting.
To ensure that the ¹²⁵I-Aβ was accurately tracing the cold peptide, all SEC fractions were subjected to scintillation counting and the radiotracer profile compared with the UV chromatogram. Only samples which showed a similar distribution of radiolabel and UV absorbance were used. In order to monitor the release of LMW ¹²⁵I-Aβ(1-40) from the dialysis bag, 1-ml aliquots of dialysis buffer were removed and counted. The aliquots were returned to the dialysis chamber after counting (normally <5 min after their removal). At the end of the experiment, the bag was removed and counted, and a sample of the contents taken for negative contrast EM.
Monitoring LMW Aβ and Protofibril Size by QLS—QLS was performed as described previously (26). Briefly, measurements were performed at 25 °C using a Langley Ford model 1097 autocorrelator and a Coherent argon ion laser (Model Innova 90-plus) tuned to 514 nm. LMW Aβ and protofibrils were isolated as described above. To avoid interference from dust, QLS tubes were washed in a continual flow of eluent from a Superdex 75 column and LMW Aβ or protofibril material was collected directly into these tubes by displacement (31). The tubes were then heat-sealed and QLS monitoring begun, usually within 2-5 min of collection.
Preparation of Fibril Standards for Dye-binding Experiments-Fibrils were prepared by dissolving 800 μg of Aβ in 200 μl of water and then diluting with an equal volume of 0.2 M Tris-HCl, pH 7.4, containing 0.04% (w/v) sodium azide. This solution was incubated for 5 days at 37°C, then thoroughly mixed, diluted with an equal volume of water, and an aliquot examined by EM to confirm the presence of mature fibrils. The remaining solution was serially diluted to yield concentrations of approximately 500, 250, 125, 62, 31, and 16 μg/ml in 0.05 M Tris-HCl, pH 7.4. Standards were used immediately or stored at −20°C until required. The concentrations of the standards were determined by AAA.
Congo Red Binding Assay-Congo red binding was assessed essentially as described by Klunk et al. (32), but with volumes adjusted to perform the assay in a microtiter plate. Briefly, 225 μl of 20 μM Congo red in 20 mM potassium phosphate, pH 7.4, containing 0.15 M sodium chloride, was added to 25 μl of sample, mixed, and incubated for 30 min at room temperature. The absorbance of the resulting solutions was then measured at 480 and 540 nm using a Molecular Devices Thermo Max microplate reader. All samples were assessed in triplicate and the amount of Congo red bound (Cb, nM) calculated from the 480 and 540 nm absorbances using the formula of Klunk et al. (32).

Thioflavin T Binding Assay-Thioflavin T (ThT) binding was assessed as described by Naiki and Nakakuki (33). 100 μl of sample was added to a 1-cm path length cuvette containing 800 μl of water and 1 ml of 100 mM glycine-NaOH, pH 8.5. The reaction was then initiated by the addition of 50 μl of 100 μM ThT in water and the solution vortexed briefly. Fluorescence was measured after 90, 100, 110, and 120 s. Measurements were made using a Perkin-Elmer LS-5B luminescence spectrometer with excitation and emission wavelengths of 446 nm (slit width = 5 nm) and 490 nm (slit width = 10 nm), respectively. Each sample and standard was assayed in triplicate.
Circular Dichroism Spectroscopy-Solutions of protofibrils or LMW Aβ isolated by SEC were placed into 1-mm path length quartz cuvettes (Hellma, Forest Hills, NY) and spectra obtained from ~195-250 nm at room temperature using an Aviv 62A DS spectropolarimeter. Raw data were manipulated by smoothing and subtraction of buffer spectra, according to the manufacturer's instructions. Deconvolution of the resulting spectra was achieved using the program CDANAL (34) and the Brahms and Brahms reference library (35). The relative amounts of random coil, α-helix, β-sheet, and β-turn in each sample were determined from the normalized contribution of each secondary structure element function to the observed spectrum following curve fitting.
Preparation of LMW Aβ, Protofibrils, and Fibrils for Biological Activity Studies-LMW Aβ and protofibrils were prepared by SEC. Briefly, 1 mg of peptide was dissolved in 250 μl of water containing 0.01% (v/v) phenol red, diluted with an equal volume of 0.2 M Tris-HCl, pH 7.4, then incubated at room temperature for 2 days. Solutions were then centrifuged at 16,000 × g for 5 min and 400-440 μl of the supernate fractionated on a Superdex 75 column eluted with 5 mM Tris-HCl, pH 7.4, 70 mM NaCl, at 0.5 ml/min. The elution solvent was chosen empirically after preliminary experiments showed that 0.05 M Tris buffer was toxic to cultured neurons and that LMW Aβ and protofibril yields were unacceptably low in the absence of salt. The Tris/NaCl system produced chromatograms indistinguishable from those seen using 0.05 M Tris-HCl, pH 7.4. In addition, the morphology and hydrodynamic radii of protofibrils prepared by this method were essentially the same as those obtained using 0.05 M Tris buffer. Peptides were detected by UV absorbance at 254 nm and 450-μl fractions were collected during elution of the LMW Aβ and protofibril peaks. Fractions used for studies of biological activity were also subjected to AAA and EM.
In attempting to produce fibrils, we found that when Aβ(1-40) (from a variety of sources) was dissolved at >1 mg/ml in water, it produced a solution whose pH (<3) could not be adjusted properly with 5 mM Tris buffer. To overcome this problem and facilitate monitoring of the pH under sterile conditions, peptide was suspended initially at ~3.2 mg/ml in 1 mM NaOH containing 0.01% (v/v) phenol red. 10 mM NaOH was then added at the empirically determined ratio of 200 μl/mg of peptide. This ratio varied slightly among different peptide lots. Finally, the solution was diluted sequentially with 100 mM Tris-HCl, pH 7.4, containing 1.4 M NaCl, and with water to give a concentration of ~1.6 mg/ml Aβ(1-40) in 5 mM Tris-HCl, pH 7.4, containing 70 mM NaCl. These solutions were incubated for 2 days at 37°C and then used. This procedure consistently produced solutions of amyloid fibrils which could be sedimented readily by brief centrifugation (16,000 × g, 5 min) and which were indistinguishable from those formed by incubation in 50 mM Tris-HCl, pH 7.4.
MTT Assay-Cell-mediated reduction of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) was assessed according to the method of Hansen et al. (37). Freshly isolated protofibril or LMW Aβ fractions were mixed with concentrated stock solutions of individual tissue culture components to produce a final solution containing 10 mM glucose, 500 units/ml penicillin, 500 μg/ml streptomycin, 20 mM HEPES, and 26 mM NaHCO3, all in 1× minimal essential medium. Peptide concentrations were determined prior to this supplementation. Fibril standards were prepared in a similar fashion to yield nominal final peptide concentrations of 5, 10, and 15 μM. Cells were incubated either in 50 μl of medium without Aβ or in 50 μl containing fibrillar Aβ, protofibrils, or LMW Aβ. After 2 h, 10 μl of 2.5 mg/ml MTT was added to each well and the incubation continued for a further 3 h. Cells were then solubilized in 200 μl of 20% (w/v) SDS in 50% (v/v) N,N′-dimethylformamide, 25 mM HCl, 2% (v/v) glacial acetic acid, pH 4.7, by overnight incubation at 37°C. Levels of reduced MTT were determined by measuring the difference in absorbance at 595 and 650 nm using a Molecular Devices Thermo Max microplate reader. The effects of treatments were compared with controls using one-way analysis of variance with Tukey's test. No reduction of MTT was observed in fibril controls (even at a concentration of ~30 μM) in the absence of cells.
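The MTT readout described above reduces to simple arithmetic: a background-corrected difference absorbance (A595 − A650) per well, followed by percent inhibition relative to the medium-alone control. A minimal sketch with made-up absorbance values (not data from this study):

```python
# Hypothetical absorbance readings for illustration only.
def reduced_mtt(a595, a650):
    """Background-corrected signal for reduced MTT (difference absorbance)."""
    return a595 - a650

def percent_inhibition(treated, control):
    """Percent inhibition of MTT reduction relative to the medium-alone control."""
    return 100.0 * (control - treated) / control

control = reduced_mtt(0.85, 0.10)   # medium alone
treated = reduced_mtt(0.55, 0.10)   # e.g. a protofibril-treated well
print(round(percent_inhibition(treated, control), 1))  # 40.0
```

In practice each condition is measured in replicate wells (n ≥ 8 in Fig. 6) and group means are compared by one-way ANOVA with Tukey's test.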
Morphological Characterization of Protofibrils-Previous studies of protofibril morphology utilizing negative staining and EM (26), or AFM (22,27), required avid macromolecule adherence to the sample support for their success. If certain structures were washed away during preparation of the supports, potentially important species would not be observed. To address this issue, and to further our efforts at understanding the gross morphology of protofibrils, we performed electron microscopic examination of protofibrils prepared by rotary shadowing. In this procedure, which involves no washing, a thin, uniform film of sample is sprayed onto a mica support, from which shadow casts are then generated and examined. Both shadowed and negatively stained protofibrils appeared as flexible rods of length up to ~200 nm (Fig. 1, B and C). However, three significant differences were observed between the two preparations. First, the estimated diameters of the shadowed fibrils were larger (8-14 nm compared with 4-7 nm). This was expected due to the accretion of platinum and carbon on the fibrils. Second, the protofibrils appeared more beaded when visualized by rotary shadowing. The periodicity of this "beading" was 3-6 nm. Third, the proportion of small protofibrils (<10 nm) was higher, suggesting that many of these structures are lost during routine negative staining. The smallest assemblies appear as somewhat imperfect spheres, approximately one fibril diameter in size.
Protofibrils Are in Equilibrium with LMW Aβ-As a first step toward elucidating the structural and kinetic relationships among LMW Aβ and its assemblies, we asked whether protofibril formation was an irreversible process or whether an equilibrium existed between protofibrils and LMW Aβ. To do so, radiolabeled protofibrils were isolated by SEC and immediately placed in dialysis bags of 8,000 molecular weight cutoff, then aliquots of the reservoir were removed periodically for counting. Dialysis bags of 8,000 molecular weight cutoff retain >90% of a test solute of molecular weight 8,000 after a 17-h dialysis period. Aβ monomers thus are not retained. The dialysis rate for Aβ dimers is unknown, but would depend on the shape and hydrated volume of these molecules. However, based simply on dimer molecular weight (8,662), release would likely be limited. Representative results from a series of seven experiments are illustrated in Fig. 2. Diffusion of LMW Aβ into the dialysis reservoir was rapid and reproducible, with ~90% of the total counts passing out of the sac within 72 h. The exponential curve shape reflects a simple dialysis process in which free diffusion of solute through the dialysis membrane occurs. Aβ release was also observed from protofibrils; however, it was significantly lower and more variable, with between 18 and 41% of the total counts found in the reservoir after 96 h. In addition, the sigmoidal shape of the release function is consistent with a process in which Aβ must first dissociate from protofibrils before diffusing through the dialysis membrane. The plateauing of the curve at a low level of Aβ release shows that a significant portion of the Aβ present in the dialysis bag is unable to diffuse out. Interestingly, in three of seven experiments, electron microscopic studies revealed typical fibrils (as in Fig. 1A) in the dialysis bags after 96 h (data not shown).
Protofibrils thus appear to be in equilibrium with LMW Aβ and to give rise to fibrils, from which dissociation of Aβ does not readily occur.
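The qualitative difference between the two release curves can be reproduced with a toy two-step kinetic model (all rate constants below are invented, chosen only to mimic the observed shapes): free LMW Aβ dialyzes out directly, whereas protofibril-bound peptide must first dissociate, in competition with irreversible fibril formation, which caps total release below 100%.

```python
def simulate_release(p0, m0, k_off, k_fib, k_dial, t_end=96.0, dt=0.01):
    """Toy model: protofibril-bound peptide P dissociates (k_off) to free
    monomer M inside the bag, which dialyzes (k_dial) into the reservoir R;
    P is also consumed irreversibly by fibril formation (k_fib).
    Forward-Euler integration; returns the reservoir fraction at t_end (h)."""
    p, m, r = p0, m0, 0.0
    for _ in range(int(t_end / dt)):
        dp = -(k_off + k_fib) * p
        dm = k_off * p - k_dial * m
        dr = k_dial * m
        p += dp * dt
        m += dm * dt
        r += dr * dt
    return r

# LMW Abeta: everything starts as freely dialyzable monomer -> exponential release
lmw = simulate_release(p0=0.0, m0=1.0, k_off=0.0, k_fib=0.0, k_dial=0.03)
# Protofibrils: dissociation competes with fibril formation -> lag and low plateau
pf = simulate_release(p0=1.0, m0=0.0, k_off=0.01, k_fib=0.02, k_dial=0.03)
```

In this sketch the long-time plateau is k_off / (k_off + k_fib), so with the assumed constants only about a third of the counts can ever reach the reservoir, broadly consistent with the 18-41% range observed, while the monomer-only case approaches complete release.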
Fibril Formation by Protofibrils-The equilibria found to exist among LMW Aβ, protofibrils, and fibrils complicate the analysis of precursor-product relationships. For example, although unlikely, it is formally possible that protofibrils are reservoirs for LMW Aβ, but do not themselves directly evolve into fibrils. To address this issue, populations of protofibrils were isolated by SEC, and their temporal change in size was then monitored by QLS. Initially, protofibrils had an average hydrodynamic radius R_H = 27.8 ± 1.8 nm (Fig. 3). This value grew steadily with time, reaching a maximal value of 80.6 ± 14.4 nm at 236 h. For rigid rods, this value of R_H would correspond to lengths on the order of 1 μm. Later, the scattering intensity decreased, a phenomenon routinely observed as large aggregates sediment and leave the illuminated portion of the cuvette. After 263 h, the sealed tube was opened, the contents gently homogenized by pipetting, and aliquots removed for EM and AAA. EM revealed the presence of both fibrils and protofibrils with morphologies similar to those seen in Fig. 1 (data not shown). The EM findings were consistent with the changes in R_H observed by QLS, supporting the hypothesis that protofibrils are direct precursors of fibrils.
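The rod-length estimate can be sanity-checked with the simplest slender-body expression for the hydrodynamic radius of a rigid rod, R_H ≈ L / (2 ln(L/d)), neglecting end corrections; the diameter d ≈ 5 nm used here is an assumed value in the range of the negative-stain measurements above.

```python
import math

def rod_rh(length_nm, diameter_nm):
    """Hydrodynamic radius of a rigid rod, simplest slender-body estimate
    (no end corrections): R_H ~ L / (2 ln(L/d))."""
    return length_nm / (2.0 * math.log(length_nm / diameter_nm))

print(round(rod_rh(1000.0, 5.0), 1))  # ~94 nm for a 1 micrometer rod
print(round(rod_rh(800.0, 5.0), 1))   # ~79 nm, near the measured 80.6 nm
```

So an R_H near 80 nm does indeed correspond to rod lengths approaching a micrometer, as stated.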
Tinctorial Properties of Protofibrils-One of the distinguishing features of amyloid is its capacity to bind the dyes Congo red and thioflavin T, an activity dependent on the presence of extensive arrays of β-pleated sheets (38,39). In six independent experiments, protofibrils and LMW Aβ were isolated by SEC and their ability to bind Congo red compared with that of fibrils. We have observed that protofibril solutions at Aβ concentrations >20 μM readily form fibrils; thus, to ensure that any dye binding ascribed to protofibrils was not due to fibrils formed de novo, Aβ concentrations were kept below 20 μM. In addition, the protofibrillar nature of each sample was confirmed directly by electron microscopy. We found that LMW Aβ, even at concentrations as high as 70 μM, did not bind Congo red, whereas both fibrils and protofibrils did, even at concentrations as low as 2 μM (Fig. 4A). Protofibrils bound Congo red in a concentration-dependent manner; however, variability in this binding was observed, especially at low concentration (<5 μM). This effect is likely due to dissociation of protofibrils into LMW Aβ (which does not bind the dye), a process whose rate may depend on protofibril length and thus could differ among samples due to stochastic variations in the fibril length distributions. Little variability was displayed by fibrils, which also consistently bound slightly higher amounts of dye than did equivalent amounts of protofibrils.
In four of the six Congo red binding experiments, samples were also examined for their ability to bind thioflavin T. As with Congo red, both protofibrils and fibrils, but not LMW Aβ, bound thioflavin T (Fig. 4B). Interestingly, in two experiments, protofibrils bound more ThT than did equivalent amounts of fibrils (data not shown), whereas the opposite was true in the other two experiments. Absolute values of dye binding can differ depending on the protofibril or fibril preparation. This can occur due to differences in the distribution of polymer sizes and to post-fibrillogenesis fibril-fibril interactions, which cause equivalent amounts of Aβ to display different binding activities. Nevertheless, the data show clearly that protofibrils bind both Congo red and thioflavin T, a property of amyloid fibrils not possessed by LMW Aβ. This suggests that protofibrils contain significant amounts of β-sheet structure and must thus evolve following significant conformational changes in LMW Aβ.
Secondary Structure of Protofibrils-Numerical estimates of the secondary structure content of protofibrils were obtained using circular dichroism spectroscopy. Protofibrils were isolated by SEC and examined immediately. The prominent features of the resulting spectrum were a minimum at ~215 nm and a maximum at ~200 nm (Fig. 5A). The two low-wavelength points of inflection are characteristic of β-sheet structure; however, the low absolute value of the ~200 nm maximum suggests that a significant level of random coil structure exists in the sample. In fact, deconvolution of the spectrum showed 47% β-structure (β-sheet or β-turn), 40% random coil, and 13% α-helix. Examination of numerous other protofibril samples has consistently yielded percentages of β-content ranging from 45 to 50 (data not shown). The β-content of protofibrils is quite similar to that of fibrils (see day 31 data in Table I), even though no fibrils were detected by EM in any of the protofibril samples used for CD. The modest level of α-helix found in protofibrils is interesting in light of the fact that during fibrillogenesis of LMW Aβ, the peptide undergoes a conformational transition from a predominately random coil structure to a β-sheet-rich form, during which a transitory α-helical component is observed (Fig. 5B and Table I). In the case of protofibrils, because CD is an averaging technique, it is not possible to say whether the α-helix signal observed emanates from all protofibrils or whether discrete subpopulations of protofibrils or of Aβ monomers or oligomers exist which are significantly richer in this secondary structure element. However, comparative analysis of the CD data from fibrils, protofibrils, and LMW Aβ does allow the conclusion that protofibrils are a relatively mature stage of the fibrillogenesis process.
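Deconvolution of this kind amounts to a constrained least-squares fit: the observed spectrum is approximated as a weighted sum of reference spectra, with weights that are nonnegative and sum to one. A toy sketch follows; the per-wavelength basis values are invented for illustration (real analyses use full reference libraries such as that of Brahms and Brahms), and a brute-force grid search stands in for a proper constrained optimizer.

```python
# Invented three-wavelength "reference spectra" for three structure classes.
basis = {
    "coil":  [-4.0,  1.0,  0.5],
    "helix": [ 6.0, -7.0, -6.5],
    "beta":  [ 2.0, -1.0, -5.0],
}

def model(fracs):
    """Predicted spectrum for a given set of structure fractions."""
    return [sum(fracs[k] * basis[k][i] for k in basis) for i in range(3)]

def sse(obs, pred):
    return sum((o - p) ** 2 for o, p in zip(obs, pred))

# Synthetic "observed" spectrum built from known fractions (47% beta,
# echoing the protofibril result in the text).
true = {"coil": 0.40, "helix": 0.13, "beta": 0.47}
observed = model(true)

# Brute-force search over the simplex of fractions (step 0.01).
best, best_err = None, float("inf")
for i in range(101):
    for j in range(101 - i):
        fc, fh = i / 100, j / 100
        cand = {"coil": fc, "helix": fh, "beta": 1.0 - fc - fh}
        err = sse(observed, model(cand))
        if err < best_err:
            best, best_err = cand, err
```

With an identifiable basis, the search recovers the generating fractions, which is the logic behind reporting percent coil, helix, sheet, and turn from a single averaged spectrum.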
Biological Activity of Protofibrils-An important question is whether protofibrils are biologically active. To answer this question, structure-activity studies must be performed rapidly, over a time scale of minutes to hours, before protofibrils produce fibrils. Assays measuring cell death typically require incubation periods of days (40). The MTT assay, in contrast, can reveal physiologic effects induced by treatment of cells with exogenous agents after incubation times of only a few hours (23, 41-43). We thus used this assay to determine whether protofibrils could affect the normal physiology of cultured primary rat cortical neurons. Protofibrils were isolated by SEC and aliquots of the protofibril peak used for the assay, for electron microscopic studies, and for AAA. This procedure ensured that protofibril preparations of proven morphology and known protein concentration were used. We found that protofibrils caused a significant (p < 0.01) reduction in the levels of reduced MTT (Fig. 6). As a positive control, preformed Aβ fibrils were also assayed. As expected, fibrils significantly and consistently produced decreases in reduced MTT levels (Fig. 6). Among different experiments, the absolute levels of inhibition caused by protofibrils and fibrils varied; however, in all cases, statistically significant levels of inhibition were observed at Aβ concentrations exceeding ~9 μM. The effect of LMW Aβ was then compared with those of the fibrils and protofibrils. In two experiments, LMW Aβ caused a slight but insignificant increase in levels of reduced MTT (Fig. 6), while in a third experiment, a slight but insignificant decrease was seen (data not shown). These results indicate that protofibrils alter the normal physiology of cultured neurons, whereas LMW Aβ does not.
DISCUSSION

An intriguing and important area of biomedical research is that of the amyloidoses, a group of diseases caused by the fibrillogenesis and deposition of otherwise soluble and physiologically normal proteins and peptides (38,39). At least 17 different molecules have been shown to have the capacity, under appropriate conditions, to form amyloid (44). Among these molecules, Aβ is archetypal. Through studies of Aβ fibrillogenesis, therefore, we hope not only to develop therapeutic strategies for Alzheimer's disease, but to elucidate common features of amyloid fibril assembly, thereby accelerating progress toward treatment of other amyloidoses. In the studies reported here, our focus was the assembly, structure, and biological activity of protofibrils, important intermediates in the fibrillogenesis process (22,26,27).
In our initial description of protofibrils (26), temporal changes in the levels of LMW Aβ, protofibrils, and fibrils suggested that protofibrils were intermediates in the conversion of LMW Aβ into fibrils. Here, we examined this question directly and found that protofibrils were indeed in equilibrium with LMW Aβ and were capable of forming fibrils. In our dialysis paradigm, the fact that we observed neither complete conversion of protofibrils into fibrils, nor complete protofibril dissociation into LMW Aβ (a range of 18-41% was observed), demonstrates that the competing rate constants for protofibril dissociation and fibril formation must be of similar magnitude. The kinetic description of this system is complicated by additional rate constants for protofibril nucleation and elongation. Empirical evidence also suggests that systematic variation in protofibril dissociation rates may occur with protofibril length, further increasing the complexity of this system. Independent of these issues, the most straightforward interpretation of the data is that protofibrils are precursors of fibrils and that fibrils, once formed, do not readily dissociate into protofibrils or LMW Aβ. Irreversible protofibril maturation into fibrils is consistent with the results of our experiments in which temporal increases in average protofibril size were observed by QLS and accompanied by electron microscopically confirmed fibril formation. The same conclusion has been reached in AFM studies of the temporal changes in Aβ polymer structure occurring during fibrillogenesis (22,27). Our data are also concordant with results of a number of studies showing that Aβ fibrils do not dissociate in the absence of strong chaotropic agents or solvents (28,45,46).
Additional support for a protofibril → fibril transition comes from studies designed to elucidate the structural relationships among LMW Aβ, protofibrils, and fibrils. In these experiments, each species was studied using dye binding and CD approaches. Because binding of Congo red and thioflavin T is dependent on the presence of β-sheet structure (47), the data show that protofibrils have significant β-sheet content. Whether statistically significant differences in dye binding exist between protofibrils and fibrils is difficult to determine due to variations in dye binding capacity of different fibril preparations and to the confounding effects of light scattering by different Aβ polymers (48). Interestingly, but not surprisingly, LMW Aβ, even at concentrations up to 70 μM, showed no Congo red or thioflavin T binding, indicating that the assays can differentiate fibrillar and non-fibrillar Aβ. CD data were consistent with the above observations. On average, both protofibrils and fibrils contained substantial and equivalent levels (up to 50%) of β-structure (β-strand and β-turn), along with lesser amounts of random coil (~40%) and α-helix (~10%). LMW Aβ, on the other hand, was predominantly disordered. By these measurements, protofibrils are similar to fibrils and are thus relatively advanced intermediates in the fibrillogenesis process.

TABLE I
Temporal change in Aβ conformation during fibrillogenesis
CD spectra were deconvoluted using the algorithm of Perczel et al. (34) and the Brahms reference spectra library (35). The percentage of each secondary structure element is listed.

Day | Random coil | α-helix | β-sheet | β-turn
0   | 62 | 11 | 13 | 14
11  | 57 | 15 | 17 | 11
20  | 46 | 25 | 23 | 6
24  | 41 | 20 | 30 | 9
27  | 39 | 17 | 31 | 13
31  | 37 | 13 | 32 | 18

FIG. 6. Biological activity of protofibrils. Primary rat cortical neurons were incubated for 2 h with fibrils, protofibrils, LMW Aβ, or medium alone; MTT was added, and the cells were solubilized 3 h later. Data are expressed as average percent inhibition of MTT reduction ± S.D. (n ≥ 8), relative to cells treated with medium alone. Total Aβ concentrations (μM) in each treatment group, determined by AAA, are listed on the abscissa. The data shown are from a single experiment, but are representative of a total of three independent experiments in which protofibril, fibril, and LMW Aβ concentrations ranged from 6 to 26 μM, 4 to 30 μM, and 6 to 44 μM, respectively. The concentration variation shown for protofibrils was achieved by fractionation of the protofibril peak as it eluted from the SEC column (see "Experimental Procedures"). Pre facto preparation of a protofibril dilution series is difficult due to the rapid equilibria among protofibrils, LMW Aβ, and fibrils, which effectively limits protofibril concentration to a maximum of ~20 μM. Relative to medium alone, fibrils and protofibrils both produced significant decreases in levels of reduced MTT (*, p < 0.01), while LMW Aβ did not.
An interesting observation in our study of the temporal change in secondary structure of Aβ during fibril formation was that of a transitory α-helical component. CD and QLS studies showed that LMW Aβ lacked significant ordered structure. However, upon prolonged incubation, a random coil → β-sheet transition was observed, during which the percentage of α-helix rose and fell. Other studies of Aβ fibrillogenesis at neutral pH also revealed a random coil → β-sheet transition (49-51). However, to our knowledge, no transitory α-helical component has been described previously under conditions where helix-stabilizing solvents (fluorinated alcohols) were not used. Our ability to observe this transition may result from the use of LMW Aβ rather than Aβ lyophilizates which are simply solvated and used directly. For example, we find that LMW Aβ(1-42) has little regular structure,2 whereas in other studies of this peptide, even in solutions containing fluorinated alcohols, CD spectra have consistently yielded a high content of β-sheet (49,52). These contrasting observations suggest that the starting materials used by others contained significant amounts of Aβ aggregates. The significance of the transitory α-helical component is unclear. Because CD is a global averaging method, it is formally possible that not all Aβ molecules conformationally transform through this "α-helix" pathway. However, we feel it is most likely that the conformational transition of Aβ from a predominately unstructured monomer (or dimer) to an assembled β-sheet-rich fibril involves a folding intermediate containing one or more α-helices which then unfold and reform into β-strands. Interestingly, in the case of the scrapie prion protein, a helix → strand folding pathway has, in fact, been postulated to occur during the conversion of the cellular form of the molecule (PrPC) into its scrapie form (PrPSc) (53,54).
In addition, recent studies of a model 38-residue peptide, αtα (55,56), have shown that a stable monomeric helical hairpin peptide can rearrange to form classical β-sheet-rich amyloid fibrils.3 At the core, both literally and figuratively, formation of amyloid fibrils results from mutually dependent local and global conformational changes in Aβ and its assemblies. We have discussed above certain of the conformational transitions in Aβ occurring during protofibril and fibril formation. We find, as well, that maturation of protofibrils into fibrils may involve subtle alterations in the structural organization of the fibril. In particular, the "beaded" substructure of protofibrils is less prominent in the fibrils. Harper et al. (27) have reported a ~20 nm periodic structure in Aβ(1-40) protofibrils studied by AFM. These protofibrils give rise to fibrils in which this period doubles, as does fibril diameter. However, fibrils also form which have diameters approximately equivalent to those of protofibrils and which have a much smoother appearance, a result of substantially less frequent axial discontinuities (often <0.01 nm⁻¹) (27). A granular → smooth transition has been reported by Seilheimer et al. (57) during fibril formation by Met(O)-Aβ. In this study, the authors noted the appearance of large globules and beaded complexes, but these were larger (~30 nm) than those observed here. The protofibril structures observed here may result from the assembly of globular subunits. Small structures of this type have been observed in fibrillogenesis studies of Aβ(1-40) and Aβ(1-42), both using AFM (22,27,58) and EM (26,59). In addition, recent cryoelectron microscopic studies have revealed prominent inhomogeneities within protofibrils, which in some samples appear to derive from the presence of globular subunits.4 The diameters of the globular assemblies reported here (3-6 nm) are similar to those of ADDLs (58).
In fact, this type of small globular assembly may represent a structural unit from which protofibrils are assembled (59). Geometric considerations suggest that as few as 5 or 6 Aβ molecules could constitute this structure. This size is consistent with that of the "β-crystallite" suggested, on the basis of fiber x-ray diffraction studies, to be a building block of Aβ fibrils (60). A pentameric or hexameric building block has also been proposed by the Murphy group (61). It should be noted, however, that depending on the resolution of the visualization method, helices of appropriate pitch can also appear as stacked arrays of globular units.
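The 5-6 molecule estimate can be reproduced with back-of-the-envelope geometry: divide the volume of a globule by the volume of a single monomer, computed from the molecular weight of Aβ(1-40) (~4,330 Da) and a typical protein partial specific volume (~0.73 cm³/g, an assumed generic value); hydration and packing inefficiency are ignored.

```python
import math

AVOGADRO = 6.022e23
MW = 4330.0   # g/mol, Abeta(1-40)
VBAR = 0.73   # cm^3/g, assumed typical partial specific volume

# Monomer volume in nm^3 (1 cm^3 = 1e21 nm^3) -> roughly 5.2 nm^3
monomer_nm3 = MW * VBAR / AVOGADRO * 1e21

def monomers_in_sphere(diameter_nm):
    """How many monomers fit (by volume) in a sphere of the given diameter."""
    return (math.pi / 6.0) * diameter_nm ** 3 / monomer_nm3

print(round(monomers_in_sphere(3.0), 1))  # lower end of the 3-6 nm range
print(round(monomers_in_sphere(4.0), 1))  # ~6 monomers
```

A globule ~4 nm across thus accommodates roughly half a dozen monomers, in line with the pentamer/hexamer building block proposed in (60, 61).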
An important goal in studies of amyloid fibrillogenesis is the correlation of structure with biological activity. In preliminary experiments, treatment of cultured cortical cells with protofibrils or fibrils produced no detectable changes in cell number or LDH release within a time frame (<24 h) precluding maturation of protofibrils into fibrils.5 We therefore chose to use the MTT assay because it has been shown to be a rapid and sensitive indicator of Aβ-mediated toxicity (23, 41-43). Changes in MTT reduction may reflect alterations in endocytosis, exocytosis, or cellular MTT reductase activity (43,62,63). The use of this type of assay, in which effects can be evaluated within 30 min of treatment (43), was critical for allowing a direct correlation between the structures of Aβ assemblies and their biological activities. Measurement of Aβ-induced cell death requires days of incubation (40), during which protofibrils can be converted to fibrils. This makes determination of the actual active moieties difficult. We found that fibrils and protofibrils both produced highly significant, concentration-dependent decreases in levels of reduced MTT in cultures of rat cortical neurons, whereas no effects were observed for LMW Aβ. Our prior studies of the kinetics of protofibril formation, dissolution, and maturation support the conclusion that the observed effects resulted from the direct interaction of protofibrils, and not fibrils, with the cultured neurons. This conclusion is further corroborated by studies demonstrating that protofibrils (prepared identically to those used here) instantaneously alter the electrical activity of cultured rat cortical neurons (64).6 Whether the metabolic changes mediated by Aβ are induced at the cell surface by interaction with specific receptors (43,62) or require internalization of protofibrils or fibrils is currently unknown.
However, our results show clearly that, whatever the mechanism, protofibrils and fibrils perturb neuronal metabolism whereas LMW Aβ does not. The alteration in neuronal MTT metabolism observed here may be an early indicator of a process leading to neuronal dysfunction and subsequent cell death.
The toxic potential of Aβ has been an area of active investigation since the first demonstration that an Aβ peptide could kill cultured neurons (65). Subsequent studies provided evidence that the Aβ molecule had to be fibrillar to be neurotoxic (66-68), and this observation stimulated the development of strategies to inhibit fibril formation and to dissolve preformed fibrils (17,18). However, the work reported here, and the recent observation of neurotoxicity of non-fibrillar Aβ-derived diffusible ligands (58), suggest that the notion that only fibrils are toxic must be revisited. For example, if inhibition of fibril formation were to cause an accumulation of protofibrils, Aβ-derived diffusible ligands, or other neurotoxic pre- or non-fibrillar assemblies, this strategy clearly would not be of value. To avoid this outcome, a better understanding of the assembly of fibrils, and in particular, of their prefibrillar intermediates, must be achieved. This will facilitate proper targeting and design of fibrillogenesis inhibitors.
Glutathione S Transferases Polymorphisms Are Independent Prognostic Factors in Lupus Nephritis Treated with Cyclophosphamide
Objective To investigate the association between genetic polymorphisms of GST and CYP genes and renal outcome or occurrence of adverse drug reactions (ADRs) in lupus nephritis (LN) treated with cyclophosphamide (CYC). CYC, as a pro-drug, requires bioactivation through multiple hepatic cytochrome P450s and glutathione S transferases (GST). Methods We carried out a multicentric retrospective study including 70 patients with proliferative LN treated with CYC. Patients were genotyped for polymorphisms of the CYP2B6, CYP2C19, GSTP1, GSTM1 and GSTT1 genes. Complete remission (CR) was defined as proteinuria ≤0.33 g/day and serum creatinine ≤124 µmol/l. Partial remission (PR) was defined as proteinuria ≤1.5 g/day with a 50% decrease of the baseline proteinuria value and serum creatinine no greater than 25% above baseline. Results Most patients were women (84%) and 77% were Caucasian. The mean age at LN diagnosis was 41 ± 10 years. The frequencies of patients carrying the GSTT1 null genotype, the GSTM1 null genotype, and the GSTP1 Ile105Val genotype were respectively 38%, 60% and 44%. In multivariate analysis, the GSTP1 Ile105Val genotype was an independent factor of poor renal outcome (failure to achieve CR or PR) (OR = 5.01, 95% CI [1.02-24.51]), and the sole factor that influenced the occurrence of ADRs was the GSTM1 null genotype (OR = 3.34, 95% CI [1.064-10.58]). No association between cytochrome P450 gene polymorphisms and efficacy or ADRs was observed. Conclusion This study suggests that GST polymorphisms strongly influence renal outcome and the occurrence of CYC-related ADRs in LN patients.
Introduction
Systemic lupus erythematosus (SLE) is an autoimmune disease that particularly affects young women, with a prevalence of 50-150/100,000 in Caucasians [1,2]. Renal involvement is frequent, affecting 30-74% of patients depending on the study and the definition of lupus nephritis (LN), and strongly impacts prognosis [3,4,5]. Clinical trials have shown that intravenous (IV) CYC, an alkylating agent with a low therapeutic index, is effective in achieving remission and preserving renal function in proliferative LN [6,7]. However, 30-40% of patients treated with CYC fail to achieve renal remission, and response to CYC treatment is difficult to predict [6,7]. The pharmacokinetics and metabolism of CYC have been much studied [8]. As a prodrug, CYC requires bioactivation through multiple hepatic cytochrome P450s (CYP2B6, CYP2C19) to form 4-hydroxy-CYC (4-OH-CYC), which is finally converted to the cytotoxic alkylating agent phosphoramide mustard [9]. Phosphoramide mustard is the therapeutically active metabolite, while acrolein is responsible for toxicity. Additionally, 4-OH-CYC is further conjugated with intracellular glutathione by multiple glutathione S transferases (GSTM1, GSTP1, and GSTT1), producing non-toxic 4-glutathionyl-CYC.
Several polymorphisms of CYP2C19 are known to be associated with reduced enzyme activity, among them CYP2C19*2, characterized by a 681G>A substitution in exon 5, and CYP2C19*3, which leads to a stop codon [10]. Carriers of one CYP2C19*2 or CYP2C19*3 allele are considered to have a poor metabolizer (PM) phenotype, while homozygous carriers of the CYP2C19*1 (wild-type) allele are classified as extensive metabolizers (EM). On the other hand, patients carrying the CYP2C19*17 allele are considered ultrarapid metabolizers (UM) [11]. Polymorphisms of CYP2B6 have also been described: patients with a CYP2B6*5 or CYP2B6*6 allele are considered PM compared to the wild-type allele (CYP2B6*1) [12,13]. Thus, PM patients could show a poor response to CYC, whereas UM patients could have an enhanced response, reflecting wide inter-patient variability in exposure to CYC. Deletions in GSTs (GSTT1, GSTM1 and GSTP1) reduce detoxification enzyme activity and prolong exposure to CYC, increasing the risk of adverse drug reactions (ADRs) but also raising the possibility of an improved response [14].
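The metabolizer classification described above can be sketched as a simple diplotype-to-phenotype mapping. This is an illustration of the scheme as stated in the text (one *2 or *3 allele implies PM; *17 implies UM; *1/*1 implies EM), not a clinical-grade pharmacogenomic caller, which would use full diplotype tables.

```python
# Simplified CYP2C19 metabolizer classification following the scheme in
# the text above (illustrative sketch; real calling uses curated tables).

LOSS_OF_FUNCTION = {"*2", "*3"}   # reduced-activity alleles
GAIN_OF_FUNCTION = {"*17"}        # increased-activity allele

def cyp2c19_phenotype(allele1, allele2):
    """Classify a CYP2C19 diplotype as PM, UM or EM, as used in the text."""
    alleles = {allele1, allele2}
    if alleles & LOSS_OF_FUNCTION:
        return "PM"   # poor metabolizer: carries *2 or *3
    if alleles & GAIN_OF_FUNCTION:
        return "UM"   # ultrarapid metabolizer: carries *17
    return "EM"       # extensive metabolizer: wild-type *1/*1
```

Note that combined loss- and gain-of-function diplotypes (e.g. *2/*17) are resolved here in favour of PM, which is one possible convention; the text does not specify this case.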
Therefore, in this study, we assessed the hypothesis that genetic polymorphisms of GSTs or CYP could impact remission and ADRs related to CYC in LN patients.
Patients and Methods
We carried out a multicentric retrospective study in France.
Patients
Patients with biopsy-proven proliferative LN (World Health Organization WHO class III or IV) who were referred to French hospitals and had been treated with CYC pulses before 2006 were identified. Patients included in the "PLUS" study were also screened for eligibility. The diagnosis of SLE was confirmed based on the American College of Rheumatology (ACR) criteria published in 1997. All patients provided written informed consent. This survey was conducted in compliance with Good Clinical Practice guidelines and the principles of the Declaration of Helsinki, and was carried out with the approval of the Regional Ethics Committee of Caen.
Data collection
Clinical and biological data were collected retrospectively from charts using a standardized form that included the following information: gender, month/year of birth, date of first symptoms and diagnosis, clinical and biological lupus manifestations, histological data from the renal biopsy, significant comorbidities and ADRs.
Primary endpoint
Complete remission (CR) was defined as proteinuria ≤0.33 g/day and serum creatinine ≤124 μmol/l. Partial remission (PR) was defined as proteinuria ≤1.5 g/day with a 50% decrease from the baseline proteinuria value and serum creatinine no greater than 25% above baseline, at the 12th month after the first CYC infusion. Global remission (GR) was calculated by identifying all patients with either CR or PR.
DNA extraction and cytochrome P450/GST genotyping

Salivary DNA samples were collected prospectively from each patient, except for the patients included in the "PLUS" study, for whom blood DNA samples had already been collected. DNA was extracted from salivary samples using the Puregene DNA Isolation kit (Merck Eurolab, Lyon, France), according to the manufacturer's instructions. Genotyping was performed with the TaqMan allelic discrimination technique on an ABI Prism 7000 as previously described [15]. Common variant alleles of the CYP2B6 gene were genotyped [CYP2B6*5 (1459C>T, rs3211371), CYP2B6*6 (G516T, rs3745274 and A785G)]. GSTM1 and GSTT1 null mutations were analyzed by a polymerase chain reaction (PCR)-multiplex procedure. This technique clearly identifies the homozygous null genotype but does not discriminate deletional heterozygotes from non-deletional homozygotes; both were classified as GSTM1- and GSTT1-positive genotypes (GSTM1+, GSTT1+) or GSTM1- and GSTT1-null genotypes (GSTM1-, GSTT1-) [16]. The GSTP1 codon 105 polymorphism (Ile→Val; c.31A>G) was analyzed by a PCR-restriction fragment length polymorphism (RFLP) assay.
Statistical analysis
Descriptive statistics included the mean (SD) for continuous variables and frequency (percentage) for categorical variables. Univariate analysis used the chi-square or Fisher's exact test as appropriate to compare categorical variables and the non-parametric Mann-Whitney test to compare continuous variables. Multivariate analyses were performed with logistic regression. Efficacy was reported by treatment period. Statistical analyses were performed using EpiData (EpiData Software version 2.0, The EpiData Association, Odense, Denmark).
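The univariate 2×2 comparisons described above (genotype carrier status versus remission or ADR occurrence) can be sketched with standard-library Python; the counts below are made-up examples, not the study's data, and the study itself used EpiData rather than this code.

```python
# Sketch of the univariate 2x2 association measures used in the study:
# odds ratio and Pearson chi-square statistic for a contingency table
# [[a, b], [c, d]] (carrier/non-carrier vs outcome yes/no).

def odds_ratio(a, b, c, d):
    """Cross-product odds ratio of a 2x2 table (assumes b*c != 0)."""
    return (a * d) / (b * c)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

For small expected counts, Fisher's exact test (as used in the study) would replace the chi-square statistic.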
Patient characteristics
The clinical and biological characteristics at LN diagnosis of the 70 patients included in this study are shown in Table 1. Most patients were women (female/male ratio 5.36) and, of the 26 patients whose ethnic origin was analysed, 77% were Caucasian. The mean age was 41 ± 10 years. All patients carried anti-DNA antibodies. The mean glomerular filtration rate (GFR) was 66 ± 33 ml/min/1.73 m². Eighty percent of the patients presented with class IV WHO LN. All received IV pulses of CYC as first-line treatment, and the cumulative dose of CYC was 6.2 ± 2.9 g. Eight patients were treated with low-dose CYC (6 pulses of 500 mg) according to the "Eurolupus" schedule. All patients received corticosteroids. Forty percent had also been treated with an angiotensin-converting enzyme inhibitor and 68.6% had received hydroxychloroquine.
Efficacy and ADRs related to CYC
CR, PR and GR rates at the 12th month after the first CYC infusion, or during the first 12 months, are indicated in Table 2. The GR rate at the 12th month after the first CYC infusion was 79%; 58%
Study population and allele frequencies
For the entire study population, the observed allele frequencies for each enzyme were in Hardy-Weinberg equilibrium, and no linkage disequilibrium was observed between the CYP or GST variants. The CYP2C19*2 allele was found in 33% of the study population, with 3 homozygous CYP2C19*2 carriers. One patient carried a heterozygous CYP2C19*3 allele. The CYP2C19*17 allele was found in 35% of the population, with 3 homozygous carriers. The CYP2B6*5 allele was found in 17%, and the frequency of the CYP2B6*6 allele was 60%, with 9 homozygous carriers. The frequencies of patients carrying the deficient GSTT1 allele (GSTT1-), the deficient GSTM1 allele (GSTM1-) and the GSTP1 p.Ile105Val allele were 37.7%, 59.4% and 44.3%, respectively. All results are consistent with previous studies on healthy individuals [13,14] or LN populations [17].
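The Hardy-Weinberg check mentioned above amounts to a goodness-of-fit test: allele frequencies are estimated from the observed genotype counts, expected counts are derived under Hardy-Weinberg proportions, and a chi-square statistic is computed. A minimal sketch (with made-up counts, not the study's data):

```python
# Hardy-Weinberg equilibrium check as a chi-square goodness-of-fit test
# for a biallelic locus with genotype counts n_AA, n_Aa, n_aa.

def hwe_chi2(n_AA, n_Aa, n_aa):
    """Chi-square statistic comparing observed genotype counts with
    Hardy-Weinberg expectations (p^2, 2pq, q^2) derived from the
    allele frequency estimated from the same counts."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

With one degree of freedom, a statistic above roughly 3.84 would indicate departure from equilibrium at the 5% level.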
Association between allele frequencies and renal remission induced by CYC
The global response to CYC during the first 12 months of treatment according to the genetic polymorphisms of CYP2B6, CYP2C19 and GST is shown in Tables 3 and 4. In univariate analysis, the polymorphisms of GST, CYP2B6 and CYP2C19 did not influence the efficacy of CYC, except for the Ile105Val GSTP1 genotype, which showed a trend toward a lower probability of achieving GR (72.4% versus 91.2%, p = 0.059). CR or PR at month 12 was not influenced by the polymorphisms of GST, CYP2B6 and CYP2C19 (data not shown).
Association between genotype frequencies and ADRs related to CYC
Associations between ADRs and genetic polymorphisms of CYP2B6, CYP2C19 and GST are summarized in Tables 3 and 4. The polymorphisms of the GSTT1, GSTP1, CYP2B6 and CYP2C19 genes did not influence the occurrence of ADRs. Only the deficient GSTM1 allele was associated with the occurrence of ADRs.
Univariate and multivariate analyses of variables associated with the achievement of global remission and ADRs related to CYC
Several bio-clinical variables associated with the achievement of GR at the 12th month after the first CYC infusion are shown in Table 5. None of these variables, including those reflecting LN severity (mean GFR, proteinuria, LN histological class) or LN treatment (cumulative dose of CYC or corticosteroids, angiotensin-converting enzyme inhibitor or hydroxychloroquine treatment), influenced the achievement of GR. In multivariate analysis, the Ile105Val GSTP1 genotype was an independent factor of poor renal outcome (GR) (OR = 5.011, 95% CI [1.025-24.510], p = 0.047).
Discussion
Our study tested possible prognostic factors of therapeutic response and of ADRs related to CYC treatment in LN by investigating genetic polymorphisms of cytochrome P450s and GST. The observed frequencies of the CYP2C19, CYP2B6 and GST polymorphisms were as expected in a predominantly white population of healthy subjects or LN patients, as shown in Table 3. In this study, the polymorphisms of CYP2C19 and CYP2B6 did not influence the response to CYC or the occurrence of ADRs. However, the small sample size of this cohort did not allow subgroup analyses of the impact of homozygous deficient alleles. Previously, two studies demonstrated that among LN patients the CYP2C19*2 deficient allele was associated with a lower risk of ovarian insufficiency [17,18]. Other ADRs were not investigated in these studies. Premature ovarian failure was defined as sustained amenorrhea occurring before 45 years. Our study follow-up period was too short (10.5 years) to detect such long-term ADRs, and this ADR was difficult to collect in a retrospective study. Concerning the absence of a relationship between polymorphisms of cytochrome P450s and efficacy, our data are consistent with the report of Winoto et al., who analysed 36 patients with LN treated with CYC and showed no correlation between remission and genetic polymorphisms [19]. GSTs are among the key enzymes that convert toxic compounds into hydrophilic metabolites for detoxification. Because of this critical detoxifying role, deficiency in GST enzyme activity due to genetic polymorphisms could attenuate the ability to eliminate CYC and its toxic metabolites, which are substrates for GST. Therefore, GST null genotypes could predispose patients to ADRs but simultaneously confer an enhanced clinical response. Our study demonstrated an association between the GSTM1 null genotype and ADRs related to CYC: 43.9% versus 21.4%; p<0.05 in univariate analysis.
Furthermore, multivariate analysis taking into account age, gender, GFR and cumulative dose of CYC showed that the GSTM1 null genotype was an independent determinant of ADRs (OR = 3.345, 95% CI [1.064-10.577], p<0.05). This is consistent with previous studies showing that patients with the GSTM1 null genotype experienced more toxicity from chemotherapy than patients without this mutation [20]. Concerning SLE, Zhong et al. observed that the GSTP1 codon 105 polymorphism significantly increased the risk of short-term ADRs, including myelotoxicity and gastro-intestinal toxicity, among 102 patients treated with CYC [21]. In this study, the Ile105Val GSTP1 genotype paradoxically showed a trend toward a lower probability of achieving GR in univariate analysis (73.3% versus 91.2%, p = 0.059), and in multivariate analysis the Ile105Val GSTP1 genotype was an independent factor of poor renal outcome (OR = 5.011, 95% CI [1.025-24.510], p<0.05). Vester et al. showed that, in pediatric nephrotic syndrome treated with CYC, children with the Ile105Val GSTP1 genotype had a significantly lower rate of sustained remission than those with the wild-type genotype (7% versus 38%, p<0.02) [22]. As GSTP1 is predominantly expressed in the kidney rather than the liver [23], we can speculate that GSTP1 is not mainly involved in hepatic CYC metabolism, and that the lower remission rate associated with this genotype could be linked to decreased detoxification in the kidney. Some studies have suggested that reactive oxygen species (ROS) could be implicated in the pathogenesis of lupus [24]. Thus, we can hypothesize that the Ile105Val GSTP1 genotype allows ROS to accumulate and induce apoptosis of glomerular cells, causing more damage and explaining the lower rate of remission.
The main limitations are the retrospective nature of the survey and the possibly insufficient statistical power to detect small effects of genetic polymorphisms of the other GST and CYP genes, because only 70 of the 120 patients initially planned were included.
In conclusion, our study showed that polymorphisms of GSTM1 and GSTP1 could impact remission and ADRs in lupus nephritis treated with CYC. Further investigations are clearly warranted to confirm these results; if confirmed, the identification of such strong prognostic factors could lead to personalized treatment with an optimized benefit/risk balance in lupus nephritis patients.
Identifying Cell Types from Spatially Referenced Single-Cell Expression Datasets
Complex tissues, such as the brain, are composed of multiple different cell types, each of which have distinct and important roles, for example in neural function. Moreover, it has recently been appreciated that the cells that make up these sub-cell types themselves harbour significant cell-to-cell heterogeneity, in particular at the level of gene expression. The ability to study this heterogeneity has been revolutionised by advances in experimental technology, such as Wholemount in Situ Hybridization (WiSH) and single-cell RNA-sequencing. Consequently, it is now possible to study gene expression levels in thousands of cells from the same tissue type. After generating such data one of the key goals is to cluster the cells into groups that correspond to both known and putatively novel cell types. Whilst many clustering algorithms exist, they are typically unable to incorporate information about the spatial dependence between cells within the tissue under study. When such information exists it provides important insights that should be directly included in the clustering scheme. To this end we have developed a clustering method that uses a Hidden Markov Random Field (HMRF) model to exploit both quantitative measures of expression and spatial information. To accurately reflect the underlying biology, we extend current HMRF approaches by allowing the degree of spatial coherency to differ between clusters. We demonstrate the utility of our method using simulated data before applying it to cluster single cell gene expression data generated by applying WiSH to study expression patterns in the brain of the marine annelid Platynereis dumerilii. Our approach allows known cell types to be identified as well as revealing new, previously unexplored cell types within the brain of this important model system.
Introduction
Complex organisms are heterogeneous at several levels. For example, one can divide the body into functional organs: the skin, the brain, the liver and so on. This anatomical and functional classification implies that distinct organs are composed of different cell types. Interestingly, these functional building blocks are also not composed of homogeneous cell types. Indeed, they are composed of several tissues that together make up a complex organ. For example, the skin of mammals can be described as the superposition of the Epidermis, Dermis and Hypodermis [1]. However, even with this more precise description, each of these tissues will be heterogeneous. For instance, in the Dermis, the cells making up the sweat glands will not be the same as the cells in the hair follicles. Additionally, this heterogeneity does not stop at this sub-sub classification: heterogeneity is still present and, with fine enough measurement methods, this remains true down to the single cell level [2].
When reducing the scale of study, the classification of cells into distinct groups ceases to be anatomical. Instead, molecular biology has allowed scientists to define molecular characteristics that distinguish individual cells. The most widely used characteristic is mRNA expression, and gene expression signatures are now commonly used to define cell types [3,4]. Conceptually, if a set of cells have similar expression profiles, this information can be used to gather these cells into a specific cell type; we focus on this, molecular, definition of a cell type in the remainder of this manuscript.
To do this, gene expression measurements at the single cell level within the tissues under study are necessary. Recent technological developments have facilitated this shift from tissue to single cell resolution: in-situ hybridization [5] in a few organisms including P. dumerilii and single cell RNA sequencing assays [6] are amongst a number of methods that allow gene expression to be measured at the single cell level [7]. Given this, one key challenge is to develop computational methods that use the expression data to cluster single cells into robust groups, which can then be examined to determine their likely functional roles.
Many popular clustering methods (e.g., hierarchical clustering, k-means and independent mixture models) exist and can be applied to address this problem [8][9][10]. However, these methods fail to take into account the spatial location of each cell within the tissue under study; when such information is available [3,11,12], it is extremely useful and should be incorporated into the downstream analysis. Specifically, we can hypothesise that cells that are close together are more likely to belong to the same cell type. In other words, if a cell has a "slightly" more "similar" expression profile to a typical cell in cell type b than in cell type a, but all the surrounding cells have been classified as belonging to cell type a, it seems sensible to assign this cell to cell type a. However, it is also important to note that cell migration, which takes place during the development of complex tissues, can lead to isolated cells with very different expression profiles than their neighbours, which also needs to be accounted for.
To address these problems and, in particular, to utilise both the spatial and the quantitative information, we extended a graph theoretical approach developed for image segmentation to reconstruct noisy or blurred images [13], a method that finds its roots in the field of statistical mechanics as the Ising model [14] and its generalization, the Potts model [15]. The core concept of this method ( Figure 1) is to estimate the parameters of a Markov Random Field based model using mean-field approximations to estimate intractable values as described in [16]. We use an Expectation-Maximization (EM) procedure to maximize the parameters as described in [13,16]. To the best of our knowledge, such methods have not previously been applied to 3-Dimensional gene expression data. Additionally, from a theoretical perspective, we extended current models by allowing the degree of spatial cohesion per cluster to differ, thus allowing for the possibility that some cell types are more spatially coherent than others. After validating our approach using simulated data, we demonstrated its utility by applying it to data generated using methods described in Tomer et al. [3] who were interested in studying the ancestral bilaterian brain.
Results
Motivating data: Single cell in-situ hybridization in Platynereis dumerilii

Tomer et al. [3] used Wholemount in Situ Hybridisation (WiSH) to study the spatial expression pattern of a subset of genes, at single cell resolution, in the brain of the marine annelid Platynereis dumerilii 48 hours post fertilization (hpf). P. dumerilii is an interesting biological model, sometimes considered a "living fossil": it is a slow-evolving protostome that has been shown to possess ancestral cell types, and thus may provide a better comparison with vertebrates than fast-evolving species like Drosophila and nematodes, where derived features can obscure evolutionary signal [17,18].
Wholemount in-Situ hybridization (WiSH) is an experimental technique where the practitioner uses labelled probes designed to be specific to a given mRNA to determine in which cells of the tissue under study that message is expressed. For a small organism like Platynereis, the staining can be applied to the whole animal and a 3-Dimensional representation of the expression pattern of a gene can be deduced using confocal microscopy to study the patterns of gene expression slice by slice. In practice, following the staining, imaging and alignment, the brain volume was partitioned into 32,203 voxels of 3 μm³. The 3 μm³ volume was chosen to be slightly smaller than the average cell in the Platynereis brain, but it is possible to consider this grid as a simple cellular model where each voxel roughly corresponds to a cell in the brain. Within each voxel, the light emission (assumed to correlate with the gene's expression level) was measured (Figure 2). Theoretically, this luminescence data is quantitative but, on such a small scale, light contamination between voxels means that the quantitative measurements have to be interpreted with caution (Figure 3). Additionally, the light efficiency of probes can differ, leading to high experiment-to-experiment variability. Consequently, we binarized the dataset by setting the value of expression within a voxel to 1 or 0, depending upon whether the gene was or was not expressed, respectively (see Discussion).
By repeating this process with different probes, expression patterns for 86 genes of interest were mapped. Importantly, due to the stereotypic nature of early Platynereis development [17], the expression patterns can be overlaid, meaning that for each 3 μm³ voxel it is possible to determine which subset of the 86 genes is expressed. We can represent this information in an 86 × 32,203 matrix of binary gene expression, where the location of each voxel, roughly representing a cell within the brain, is referenced in a 3D coordinate system. Given this coordinate system, we can create a neighbouring graph representation, where each node in the equivalent undirected graph corresponds to a voxel in the in-situ data. The edges of the graph were computed following a simple neighbouring system taking only the 6 closest neighbours, one in each direction of the 3D space.
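The 6-neighbour system described above can be sketched directly from integer voxel coordinates: each voxel is linked to the voxels one step away along each axis, when present. The coordinates below are illustrative, not the actual dataset.

```python
# Build the 6-neighbour adjacency described in the text: each voxel,
# indexed by an integer (x, y, z) grid coordinate, is connected to its
# axis-aligned neighbours that exist in the dataset.

def neighbour_graph(coords):
    """coords: iterable of (x, y, z) integer tuples.
    Returns a dict mapping each voxel to the list of its present
    6-neighbours (one step in each direction of 3D space)."""
    voxels = set(coords)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    graph = {}
    for (x, y, z) in voxels:
        graph[(x, y, z)] = [
            (x + dx, y + dy, z + dz) for dx, dy, dz in offsets
            if (x + dx, y + dy, z + dz) in voxels
        ]
    return graph
```

Voxels on the boundary of the brain volume simply have fewer than 6 neighbours, which is how irregular tissue shapes are accommodated.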
Clustering method
Markov random fields (MRF) are statistical models that provide a way of modeling entities composed of multiple discrete sites such as images where each site is a pixel or, in our case, a biological tissue where each site is a single voxel roughly corresponding to a cell, in a context-dependent manner [19]. MRF based methods find their roots in the field of statistical mechanics as the Ising model [14] and its generalization, the Potts model [15]. Since then, they have been and are still mainly used in the field of image analysis, and the literature about them is ever growing [20][21][22]. More specifically, MRF methods are found in a wide range of applications such as image restoration and segmentation [23], surface reconstruction [24], edge detection [25], texture analysis [26], optical flow [27], active contours [28], deformable templates [29], data fusion [30] and perceptual grouping [31]. MRFs have also been used in a variety of biological applications from analysing medical imaging data [23,32,33] to analysing networks of genomic data [34]. Additionally, the Cellular Potts Model [35] has been used to model tissue development at a sub-cellular resolution.
Author Summary
Tissues within complex multi-cellular organisms have historically been defined in terms of their anatomy and function. More recently, experimental approaches have shown that different tissues express distinct batteries of genes, thus providing an additional metric for characterising them. These experiments have been performed at the whole tissue level, with gene expression measurements being "averaged" over millions of cells within a tissue. However, it is becoming apparent that even within putatively homogeneous tissues there exists significant variation in gene expression levels between cells, suggesting that additional cell subtypes, defined by distinct expression profiles, might be obscured by "bulk" experimental approaches. Herein, we develop a computational approach, based upon Markov Random Field models, for clustering cells into cell types by exploiting their gene expression profiles and location within the tissue under study. We demonstrate the efficacy of our approach using simulations, before applying it to identify known and putatively novel cell types within the brain of the ragworm, Platynereis dumerilii, an important model for understanding how the Bilaterian brain evolved.
Mathematically, MRF models are built around two complementary sub-models. The field represents the sites and their spatial structure. The Hammersley-Clifford (1971) theorem states that the probability distribution of the Markov field can be represented as a Gibbs measure, which incorporates an energy function into which the spatial coherency parameters of the model are incorporated. Some critical choices in terms of the modeling framework are the structure of the neighbourhood system and the energy function. The emission model is used to describe the underlying data (gene expression measurements in our case) and it is necessary to make some assumptions about its form depending upon the underlying data.
In our study the goal is to allocate the S = 32,203 voxels described above into K clusters, where K is unknown, using the binarised matrix of M = 86 gene expression measurements, Y. To incorporate spatial information into our clustering scheme, we assume that Z, the (latent) vector of length S that describes the allocation of voxels to clusters, satisfies a first-order Markov Random Field (MRF), where the probability that a voxel is allocated to a given state depends only upon the states of its immediate neighbours. Additionally, within cluster h (h ∈ 1,…,K), we assume that the expression of gene m follows a Bernoulli distribution with parameter θm,h; we denote the full set of Bernoulli parameters using the M × K matrix Θ. In a typical MRF, the degree of spatial cohesion is determined by a single parameter β, which is assumed to be constant for all clusters [36,37]. However, in the context of tissue organisation, it is reasonable to expect that the degree of spatial cohesion will differ between clusters; consequently, we estimate a separate value of β for each of the K clusters (Methods). To estimate the parameters we use a fully-factorized variational expectation maximization (EM) approach in conjunction with mean-field approximations to infer intractable values [16]. To choose the optimal number of clusters, K, we use the Bayesian Information Criterion (BIC).
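The per-voxel mean-field step implied by this model can be sketched as follows: the updated soft assignment of a voxel combines the Bernoulli log-likelihood of its binary profile under each cluster with a spatial term that weights the current soft assignments of its neighbours by the cluster-specific β. This is an illustrative simplification of the variational EM described in the text (it omits mixing weights and assumes θ strictly between 0 and 1), not the authors' implementation.

```python
# Minimal sketch of one mean-field update for a single voxel in the
# Bernoulli-emission MRF, with a per-cluster spatial coherency beta.
import numpy as np

def mean_field_update(y, q_neighbours, theta, beta):
    """y: (M,) binary expression profile of one voxel.
    q_neighbours: (n_nb, K) current soft assignments of its neighbours.
    theta: (M, K) Bernoulli parameters, entries in (0, 1).
    beta: (K,) cluster-specific spatial coherency parameters.
    Returns the updated (K,) soft assignment for this voxel."""
    # Bernoulli log-likelihood of the profile under each cluster: (K,)
    log_lik = y @ np.log(theta) + (1 - y) @ np.log(1 - theta)
    # Spatial term: beta-weighted sum of neighbour responsibilities: (K,)
    spatial = beta * q_neighbours.sum(axis=0)
    logits = log_lik + spatial
    logits -= logits.max()          # subtract max for numerical stability
    q = np.exp(logits)
    return q / q.sum()
```

Setting all entries of beta to 0 recovers the independent mixture model used as a baseline later in the text.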
Model validation and comparison with alternative approaches
Simulating data with a spatial component is a non-trivial problem. Existing methods rely on MCMC approaches as described in [38]. However, in our case, with a relatively large number of nodes in the graph (≈32,000), this is computationally expensive. To overcome this problem, we exploited the fact that the Platynereis dataset already possesses a spatial structure, and used it as a synthetic example on which to base our simulations. As outlined in Figure 4, we start by clustering the gene expression data using different values of K and store the corresponding parameter estimates. Subsequently, the estimated Bernoulli parameters, Θ, were used to simulate binarised gene expression data from K clusters where, for each voxel contained within cluster h, the expression of gene m is simulated from a Bernoulli distribution with parameter θm,h (Figure 4).
Next, each simulated voxel was assigned to the same spatial location as the corresponding voxel in the biological dataset. As a result, the simulated and the biological datasets have the same neighbouring graph. We can then cluster these simulated datasets using the method outlined above and determine how accurately we can estimate the parameters (β, Θ) and choose the correct number of clusters, K.
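The simulation scheme above reduces to a single vectorised draw: given the fitted cluster label of each voxel and the Bernoulli matrix Θ, a fresh binary expression profile is sampled per voxel. A sketch under those assumptions (labels and Θ below are illustrative):

```python
# Simulate binarised expression data from fitted Bernoulli parameters,
# as in the simulation scheme described in the text.
import numpy as np

def simulate_expression(labels, theta, seed=0):
    """labels: (S,) cluster index per voxel; theta: (M, K) Bernoulli
    parameters. Returns an (M, S) binary matrix of simulated data."""
    rng = np.random.default_rng(seed)
    probs = theta[:, labels]   # (M, S): per-voxel success probabilities
    return (rng.random(probs.shape) < probs).astype(int)
```

Because each voxel keeps the spatial location of its biological counterpart, the simulated data inherit the real neighbouring graph without any MCMC sampling.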
The most important criterion for assessing the efficacy of our approach is the similarity between the inferred and true clusters. This also implicitly assesses the accuracy of the estimation of Θ: if the inferred and true clusters are identical, the estimates of Θ must be equal to the true values. In practice, we used the Jaccard coefficient to compare the inferred and the true clusters (Methods), where a Jaccard coefficient of 1 implies perfect agreement. To benchmark our approach's performance, we also assessed the ability of two other models to cluster the simulated data: hierarchical clustering (hClust), a very widely used approach in genomics and elsewhere, and an independent mixture model, which allows the relative improvement in performance added by the spatial component to be studied.

[Figure 2 caption: Wholemount in-situ hybridization expression data for 86 genes in the full brain of Platynereis. The whole larva is hybridized with two dyed probes targeting specific mRNAs, one corresponding to a reference gene and the other to a gene of interest. Using confocal microscopy, the whole larva is visualized slice by slice and the dyed regions are reported with laser light reflecting back to the detector. Every image is then divided into squares roughly one cell across, which allows the reconstruction of the 3D map of expression for the two genes in the full brain. The process was repeated 86 times for key genes in Platynereis development [3]. doi:10.1371/journal.pcbi.1003824.g002]
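A pairwise-agreement Jaccard coefficient of the kind described above can be computed by counting, over all pairs of voxels, those co-clustered in both partitions versus those co-clustered in at least one; this is one standard formulation and is label-permutation invariant (a sketch; the paper's exact definition is in its Methods).

```python
# Jaccard coefficient between two clusterings, computed over pairs of
# items: pairs grouped together in both partitions, divided by pairs
# grouped together in at least one.
from itertools import combinations

def jaccard(labels_a, labels_b):
    both = either = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a and same_b:
            both += 1
        if same_a or same_b:
            either += 1
    return both / either if either else 1.0
```

A value of 1 means the two partitions group exactly the same pairs together, regardless of how the clusters are labelled.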
Additionally, the likelihood function that needs to be maximised possesses many stationary points of different natures. Thus, convergence of the Expectation-Maximisation algorithm (see Methods section) to the global maximum depends strongly on the parameter initialisation. To overcome this problem, different initialisation strategies have been proposed and investigated (see for instance [39][40][41]). Herein, we compare a random initialisation scheme with an initialisation based upon the solution obtained by applying hClust.
The results of these experiments are shown in Figure 5 for K ∈ [4, 70]. Our method, when used with a random initialization scheme (Methods), has an average Jaccard coefficient of 0.8, and clearly demonstrates better performance than the other methods. The second best performing method is the independent mixture model with a random initialization, which has an average Jaccard coefficient of 0.7. Since the independent mixture approach is equivalent to the MRF with all the β parameters set equal to 0 (i.e., without a spatial component), this suggests that accounting for the spatial aspect yields improved results. Given this, it is perhaps unsurprising that hClust also performs relatively poorly. Additionally, we note that initializing the MRF with the hClust output yields results that are superior to those generated by hClust but that are still poorer than either the randomly initialized independent mixture model or the MRF approach. This is likely explained by noting that, depending upon the initialization, the EM algorithm might converge to a local maximum. Consequently, for the rest of this study we use the random initialization strategy to initialize the EM algorithm.
As well as directly comparing the clusters, we can also determine how accurately the β parameters are estimated. To this end, in Figure 6 we compare the true and inferred mean values of β for different values of K. The values of β increase with K, which is to be expected since more clusters implies the existence of more transition areas, thus making an increase of β necessary to maintain the optimal spatial coherency of the model. Figure 6 also shows a slight but consistent underestimation of β. This can be explained by noting that the simulation scheme used may reduce the spatial coherency within clusters. Specifically, as illustrated in Figure 7, clusters may not display homogeneous expression of a given gene: instead, depending upon the value of θ, a gene will be expressed only in a fraction of voxels. In reality, the voxels in which such genes are expressed may have a coherent spatial structure within the cluster that is lost in the simulation, thus explaining the consistently smaller values for β that are estimated. To confirm this, we performed a second simulation using the parameter values estimated from the first simulation as a reference. In this context we did not expect any further loss of spatial coherency, which was indeed confirmed as shown by the blue curve in Figure 6.
To validate further our estimation of β, we randomized the coordinates of the voxels to remove any spatial component before reclustering the data. As expected, we observed that the estimates of β were very close to 0 for all clusters (Figure 6), and the Jaccard coefficient values (relative to the true values) for the independent mixture and the MRF model were very similar. Both of these observations support our assertion that the spatial component plays an important role in the fit.
Finally, we assessed the ability of the model to choose the correct number of clusters, K. To do this, we noted the "true" number of clusters underlying the simulated data and compared this with the chosen value, K̂. The results for two representative choices of K are shown in Figure 8 and demonstrate that our clustering approach, in conjunction with the BIC, is able to accurately determine the optimal number of clusters.
Biological interpretation
After validating our method using simulated data, we next studied the biological meaning of each of the K = 33 clusters generated by applying the HMRF model to the real data. To do this, we combined each cluster's spatial location with its corresponding expression parameter θ_h = (θ_{1,h}, ..., θ_{M,h}). The latter parameter allows a stereotypical expression "fingerprint" to be associated with every cluster.
In practice, not all of the 86 genes will provide insight into the biological function of a given cluster. For instance, in the case of a ubiquitously expressed gene, g, the value of θ_{·,g} will be high for all clusters. To overcome this problem, we developed a score, s_{m,h}, for each gene m and each cluster h: s_{m,h} is large if gene m is specific to cluster h. Consequently, the top scoring 3 or 4 genes for each cluster will represent a specific stereotypical expression pattern that will help us infer or confirm the identity of the functional tissue represented by each cluster.
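The exact formula for the specificity score does not survive in this extraction, so the sketch below uses one simple illustrative choice (each θ_{m,h} normalized across clusters), not necessarily the authors' definition, to show how such a score downweights ubiquitously expressed genes:

```python
def specificity_scores(theta):
    """Illustrative specificity score (the paper's exact formula is not
    reproduced here): s[m][h] is theta[m][h] divided by the sum of
    theta[m][.] over all clusters, so a ubiquitously expressed gene
    (high theta in every cluster) scores low everywhere, while a gene
    expressed in only one cluster scores high there.

    theta[m][h] = P(gene m expressed | cluster h).
    """
    scores = []
    for row in theta:
        total = sum(row)
        scores.append([v / total for v in row])
    return scores

# Row 0: ubiquitous gene; row 1: gene specific to cluster 0
theta = [[0.9, 0.9, 0.9],
         [0.9, 0.05, 0.05]]
s = specificity_scores(theta)
```

Ranking the genes by s[·][h] within a cluster h then surfaces the 3 or 4 most cluster-specific "fingerprint" genes described in the text.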
To provide confidence in our approach, we first considered well characterised regions within the Platynereis brain. Arguably the best-studied regions of the brain in Platynereis are the eyes: the brain has 4 eyes, two larval and two adult, and their locations and expression fingerprints are well known. As shown in Figure 9A, our approach generates two spatially coherent clusters that correspond to each of these regions. Importantly, the genes that best characterise these clusters are biologically meaningful: rOpsin and rOpsin3, both members of the well-described opsin family of photosensitive molecules [42,43], best distinguish the adult eye and larval eyes respectively, consistent with the in-situ data images shown in Figure 10. As well as the eyes, a second region of the Platynereis brain, the mushroom bodies (which corresponds to the pallium, layers of neurons that cover the upper surface of the cerebrum in vertebrates [3]), are also clearly identified by our approach ( Figure 9B).
As well as identifying clusters corresponding to known cell types, we also identified clusters that might correspond to less well studied subtypes with specific biological functions. In Figure 11, the green cluster defines a region on the basal side of the larvae that can be associated both by its localization and by its most representative genes (MyoD [44,45] and LDB3 [46,47]) with the starting point of the developing muscles of the adult animal. Indeed, MyoD has been shown to play a key role in the differentiation of muscles during development in vertebrates and invertebrates [44,45] and LDB3 codes for the protein LDB3, which interacts with the myozenin gene family that has been implicated in muscle development in vertebrates [47].
Given the location of the eyes and the developing muscles, the location of the pink cluster in Figure 11 is interesting. This cluster surrounds the larval eyes and the adult eyes and reaches the hypothetically developing muscles described above. Looking at the most representative genes for this pink cluster, it is interesting to note the presence of Phox2, a homeodomain protein that has been shown to be necessary for the generation of visceral motorneurons (neurons of the central nervous system that project their axons to directly or indirectly control muscles) as described generally in [48] and in Drosophila [49].

Figure 6 caption (continued). To confirm that this underestimation comes from the simulation scheme and not the clustering method, we used the simulated data as the reference to generate a "second generation" of simulated data, suppressing the simulation scheme bias (see Figure 7). The results of this re-simulation are shown by the blue dots, which exhibit no underestimation of β. Finally, the brown dots represent the mean value of β on the same simulated data but spatially randomized; as expected, the β are now estimated to be 0. doi:10.1371/journal.pcbi.1003824.g006

Figure 7. Decrease in spatial coherency due to the simulation scheme. For an example cluster h, gene m may only be expressed in half of the voxels. This will yield θ_{m,h} = 0.5. However, in the biological data, the voxels expressing gene m may be spatially coherent (i.e., located close to one another), leading to a reduced area of expression discontinuity (the green line). By contrast, in the simulated data the expression of such a gene will lose its spatial coherency, leading to an increased area of expression discontinuity. The number of voxels having a neighbour with some differences in the gene expression pattern is directly linked to the value of β_h through the energy function (Methods). This explains the underestimation of β observed in Figure 6. doi:10.1371/journal.pcbi.1003824.g007
The second most representative gene, COE, has also been shown to play a role in Platynereis and Drosophila neural tissue development [50]. In this context, although we lack biological validation, we can hypothesise that the cells within this particular cluster could be developing neurons that link the eyes to the muscles of Platynereis. Although this hypothesis remains purely speculative and would need validation in the laboratory, we believe this example is an interesting proof-of-concept that our clustering method can prove useful for hypothesis generation. Indeed, the analysis of the parameter values and the spatial localization attached to the clusters has allowed us to place with a reasonable level of confidence a functional hypothesis about a tissue that was not clearly defined either spatially or functionally. It is also interesting to note that hClust does not separate either putative region when clustering the same data with the same number of clusters.
When we used an independent mixture model approach (i.e., with no spatial component) to cluster the data, the results were more comparable to those obtained when using the HMRF strategy. However, as can be observed when comparing Figures 12 and 11, the clusters generated via the independent EM approach are considerably noisier and, as expected, less spatially coherent than those generated by the HMRF model. Further, for the developing muscle region, this noise is linked to biological imprecisions. When compared to in situ data generated by Fischer et al. [17], who used a phalloidin in situ stain to investigate the location of the muscles at this developmental stage, it can be observed that the muscles are restricted to regions located away from the axes of symmetry, more consistent with the HMRF clustering output. Similarly, the independent mixture model method associates with the hypothesized region of developing neurons around the eyes some ventral areas that seem unlikely to belong to the same subtissue. Consequently, it seems likely that the HMRF not only performs better than the independent mixture model on simulated data but also better reflects the underlying biology.

Figure 10. In-situ hybridization image for rOpsin and rOpsin3 in the full brain at 48hpf (apical view). Z-projection of the expression of rOpsin (red) in both the adult eyes and the larval eyes, rOpsin3 (green) specifically in the larval eyes, and co-expression in some areas of the larval eyes in the full brain of Platynereis at 48hpf. This image was obtained directly from the data obtained in [3]. doi:10.1371/journal.pcbi.1003824.g010
Data binarization
As shown in Figure 3, we overcame problems linked to light contamination by binarizing the "quantitative" luminescence information. To do this, it is necessary to specify a threshold above which a gene is considered expressed. Ideally the same threshold would be applied to all genes; however, when we examined the density plots of light intensities for each gene we observed significant differences that rendered such an approach impossible. Specifically, for some genes, the density of intensities clearly separated the voxels into two groups, corresponding to those where the gene is expressed and unexpressed, respectively (Figure 13 (left)). For the remaining genes, however, the density plot was diffuse, with no clear separation of the voxels into expressed and unexpressed clusters (Figure 13 (right)). Consequently, we binarized each gene manually by choosing an optimal threshold based upon inspection of the raw fluorescent microscopy images. This is possible since the number of genes under study is relatively small. However, as the number of genes for which data is available increases (as will be the case, for example, with single-cell RNA-sequencing studies), an automated method, perhaps based upon mixtures of Gaussians in the context of the WiSH data, will be required. Importantly, if the noise level in single-cell expression datasets decreases to the extent where we can safely consider the results as quantitative, our method can easily be transformed to take this feature into account. The general outline of the model will stay exactly the same; the change will occur in the emission distribution. Instead of representing a Bernoulli parameter for each gene and each cluster, each θ_{m,h} could instead represent the parameter of a Poisson distribution.
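The Gaussian-mixture alternative mentioned above can be sketched as follows. This is an illustrative EM fit of a two-component 1D mixture used to derive a threshold automatically; it is not part of the published pipeline:

```python
import numpy as np

def gmm2_threshold(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by EM and return a
    binarization threshold: the smallest intensity at which the
    high-mean component becomes more likely than the low-mean one
    (evaluated on a grid). The normalisation constant 1/sqrt(2*pi)
    cancels everywhere, so it is omitted.
    """
    x = np.asarray(x, dtype=float)
    # crude initialisation: low/high quartiles as component means
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sd = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    lo, hi = (0, 1) if mu[0] < mu[1] else (1, 0)
    grid = np.linspace(x.min(), x.max(), 1000)
    d = pi * np.exp(-0.5 * ((grid[:, None] - mu) / sd) ** 2) / sd
    cross = grid[d[:, hi] > d[:, lo]]
    return cross[0] if len(cross) else mu.mean()

# Two well-separated intensity populations: the threshold lands between them
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.1, 0.03, 500), rng.normal(0.7, 0.05, 500)])
t = gmm2_threshold(x)
```

A gene would then be called "expressed" in a voxel whenever its intensity exceeds the fitted threshold, replacing the manual inspection step for large gene panels.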
Validity of the model's independence hypothesis
In our model we assume that, conditional upon the allocation of a voxel to a cluster, the gene expression levels can be described by independent Bernoulli distributions. This is a reasonable assumption in the context of the 86 genes chosen by Tomer et al. [3], since they were selected to have largely orthogonal expression profiles. In other words, they were chosen because they were known to correspond to distinct and potentially interesting regions of the Platynereis brain. However, in many other settings a larger number of genes, many with correlated expression profiles (i.e., genes in the same regulatory network), will be profiled, and this assumption will be invalid. Consequently, extending the model to allow for dependence structure in the emission distributions will be a critical challenge. Additionally, as the number of genes increases, our approach for choosing the most specific genes will become less practical. Instead, entropy based approaches, such as the Kullback-Leibler divergence, might be more suitable.
Summary
In summary, we have illustrated, using both simulations and real data, that accounting for spatial information significantly improves our ability to cluster voxels roughly representing brain cells into coherent and biologically relevant sets. While our approach converges very quickly (on the order of minutes) for the motivating dataset described herein, as the volume of data increases (i.e., by assaying the expression levels of thousands of genes in each cell using single-cell RNA-sequencing) it will be important to carefully investigate how easily our model scales. Nevertheless, we anticipate that our method will play an important part in facilitating interpretation of single-cell resolution data, which will be an increasingly important challenge over the next few years.
Methods
In this section we describe the Hidden Markov Random Field based approach that we developed to cluster the in-situ hybridization data into K clusters (K ∈ [2, ∞)). Subsequently, we will describe our approach for choosing K.

Figure 11. A putative tissue of developing neurons between the eyes and the larva's developing muscles. The yellow and red clusters are the eyes as seen in Figure 9. The green cluster represents the developing muscles on the basal side of the larva, as the location and the most specific genes strongly suggest. The pink cluster is a putative tissue that makes an interesting link between the eyes and the muscles. The most representative gene of this tissue is Phox2, a homeodomain protein required for the generation of visceral motorneurons in Drosophila [49]. doi:10.1371/journal.pcbi.1003824.g011
Let y_i = {y_{1,i}, ..., y_{M,i}} be the gene expression measurement for each voxel i ∈ S = {1, ..., N}, where M is the number of considered genes. Originally, 169 gene expression patterns were generated, but due to experimental constraints (confocal laser microscopy artefacts and high background noise in some samples) we filtered out 83 of those genes to create a gold standard dataset with M = 86 manually validated genes. Our goal is to assign each voxel, i, to one of the K possible clusters. We define a set of discrete random variables Z = {Z_i, ∀i ∈ S} that represents the cluster each voxel is assigned to. Each Z_i takes a value in {1, ..., K} denoting the K possible clusters. The aim of the method is to restore the unknown clustering structure with regard to gene expression similarity as well as spatial dependencies between voxels. To do this, we assume that the Z_i are dependent variables and we encode the spatial relationship using a neighbourhood system defined through a graph G.
In this work, we use a first order neighbourhood system, i.e., the 6 closest sites. The set of voxels is then represented as a graph G with edges projecting from each voxel to its closest neighbours. The dependencies between neighbouring voxels are modelled by assuming that the joint distribution of {Z_1, ..., Z_N} is a discrete MRF, where W(β) is a normalising constant summed over all the possible configurations z, which soon becomes intractable as the number of sites increases, β is a set of parameters, and H is the energy of the field. This energy can be written as a sum over all the possible cliques of the graph G. We restrict this summation to pairwise interactions, so that H is a sum of pair-wise potentials V_ij, i.e., the dependency between z_i and z_j for two neighbouring voxels i and j. The Potts model [15], traditionally used for image segmentation, is the most appropriate discrete random field of the form of (1) for clustering, as it tends to allocate neighbours to the same cluster, thus increasing the spatial coherency. The Potts model is defined by an energy function H with β the interaction parameter between two neighbours. Note that the greater the value of β, the more weight is given to the interaction graph (i.e., there is more spatial cohesion). Although this feature is appealing for clustering, the standard Potts model penalizes the interaction strength in different clusters with the same penalty. In practice, given the nature and the biological context of our data, it may be more appropriate to allow cell types that are more spatially coherent to have a higher value of β (i.e., a stronger interaction) than other cell types; in other words, to use adaptive smoothness related to the type of cells in the cluster. To this end, we propose a variant of the Potts model in which each cluster h has its own parameter for interaction strength.
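The equations for the MRF prior do not survive in this extraction. A standard reconstruction, consistent with the surrounding description but to be checked against the original paper, is:

```latex
% MRF prior over the labels (Eq. 1), with W(\beta) the intractable
% normalising constant:
P(z \mid \beta) = W(\beta)^{-1} \exp\{-H(z,\beta)\}, \qquad
H(z,\beta) = \sum_{i \sim j} V_{ij}(z_i, z_j)

% Standard Potts energy: one shared interaction parameter \beta,
% rewarding neighbouring voxels that share a label:
H(z,\beta) = -\beta \sum_{i \sim j} \mathbb{1}(z_i = z_j)

% Extended Potts energy proposed here: one \beta_h per cluster
% (the indicator is nonzero only when z_i = z_j, so \beta_{z_i}
% is unambiguous):
H(z,\beta) = -\sum_{i \sim j} \beta_{z_i}\, \mathbb{1}(z_i = z_j)
```

The third form makes the spatial smoothing adaptive: clusters representing tightly packed cell types can earn a large β_h, while diffuse clusters are not forced to the same coherency.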
For the model to be fully defined, we need to specify, besides the prior described above for the labels Z, the emission model. To this end, a Bernoulli distribution is used as the sampling distribution. The 86 genes selected can be considered as independent because they are all key genes in the development of the brain of P. dumerilii. Consequently, we can assume conditional independence of the observed variables y given the clustering z, which yields the log likelihood of the complete model. We denote the parameters of the model as ψ = {β, Θ}.
As mentioned before, our aim is to assign each voxel i to one of the K possible clusters. To do so, we chose to consider the Maximum Posterior Marginal (MPM), which maximizes P(Z_i = h | y, ψ), where the parameters ψ are unknown and need to be estimated. We solved this problem using the EM algorithm [51]. For HMRFs, contrary to independent mixture models, the exact EM cannot be applied directly due to the dependence structure, and some approximations are required [13]. We chose to use approximations based on the mean field principle [16]. We used this to approximate the posterior probabilities t_ih^(l) that voxel i belongs to cluster h at iteration (l). We also used a mean-field approximation to approximate the value of the intractable normalizing constant W(β) (details are given in the Appendix). Once the t_ih^(l)'s are computed, we assign each voxel to the cluster h for which this posterior probability is the highest.
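The mean-field E-step described above can be sketched as follows. The exact update is given in Text S1 of the paper, so the simple "neighbour voting" form of the mean field used here is an assumption:

```python
import numpy as np

def mean_field_estep(log_emission, neighbors, beta, n_iter=10):
    """Mean-field approximation of t[i, h] = P(Z_i = h | y) for a
    Potts-type HMRF (a sketch, not the paper's exact update).

    log_emission[i, h]: Bernoulli log-likelihood of voxel i's gene
        expression vector under cluster h.
    neighbors[i]: indices of voxel i's (up to 6) graph neighbours.
    beta[h]: cluster-specific interaction strength.
    """
    n, k = log_emission.shape
    t = np.full((n, k), 1.0 / k)          # uniform start
    for _ in range(n_iter):
        for i in range(n):
            # each neighbour "votes" for its likely clusters,
            # weighted by that cluster's beta
            field = beta * sum(t[j] for j in neighbors[i])
            logp = log_emission[i] + field
            logp -= logp.max()            # numerical stability
            t[i] = np.exp(logp) / np.exp(logp).sum()
    return t

# Two neighbouring voxels whose emissions both favour cluster 0:
log_emission = np.log(np.array([[0.9, 0.1], [0.6, 0.4]]))
neighbors = [[1], [0]]
beta = np.array([1.0, 1.0])
t = mean_field_estep(log_emission, neighbors, beta)
```

After convergence, the MPM assignment is simply `t.argmax(axis=1)`, i.e., each voxel goes to the cluster with the highest approximate posterior probability.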
After the E step, maximizing ψ is relatively straightforward. For Θ, once the t_ih^(l) = P(Z_i = h | y; ψ^(l)) have been computed during the E-step, we use those probabilities to assign each voxel to its cluster at iteration (l). To maximize β^(l+1), we iteratively applied a gradient ascent algorithm (the ascending counterpart of gradient descent [22]) to the function R_z(β | ψ^(l)) for each β_h^(l+1), h ∈ {1, ..., K}. A detailed description of the algorithm is given in Text S1.
Choosing K
To select the optimal number of clusters we used the BIC [45], which finds the optimal number of clusters, K̂, by selecting the value of K that minimises its value. However, due to the symmetry of the brain we used a slightly different approach. As shown in Figure 14 (blue dots), the BIC does not reach a clear minimum when applied to all voxels in the brain but instead reaches a plateau after a given number of clusters. This is most likely due to the highly, but not perfectly, symmetrical nature of the brain: with a small K, the same "tissue" on both the left and the right hand side of the brain will belong to the same cluster. However, because the two sides of the brain are not perfectly symmetrical, as K increases the left and right parts of the same "tissue" will be clustered separately. As a result, the likelihood continues to increase sufficiently to explain the flattened BIC curve. Moreover, this hypothesis seems to be confirmed by the fact that when computing the BIC on the right and left sides of the brain separately, the curve in both cases has a clear minimum, as shown in Figure 14 (red and green dots). Given this, we opted to choose K̂ as the point where the BIC curve reaches a plateau.
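The BIC and the plateau rule can be sketched as follows; the tolerance-based plateau detector is an illustrative stand-in for the visual choice made from Figure 14:

```python
import math

def bic(log_likelihood, n_params, n_voxels):
    """BIC = -2 log L + p log N; lower is better, so the optimal K
    minimises this value."""
    return -2.0 * log_likelihood + n_params * math.log(n_voxels)

def plateau_k(bic_values, ks, tol=1.0):
    """Pick the first K after which the BIC stops improving by more
    than `tol` between consecutive values of K, i.e. where the curve
    flattens into a plateau."""
    for idx in range(1, len(bic_values)):
        if bic_values[idx - 1] - bic_values[idx] < tol:
            return ks[idx - 1]
    return ks[-1]

# BIC drops sharply until K = 4, then flattens: the rule picks K = 4
ks = [2, 3, 4, 5, 6]
bics = [500.0, 420.0, 380.0, 379.5, 379.2]
chosen = plateau_k(bics, ks)
```

On the split left/right hemispheres, where the BIC curve has a true minimum, `min(bics)` would suffice; the plateau rule is only needed for the whole, nearly symmetric brain.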
Data and code availability
The data are available as a binarized dataset of single cell gene expression data for the 86 genes in the brain of Platynereis dumerilii. An implementation of the EM algorithm in the C programming language is also available on the GitHub page of the project [52].
Supporting Information
Text S1 Mean field approximations and EM procedure.
We provide more information about the mean field approximation used to estimate both the conditional probabilities of a voxel belonging to a particular cluster given the parameter values as well as the intractable normalizing constant. We also present pseudo- code to outline the EM Mean-field algorithm used in our HMRF implementation. (PDF) | 8,806 | 2014-09-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture
The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving the findability of libraries' digital content on the library web. The study applies XML text-based protocol tools in the collection of data about libraries' visibility performance in book search results. A model integrating the Web Service Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) is applied to analyse SOAP as an element within the system. The results show that the developed SOAP application with a multi-tier architecture can help people simply access the website on the Gorontalo Province library server and supports access to digital collections, subscription databases, and library catalogs in each regency or city library in Gorontalo Province.
Introduction
The demand for advanced methods of search and information retrieval in libraries has grown rapidly due to the amount of data available in electronic libraries. In every "modern" library there is a clear demand for fine-tuning the performance of information retrieval systems. As in other libraries, advanced search and information retrieval have become an important need for the libraries of Gorontalo Province, Indonesia.
Nowadays, all districts/cities in Gorontalo Province already have a regional library, and book data access in each regional library has been equipped with an application. The local libraries of Gorontalo City and Gorontalo Regency already have an online system, while the four (4) other districts still use an offline system, so that people are required to come to the library to be able to access book data. This is less than optimal because, if the sought book does not exist in that library, people lose time, cost, and energy.
One solution to this problem is to build a website for each regional library in the district/city, but this is not flexible because the public would be required to visit every local library website to find the desired book. Another solution is to build a unified, web-based information system centralized at the provincial library, but this would violate the responsibility and authority for processing library resource data, which should remain with the respective local library. Various studies have addressed these issues by building applications that allow users to search the collections of cooperating libraries with grid data technology [1], but such applications conduct searches using the server computer in the provincial library to find data in each district/city library; when that server computer has a problem, data searches are disrupted.
Based on the problems described above, this research was conducted to develop a system using web service technology (the SOAP protocol) with a multi-tier architecture. A SOAP endpoint is placed in each respective local library, because SOAP is a text-based protocol that is used to wrap data and information within the framework of XML documents used on the Internet. SOAP defines a common message format for communication between applications, running on top of the HTTP protocol (SOAP over HTTP) [2]. The application architecture on each local library server uses a multi-tier architecture, in which the application on the server is divided into several units, called tiers, each of which can run on different computers. These units communicate with each other via a LAN (Local Area Network) or via the Internet. The simplest form of a multi-tier application is the three-tier model [3].
The purpose of this research is to build a system model based on the SOAP protocol on a multi-tier architecture to optimize the flexibility of accessing information and library resources. To achieve these objectives, the method used in this study is research and development, with the phases: data collection, literature study, system architecture development, system architecture testing, and system architecture improvement.
Simple Object Access Protocol (SOAP)
SOAP is a protocol for sending or receiving messages formatted in XML; it is a text-based protocol used to wrap data and information within the framework of XML documents used on the Internet. SOAP defines a common message format for communication between applications running on top of the HTTP protocol (SOAP over HTTP) [2]. SOAP specifies clearly how to encode an HTTP header and an XML file so that a program on one computer can call a program on another computer and receive responses. SOAP is a lightweight protocol for the exchange of information aimed at decentralized and distributed structures. Figure 1 shows the basic SOAP structure without header elements.
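A minimal illustration of the SOAP message format described above, using only the Python standard library; the `findBook` operation and its namespace are hypothetical, not the actual Gorontalo service contract:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace for a regional library's book-search
# operation; the real service contract is not reproduced here.
SVC_NS = "http://library.example/booksearch"

def build_find_book_request(title):
    """Wrap a findBook call in a minimal SOAP 1.1 envelope (Body only,
    no header elements, matching the basic structure of Figure 1)."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}findBook")
    ET.SubElement(call, f"{{{SVC_NS}}}title").text = title
    return ET.tostring(env, encoding="unicode")

envelope = build_find_book_request("Pemrograman Web")
# The receiving tier parses the Body to recover the requested title:
root = ET.fromstring(envelope)
title = root.find(f".//{{{SVC_NS}}}title").text
```

In the deployed system this envelope would be POSTed over HTTP to the middle-tier server of each regional library, which is exactly what "SOAP over HTTP" means in the text.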
Web Service Description Language (WSDL)
WSDL is an XML-based language that provides a model to describe web services; it is usually combined with SOAP and XML web services and then provided on the network, and was developed by IBM, Ariba and Microsoft [4]. WSDL is an XML format published to explain a web service. WSDL defines:
a. the messages (both abstract and concrete) that are sent to and from the web service;
b. the collections of messages (port type, interface);
c. how the port type is bound to the wire protocol used;
d. where the service is located.
By using an XML grammar, WSDL defines a web service with the elements of the conceptual representation of a WSDL document shown in Figure 2.
Universal Description, Discovery and Integration (UDDI)
UDDI is a directory of WSDL web services that provides publication, search and retrieval in a global registry; it is a building block that enables an organization or user to search for a service and exchange data quickly with other organizations using open standards [5]. The UDDI interaction is shown in Figure 3.
Multi-tier
A multi-tier application is an application which is divided into several units, called tiers, which can run on different computers. These units communicate with each other via a LAN (Local Area Network) or via the Internet. The simplest form of a multi-tier application is the three-tier model. Based on this model, the application is divided into three parts [3]:
1. Client application (front tier): the unit used by the user.
2. Application server (middle tier): this unit provides services that can be used by the client application to access data; it is the liaison between the client application and the database server.
3. Database server (back tier): this unit is responsible for managing data storage, usually an RDBMS. An existing database server product is used, so only the client application and the server application need to be created.
In a more complex multi-tier application, more than one application server unit may be required. For example, a special unit may be needed to deal with security issues, or to act as a bridge connecting a variety of database servers or multiple platforms. Figure 4 shows the multi-tier architecture.
System Design
The use case diagram shows the interaction between the use cases and 5 actors: member, admin/operator, the inspection and procurement committee, the head of the library, and BPKAD. The interactions between actors and the system are shown in Figure 5. 3. The inspection and book procurement committee has access rights to log into the system, enter book data, input distributor data, and validate the list of books to be purchased/ordered. 4. The head of the library has access rights to log into the system and view all reports (for example: lending reports, return reports, fine reports, and book procurement reports). 5. The Board of Finance and Asset Management (BPKAD) has the same authority as the head of the library.
However, its view of reports is restricted to book procurement reports only.
Model of communication between the libraries in general
Communication between the libraries uses a two-way data-exchange architecture, in which every book search initiated by one of the libraries is carried out across all interconnected local libraries. Figure 6 shows the relationships between the libraries, where every library can communicate with the others using web service technology (the SOAP protocol). A request from one library is passed to the targeted regional library, which responds with whether a specific book is available or not. The result of this response is then displayed in the client application of the library searching for the book data.

Figure 6. General architecture of communication between the libraries
Model of communication between the libraries in detail
The request and response process of the architecture described in Figure 6 is described in detail in Figure 7. When a user at a local library wants to find a book, the application first searches for the book on the server of the local library the user is visiting. If the book is not found there, that server directs the search process to the library servers of the other areas. The search results are then displayed by the client application.
The step-by-step process of finding books is as follows:
1. The user comes to the local library to find a book to read/borrow by accessing a computer.
2. The accessed computer acts as the computer running the client application.
3. The computer searches for the book on the server computer in that library.
4. If the book being sought does not exist, the client application accesses the WSDL of each local library's server computer.
5. The search results are displayed on the computer, in the form of the books and the regional libraries that hold them.
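The five-step flow above can be sketched as a local-first search with remote fallback. The library names and the in-memory callables standing in for the SOAP clients are hypothetical; a real client would call each library's WSDL-described service over HTTP:

```python
def find_book(title, local_catalog, remote_services):
    """Local-first book search with remote fallback (steps 3-5 above).

    local_catalog: the catalogue of the library the user is visiting.
    remote_services: maps a regional library name to a callable that
        stands in for its SOAP client and returns a book record or None.
    """
    # Step 3: check the local library's own server first
    if title in local_catalog:
        return [("local", local_catalog[title])]
    # Step 4: fan out to every other regional library's service
    hits = []
    for library, query in remote_services.items():
        record = query(title)
        if record is not None:
            hits.append((library, record))
    # Step 5: each hit pairs the regional library with the book record
    return hits

local = {"Laskar Pelangi": {"shelf": "F-12"}}
remotes = {
    "Gorontalo Regency": lambda t: {"shelf": "A-3"} if t == "Bumi Manusia" else None,
    "Boalemo": lambda t: None,
}
local_hit = find_book("Laskar Pelangi", local, remotes)
remote_hit = find_book("Bumi Manusia", local, remotes)
```

Because each regional library answers for its own catalogue, data management stays with the local libraries, which is the design constraint motivating the SOAP/multi-tier approach in the first place.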
Multi-tier model of each library
The architectural design of the application in each regional library uses a multi-tier architecture approach, where the top level is the application level accessed by users in the respective library. The middle level is the application used for accessing the database, so that the client application does not access the database directly. The application at this level uses the SOAP protocol, so that each library can use a database on any of a variety of platforms, with database management remaining the responsibility of each library. The lowest level is the database level. The architecture was designed with a separate computer at each level so that future development can be done easily: for example, when a new application is built at the middle level, the administrator does not need to change the security settings of the existing database, because the application only needs to be installed on the middle-level computer, and vice versa (changes to the database will not interfere with the settings already made at the middle level).
Loan and Return Transaction Application
The processes of borrowing and returning books must be carried out at a library that holds the desired book. The application allows library members to conduct loan transactions only while they are at the library that provides the book (when it is used at another library, the recording button for borrowing is disabled), and the same applies to returns. The application at any other library can only be used to see which books that library holds (not to borrow or return them). Figure 10 shows the borrowing and return view of the application.
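The enable/disable rule described above reduces to a single comparison. This is a hypothetical sketch; the function and parameter names are illustrative, not taken from the paper.

```python
def loan_controls_enabled(book_home_library, current_library):
    """The recording buttons for borrowing and returning are active only
    when the member is at the library that holds the book; at any other
    library the application is view-only."""
    return book_home_library == current_library

print(loan_controls_enabled("Regional Library A", "Regional Library A"))  # True
print(loan_controls_enabled("Regional Library A", "Regional Library B"))  # False
```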
Conclusion
Based on the results of the research conducted at the regional library offices and the discussion described previously, we can conclude the following: 1. Several regional libraries still rely on manual systems for book procurement, borrowing, and returns, so the related library management data are also kept manually; a policy is therefore needed for using IT in each local library to ease the management of users/members and the sharing of resources among the regional libraries. 2. The design of the new library management system uses an object-oriented approach, so development of the application should use object-oriented programming with a multi-tier architecture and an application server using the SOAP protocol; this makes communication and exchange of information resources between client applications of different library regions and platforms straightforward. To make optimal use of the multi-tier architecture, each regional library should provide three servers.
Composition of meiobenthonic Platyhelminthes from brackish environments of the Galician and Cantabrian coasts of Spain with the description of a new species of Djeziraia (Polycystididae, Kalyptorhynchia)
From 1997 to 1999, the fauna of free‐living Platyhelminthes of the rias ecosystem was studied along the Galician and Cantabrian coast in northern Spain. In total, 72 platyhelminth species are listed in this study. Forty‐two species represent new records for the Iberian Peninsula, three of which represent new genera records. A new species belonging to the genus Djeziraia (Polycystididae, Kalyptorhynchia), Djeziraia longistyla sp. nov., is described in this paper. In this broad‐scale study, a large data set (27 localities) from the estuaries of northern Spain allowed an analysis of the turbellarian species assemblages and of the relation of species distributions to salinity, conductivity, oxygen, temperature, and sediment characteristics. Species assemblages (species diversity) of each habitat of the brackish water ecotone are shown. The present study contributes to knowledge of the ability of free‐living Platyhelminthes to adapt to brackish water regimes.
Introduction
Research on the taxonomy and ecology of meiobenthonic organisms is well established for estuaries of northern and western Europe (Gourbault 1981; Warwick and Gee 1984; Ax 1991; Warwick et al. 1991; Mettam et al. 1994; Smol et al. 1994; Soetaert et al. 1994; Attrill and Thomas 1996; Blome 1996; Blome and Faubel 1996). In contrast, the meiobenthonic fauna of estuaries of the Iberian Peninsula is relatively unknown (Soetaert et al. 1995). Most studies in this region have been carried out in the delta of the Ebro River and deal with ecological aspects (Muñoz 1990; Ibañez et al. 1995, 1996, 1997; Guillen and Palanques 1997; Mikhailova 2003), whereas studies on the Galician and Cantabrian coasts of Spain have focused on the distribution and composition of meiofaunal and macrofaunal communities (Palacio et al. 1992; Curras et al. 1993; García-Álvarez et al. 1993; Lastra et al. 1993; Sanchez-Mata et al. 1993; Garmendia et al. 1996; Arroyo et al. 2004; Borja et al. 2004; Garcia-Arberas and Rallo 2004). In Europe, taxonomic investigations on free-living Platyhelminthes have mostly been conducted in aquatic environments of Fennoscandia and estuarine areas of the North Sea (Nasonov 1926; Luther 1943, 1960, 1962, 1963; Karling 1963, 1974; Den Hartog 1977; Armonies 1987; Ax 1956a, 1995; Faubel and Warwick 2005). Comparable investigations are relatively scarce in Spain. Until now, only a few species were known and described from the Iberian Mediterranean coast (Gieysztor 1931; Steinböck 1954).
Hence, the aim of the present investigation was to contribute to the knowledge of the species composition and distribution of estuarine brackish water turbellarians along the Galician and Cantabrian coast. A brief account is given of the relationships between species distribution, community structure, and the abiotic factors measured during the study. A new species, Djeziraia longistyla sp. nov., is described.
Material and methods
The northern region of the Iberian Peninsula, specifically the Galician and Cantabrian coast (Figure 1), is characterized by more than 26 rias of different sizes and depths. Rias are estuaries with important variations in salinity in the superficial waters due to evaporation or to the contribution of freshwater from rivers. These areas are usually very fertile, which, apart from the stratification, results in high oxygen consumption and therefore oxygen depletion, especially in the depressions on the estuary bottom (e.g. Ria de Vigo, Galicia) where anaerobic sediments have been deposited. In addition, the rias are subjected twice a day to tides, one of which is generally of greater amplitude. During high tide, the meiobenthonic organisms descend 5-10 mm into the sediment (Margalef 1989).
During the years 1997-1999, samples were taken (Project Fauna Iberica: SEUI-DGES PB95-0235) in the rias of the coastal region of northern Spain. Sixty-one sample sites were selected within the Comunidad de Galicia (March 1998 and March 1999), Comunidad de Asturias (October 1998), Comunidad de Cantabria (September 1997), and Comunidad de el País Vasco (July 1998). In addition to the sampling in the rias, the investigations were extended to the most important rivers contributing a great amount of freshwater to the system (e.g. Arnoia, Ribadil, Miño, Carvallo, Miñor, Oubia, Ferreras, Uncín, Ason, Ebro, Mieras, and Lea) (Table I; Figure 1).
The samples were obtained from sandy or muddy sediments in the eulittoral zone along the coastline and from the mouths of rivers during low tide, using a small shovel to take surface sediments of about 320 cm³ from an area of about 2068 cm² and a depth of 1-2 cm (samples 4, 5, 7-10, 13, 14, 18, 22-27; Table I; Figure 1). Three replicates were taken for each sediment sample. In general, below a sediment depth of 2 cm, the chemocline showed a strong contrast from aerobic to anaerobic sediment layers in the eulittoral and shallow sublittoral areas. Therefore, free-living Platyhelminthes were found only in the first 2 cm of sediment. Samples from stagnant waters (limnetic, brackish or marine), such as rock-pools covered with aquatic vegetation or detritus, were extracted with a plankton-net (mesh size 125 µm) (samples 12, 15, 16, 19-21; Table I). The Karaman-Chappuis method (Chappuis 1942) was applied to obtain samples from the gravel of river banks (samples 1-3, 6, 11, 17; Table I). The water obtained with this method (approximately 30 litres) was filtered with a sieve of mesh size 125 µm. The individual number of each turbellarian species was recorded for all samples taken.
The samples were deposited in storage jars and transported to the laboratory. Individuals were separated under a dissecting microscope. Sexually mature specimens were identified alive and in squash preparations, i.e. flattened under the increasing pressure of the cover slip as the preparation dries. For anatomical study of the hard structures such as stylets, whole mounts (polyvinyl-lactophenol) were prepared. For histological observation, specimens were fixed in Bouin's fixative. Sagittally prepared serial sections (4 µm thick) of sexually mature specimens were stained with Azan. Most of the individuals were also photographed (Video Graphic Printer UP-86 OCE) and filmed (Canovision EX-HI 8) under a microscope. The type material has been deposited in the Museo Nacional de Ciencias Naturales (MNCN), Madrid.

Figure 1. Location of the sample sites, with turbellarians (black points) and without turbellarians (white circles). The locality numbers correspond to those given in Tables I-III.
For characterization of the habitats, the abiotic factors of conductivity, temperature, oxygen concentration, and salinity (refractometric determination) were measured mainly in the water column (conductivity: CRISON CDTM-523; oxygen: YSI 33-SCT). Salinity regimes and classification were based on the Venice System (Karling 1974): limnetic (<0.5‰), oligohaline (0.5-5‰), mesohaline (5-18‰), polyhaline (18-30‰), euhaline (30-40‰).

Results

Table II shows the distribution of the principal species found in localities along the Galician and Cantabrian coastal regions. Microturbellarians were found in 27 of the 61 sampling sites; therefore, only these sites are considered in the faunistic account presented here. The total number of Turbellaria amounted to 72 species. Of these, 48 could be identified to species level, 13 could not be clearly identified (sexually immature juveniles) and are designated as ''cf.'', and 11 taxa could be taxonomically determined to genus level. One species found in the polyhaline area of Ria de Arosa is new to science (Djeziraia longistyla sp. nov., described below). All species found are new records for the Galician and Cantabrian coast, and 43 species represent new records for the Iberian Peninsula. The two sampling visits indicate high species richness for the area, with most species occurring in the freshwater/oligohaline and polyhaline/marine zones. No species is common to all the Spanish districts. Promesostoma marmoratum is the species with the broadest distribution, always found in habitats with salinities between 0.5 and 5‰. Another widely distributed species, found in oligohaline habitats, is Mecynostomum auritum. Forty-seven species were found at only one of the sample localities (Table II) and 18 species live exclusively in limnetic areas (up to 0.5‰).
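The Venice System bands quoted above amount to a simple classification by salinity. The helper below is an illustrative sketch, not part of the study's methods; the half-open interval boundaries are an assumption.

```python
def venice_class(salinity_permille):
    """Classify a salinity value (in per mille) under the Venice System
    bands used in the text: limnetic (<0.5), oligohaline (0.5-5),
    mesohaline (5-18), polyhaline (18-30), euhaline (30-40)."""
    if salinity_permille < 0.5:
        return "limnetic"
    if salinity_permille < 5:
        return "oligohaline"
    if salinity_permille < 18:
        return "mesohaline"
    if salinity_permille < 30:
        return "polyhaline"
    return "euhaline"

print(venice_class(0.2))   # limnetic
print(venice_class(22.0))  # polyhaline
```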
Meiobenthonic composition
In the study area, one species, Rhynchoscolex simplex, appears to be ubiquitous without any definite relationship to a specific substrate. However, Rhynchoscolex simplex is a characteristic representative of freshwater sands, although the species is also abundant in stony areas of running waters (ubiquist; Table III). The species settles in the sandy interspaces between the stones and pebbles.
Some species were captured in only one type of habitat, and are therefore characteristic of the limnetic, brackish, or marine areas. Among them, the most abundant species in the marine area was Archilina papillosa; in the limnetic environments, Stenostomum leucops and Microdalyellia tennesseensis; and in the brackish environments, Macrostomum rostratum.
On the other hand, the brackish habitats are characteristic overlapping zones for the marine and limnetic species, and 17 species captured in this study belong to these overlapping areas. Within this group we find three species originally described from marine and brackish environments: Archilopsis unipunctata, Monocelis lineata, and Promesostoma marmoratum.
Another group is formed by species that can be found in different types of substratum, such as Bothrioplana semperi, Microdalyellia fusca, Phaenocora unipunctata in limnetic habitats and, for example, Proxenetes flabellifer or Ptychopera westbladi (see Table III) within brackish water habitats.
The alien species group includes species that were found only once or in very small numbers, such as Nematoplana cf. nigrocapitula, a marine species captured in a limnetic habitat (brook at America Beach, Ria de Bayona), and Paramecynostomum diversicolor and Pelophila pachymorpha, marine species captured in brackish habitats.
The casual species represent a group of species with low individual number (sometimes only one individual). Their presence in the studied habitats is casual and their association with the locality cannot be asserted. Within this group we find species such as Archimonotresis limophila, Brunetia camarguensis, and Castrella truncata (Table III).
Djeziraia longistyla sp. nov.

Etymology. The species name refers to the long stylet of the male copulatory organ.
Description. Sexually mature individuals 0.9-1.2 mm long ( Figure 2A). Body conical, rounded posteriorly. Colourless or light grey due to subepidermal pigment layer; whitish at posterior end due to well-developed caudal glands (in incident light). Well-developed proboscis glands and brain between the proboscis and the pharynx (Figure 2A). Two dark eyes clearly separated, posterior to the proboscis and surrounded by the brain. The ciliary cover forms narrow furrows that run parallel to the longitudinal axis, only visible in slightly squashed individuals (Figure 2A, posterior end of the body).
The general organization of the epidermis and the basal membrane corresponds to the organization described for Djeziraia pardii Schockaert, 1971 (Schockaert 1971). The proboscis is very small (0.045 mm long in living animals) and lies at the anterior end of the body. The pharynx (0.112 mm diameter) is directed forward, and lies in the anterior half of the body.
Male reproductive system with two elongated testes dorso-laterally at the posterior end of the pharynx. The male copulatory organ is composed of an unpaired vesicula seminalis, a vesicula granulorum or prostatic vesicle, and a long slender stylet (Figure 2B, C). The spherical vesicula seminalis is surrounded by a thin epithelium and weakly developed diagonal muscle layers, while the prostatic vesicle is smaller and surrounded by a strong muscular layer consisting of spiral fibres. Both vesicles are clearly separated by a sphincter-like constriction. Extra-vesicular prostatic glands enter the prostatic vesicle proximally at the constriction. The stylet is a delicate, slightly curved tube, 250-300 µm long, with a terminal opening (Figure 2B, C).
In the female reproductive system, the bilateral vitellaria represent two large tubes running from the posterior end of the pharynx to the posterior end of the body (Figure 2A). The ovaries lie at the middle of the body, postero-laterally to the male copulatory organ. Ovaries connected with the single receptaculum seminis via short oviducts (Figure 2B). The receptaculum seminis communicates with the atrium genitale communis via the female duct (type II, after Artois and Schockaert 2005). The ovoid bursa is dorso-caudally located, and provided with a well-developed, muscular stalk or female duct type I and a distal common oviduct (Figure 2B). The female duct type I connects the bursa with the atrium genitale, while the common oviduct connects the bursa with the receptaculum seminis. It is not discernible whether the distal part of the common oviduct and the proximal part of the female duct type I join the distal bursa together or separately (Figure 2B). In one animal the bursa contains a ''ball'' of compact sperm. The uterus lies ventrally and its opening is just ventral to the opening of the female duct type II. Eggs were not observed in the studied animals.
Female and male atrial organs (male copulatory organ, female ducts I and II, and uterus) open independently into the latero-frontal section of the small atrium genitale communis (Figure 2B). A long narrow genital duct leads from the distal part of the genital atrium to the common genital pore. This pore is situated at the beginning of the last third of the body.

Figure 2. Djeziraia longistyla sp. nov. (A) General organization in dorsal view (from the living animal); (B) atrial organs; (C) stylet (photograph from the living animal). b, bursa; cg, caudal glands; cod, common oviduct; fd-I, female duct type I; fd-II, female duct type II; ga, common genital atrium; gd, genital duct; gp, gonopore; ov, ovary; ovd, oviduct; p, proboscis; pg, prostatic glands; ph, pharynx; pv, prostatic vesicle; rs, receptaculum seminis; st, stylet; te, testis; u, uterus; vi, vitellaria; vs, vesicula seminalis; vd, vitelloduct.
The most conspicuous difference between Djeziraia longistyla sp. nov., D. pardii, and D. euxinica is the length of the stylet. The two last-mentioned species have short stylets: 85 µm in D. pardii and 105-120 µm in D. euxinica. In D. longistyla sp. nov. the stylet is 250-300 µm long. The presence of a long, slender stylet (>150 µm) is shared with D. incana, which has a stylet of 162-198 µm (up to 295 µm in one specimen) in length. Based on stylet length, D. longistyla sp. nov. and D. incana are the most similar species within the genus Djeziraia. The differences between D. longistyla sp. nov. and D. incana concern the shape of the prostatic vesicle and the vesicula seminalis. Djeziraia longistyla sp. nov. shows a prostatic vesicle with a coat of strong spiral muscle layers, as in D. pardii, whereas D. incana has a thin-walled prostatic vesicle. As previously mentioned, the prostatic vesicle and the vesicula seminalis are clearly separated in D. longistyla sp. nov. by a sphincter-like constriction. This prominent division between the vesicles is present in D. pardii, but not in D. incana.
On the other hand, Djeziraia longistyla sp. nov. and D. incana share a common oviduct that is surrounded by a thick circular muscle layer, as is the case in D. pardii (''diverticulum'' after Schockaert 1971, or ''insemination duct'' after Artois and Schockaert 2001). However, the peculiar horns at the connection of the common oviduct to the receptaculum seminis (Schockaert 1971; Artois and Schockaert 2001) are absent in both D. longistyla sp. nov. and D. incana.
Environmental remarks
Relationships between environmental factors and turbellarian presence in the sample sites during the study are presented in Figures 3-5. The number of specimens is plotted against temperature, conductivity, and oxygen.
High conductivity values generally indicated high salinity. The sites with lower salinity values correspond to the brackish water ecotone. The substrates consisted of muddy, stony, or sandy sediments. In these areas, Turbellaria abundance showed the highest values (cf. Figure 3). The maximum value at site 12 was essentially caused by the high abundance of Macrostomum rostratum (>30 individuals).
The temperature recordings (Figure 4) reflect the actual values at the time of sampling, which varied between 11.6 and 27.8°C at different times of day.
Dissolved oxygen ranged between 1 and 10 mg l⁻¹ (Figure 4). The upper sediment layers (0-2 cm sediment depth) were well oxygenated and contrasted well against the lower greyish chemocline, in which the oxygen values rapidly declined to very low values. Low values of oxygen correspond with sites from which turbellarian species were generally absent or which were very poor in species richness (Figure 4). That holds for the limnetic and marine areas, but in the brackish water ecotone there were some sites (10, 12, 25, and 27) with low oxygen values and high abundance values. This may be a result of a favourable combination of abiotic and biological factors.
In Figure 5, the percentage of species in each of the different types of substrate is shown. Habitats consisting of sand or mud each accounted for 29% of the species. The dominant substrate covering the bottoms of the sites sampled was mud, sometimes blended with sand. In relation to the occurrence of sandy-gravel sites (5% of the total sample sites), the abundance of species appears to be relatively higher than in muddy habitats (Figure 4).

Figure 3. Number of turbellarians found in relation to the conductivity and to the salinity (conductivity data of the brackish localities were not considered, because they were punctual measurements).
With respect to species richness (Figure 4), a clear gradient occurs from habitats arranged according to sand, stony, sandy stony, sandy muddy, and muddy. The lowest numbers of species were found in sands and in stony habitats. However, it is generally the case that muddy sediments are most densely settled. These results suggest that the presence of ''Turbellaria'', at the species level only, is closely related to characteristic types of substrates.
Discussion
The sampling sites were located on the coast and riverbanks of the rias along the Galician and Cantabrian coasts, where the habitats range from limnetic (salinity between 0 and 0.5‰) to euhaline areas (e.g. Ria de Ajo, Ria de Rada, salinity of up to 30‰; Figure 3). The habitats considered as brackish (with variable salinity, between 0.5 and 30‰) are characterized by short-term salinity variations (shock-biotopes after Den Hartog 1964). These variations are normal occurrences in the rias. Therefore, only species with a high degree of adaptation to salinity variations would be expected to occur (Meixner 1943; Reuter 1961; Ax and Ax 1970).
For the distribution of the turbellarians and the colonization of brackish habitats, two environmental factors appear to be fundamental: the substratum and the salinity. Most of the species were found in sandy, sandy-gravel, or sandy-stony substrata (Figure 5). Consequently, the role of the interstitium as a tidal shelter and colonization medium appears to be confirmed (Karaman 1955; Ax 1956a; Riemann 1968; Karling 1974). On the other hand, muddy bottom is also an adequate habitat for Platyhelminthes, but only in the first few centimetres of sediment and with the simultaneous presence of green algae or periphyton (Figure 5; Table I).
In relation to the salinity and the studied turbellarian fauna, some species were exclusive to limnetic habitats, a fact that coincides with previous records (e.g.

Figure 5. Percentage of turbellarians in relation to the substrate type. M, muddy; S, sandy; SG, sandy-gravel; SM, sandy-muddy; SST, sandy-stony; ST, stony.