Determination of Factors Influencing the Land Use Change Trend in the Peri-Urban Area of Malang City Based on Cramér's V

The limited land available to accommodate the development of a city that has reached its saturation point gives rise to the urban sprawl phenomenon, which spills over into the peri-urban area. Malang City is the center of the Malang Raya agglomeration, one of the prospective metropolitan areas in East Java. The development of the peri-urban area of Malang City is also influenced by growth centers, both functionally and geographically. Viewed from the service centers, development is directed toward areas that fall within the service range of facilities, so the closer an area is to a facility center, the higher its potential to develop. Given this urgency, research on the factors that affect land use change in the peri-urban area, based on Cramér's V, is highly important.

INTRODUCTION
The rapid urban development of the peri-urban area and the increasing rate of population growth translate directly into demand for land. The limited land available to accommodate the development of a city that has reached its saturation point gives rise to the urban sprawl phenomenon, which spills over into the peri-urban area. The centrifugal force generated by the limited land in the core urban area has increased the complexity of activities in the peri-urban area. The response of the peri-urban area to this centrifugal force varies depending on the driving force behind it and the dominant activity functions in the bordering area.

Land use change is a change in the use or activity of land that differs from the previous activity [1]. It can be interpreted as the process of converting a previous land use into another, permanently or temporarily, and is a logical consequence of the growth and transformation of the social, economic, and physical structure of a developing region. Land use change is not merely a physical phenomenon of decreasing land area; it is a dynamic phenomenon involving aspects of human life, because it is directly related to changes in the economic, social, cultural, and political orientation of the community.

Malang City is the center of the Malang Raya agglomeration, one of the prospective metropolitan areas in East Java Province, and is part of the Gerbangkertosusila megapolitan area. A positive externality of this agglomeration is the conurbation of the downtown area with the peri-urban area, which increases the extent and order of the urban area. The increase in order and complexity benefits the development process because it is positively correlated with the growth of the urban economy.
Each peri-urban area responds differently to each centrifugal force that drives urban sprawl. Each factor, whether constraining or driving, shapes its own peri-urban character, because in this border zone urban and rural activities assimilate. The peri-urban area of Malang City shows various characters and responses to this centrifugal force: some parts lead to industrial activities, some to housing, and others to trade and service activities. The morphological patterns formed are also varied, ranging from ribbon (linear) development and leapfrog development to areas that have already formed conurbations. Identifying the factors that influence the formation of this peri-urban area is highly urgent because it provides a justification or basis for spatial planning, making it easier to direct trends toward the targets of the plan, or to capitalize on trends when the plan accommodates a positive tendency.

The dominant factor influencing the formation of the peri-urban area and its morphology is the transportation system. Transportation routes and nodes in a transportation system play a significant role in the development of city morphology [3]. Berry (1964) added that a transportation network in the form of a ring road has a major role in the development of the city, especially at intersections of the ring road with other roads, which produce "mini peaks" of land value that have the potential to become built-up areas. Accordingly, the development of this infrastructure significantly affects land use conditions and shapes the character of the peri-urban area. In addition, urban development in the peri-urban area is also influenced by growth centers in the form of the CBD. The CBD as a growth center confers a locational advantage on areas with a high level of accessibility to it. Since land values tend to be high in areas with high accessibility to facilities, urban development is positively correlated with this factor: the closer an area is to the facility, the higher the probability that it develops into a peri-urban area with a high complexity of activity.

The peri-urban phenomenon in Malang City is also strongly influenced by the dominant function of the adjacent core urban area, for example where the adjoining area is an industrial area. The development of the peri-urban area of Malang City is also influenced by growth centers, both functionally (industry) and geographically (concentration of facilities). A growth center in the form of an industrial area significantly affects the development of the city, because industrial activities generate demand for workers' settlements, so the closer an area is to the industrial center, the higher its potential to develop. Viewed from the service centers, development is directed toward areas that fall within the service range of facilities, so the closer an area is to a facility center, the higher its potential to develop.
Based on the urgency above, research on land use change in the peri-urban area is very important, especially as a basis or justification for future planning, so that an overview of the direction of development and the trends of change in the research area can be obtained. By knowing the trend of change, the gap between existing conditions and the plan is expected to narrow, so that future planning, and especially plan realization, becomes more optimal and measurable.

METHOD
Research Approach
This study uses a rationalism approach based on empirical and ethical theory and truth (Muhadjir, 1990). First, in preparing the research, the theory related to the concept of characteristics and its indicators is conceptualized, as is the theory of the concept of tidal inundation. The object of research is then examined in its context within the construction of the theory, because topics related to modeling cannot stand alone given the relationships among their factors. The last stage is generalization of the results, namely drawing a conclusion based on the results of the analysis, supported by the theoretical basis used and by the empirical facts that emerge from the analysis. The research is located in Malang Raya, which consists of three regions: Malang City, Malang Regency, and Batu City. A general description of each region is given below.

The research is a combination (mixed) of qualitative and quantitative research: some findings are described in sentences, while other problems must be explained mathematically. The research is descriptive with a case study model. The purpose of descriptive research is to make a systematic, factual, and accurate description of the facts and characteristics of the object of research, in this case land use change. Descriptive research has also been defined as research that describes a situation that is ongoing at the time the research is conducted and examines the correlation between factors of a particular phenomenon (Travers, 1978).

Data Collection Method
Data were collected through observation and interviews covering land use in the peri-urban area of Malang City in 2015 and in 2022. Interviews are a data collection technique that assists and completes data collection that cannot be obtained through field observation; with this technique, the opinions and attitudes of the population toward the phenomena or problems studied can be recorded. The interviews focused on extracting data related to land use change.

ANALYSIS METHOD
Identifying Land Use Change
At the land use change identification stage, a visual interpretation of 1:5,000 Quickbird imagery from Google Earth for 2015, 2020, and 2022 is carried out. A flowchart presenting a schematic of the remote sensing image interpretation process is given in Figure 1.
Factor Analysis with Cramér's V
Factor analysis is used to obtain the factors that influence or cause changes in land use. The variables determined from the synthesis of the literature review are compared with the factual conditions in the research area to obtain the factors that influence land use change there. The factor analysis is carried out in terms of Cramér's V, which measures the strength of the association between variables. In this study, Cramér's V is used at the factor analysis stage to determine whether a land use variable is related to land use change in the research area. The lower limit used as a reference for concluding that a factor is related or correlated with land use change is 0.20: a factor with a Cramér's V ≥ 0.20 is retained as a factor of land use change (a brief computational sketch of this measure is given further below).

Geographic Information System (GIS)
According to Aronoff (1989), cited in NTB (2013), a Geographic Information System (GIS) is a computer-based information system used to process geographic data or information. Geographic information systems became known in the early 1980s, and their development has paralleled the development of computer equipment, both software and hardware. From 1990 onward, GIS developed very rapidly, and it is still growing today. In general, a geographic information system comprises hardware, software, human resources, and data that work together to enter, manage, integrate, improve, store, display, analyze, update, and manipulate geographic data.

The Peri-Urban Area of Malang City
Based on population data from Badan Pusat Statistik (BPS), Indonesia had a population of about 119 million people in 1970, which increased to 237 million people in 2010. This increase raises the demand for space, so the conversion of agricultural land into built-up land, especially in urban areas, keeps increasing, bringing physical, social, economic, and cultural transformation to the region. Regional change can be understood as the changes that occur in an area over a certain period, in various aspects and within certain regional boundaries. The most visible transformation in an area is physical or spatial transformation. A suburban area is also known as an urban fringe or peri-urban area (Yunus, 2008). The development of peri-urban areas in Indonesia has spread to almost all cities, generally big cities, one of which is Malang City.

Malang is the second most populous city in East Java and is a student city with several universities that attract residents to settle there. Conditions in the city center have begun to feel uncomfortable because of high activity, pollution, and building density; this phenomenon shifts development to the suburban areas directly adjacent to Malang City and produces regional variation.
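As referenced in the factor analysis description above, the following is a minimal sketch of how Cramér's V can be computed and compared against the 0.20 threshold used in this study. The contingency table, variable names, and binning are illustrative placeholders, not data from the research area.

```python
# Minimal sketch of the Cramer's V association measure used in the factor analysis.
# In the study, each candidate factor (e.g., binned distance to a facility) would be
# cross-tabulated against the observed land-use-change class.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramer's V for an r x c contingency table (0 = no association, 1 = perfect)."""
    table = np.asarray(table, dtype=float)
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

# Hypothetical cross-tabulation: rows = distance class to a growth centre,
# columns = land-use change (changed / unchanged).
example = [[150, 350],   # near
           [ 90, 410],   # medium
           [ 40, 460]]   # far
v = cramers_v(example)   # roughly 0.23 for this toy table
print(f"Cramer's V = {v:.3f}")
print("retained as a change factor" if v >= 0.20 else "dropped (below the 0.20 threshold)")
```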
Malang is the second largest city in East Java after Surabaya. Based on the Malang City Regional Regulation concerning the Rencana Tata Ruang Wilayah (RTRW) of Malang City for 2010-2030, Malang City is directed to develop as a city of tourism, industry, and education. As a city of education, Malang attracts thousands of students from all over Indonesia and even from abroad, and many of them stay in Malang City after completing their education. The Department of Population and Civil Registration (Dispendukcapil) recorded a population of 881,794 people in Malang City in 2015, rising by 1.58% to 887,443 people in 2016.

Malang Regency surrounds Malang City, which makes it one of the areas affected by the physical development of Malang City. This is clearly illustrated by the existing conditions of the districts of Malang Regency that directly border Malang City, namely Dau, Karangploso, Tumpang, Pakis, Tajinan, Pakisaji, and Wagir Districts. The land changes that occur as an impact of Malang City are not evenly distributed across these districts; in some of them there are still villages dominated by agricultural land use. Land use changes that are unevenly distributed in peri-urban areas cause differences in the characteristics of each region. For this reason, the research locations chosen are eight districts that directly border Malang City, namely Dau, Singosari, Karangploso, Pakis, Tumpang, Tajinan, Pakisaji, and Wagir Districts.

Based on the land use table, in 2022 the most extensive land use in the peri-urban area of Malang City is dry fields, with an area of 200,150,871.18 Ha, dominating 38.00% of the Peri-Urban Area of Malang City, while the smallest land use is river, at 0.072 m². Land used for settlements and activities (the built-up area) covered 7,048.54 Ha in 2022, or 13.38% of the Peri-Urban Area of Malang City.

Cramér's V Values of Factors Affecting Land Use Change
The analysis of the driving factors of land change is carried out in terms of Cramér's V, which measures the strength of the association between variables (Widiyanto, 2014). In this study, Cramér's V is used at the factor analysis stage to determine which land use variables are related to land use changes in the peri-urban area of Malang City. The test, focused on residential land use, gives the results presented below.

Conclusion
Malang City is the center of the Malang Raya agglomeration, one of the prospective metropolitan areas in East Java Province, and is part of the Gerbangkertosusila megapolitan area. A positive externality of this agglomeration is the conurbation of the downtown area with the peri-urban area, which increases the extent and order of the urban area. The peri-urban phenomenon in Malang City is also strongly influenced by the dominant function of the adjacent core urban area, for example where the adjoining area is an industrial area. The development of the peri-urban area of Malang City is also influenced by growth centers, both functionally (industry) and geographically (concentration of facilities).
Based on the results of the analysis, the most extensive land use in the peri-urban area of Malang City in 2015 was dry fields, at 41.56% of the peri-urban area, which changed by 2022, when dry fields covered 38.00% of the peri-urban area of Malang City. Another land use experiencing rapid change is plantations, many of which have been converted into residential and built-up areas or places of activity.

Based on the calculation of the Cramér's V values of the variables affecting land use change in the peri-urban area of Malang City, the highest value is for proximity to forest land use, at 0.3139. One variable, proximity to bushland use, is uncorrelated. The Cramér's V value for proximity to settlements and places of activity is 0.2721, which means that the closer an area is to residential land and activity places, the greater the potential for land use change.

Figure 1. Schematic of the remote sensing image interpretation process.
Figure 4. Delineation of the peri-urban area of Malang City, land use in 2015. Existing land use in the peri-urban area of Malang City in 2015, located in eight districts of Malang Regency, is divided into several classes.
Figure 6. Land use in 2022.

Land Use Change
Identification of land use change in the peri-urban area of Malang City from 2015 to 2022 was carried out to determine the change in each land use class and the pattern of land use change; the results also serve as input for the next step. The base maps used in the land use change analysis are the land use map from the Existing Map of Malang Regency in 2015 and the Peta Rupa Bumi Indonesia (RBI) of 2022. These two maps are the starting point for identifying land use change in the peri-urban area of Malang City; the distribution of each land use and its changes are described below. Based on the spatial change analysis of land use in the peri-urban area of Malang City from 2015 to 2022, land for settlements and places of activity (the built-up area) experienced a considerable change of 3,728 Ha. The details of the changes are shown in the following tables.

Table 3. Land use in the peri-urban area of Malang City in 2022.
Table 3. Land use change transition matrix for the peri-urban area of Malang City.

Determining the factors that influence land use change in the peri-urban area of Malang City is an essential step: the more specific the driving factors are, and the better they match the characteristics of the planning location, the more accurate predictions of the land development of a location become. Several factors influence land use change in the peri-urban area of Malang City, as listed below.

Table 4. Cramér's V results for land use change factors.
Ensuring privacy protection in the era of big laparoscopic video data: development and validation of an inside outside discrimination algorithm (IODA)

Background: Laparoscopic videos are increasingly being used for surgical artificial intelligence (AI) and big data analysis. The purpose of this study was to ensure data privacy in video recordings of laparoscopic surgery by censoring extraabdominal parts. An inside-outside-discrimination algorithm (IODA) was developed to ensure privacy protection while maximizing the remaining video data.
Methods: IODA's neural network architecture was based on a pretrained AlexNet augmented with a long-short-term-memory. The data set for algorithm training and testing contained a total of 100 laparoscopic surgery videos of 23 different operations with a total video length of 207 h (124 min ± 100 min per video), resulting in 18,507,217 frames (185,965 ± 149,718 frames per video). Each video frame was tagged either as abdominal cavity, trocar, operation site, outside for cleaning, or translucent trocar. For algorithm testing, a stratified fivefold cross-validation was used.
Results: The distribution of annotated classes was abdominal cavity 81.39%, trocar 1.39%, outside operation site 16.07%, outside for cleaning 1.08%, and translucent trocar 0.07%. Algorithm training on binary or all five classes showed similarly excellent results for classifying outside frames, with a mean F1-score of 0.96 ± 0.01 and 0.97 ± 0.01, sensitivity of 0.97 ± 0.02 and 0.97 ± 0.01, and specificity of 0.99 ± 0.01 and 0.99 ± 0.01, respectively.
Conclusion: IODA is able to discriminate between inside and outside with high certainty. In particular, only a few outside frames are misclassified as inside and therefore at risk for privacy breach. The anonymized videos can be used for multi-centric development of surgical AI, quality management or educational purposes. In contrast to expensive commercial solutions, IODA is made open source and can be improved by the scientific community.

Objectives
To achieve this objective, we used computer vision to develop and validate an algorithm that discriminates between inside and outside positioning of the laparoscopic camera.

Background
In the era of big data and the ever-increasing use of artificial intelligence (AI) in surgical procedures [1], patient privacy protection is playing an increasingly important role. However, the general data protection regulation of the European Union currently represents a restriction for these possible uses [2]. Especially in the medical care system, high demands are placed on data protection [3]. Thus, surgical procedure videos, for example in laparoscopy, can still not readily be used for AI development, because outside the abdomen people could be recognizable (e.g., skin of the patient with an identifying tattoo, faces of personnel). On the contrary, perfectly anonymized videos could be used and shared without the consent of the patient, because the General Data Protection Regulation does not apply to anonymous information (GDPR art. 26). More specifically, even the processing of personal data in order to fully anonymize them does not require consent (GDPR art. 29). Similarly, anonymized data is not regulated under the Health Insurance Portability and Accountability Act (HIPAA) in the USA. Anonymized videos of the surgical procedure can be analyzed and used to ensure high-quality management and for the development of decision support systems in the OR [4,5].
Current research results show that the algorithms developed in the surgical field lack adequate data quality and especially quantity [6]. Nevertheless, patient consent is important, not only to connect surgical video data with meta-data about patient and procedure (disease stadium, blood loss, surgeon experience), but also to avoid losing the public's trust in the scientific process. To overcome this video shortage, we have developed and validated an inside-outside-discrimination algorithm (IODA) that discriminates between laparoscopic camera placement inside and outside of the abdomen. As a result, IODA helps to ensure data privacy in video recordings of laparoscopic surgery by censoring extraabdominal parts that may identify patients or personnel while also maximizing the remaining video data.

There has been previous work to anonymize videos in the operating room in general. For example, Flouty et al. created an R-CNN network that detects faces in videos and consequently blurs them, with a recall of up to 93.45% [7]. A disadvantage of this method is that, besides the face of a person, there might be additional security-compromising details in a video, such as skin color, tattoos, or other identifying body morphologies. Therefore, this method is not suitable for anonymizing laparoscopic videos. By anonymizing laparoscopic videos based on inside and outside scenes, they can be safely used for big data analysis, such as surgical AI, quality management or educational purposes, while immensely reducing the effort of data collection. To realize this aim, the following research questions were investigated:
• Is it possible to reliably discriminate between inside and outside frames of laparoscopic videos using IODA based on deep neural networks?
• Does additional training information, such as classification of trocars, translucent trocars or outside for cleaning, improve algorithm performance?
• Where does the algorithm fail, and does this result in privacy impairment?

Inside outside discrimination algorithm (IODA)
IODA has been developed to discriminate between the camera view of the inside of the abdomen and the camera view of the outside parts. Two different computer vision algorithms were developed. They share the same architecture and only differ in the number of classes they discriminate. One algorithm discriminates between a binary outcome (inside, outside), and the other one considers five classes (Fig. 1): two for inside (abdominal cavity, trocar) and three for outside (translucent trocar, outside, outside for cleaning).

Deep neural network architecture
The neural network architecture is based on the work of Bodenstedt et al. [8]. As a basis, we use the feature layers of an AlexNet and replace the classification layers by a simple dropout and linear layer with 4,096 neurons (FC6) [9] (Fig. 2). The AlexNet is pretrained on the ImageNet dataset [10]. Pretraining on a diverse image set ensures that the neural net already learns to discriminate basic features like Gabor filters and color blobs [11]. These basic features are represented in the weights of the lower-level hidden feature layers. This way, we can employ a technique called transfer learning, which speeds up training of the neural net by only training the upper classification layers. Following the modified AlexNet is a stateless long-short-term-memory (LSTM).
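The following is a minimal PyTorch sketch of the architecture described here and completed in the next paragraphs (pretrained AlexNet feature layers, a 4,096-unit FC6 layer, a stateless LSTM, and a dropout-plus-linear head with 2 or 5 outputs). The LSTM hidden size, dropout placement, and input resolution are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch of an IODA-style network: pretrained AlexNet features -> FC6 ->
# stateless LSTM over a frame sequence -> dropout + linear classifier.
# Hyperparameters not stated in the text (lstm_hidden, example input size) are guesses.
import torch
import torch.nn as nn
from torchvision import models

class InsideOutsideNet(nn.Module):
    def __init__(self, num_classes=2, lstm_hidden=512):
        super().__init__()
        alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        self.features = alexnet.features          # pretrained convolutional layers, kept frozen
        self.avgpool = alexnet.avgpool
        for p in self.features.parameters():
            p.requires_grad = False               # transfer learning: train only the upper layers
        self.fc6 = nn.Sequential(nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True))
        self.lstm = nn.LSTM(input_size=4096, hidden_size=lstm_hidden, batch_first=True)
        self.head = nn.Sequential(nn.Dropout(0.5), nn.Linear(lstm_hidden, num_classes))

    def forward(self, clips):                     # clips: (batch, seq_len, 3, H, W)
        b, t = clips.shape[:2]
        x = clips.flatten(0, 1)                   # run every frame through the CNN independently
        x = self.avgpool(self.features(x)).flatten(1)
        x = self.fc6(x).view(b, t, -1)
        x, _ = self.lstm(x)                       # stateless: no hidden state carried across clips
        return self.head(x)                       # per-frame logits: (batch, seq_len, num_classes)

# Example: a batch of two 32-frame sequences of 224x224 RGB frames.
logits = InsideOutsideNet(num_classes=5)(torch.randn(2, 32, 3, 224, 224))
print(logits.shape)                               # torch.Size([2, 32, 5])
```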
The aim of using an LSTM is that usually in a laparoscopic video each class (e.g., inside and outside) appears for an extended time and usually not as an isolated frame. Indeed, at a frame rate of 25 frames/s, an individual frame is 0.04 s long, and going from inside the abdominal cavity to outside and back in such a short time frame is very unlikely. A stateless LSTM was chosen because it analyzes a single sequence of frames for correlations but does not take into account previously seen sequences. On the contrary, stateful LSTMs would memorize all sequences seen in training and are therefore more commonly used for phase recognition of laparoscopic videos, where this information is of importance [12]. Lastly, classification is done by applying a dropout of 50% and a linear layer with 2 or 5 nodes, depending on whether the binary or multiclass case was trained. The weights of this layer were adjusted during training in terms of transfer learning. As an optimization method, we opted for backpropagation using the Adam optimizer [13] due to its fast convergence and good performance when the hyperparameters are carefully chosen, compared to the stochastic gradient descent used in the original work for the AlexNet [14]. The training process was repeated five times on the training set, equaling 5 epochs. Additionally, mixed precision training was used for faster training speeds [15].

Imbalanced data
Standard metrics that are commonly used work best on balanced class distributions. In the course of annotation, it became apparent that the two classes inside and outside are not balanced. Thus, the imbalanced class distribution made the introduction of a better fitting metric necessary. We opted for the Focal Loss, which adds a modulating term to the cross-entropy to focus learning not only on imbalanced classes but specifically on hard-to-classify classes [16]. In particular, the scaling factor decays to zero as the confidence in correctly classified frames increases.
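A minimal sketch of the focal loss described above, following the standard formulation of the modulating term; the γ value and the optional per-class weighting are illustrative defaults, not necessarily the settings used for IODA.

```python
# Focal loss: cross-entropy scaled by (1 - p_t)^gamma so that confidently correct
# frames contribute little and hard, rare classes dominate the loss.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """logits: (N, C) raw scores, targets: (N,) int64 class indices."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log-prob of the true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt                      # modulating factor decays to 0 as pt -> 1
    if alpha is not None:                                        # optional per-class weights (torch tensor)
        loss = loss * alpha.to(logits.device)[targets]
    return loss.mean()

# Example with the five classes (abdominal cavity, trocar, operation site, cleaning, translucent trocar).
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
print(focal_loss(logits, targets).item())
```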
Dataset
The data set contains a total of 100 laparoscopic surgery videos with 23 different operation types distributed over four categories: upper gastrointestinal, cholecystectomy, colorectal, and miscellaneous. In total, this amounts to 207 h of video (median of 1 h 30 min, [1 h 0 min, 2 h 40 min] interquartile range), consisting of 18.6 million frames, of which only 1 frame/second (743,810 frames) was used to validate the algorithm in order to reduce computation time. The video files cover a range from short procedures, e.g., diagnostic laparoscopies, to extended procedures like laparoscopic-thoracoscopic esophagectomies performed at Heidelberg University Hospital. Table 1 gives an overview of the data set. The operation videos were recorded with a laparoscopic 2D camera (Karl Storz SE & Co KG, Tuttlingen, Germany) with 30° optics, a resolution of 960 × 540 pixels and 25 frames per second. No distinction was made between different surgeons and their skill and experience level, nor between patients and their individual cases. Data analysis was approved by the local ethics committee (committee's approval: S-248/2021).

Definition of inside and outside
A frame is classified as "Abdominal cavity" when the abdominal cavity can be seen on more than 50% of the frame, and as "Trocar" when at least 50% of the frame shows parts of a trocar. A frame is classified as "Translucent trocar" when any outside parts (e.g., skin) are visible through a translucent trocar. When the outside is not visible through a trocar but through the camera view directly, the frame is classified as "Operation site." A frame is classified as "Outside for cleaning" when outside parts are visible with the intention to clean the camera.

Data annotation
For the data annotation, the original procedure video was cut into ten-minute sections and manually annotated by a medical expert using the annotation software Anvil [17]. Two main categories (inside and outside) as well as three additional sub-categories (one for inside, two for outside) were introduced (Fig. 1). While the camera view of the inside of a solid trocar can still be categorized as "Inside" (category trocar), the usage of a translucent trocar, resulting in partial visibility of the patient's skin, has to be annotated as outside (category (4) translucent trocar). Another outside category is introduced to identify phases where the camera is outside for cleaning of the camera lens. To ensure standardization and reliability of annotation, explicit rules have been defined (Fig. 1).

Algorithm training & testing
For algorithm training, we used a five-fold stratified cross-validation. Five equally sized sets of 20 videos were formed and alternated to form the training and test sets. To ensure that these sets are as homogeneous in total video length and operation types as possible, the original data set was manually stratified into the five sets (Table 2). Stratified splits ensure a better generalization of the neural network. IODA was trained and validated on a Gigabyte G482-Z51 GPU server (Gigabyte Technology Co. Ltd., Taipei, Taiwan) with 2 AMD 7352 CPUs (Advanced Micro Devices, Inc., Santa Clara, USA) and 6 NVIDIA A40 GPUs (Nvidia Corporation, Santa Clara, USA). The algorithm was written in Python 3.9 [18] using the packages nvidia-dali for data loading and encoding [19], as well as torch and torchvision for modeling the neural network [20]. There is no additional data preprocessing necessary when running the code; this is fully integrated in the nvidia-dali pipeline, which directly loads the videos and applies the necessary image transformations. The working code can be found on GitLab [https://gitlab.com/aicor/ioda].

Statistical analysis of the results is done via F1-score, sensitivity, and specificity of the outside class. The F1-score is chosen as it is the harmonic mean of precision and recall (sensitivity in the binary case) and hence a good overall metric for classification model performance. Sensitivity of the outside class indicates whether all outside frames are detected as outside, and is therefore important in terms of privacy protection: any outside frame not detected may pose a privacy risk. Specificity of the outside class, on the other hand, shows how many of the inside frames are misclassified as outside; specificity should be high to reduce the loss of valuable frames which are falsely censored.

When trained for binary classification, IODA matched the annotation in 611,061 out of 616,113 (99.18%) inside frames and 123,757 out of 128,079 (96.63%) outside frames (Fig. 4).

Fig. 2 A sequence of 32 frames is the expected input format for the following AlexNet (c), which is pretrained on the ImageNet data set. The AlexNet is followed by a stateless long-short-term-memory (d) and a linear layer for classification (e), which returns the predicted classes for the 32 frames in the sequence. During training of the proposed neural network, transfer learning is used so that only the weights of the last linear layer need to be adjusted. Conv convolutional layer, Pool pooling layer, FC feature classifier, which is a linear layer, LSTM long-short-term-memory layer
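As a quick cross-check, the outside-class sensitivity, specificity, and F1-score reported below can be reproduced from these frame counts (a small sketch using only the numbers quoted above):

```python
# Cross-check of the binary confusion counts against the reported outside-class
# metrics (sensitivity ~0.97, specificity ~0.99, F1 ~0.96). "Positive" = outside.
inside_total, inside_correct = 616_113, 611_061
outside_total, outside_correct = 128_079, 123_757

tp = outside_correct                      # outside predicted as outside
fn = outside_total - outside_correct      # outside missed -> potential privacy risk
tn = inside_correct                       # inside kept as inside
fp = inside_total - inside_correct        # inside censored -> lost video data

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} F1={f1:.3f}")
# -> roughly sensitivity=0.966, specificity=0.992, F1=0.964
```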
Figure 5 illustrates the resulting sensitivity, specificity, and F1-score for the outside class. The sensitivity was 96.6% for "Outside" predictions, the specificity was 99.2%, and the F1-score was 0.96. A total of 9,374 frames deviate from the initial annotation: 5,052 frames are annotated as "Inside" and predicted as "Outside," and 4,322 frames are annotated as "Outside" and predicted as "Inside."

In the multiclass experiment, the predictions for "abdominal cavity" match the annotation in 601,161 out of 605,771 (99.24%) frames. For "trocar", 10,342 frames are annotated, which are predicted almost in equal parts as "trocar" (4,345 frames, 42.01%) and "abdominal cavity" (4,773 frames, 46.15%). For "Outside for cleaning", 8,061 frames are annotated, of which 1,036 (12.85%) frames are predicted as "Outside for cleaning," while 6,242 (77.43%) frames are predicted as "Outside." For "Outside", 119,541 frames are annotated, of which 114,887 (96.10%) frames are predicted as "Outside." For "translucent trocar", 477 frames are annotated, of which none are predicted as such; 206 (43.19%) frames are predicted as "trocar" and 196 (41.09%) frames as "abdominal cavity." Among all frames misclassified by our algorithm, there were three longer sequences (once 2 min and twice 15-20 s); other than that, misclassification happened mostly in single frames or series of few frames. Fig. 6 gives some examples.

Data set quality
In order to provide the algorithm with a sample of laparoscopic videos as representative as possible, 23 different surgery types and a total of 100 surgeries were selected. Since the majority of a laparoscopy is situated intraabdominally, there is an inevitable class imbalance between the inside and outside classes. Regarding the outside classes, there is also a high imbalance toward the class "Outside no cleaning," which is caused by quite long sequences before and after the actual laparoscopy, where video recording was already running or was still running.

Algorithm limitations
A striking feature of the analysis of discrepancies between manual annotation and algorithm is that the transition frames between inside and outside are especially critical and prone to errors. After analyzing the misclassified frames, especially in the multiclass experiment, it becomes apparent that the algorithm struggles with transitional areas between the camera view of the abdominal cavity and the trocar: 46.2% of such frames are predicted as "abdominal cavity," 40% as "trocar." Most probably this is caused by our set annotation rules: the camera view of the circular and, depending on the angle, elliptically shaped trocar makes it difficult to precisely determine when it exceeds 50% of the screen. Although all frames have been annotated to the best of the annotator's abilities, it cannot be guaranteed that the share of a category in all frames is approximated correctly. Another hurdle of the multiclass experiment is the unbalanced sampling; while frames of "abdominal cavity" make up the largest part, there are only 477 used frames of a translucent trocar. The "Outside for cleaning" category was initially intended to be used in future research projects to analyze the quantity of "Outside for cleaning" phases and allow drawing conclusions about the complexity of the operation and the surgeon's skill level, and to adapt the assisting systems.
Due to the fact that this category is based on an intention, it creates a massive hurdle for our algorithm. The long-short-term-memory sequences were chosen to be 32 frames, equaling 32 s of consecutive video, which might not be long enough for an outside sequence to contain the cleaning-specific activities. This is reflected in the results, where 77.4% of frames that are annotated as "Outside for cleaning" are predicted as "Outside no cleaning."

To ensure the goal of privacy protection, it is of utmost importance to correctly predict outside frames as "Outside," which is reflected by a high sensitivity for "Outside." To apply our results to a practical example in the case of the binary experiment, taking into account that the video material consists of 17.2% outside frames: with a sensitivity of 96.6%, in a 1-h laparoscopic video a time span of about 21 s is at risk of being falsely predicted as "Inside" while the camera view shows an outside part (3,600 s × 17.2% ≈ 619 s of outside footage, of which 3.4% ≈ 21 s are missed). On the other hand, the specificity of 99.2% means that about 25 s of video data in a 1-h laparoscopic video are at risk of being lost, because they are classified as "Outside" and therefore anonymized while the camera view correctly shows the inside of the abdominal cavity. The results of the multiclass experiment show similar values: 17 s are at risk of being falsely predicted as "Inside" and 25 s are at risk of being lost.

Fig. 3 Distribution of classes. There is a class imbalance between the abdominal class (81.4%), the outside class (16.1%), and the remaining three classes (2.5%). The ground truth classes were annotated by one annotator using sequential annotation.

While a human annotator can adapt to rare and special events, our algorithm had difficulties classifying these. Figure 7 depicts two misclassified examples for each reference annotation; as a basis, we used the multiclass experiment. For example, a misclassification occurred when a latex glove, which is almost exclusively seen outside, appeared inside the abdominal cavity during a hand-assisted living donor nephrectomy. Another common reason for failure were frames close to transitions: frames which show approximately 50% abdominal cavity and trocar were regularly misclassified as either of the wrong classes. Fortunately, these frames are not a security risk for anonymization. Also, a cause of some misclassifications might have been the subtle definition of classes. For example, "cleaning" was defined by the intention to clean, which is very difficult to determine and usually requires a long sequence of frames. Similarly, "translucent trocar" was defined by being inside a translucent trocar with skin being visible.

Fig. 4 Confusion matrices for ground truth labels and predicted labels for the binary and multiclass experiment. Distribution of predicted classes for each annotated class for the binary as well as the multiclass experiment. In the binary case, the majority class (inside) is recognized better than the minority class (outside). Similarly, for the multiclass experiment, the abdominal cavity is recognized best by the algorithm, then outside no cleaning, which is mostly misclassified as cleaning or abdominal cavity. The trocar class is split between trocar and abdominal cavity. Frames annotated as cleaning are mostly predicted to be the operation site. The smallest class, translucent trocar, was never recognized by the algorithm, but instead either labeled as trocar or abdominal cavity with a similar split as the trocar class.
Fig. 5 Performance of IODA. Discriminating only between inside and outside classes, the algorithm trained either on binary or multiclass labels has similarly excellent results. The video fraction which is at a security risk, due to not being recognized as outside, is quite low in both cases, as computed from the sensitivity. Similarly low is the video fraction from inside which is not recognized as inside and is consequently lost due to anonymization, as computed from the specificity. The multiclass case is additionally broken down into the individual classes. Notable is that "Cleaning," "Translucent trocar," and "Trocar" have a high specificity, but the algorithm is not very sensitive for these classes.

These sequences are usually very short and are not easy to detect, even for a human annotator. Indeed, IODA wrongly classified translucent trocars without skin being visible as the class "translucent trocar," which by our definition belong to the "trocar" class. These examples were for the most part also not a security risk. However, some of the frames misclassified as inside contained potentially compromising information like the skin (color) of the patient.

Advantage of algorithm over human
On the other hand, reviewing the misclassified frames showed the value and consistency of our algorithm. After checking discrepancies between IODA and the human annotator, we found obviously wrong human annotations: once because of an annotation software problem, where the annotation of a short phase had been deleted, and twice because the annotator simply overlooked a short outside section. Thus, in expectation of an ever-growing database, IODA can already be expected to have an advantage and to be superior in consistency to a human annotator. Also noteworthy is the time needed for annotating the video when comparing the human annotator and the computer algorithm. Figure 7 shows a comparison of the annotation time for the complete data set. Even factoring in the training time of IODA, which only has to be done once, the algorithm is significantly faster than the human annotator, approximately by a factor of 26. If we do not include the training time, this even increases to a factor of approximately 380. Obviously, the speed-up depends on the hardware, though even with fewer and slower graphics cards than in our setup, a real-time anonymization would be feasible.

Potential clinical applications
With ever-increasing digitalization, it may be possible to utilize our developed algorithm to its full potential. Due to the massively increasing video data in the clinical workspace, we would be able to build a wide and diversified database while ensuring patients' privacy protection. These anonymized videos can then be used for surgical AI development, quality management, or for educational purposes (Fig. 8). Due to the possibility of real-time application, an automated pipeline for anonymized video data would benefit other research projects in developing algorithms and in introducing AI to the broad field of surgical practice. In order to make this technology available for other surgical researchers, we made our source code, as well as the machine learning model, open source. Thus, anonymization of surgical video does not necessarily need expensive commercial solutions, but is free for the scientific community and can be refined collaboratively.

Fig. 6 Examples of frames misclassified by IODA. The algorithm had particular problems classifying rare events and edge cases. For example, a glove appearing inside the abdominal cavity during a hand-assisted nephrectomy was classified as outside no cleaning. Misclassification of the trocar class consisted mostly of edge cases, where the trocar and the abdominal cavity appeared to the same degree in the frame. The same is true for the translucent trocar. Additionally, the frames classified by IODA as translucent trocar were mostly trocars which are translucent but annotated as trocar by definition because there was no skin visible in the frame. This subtle definition of classes might explain the difficulty of IODA to correctly classify the translucent trocar. Similarly, the cleaning class was defined by the intention of cleaning the camera, which can only be determined when considering quite a long sequence of frames.
As of now, IODA runs at 45 frames per second on a hardware setup with a single NVIDIA A40 graphics card. Even if image loading and video transfer may add additional delay, the algorithm is thus suitable for "real-time" application within the operating room. This could be realized by using a medical PC with a video capture card that is connected to the video output of the laparoscope. However, the software that captures the video stream, hands it over to IODA, and displays the final video stream, and potentially a graphical user interface, would still need to be implemented for intraoperative real-time application.

Future research directions
In future studies, the applicability of our algorithm to other operating centers with different color schemes of surgical drapes, skin, and operating room surroundings, and to more operation types, should be investigated. Larger data sets are essential to improve the performance of the algorithms, ideally with the addition of medical device sensor data to complement manual reference annotations. Also, to speed up annotation processes, time- and cost-effective annotation tools should be developed. Another idea to explore that might improve the performance of IODA is bidirectional training: training IODA on forward and backward playing videos could increase the available data and the variability of trained scenes. Also, as explained in the methods section, the LSTM architecture of the network was chosen to take the temporal component into account, because usually each inside or outside scene is at least a couple of seconds long. In addition, a rule-based post-processing filter could be implemented that removes IODA outliers of a few frames by changing the class of very short sequences (e.g., < 2 s) to the surrounding class, as sketched below.
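A minimal sketch of such a post-processing filter, assuming one prediction per sampled frame (1 frame/s, as in the evaluation); the 2-sample threshold mirrors the < 2 s example in the text.

```python
# Rule-based smoothing: relabel very short runs of identical predictions
# (e.g., < 2 samples at 1 frame/s) to the surrounding class, removing isolated
# outliers in the per-frame inside/outside sequence.
def smooth_predictions(labels, min_run=2):
    labels = list(labels)
    runs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            runs.append((labels[start], start, i))   # (class, begin, end)
            start = i
    for k, (cls, b, e) in enumerate(runs):
        # absorb a short run only if both neighbours agree on the same class
        if e - b < min_run and 0 < k < len(runs) - 1 and runs[k - 1][0] == runs[k + 1][0]:
            for j in range(b, e):
                labels[j] = runs[k - 1][0]
    return labels

seq = ["in"] * 10 + ["out"] + ["in"] * 5 + ["out"] * 8
print(smooth_predictions(seq))   # the single stray "out" frame is relabelled "in"
```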
Conclusion
Our inside-outside-discrimination algorithm IODA allows for privacy protection when recording laparoscopic video data. Implementing this kind of deep learning into surgical video data sets holds the potential to immensely improve the quality and especially the quantity of available video data for secondary use. The next step will be a prospective evaluation within a real-time setting in the operating room.

Visual abstract of IODA. The algorithm's task is to ensure privacy protection while maximizing the remaining video data. The data set for algorithm training and testing contained a total of 100 laparoscopic surgery videos of 23 different operations. IODA's neural network architecture was based on a pretrained AlexNet augmented with a long-short-term-memory so that previous frames can influence the prediction of the following frames. False predictions were penalized by using focal loss as the loss function. Algorithm training on binary or all five classes showed similarly excellent results for classifying outside frames while minimizing the lost video. The time taken to anonymize videos with IODA is significantly reduced when compared to human annotation.

and Lena Maier-Hein have no conflicts of interest or financial ties to disclose.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Application Analysis of Digital Printing Technology in Packaging Printing

This research takes the application of digital printing technology in packaging printing as its core. First, the overview and advantages of digital printing technology are explained, and the reasons for using digital printing technology in packaging printing are analyzed. On this basis, the effective use of digital printing technology in labels, flexible packaging, and boxes is proposed, in the hope of providing a reference for practitioners.

INTRODUCTION
As people's consumption quality and consumption level improve year by year, higher requirements are put forward for product packaging. The advantages of digital printing technology in the packaging and printing field have attracted the attention of many people, and businesses are committed to developing digital printing technology in this field. In this regard, the application of digital printing technology in packaging printing is analyzed below.

Overview and Advantages of Digital Printing Technology
Digital printing technology mainly includes electrostatic imaging, inkjet imaging, and magnetic imaging digital printing technologies. First, electrostatic imaging digital printing technology: according to its technical characteristics, it is also called electrophotographic technology. Its main principle is to generate an electrostatic latent image on a photoconductor using laser scanning technology, visualize the latent image, adhere small toner particles to the target paper to form an image, and finally complete the printing work. Second, inkjet imaging digital printing technology (Ji, Li, 2019): in this technology, small droplets of ink are deposited in a controlled pattern on the target paper to form the set image, with inks of various colors ejected under the corresponding control to print multiple colors. Finally, magnetic imaging digital printing technology: here, the magnetons of magnetic materials are arranged in a directional manner to form a magnetic latent image, and the interaction between the magnetic toner and the magnetic latent image is then used to visualize the latent image.

The application advantages of digital printing technology in packaging printing are mainly reflected in the following aspects. First, printing efficiency is higher. Compared with previous printing technology, digital printing technology has transformed traditional plate-making into a digital process, which not only effectively simplifies the printing workflow but also improves printing efficiency. According to relevant survey results, digital printing technology can produce up to 8,000 sheets/h of A4 paper under normal conditions, and it can also realize printing automation according to the instructions received. Second, printing is convenient and fast.
Digital printing technology does not have strict requirements on file formats and can support the printing of files in multiple formats, thereby effectively saving the processing time of graphic information. Finally, it is highly personalized: digital printing technology can effectively meet customer printing needs that cannot be achieved with traditional printing technology (Pan, Liu and Yin, 2019).

Product Traceability and Anti-Counterfeiting
At a basic level, the application of digital printing technology in packaging and printing means printing the QR code of the product on the packaging, in order to give the product its own "identity." Based on the associated data, supervision and tracking of product production, warehousing, distribution, and logistics provide a scientific and effective way for merchants and manufacturers to track and manage product quality. It can also facilitate after-sales service: once quality problems are found, the relevant staff can deal with them in a timely manner and ensure that the economic and social benefits of merchants and manufacturers are effectively protected. In addition, consumers can scan the QR code to obtain relevant information about the product, so as to identify the authenticity of the product and avoid being deceived.

Virtual Reality Technology
"Virtual reality" refers to a virtual world created by the computer, in which users can feel immersed. The use of virtual reality in packaging focuses on product promotion. For example, if a merchant wants to run a campaign or sell a certain product, after the design is completed the camera can be pointed directly at the product's trademark; product information and content added in advance, such as videos or other image materials, can then be seen, giving consumers a novel experience and achieving the purpose of marketing. Virtual-reality-based marketing can be realized through digital printing technology in packaging printing (Zhang, 2019). At the same time, the virtual reality information can change as the product changes, ensuring that users can obtain information at any time, which not only saves time and effort but also improves user participation. For example, more lottery activities can be organized: when the user views the three-dimensional information specified by the organizer, it can be regarded as winning.

The Effective Application of Digital Printing Technology in Packaging Printing
The effective application of digital printing technology in packaging printing is analyzed in more depth below, covering the use of labels, flexible packaging, and boxes.

The Use of Labels
In the current fierce market competition, companies that want to improve their core competitiveness and stand out need to speed up product updates and pay more attention to packaging cover design. The industries with the most distinctive packaging cover design at present include the pharmaceutical, consumer goods, and food industries.
According to the current application status of digital printing technology in packaging printing, some product suppliers have high requirements for the novelty and uniqueness of labels, hoping to obtain novel label designs in the shortest possible time. At the same time, some enterprises and organizations focus on marketing and publicizing their own image, and even hold events at special festivals to further promote their corporate image. Small gifts are generally given at such events; although their value is not large, they have important representative significance. Therefore, the relevant personnel need to pay more attention to the outer packaging and printing of the product, so that it does not appear cheap but fully demonstrates its novelty (Liu, Yang and Wang, 2019). The selection of the printing method in this process is particularly important: with traditional printing methods, one not only has to wait a long time but also has to spend a lot of money, which cannot meet customer needs in the short term. Therefore, digital printing has become the first choice because of its advantages of requiring no typesetting and supporting small print runs. It can be seen that digital printing technology fully embodies the advantages of flexible and cost-effective operation in the production of new product packaging labels, and can effectively meet the diverse needs of consumers.

Application in Flexible Packaging
Nowadays, traditional rigid plastic packaging has gradually been transformed into flexible packaging. As the overall growth rate increases, the demand for digital printing is also gradually increasing. Simply put, developing the flexible packaging market requires a focus on improving printing speed, and the digital printing speed is proportional to the scope of its application: the faster the speed, the wider the scope of application; the slower it is, the narrower the application range. Especially with the continuous reform and development of digital printing technology, traditional printing technology has gradually been replaced and has withdrawn from the market. The rational use of digital printing technology can not only increase the frequency of application in standard boxes and user-specific packaging, but also save production costs; while saving production costs, printing quality and speed must be ensured, which in turn promotes the development of digital printing technology in the field of flexible packaging.

Application in Boxes
In digital packaging printing, compared with other products, box products have a certain universality. Unlike other products, the printing area of this product is relatively large, which not only requires considerable cost but also requires attention to color selection; once a carton starts the printing operation, it cannot be changed during the job. Therefore, before officially starting production, the staff should communicate with the carton manufacturer in detail, and the printing work should only be started after the printed content has been clarified. At present, there are many printing devices that can be used for box products, and the printing speed is extremely fast.
For example, color digital printing presses can perform printing operations on folding cartons, with an average output of about 63 sheets per hour and a resolution as high as 600 dpi. The Future Development of Digital Printing Technology Judging from the current state of development, although digital printing technology has been developed in China, there are still many shortcomings. On the one hand, there are few standards for evaluating printing quality; on the other hand, the overall level of development lags behind that of developed countries, and both printing equipment and later maintenance rely mainly on imports, which invisibly increases the cost burden on the printing industry. Based on this, in order to fully realize the development prospects of digital printing technology, major domestic enterprises should focus on building the market for such projects and promote the production of digital printing equipment with clear serialization characteristics. In future development, quality and serialization may remain the defining characteristics of digital printing technology. In addition, in the context of rapid social and economic development, digital printing is regarded as a project with great investment value and significance; however, judging from its actual state of development, the digitalization, integration and short printing cycle characteristics of digital printing technology can only be brought into full play in cooperation with other related technologies, so as to promote the further development of the printing industry (Feng, 2019). CONCLUSIONS All in all, with the continuous improvement of people's quality of life, higher requirements have been put forward for product packaging. Companies in particular need to introduce advanced printing technology so that product packaging delivers a visual impact, which stimulates consumers' desire to buy. In this regard, on the basis of understanding the reasons for using digital printing technology in packaging and printing, its effective application to labels, flexible packaging, boxes and other packaging forms can promote the development of enterprises.
2,676.8
2020-01-01T00:00:00.000
[ "Art", "Materials Science" ]
Large area MoSe2 and MoSe2/Bi2Se3 films on sapphire (0001) for near-infrared photodetection The fabrication of heterojunction-based photodetectors (PDs) is well known to enhance PD performance, provide tunable photoconductivity, and enable broadband applications. Herein, PDs based on MoSe2 and a MoSe2/Bi2Se3 heterojunction were deposited on sapphire (0001) substrates using an r.f. magnetron sputtering system. High-resolution x-ray diffraction and Raman spectroscopy characterizations disclosed the growth of the 2H phase of MoSe2 and the rhombohedral phase of Bi2Se3 thin films on sapphire (0001). The chemical and electronic states of the deposited films were studied using x-ray photoelectron spectroscopy, which revealed stoichiometric growth of MoSe2. We fabricated metal-semiconductor-metal type PD devices on MoSe2 and the MoSe2/Bi2Se3 heterojunction, and the photo-response measurements were performed at external voltages of 0.1-5 V under near-infrared (1064 nm) light illumination. The bare MoSe2 PD device shows positive photoconductivity behavior whereas the MoSe2/Bi2Se3 heterojunction PD exhibits negative photoconductivity. It was found that the responsivity of the MoSe2 and MoSe2/Bi2Se3 heterojunction PDs is ~1.39 A W−1 and ~5.7 A W−1, respectively. The nearly four-fold enhancement of the photoresponse of the MoSe2/Bi2Se3 PD compared to the bare MoSe2 PD shows the importance of heterojunction structures for future optoelectronic applications. Introduction Transition metal dichalcogenides (TMDs), such as MoS2, MoSe2, WS2, and WSe2, have recently attracted a lot of interest as one of the most significant members of the two-dimensional (2D) materials family due to their exceptional electrical and optical characteristics [1-4]. It has been demonstrated that MoS2, MoSe2 and WS2 can absorb up to 5%-10% of incident sunlight at thicknesses below 1 nm [5]. The bulk band gaps of MoS2 and MoSe2 were reported to be ∼1.3 and ~1.1 eV (indirect), whereas monolayer MoS2 and MoSe2 have direct band gaps of ∼1.9 and ~1.66 eV, respectively [6,7]. The large bandgap and long carrier lifetimes of TMDs like MoS2 and MoSe2 make them attractive candidates for high-sensitivity photodetectors (PDs) [8-15]. An advantage of MoSe2 is that its bulk band gap is well matched to that of Si, which makes it usable for various optoelectronic applications in the near-infrared (NIR) region. A few works have reported PDs based on mono/multilayer MoSe2 for the NIR region. Recently, Polumati et al fabricated a MoSe2/MXene/cellulose paper-based photodetector device synthesized by a three-step hydrothermal method, which exhibited a responsivity of 9.82 mA/W under NIR light illumination [16]. Ko et al synthesized few-layer flake MoSe2 on SiO2/Si substrate by mechanical exfoliation, fabricated a back-gated phototransistor and found a peak responsivity of 238 A/W under NIR excitation [17]. Selamneni et al fabricated large-area MoSe2 nanoflowers on cellulose paper using hydrothermal synthesis and found a responsivity of 9.73 mA W−1 under NIR light illumination [18].
Jana et al fabricated a MoSe2 nanoflakes/ZnO nanorods (NR) NIR heterostructure device using the liquid-phase exfoliation method and found a responsivity of 0.21 A/W under NIR light illumination [19]. A stacked-layered MoSe2/Si heterojunction was fabricated by the pulsed laser deposition method, and the responsivity was reported to be ∼12.8 mA/W under NIR light illumination [20]. These reported works clearly demonstrate that MoSe2 is a well-suited material for PD applications in the NIR region [16-20]. Among various approaches, one way to enhance the photoresponse of PDs is to develop heterojunctions between various semiconducting materials [21-23]. Most of the heterojunction and hybrid structures based on MoSe2 have been fabricated using conventional semiconductors and organic materials for PD applications in the NIR region [16-20]. In the quest for new materials, group V-VI binary chalcogenides such as Bi2Se3, Sb2Se3, Bi2Te3, and Sb2Te3 have recently attracted attention due to their good optical and electrical characteristics [24,25]. Among these, Bi2Se3 is one of the most-studied topological insulator (TI) materials, insulating in the bulk while its surface shows a conducting nature [25]. Also, since Bi2Se3 is a layered material with mild van der Waals forces between the quintuple layers (QLs) and strong covalent bonding inside each QL, its van der Waals layered growth is advantageous for use as a buffer layer [26-28]. Recently, Bi2Se3 has been integrated with wide-band-gap semiconductor materials to enhance PD characteristics [29,30]. However, there is limited work on the growth of Bi2Se3 with MoSe2 and their application in the fabrication of PDs [31]. High-quality MoSe2 and Bi2Se3 layers and thin films have been produced using various methods such as liquid exfoliation, scotch tape, physical vapor deposition, hydrothermal growth, etc [17,19,31]. The exfoliation techniques have limitations due to long process times, repeatability issues, and the inability to produce large-area coverage. In contrast, the sputtering technique provides many advantages such as ease of handling, repeatability, and the ability to deposit thin films over large areas [29,30]. In this study, we have deposited a MoSe2 film and a MoSe2/Bi2Se3 heterojunction on sapphire (0001) substrates using an r.f. magnetron sputtering system. The sapphire (0001) substrates were preferred over conventional low-bandgap Si and Ge substrates as sapphire possesses a larger bandgap (∼10 eV), far from the NIR region. We found a high NIR photoresponsivity of ∼5.7 A W−1 for the MoSe2/Bi2Se3 heterojunction PD device, compared to a responsivity of ∼1.39 A W−1 for the bare MoSe2 PD device. Interestingly, the MoSe2/Bi2Se3 heterojunction PD showed negative photoconductivity (NPC) behavior whereas the sole MoSe2 PD revealed positive photoconductivity under NIR (1064 nm) illumination.
Materials synthesis We deposited MoSe2 thin film on a bare sapphire (0001) substrate [sample S1] and on a Bi2Se3-coated sapphire (0001) substrate [sample S2] at 400 °C using a magnetron sputtering system with a base vacuum of ∼2 × 10−7 mbar. First, we cleaned the single-side-polished sapphire substrates with acetone and isopropanol in an ultrasonicator for several minutes, followed by drying with N2 gas. For the deposition of MoSe2, a stoichiometric MoSe2 (purity: 99.99%) target was sputtered by applying a forward r.f. power of 100 W in the presence of an ultrahigh-purity Ar (99.9999%) gas flow of 20 sccm (working pressure: ∼5.0 × 10−3 mbar). In the case of the 40 nm thick Bi2Se3 buffer layer deposition, the working pressure and forward r.f. power were kept at ∼3.3 × 10−3 mbar and 10 W, respectively. The deposition rate of the sputtered films was deduced with a stylus profilometer on various films deposited under different conditions. To achieve good stoichiometry of the sputtered films, a post-selenization process was performed in a tubular furnace at 300 °C for 60 minutes under a continuous Ar flow. Materials characterization Raman spectroscopy in backscattering geometry with an Ar+ laser (532 nm) source was employed to characterize the structural properties of the MoSe2 and MoSe2/Bi2Se3 thin films on sapphire (0001) substrates. A Cu Kα1 x-ray source with a wavelength of 0.15406 nm was used in high-resolution x-ray diffraction (HR-XRD) to characterize the crystalline properties of the thin films. Atomic force microscopy (AFM) in tapping mode and field emission scanning electron microscopy (FESEM) in plan view were used to study the surface morphology. A Thermo Fisher K-Alpha x-ray photoelectron spectroscopy (XPS) system with an Al Kα (1486.6 eV) x-ray source was used to analyze the electronic states and chemical composition of the MoSe2 and MoSe2/Bi2Se3 thin films on sapphire substrates. Photo-response performance measurement We deposited Cr/Au metal electrodes to fabricate metal-semiconductor-metal (MSM)-type PD devices on MoSe2 (S1) and the MoSe2/Bi2Se3 (S2) heterojunction. The metal electrode comprised a sequential deposition of a ∼20 nm Cr adhesion layer followed by an ∼80 nm Au coating using the thermal evaporation technique. The active area of both PD devices was kept the same, ∼0.01 mm2, with resistances in the range of ∼0.5-2 kΩ. To measure the current-voltage characteristics of the devices under NIR (1064 nm) laser illumination with an external bias voltage, we employed a Keithley 2450 source meter. We also investigated the transient photoresponse by using 20-second ON-OFF cycles at different external voltages ranging from 0.1 to 5 V at a fixed laser power of 125 mW. We also performed spectral response measurements with a Xenon lamp over the wavelength range of 400-1200 nm at an applied bias of 5 V.
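As an illustration of how periodic ON-OFF transient traces of the kind described above can be reduced to a photocurrent and a responsivity, the following is a minimal sketch in Python/NumPy; the function names, the use of medians, and the assumption that the laser is ON during the first half of each period are illustrative choices, not part of the original measurement procedure.

```python
import numpy as np

def photocurrent_from_cycles(t, i, period=40.0, on_fraction=0.5):
    """Reduce a periodic ON-OFF photoresponse trace to dark current,
    light current and photocurrent.  t is in seconds, i in amperes;
    the laser is assumed ON during the first half of each period."""
    phase = np.mod(t, period) / period
    on = phase < on_fraction            # boolean mask for laser-ON samples
    i_light = np.median(i[on])          # median is robust against spikes
    i_dark = np.median(i[~on])
    return i_dark, i_light, i_light - i_dark

def responsivity(i_photo, power_w):
    """Responsivity in A/W for the optical power actually reaching the device."""
    return abs(i_photo) / power_w
```

For the devices described here, power_w would be the fraction of the 125 mW beam that actually falls on the ~0.01 mm2 active area.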
Results and discussion Figure 1(a) presents the Raman spectrum of the MoSe2 thin film deposited on the sapphire (0001) substrate [sample S1] at 400 °C using the magnetron sputtering system. Raman characteristic peaks of sample S1 were obtained at 169.6, 236.2, and 287.7 cm−1, corresponding to the E1g (in-plane), A1g (out-of-plane), and E2g1 (in-plane) modes of the 2H phase of MoSe2 [32,33]. The remaining two Raman peaks centered at 445.5 and 577.5 cm−1 are related to the sapphire substrate. The Raman spectrum of the MoSe2 thin film on the Bi2Se3/sapphire (0001) substrate [sample S2] is shown in figure 1(b). For sample S2, three Raman active modes of Bi2Se3 were observed at 76.1, 129.8 and 176.4 cm−1, corresponding to the A1g1, Eg2 and A1g2 optical phonon modes, respectively [34]. The three Raman peaks at 169.2, 237.3 and 287.6 cm−1 are assigned to the A1g1, E1g1, and E2g1 vibrational modes related to the 2H phase of MoSe2, and the remaining Raman peaks are indexed to the sapphire substrate, similar to sample S1 [32,33]. Further, the crystalline properties of the MoSe2 films deposited on bare and Bi2Se3-coated sapphire (0001) substrates were characterized with HR-XRD. The 2θ-ω scans of samples S1 and S2 are shown in figures 1(c) and (d), respectively. The XRD peaks of sample S1 were found at 12.54, 22.56, and 29.08°, which could be indexed to the (002), (104), and (010) lattice planes of MoSe2, related to its hexagonal crystal structure. The surface morphology of the sputtered MoSe2 thin films was characterized by FESEM in plan-view mode. Figure 2(a) displays a large-area FESEM image of sample S1, in which the worm-type MoSe2 structure can be seen. The high-magnification FESEM image (figure 2(b)) clearly shows the worm-type structure with platelets, and statistical analysis revealed a lateral size of ∼100±10 nm. A long and relatively wide worm-type surface morphology is also seen for sample S2, with an increased lateral size of ∼130±10 nm (figure 2(c)). The platelets of MoSe2 are quite visible in the high-magnification image presented in figure 2(d). Further, we performed AFM characterization in tapping mode using Si tips with a radius of curvature of ∼10 nm; sample S1 again reveals the worm-type morphology (figure 3(a)). The lateral sizes of the worm-type structure in the AFM image appear wider than in FESEM, likely due to the tip convolution effect in the lateral directions. However, the AFM images were taken to estimate the surface roughness and the thickness of the film, as the z-direction measurement is independent of the AFM tip shape, size and geometry. The rms surface roughness of the S1 sample is ∼8.76 nm for a scan area of 2 μm × 2 μm. It can be noted that the bare clean sapphire substrate used in this study has an rms surface roughness of ∼0.6 nm for the same scan area. Further, the thickness of the MoSe2 film on sapphire was measured using line profiles across various pits on sample S1.
Figure 3(b) shows that the height of the film from the sapphire surface was found to be ∼40 nm, consistent with our growth-rate calibration. In the case of MoSe2 on Bi2Se3-coated sapphire, a longer worm-type morphology was also seen in the AFM image (figure 3(c)). The rms surface roughness of sample S2 was found to be ∼14.6 nm for an AFM scan area of 2 μm × 2 μm (figure 3(c)), slightly rougher than the S1 sample, likely due to the larger size of the worm-type structures of sample S2. These observations disclosed that the Bi2Se3 buffer layer promotes the growth of longer and wider worm-type MoSe2 morphology on the sapphire (0001) substrate. Figure 4(a) shows the XPS survey scan for samples S1 and S2, which confirms the presence of Mo, Bi and Se elements in the deposited samples; the Bi signal was only seen in sample S2 due to the worm-type structure of the MoSe2 thin film on Bi2Se3/sapphire (0001). Figures 4(b) and (c) show the core-level XPS spectra of the pristine MoSe2 film deposited on the sapphire substrate [sample S1]. The Mo 3d spectrum was deconvoluted into three major peaks at binding energies of 228.2, 231.4 and 228.9 eV; the first two correspond to the Mo4+ 3d5/2 and Mo4+ 3d3/2 spin-orbit coupled peaks originating from the 2H phase of MoSe2, and the remaining peak could be assigned to the Mo-Se bond (figure 4(b)). In figure 4(c), the Se 3d spectrum shows two peaks with binding energies of 53.8 and 54.7 eV corresponding to the divalent Se 3d5/2 and 3d3/2 states, respectively, consistent with the Se−2 valence state [32]. The Se/Mo ratio was obtained using tabulated sensitivity factors and turned out to be ∼1.94, close to the ideal ratio of 2 [31]. The core-level XPS spectra of the MoSe2/Bi2Se3 heterostructure [sample S2] are shown in figures 4(d)-(f). Figure 4(d) shows two prominent peaks at binding energies of 227.8 and 231.0 eV corresponding to the Mo4+ 3d5/2 and Mo4+ 3d3/2 spin-orbit coupled peaks, respectively. These peaks are slightly shifted towards lower binding energy (red shift) compared to pristine MoSe2, indicating the coupling interface between Bi2Se3 and MoSe2; the remaining peak at 226.6 eV could be indexed to the Mo-Se bond. The core-level spectra for the Bi 4f states in the binding energy range of 155-168 eV are shown in figure 4(e), with two dominant peaks located at 163.2 and 157.9 eV corresponding to the Bi 4f5/2 and 4f7/2 electronic states, respectively. The binding energy difference of 5.3 eV between the spin-orbit coupled peaks indicates the formation of the Bi2Se3 compound [31]. These two spin-orbit coupled peaks are shifted slightly to higher binding energy (blue shift) as compared to the elemental Bi 4f5/2 and 4f7/2 peaks at 161.9 and 156.6 eV, respectively [33-35]. These peaks are further deconvoluted into two more peaks at 164.2 and 159.0 eV corresponding to Bi+5 oxidation states of the Bi 4f state, and one peak at 162.4 eV corresponds to Bi+3. The oxidation states of Bi2Se3 appear likely due to the ex situ XPS measurements performed after deposition of the film [35-38]. The fitted core-level Se 3d spectrum shows two peaks with binding energies of 54.7 and 53.9 eV that could be attributed to Se 3d3/2 and Se 3d5/2, respectively, and assigned to the valence state of Se(−2) in the MoSe2 and Bi2Se3 compounds [32,35,36]. Further, we fabricated MSM PDs on samples S1 and S2, and the schematic of the devices is presented in figures 5(a) and (b), respectively. The spectral photoresponse was measured for both PD devices at a fixed bias voltage of 5 V in the wavelength range of 400-1200 nm, as presented in figure 5(c). A high photoresponse was observed in the wavelength region of 1000-1100 nm, revealing its application in NIR photodetection [15]. We performed voltage-dependent time-resolved photoresponse measurements at a fixed laser power of 125 mW in the NIR (1064 nm) region, as shown in figures 7(a) and (c) for devices S1 and S2, respectively. In device S1, only the MoSe2 material absorbs NIR light and generates electron-hole pairs, which are separated by the externally applied electric field and contribute to the photocurrent. On the other hand, in device S2, both materials absorb NIR light to produce photocurrent. It was observed that the photocurrent value increases with an increase in the externally applied bias voltage from 0.1 to 5 V at a fixed maximum laser power of 125 mW under NIR light illumination, and the S2 device shows NPC behavior. Similar behavior was also obtained for the multilayer graphene/InSe heterojunction PD device compared to the sole InSe PD [39]. In the case of a MoS2/GaN/Si PD device, Singh et al reported a change in the polarity of the current upon changing the wavelength of the illumination light from ultraviolet (positive) to NIR (negative photocurrent) [40]. For quantitative analysis, equations (1)-(4) have been used to calculate the performance parameters of a photodetector, namely the responsivity, specific detectivity, noise equivalent power (NEP) and external quantum efficiency (EQE), where I_l is the light current, I_dark is the dark current, P_in is the input optical power density, A is the active area of the device, e is the elementary charge, h is Planck's constant, c is the speed of light in vacuum, and λ is the wavelength of the incident light [41].
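For reference, the conventional definitions of these four figures of merit, written in terms of the quantities listed above, are as follows (the standard forms are assumed here):

R = \frac{I_l - I_{dark}}{P_{in}\,A}

D = \frac{R\sqrt{A}}{\sqrt{2\,e\,I_{dark}}}

\mathrm{NEP} = \frac{\sqrt{2\,e\,I_{dark}}}{R} = \frac{\sqrt{A}}{D}

\mathrm{EQE} = \frac{R\,h\,c}{e\,\lambda} \times 100\%

with R in A W−1, D in Jones, NEP in W Hz−1/2 and EQE dimensionless.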
The performance parameters R and D are plotted in figures 7(b) and (d) as a function of applied bias under illumination with 1064 nm light for devices S1 and S2, respectively. The responsivity and detectivity of device S1 were found to be ~1.39 A W−1 and 2.18 × 108 Jones at 5 V in PPC behavior, respectively. In the case of device S2, the responsivity and detectivity increased to ~5.7 A W−1 and 2.91 × 108 Jones at 5 V, respectively. The values of NEP and EQE calculated at a bias voltage of 5 V and a maximum laser power of 125 mW are 4.72 × 10−11 W Hz−1/2 and 162% for device S1, and 3.43 × 10−11 W Hz−1/2 and 664% for device S2, respectively. It is clearly seen that the photoresponse of both the S1 and S2 devices increased with increasing applied bias voltage at fixed laser power. When the external bias increases, the electric field across the device is enhanced accordingly; consequently, the charge collection efficiency at the electrodes increases because e-h pair recombination is minimized. The response (decay) times of devices S1 and S2 were estimated to be 2.02 (3.15) s and 1.12 (2.74) s, respectively. The nearly four-fold increase in the photoresponsivity of the S2 sample is likely related to the built-in potential at the interface of the heterojunction. We have compared the photoresponse characteristics of PDs in the NIR region, and our values are comparable to the reported works, except [17], as shown in table 1 [17,19,42-50]. In PD sample S1, consisting of a photodetector based solely on MoSe2, the PPC arises from the enhancement of the applied electric field with increasing voltage. When a voltage is applied across the MoSe2 material, an electric field is established. This electric field accelerates the photogenerated charge carriers (electrons and holes) towards the metal electrodes, reducing the transit time in the material. The accelerated charge carriers reach the electrodes without undergoing significant recombination, leading to an increase in conductivity. Thus, the observed PPC is attributed to the efficient drift of photogenerated charge carriers to the electrodes under the influence of the applied electric field. In contrast, sample S2, which utilizes a heterostructure of MoSe2 and Bi2Se3, exhibits NPC, where the photocurrent decreases with increasing applied voltage. The NPC observed in S2 can be attributed to interfacial defects at the MoSe2/Bi2Se3 junction or to defects induced by selenium vacancies in Bi2Se3 [51]. Despite efforts to fill selenium vacancies through post-selenization, the selenium vacancies in the top MoSe2 layer are filled while a significant population of vacancies remains in the bottom Bi2Se3 layer. Selenium vacancies in Bi2Se3 create localized defect states within the band gap, acting as trap sites for photoexcited electrons. These trapped electrons are unable to contribute to conductivity and may recombine with holes, thereby reducing the overall conductivity of the device. As the applied voltage increases, the electric field may intensify the trapping of photoexcited electrons at selenium vacancy sites, leading to a more pronounced decrease in conductivity and the observed NPC effect [51]. Singh et al also reported that a selenium-deficient Cu2Se-based thermoelectric material exhibited NPC behavior [52]. NPC has also been reported for a Bi2Te3-based topological insulator, in which the resistance of the topological surface states suddenly increases when the film is
illuminated [53]. As Bi2Se3 is a well-known thermoelectric and topological insulator material, the effect of its large Seebeck coefficient and topological surface states on the photoconductivity behavior cannot be ignored and requires further detailed theoretical and computational study [39,40,51-53].

Conclusion We have deposited large-area MoSe2 thin film and a MoSe2/Bi2Se3 heterojunction onto sapphire (0001) substrates using the magnetron sputtering technique followed by a post-selenization process. Raman and HR-XRD studies confirmed the formation of 2H-MoSe2 and rhombohedral Bi2Se3 thin films with distinct crystalline phases. A worm-type morphology of MoSe2 was obtained, and the XPS study disclosed a nearly stoichiometric MoSe2 film. MSM-based photodetectors were fabricated on these films and showed responsivities of ~1.39 and ~5.7 A/W for the MoSe2 and MoSe2/Bi2Se3-based devices under NIR illumination, respectively. The enhanced photoresponsivity of the MoSe2/Bi2Se3-based PD device is related to the built-in potential at the interface of the heterojunction. These results suggest that the MoSe2/Bi2Se3 heterojunction-based photodetector has potential for use in future optoelectronic applications for NIR photodetection.

Figure 3. (a) AFM morphology in tapping mode of MoSe2 on sapphire (0001) substrate, (b) height profiles along lines PQ, RS and UV shown in (a) for sample S1, (c) AFM image of MoSe2 on Bi2Se3/sapphire (0001) substrate.
Figure 4. (a) XPS survey scan for samples S1 and S2; (b), (c) core-level spectra of pristine MoSe2 on sapphire [sample S1]; (d)-(f) core-level spectra of the MoSe2/Bi2Se3 heterostructure [sample S2]; see the discussion in the text.
Figure 5. Schematic of MSM-based PD devices for (a) device S1, (b) device S2. (c) Spectral response of devices S1 and S2 at a bias voltage of 5 V under light illumination of 400-1200 nm.
Figure 7. External bias-dependent characteristics of the PD devices excited by NIR (1064 nm) light illumination for (a) S1 and (c) S2. The responsivity (left) and detectivity (right) evaluated under different applied bias voltages of 0.1 to 5 V for the PD devices: (b) S1 and (d) S2.
Table 1. Comparison of the responsivity of NIR photodetector devices fabricated using MoSe2, selenium-based compounds and MoS2.
6,377
2024-04-10T00:00:00.000
[ "Physics", "Materials Science", "Engineering" ]
Clothing Image Classification with DenseNet201 Network and Optimized Regularized Random Vector Functional Link ABSTRACT To improve the precision of clothing image classification, we propose a clothing image classification method based on the DenseNet201 network with transfer learning and an optimized regularized random vector functional link (RVFL). First, the weight parameters of DenseNet201 pre-trained on the ImageNet dataset are transferred to obtain an initial network, and the model parameters are then fine-tuned. The modified network is used to extract the clothing image features output by DenseNet201's global average pooling layer. Second, a regularization coefficient is introduced to control the model complexity of the RVFL and to address over-fitting. Then, the initial solution vectors of the aquila optimizer (AO) are produced by the marine predators algorithm (MPA), and the input weights, hidden layer biases and regularization coefficient of the regularized RVFL are optimized using the improved AO algorithm. Finally, we use the optimized RVFL to classify the extracted clothing image features. We use Accuracy, Macro-F1, Macro-R and Macro-P to assess the algorithm's performance and compare it with the ResNet50, ResNet101, DenseNet201 and InceptionV3 networks, as well as with different classifiers that use DenseNet201 as the feature extractor to obtain their input. The experimental results show that the proposed algorithm has excellent classification power and generalization ability. Introduction With the rapid development of the internet, the number of clothing images online has exploded. Whether these clothing images can be classified accurately and efficiently is closely tied to the interests of clothing e-commerce practitioners. Traditional clothing image classification is mainly done manually, which not only incurs high labor costs but also cannot meet accuracy requirements, because manual inspection is easily affected by many subjective and objective factors. It is therefore necessary to develop a strong garment classification algorithm to improve classification performance and reduce classification cost. Research on a high-performing garment classification algorithm must address two aspects: efficiently extracting the features of clothing images, and effectively classifying the extracted features. Feature extraction methods for clothing images can be divided into conventional methods and deep learning-based methods. Conventional clothing image feature extraction essentially extracts the color features or texture features of the clothing image. Color features describe the surface properties of clothing images or the corresponding garments; commonly used color feature extraction methods include color moments (Weng et al. 2013), color histograms (Liu, Wei, and López-Rubio 2020) and color correlograms (Moon 2007). The texture features of a clothing image are values calculated from the image that reflect the texture of the clothing, such as roughness, granularity, randomness, and regularity. Commonly used texture feature extraction methods are the histogram of oriented gradients (HOG) (Marek and Wojcikowski 2016), the gray-level co-occurrence matrix (GLCM) (Chandy, Johnson, and Selvan 2014), and the local binary pattern (LBP) (Zhao 2021), among others.
These traditional clothing image feature extraction methods place high demands on the clothing images and are easily affected by factors such as the viewing angle, the background, and deformation of the garment, so the extracted features are often unsatisfactory. With the rapid advancement of deep learning, researchers train convolutional neural networks on large datasets so that the networks can continuously learn the deep features of clothing images during training and accomplish both feature extraction and classification. At present, many scholars have applied convolutional neural networks to clothing image classification and have obtained good results. Tan et al (Tan et al. 2020) improved the Xception network and applied it to clothing image classification. They improved the network's nonlinearity and learning characteristics and introduced L2 regularization to strengthen the network's anti-interference capacity, thereby improving its ability to classify clothing images. Liu et al (Liu, Zhong, and Wang 2018) applied convolutional neural networks to classify images of women's fashion. Liu et al (Liu, Luo, and Dong 2019) put forward a hierarchical classification model based on a convolutional neural network (CNN); they implemented the CNN using the VGG network as the underlying framework and validated it on the Fashion-MNIST dataset. Wang et al (Di 2020) compared the performance of a fully connected neural network, MobileNet V2, a CNN and MobileNet V1 on the Fashion-MNIST clothing dataset and demonstrated the good performance of MobileNet V2. Kayed et al (Kayed, Anter, and Mohamed 2020) put forward a clothing image classification model based on the LeNet-5 architecture for the Fashion-MNIST dataset and also obtained good results. Yu et al (Yu et al. 2021) proposed an augmented capsule network combining image features and spatial structure features to address the inability of conventional neural networks to capture the spatial structure of clothing images. The network obtains the spatial structure features of clothing images by enhancing the capsule network, makes the extracted clothing features more robust through an attention mechanism and a deeper network structure, and reduces the computational load through parameter optimization. To classify clothing image features effectively, it is particularly important to select a suitable classifier. Using a neural network as a classifier is a good choice; however, its parameter selection has a great influence on classification performance, so appropriate parameters must be found to increase its classification ability, and swarm intelligence optimization algorithms can be considered for neural network parameter optimization. A swarm intelligence optimization algorithm is a method for obtaining the optimal solution of a problem, inspired by natural phenomena or mathematical theory. For instance, the gray wolf optimizer (GWO) (Panda and Das 2019) is inspired by the leadership hierarchy and hunting behavior of gray wolves in nature, the salp swarm algorithm (SSA) (Abualigah et al.
2019) imitates the swarming behavior of salps when foraging and navigating in the ocean, and the ant lion optimizer (ALO) (Mirjalili 2015) is based on the hunting mechanism of ant lions in nature. These swarm intelligence optimization algorithms are stochastic in principle and do not depend on gradient information, so they work well on nonlinear problems. The RVFL (Scardapane et al. 2015) is a randomized and highly efficient neural network. It generates the network's hidden layer biases and input weights randomly and obtains the output weights by a least squares calculation or pseudo-inverse, which effectively overcomes the drawbacks of traditional gradient-based learning algorithms. Different from the extreme learning machine proposed by Huang (Huang et al. 2012), the RVFL has direct connections between the output and the input nodes. This structure gives the input nodes a greater influence on the algorithm's behavior and generally yields better performance in prediction or classification. However, precisely because the RVFL randomly generates the network's hidden layer biases and input weights, its classification results are unstable. Some scholars have applied the combination of intelligent optimization algorithms and neural networks to clothing image classification. Hazarika et al (Hazarika and Deepak 2022a) created a random vector functional link variant named 1-norm RVFL (1N RVFL); a Newton technique is used to solve the exterior dual penalty problem of 1N RVFL and obtain the solution of the 1N RVFL optimization problem. Li et al utilized the dragonfly algorithm to optimize the hidden layer biases and input weights of an online sequential extreme learning machine; the validity of the model was verified by classifying the Fashion-MNIST dataset. Hazarika et al (Hazarika and Deepak 2022c) proposed a novel random vector functional link with an ε-insensitive Huber loss function (ε-HRVFL) for medical record classification problems. Zhou et al (Zhou et al. 2021) proposed a clothing image classification algorithm combining RVFL and a convolutional neural network. The algorithm first uses a parallel convolutional neural network (PCNN) to extract clothing image features, then obtains an optimized RVFL classifier by optimizing the RVFL's hidden layer biases and input weights with the grasshopper optimization algorithm (GOA), and classifies the extracted features. Malik et al (Malik et al. 2022) created an extended-feature RVFL (efRVFL), which analyzes the original feature space to generate an extended feature space and then trains on it. They propose an ensemble of extended-feature RVFLs: different efRVFL base models are trained in different feature spaces, so diverse and more accurate models can be generated. Hazarika et al (Hazarika and Deepak 2022b) proposed a nonlinear RVFL for estimating the daily suspended sediment load (SSL); the maximum-overlap discrete wavelet transform (MODWT) with boundary correction is applied to the model for SSL prediction. The RVFL neural network has a simple structure: its training is completed by calculating the output layer weights, so its training time is much shorter than that of neural networks that need gradient descent to update the network parameters, and the RVFL has high classification performance. However, because the hidden layer biases and input layer weights are generated randomly, the network can be unstable.
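To make the RVFL construction just described concrete, the following is a minimal sketch in Python/NumPy (purely for illustration; the experiments in this paper were carried out in MATLAB): hidden weights and biases are drawn at random, the hidden-node outputs are concatenated with the raw inputs (the direct links), and the output weights are obtained by a pseudo-inverse.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rvfl(X, Y, n_hidden, rng=np.random.default_rng(0)):
    """Basic RVFL: random input weights/biases, direct input-output links,
    output weights by least squares (pseudo-inverse).
    X: (N, n_features), Y: (N, n_classes) one-hot targets."""
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = sigmoid(X @ W + b)                                   # enhancement-node outputs
    D = np.hstack([H, X])                                    # direct links: concatenate inputs
    beta = np.linalg.pinv(D) @ Y                             # output weights
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    D = np.hstack([sigmoid(X @ W + b), X])
    return np.argmax(D @ beta, axis=1)                       # predicted class indices
```

Because W and b are drawn at random, repeated runs of this sketch give different accuracies, which is exactly the instability the optimization scheme below is designed to address.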
Inspired by the above literature, we propose a clothing image classification algorithm based on DenseNet201 and a regularized random vector functional link (RRVFL) improved by the MPA-AO algorithm. Unlike (Liu and Yang 2021) and related work, which optimize only the input layer weights and hidden layer biases, our method also optimizes the regularization factor. The main innovations of this algorithm are as follows: (1) We apply transfer learning to the DenseNet201 network, which is more efficient in feature propagation and feature utilization, fine-tune the model parameters, and then use its global average pooling layer to extract clothing image features. (2) We introduce a regularization factor into the RVFL to improve its generalization ability and avoid overfitting. We use the MPA algorithm to generate the initial search agents of the AO algorithm, and use the improved AO algorithm to optimize the hidden layer biases, input weights and regularization coefficient of the RRVFL. (3) The RRVFL improved by the MPA-AO algorithm is applied to classify the extracted clothing image features. This algorithm not only makes full use of the convolutional neural network's ability to extract clothing image features automatically, but also uses the optimized regularized RVFL to improve the accuracy of feature classification. DenseNet201 structure DenseNet201, as a deep convolutional neural network, alleviates vanishing gradients, improves the efficiency of feature propagation and feature utilization, and reduces the number of network parameters. The DenseNet201 structure directly connects all layers, so that the most complete transmission between layers can be achieved. The sub-modules of the DenseNet201 network are mainly the dense block and the transition layer, and the construction is displayed in Figure 1. According to the DenseNet201 structure, if the network has L layers, then there are L(L+1)/2 connections; each layer's input comes from the outputs of all preceding layers. In Figure 1, X_0 represents the input of the entire convolutional neural network: the input to H_1 is X_0, the input to H_2 is X_0 and X_1, and so on. Therefore, compared with other traditional convolutional neural networks such as ResNet, which rely on the features output by the last layer, DenseNet201 can fuse and utilize more low-level features, thereby improving the efficiency of feature propagation and feature utilization. Regularized random vector functional link As a type of single-hidden-layer neural network, the RVFL combines the hidden layer and the input layer to jointly determine the output layer. For a sample dataset with N inputs, the i-th sample's input is x_i = [x_{i1}, x_{i2}, ..., x_{in}]^T. An RVFL network with L hidden layer nodes can be represented by

o_i = \sum_{j=1}^{L} \beta_j\, g(w_j \cdot x_i + b_j) + \sum_{k=1}^{n} \beta_{L+k}\, x_{ik},   (1)

where w_j = [w_{j1}, w_{j2}, ..., w_{jn}]^T is the input weight, b_j represents the bias of the j-th hidden layer enhancement node, g(·) is the activation function, and \beta_j = [\beta_{j1}, \beta_{j2}, ..., \beta_{jm}]^T represents the output weight. w_j and b_j are commonly set at random. Equation (1) can be collected into matrix form as H\beta = O, where H stacks, for each sample, the L enhancement-node outputs together with the n input features (the direct links), \beta is the output weight matrix, and O is the output matrix. Compared with the conventional RVFL network, the regularized RVFL improves the network's generalization capacity and can effectively prevent overfitting of the model.
A common practice is to minimize the training error together with the norm of the output weights:

L_{RVFL} = \lVert E \rVert^2 + \tfrac{1}{C}\lVert \beta \rVert^2,

where the output error is E = O − Y and C is the regularization factor, which trades off the training error against the model complexity. To find the minimum of L_{RVFL}, the closed-form solution for \beta is obtained by setting the gradient of L_{RVFL} with respect to \beta to zero:

\beta = (H^T H + I/C)^{-1} H^T Y,

where I is an identity matrix of dimension L + n. Aquila optimizer The AO [24] is a population-based optimization algorithm inspired by the behavior of eagles capturing prey. The optimization process of the algorithm uses four methods. The first is expanded exploration, which searches the space through high-altitude flight and a vertical dive: the eagle first identifies the best hunting area, and the candidate solution X_1(t+1) produced for iteration t+1 is a function of the current best solution X_best(t), which reflects the approximate location of the prey, the mean position X_M(t) of the current solutions at the t-th iteration, and a factor (1 − t/T) that controls the expanded exploration through the number of iterations, where rand is a random number in [0,1], t and T are the current and maximum iteration counts, Dim is the dimension of the problem and N is the number of candidate solutions. The second step narrows the scope of exploration, exploring within a divergent search space through a contour flight with a short glide attack; the solution X_2(t+1) produced by this step depends on the Lévy flight function Levy(D), where D is the dimension space, and on a random solution X_R(t) drawn from the range [1, N] at the t-th iteration. The third step is expanded exploitation, exploring within a convergent search space through a low-speed descent attack; the solution X_3(t+1) depends on X_best(t), X_M(t), the exploitation tuning parameters α and δ fixed at 0.1, and the lower and upper bounds LB and UB of the given problem. The fourth step narrows the range, catching the prey by walking and grabbing; the solution X_4(t+1) depends on a quality function QF used to balance the search strategy, on G_1, which describes the various motions of the AO while tracking the prey during the exploration process, on G_2, a value decreasing from 2 to 0 that represents the flight slope of the AO while tracking the prey from the first position to the t-th position, and on the current solution X(t) at iteration t. Marine predators algorithm The MPA [25] is a novel metaheuristic optimization algorithm inspired by the predation strategies of predators in nature. The algorithm assumes that top predators have the greatest search skills. It maintains an elite matrix, formed by the top predators, and a prey matrix. The optimization process has three stages. The first stage is used for the global search of the solution space.
The second stage conducts a local search around the best position after it has been determined, and the third stage performs a local search around the current best solution position in the solution space. The algorithm can thus avoid falling into local optima as far as possible, so as to achieve better optimization performance. Clothing image classification network model Since the DenseNet201 network structure can improve the efficiency of feature propagation and utilization, this paper selects the DenseNet201 network as the feature extractor. First, we extract the weight parameters of DenseNet201 pre-trained on the ImageNet dataset for transfer learning, then train the initial model obtained after transfer learning and fine-tune its parameters. Then the features output by its global average pooling layer are extracted, which completes the feature extraction work for the clothing image dataset. The network structure of the DenseNet201 feature extraction backbone is shown in Figure 2. The RVFL's hidden layer biases and input weights are assigned randomly and do not need to be reset, while the output weights can be calculated analytically via an ordinary generalized inverse operation. So, when dealing with multi-class classification problems, this randomness greatly affects RVFL's performance, resulting in lower precision and hidden dangers such as falling into local optima or overfitting. Considering the above problems, we first introduce the regularization factor into the solution of the RVFL output weights, alleviating the instability of the matrix inverse involved in minimizing the error function, so as to control the model complexity of the RVFL and address overfitting. After that, we use the MPA optimization algorithm to supply a group of suitable initial search agents for the AO algorithm, reducing the influence of the initial population on the optimization and convergence behavior of the AO algorithm. Finally, we use the AO algorithm improved by the MPA algorithm to optimize the hidden layer biases, input weights and regularization coefficient of the regularized RVFL, improving the stability and accuracy of its classification; this yields the MARRVFL classifier model proposed in this paper. The DFEB-MARRVFL clothing image classification algorithm proposed in this paper is obtained by combining the DenseNet201 feature extraction backbone (DFEB) and the MARRVFL classifier. Figure 3 is a flowchart of the algorithm. As shown in Figure 3, DFEB-MARRVFL mainly consists of three parts. The first part is the DenseNet201 feature extraction backbone, which is mainly responsible for extracting the features of the clothing image datasets. We divide the dataset proportionally into a training set and a testing set, use the training set to train the DenseNet201 feature extractor, and then use the trained extractor to extract the test set's features, which completes the first step of feature extraction; a minimal sketch of this step is given below.
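The following sketch illustrates this feature-extraction step. It uses PyTorch/torchvision purely as an illustration (the experiments in this paper were carried out in MATLAB), and the image size and normalization constants are the standard ImageNet values, assumed here rather than taken from the paper; fine-tuning of the backbone is omitted.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pre-trained DenseNet201; we keep only the convolutional trunk and apply
# global average pooling ourselves, mirroring the GAP-layer features that
# serve as the classifier input in this method.
backbone = models.densenet201(pretrained=True).features.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path):
    """Return the 1920-dimensional global-average-pooled DenseNet201 feature vector."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = F.relu(backbone(x))                        # (1, 1920, 7, 7) feature maps
        feat = F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # global average pooling
    return feat.squeeze(0).numpy()
```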
The second part is the MPA-AO optimization algorithm. The task of this part is to optimize the hidden layer biases, input weights and regularization coefficient of the RRVFL, mitigating the instability of the RRVFL caused by randomly generated parameters and improving its classification performance. First, we use the MPA algorithm to supply a group of suitable initial search agents for the AO algorithm, so as to weaken the influence of randomly generated initial search agents on the optimization and convergence ability of the AO algorithm, and then use the AO algorithm's optimization mechanism to explore and exploit the positions of the individuals in the population. The hidden layer biases, input weights and regularization coefficient of the RRVFL are obtained by segmenting and rearranging each individual's position, and the classification error of the resulting RRVFL is taken as the individual's fitness. The best fitness is compared with the current optimum to decide whether to update the optimal solution. If the algorithm has not reached the maximum number of iterations, it continues to explore and exploit; once the maximum number of iterations is reached, it returns the position of the optimal individual. The third part is the RRVFL classifier, whose task is to classify the features extracted from the clothing image dataset. First, we segment and rearrange the position of the optimal individual returned by the MPA-AO optimization algorithm to obtain the RRVFL's hidden layer biases, input weights and regularization coefficient, use the extracted training set features to train the RRVFL, and calculate its output weights. Then, we use the RRVFL's classification function, the testing set features and the obtained output and input weights to compute the outputs for the test set features. Finally, we compare the output matrix obtained for the testing set features with its true label matrix; if an output does not match the label, it is counted as a classification error, and the number of correctly classified samples divided by the number of test set samples gives the testing set classification accuracy. Figure 4 shows how an individual's position in the population is converted into the RRVFL parameters. The position of an individual in the optimization algorithm is a row vector whose dimension is (number of hidden layer nodes × (number of input layer nodes + 1) + 1). In Figure 4, m is the number of RRVFL hidden layer nodes, n is the number of input layer nodes, W is the RRVFL's input weight, B is the RRVFL's hidden layer bias, C is the regularization factor of the RRVFL, and β is the RRVFL's output weight. As shown in Figure 4, the first m × n particles of the individual position are taken as the input weights of the RRVFL and rearranged into an m × n matrix; the next m particles are taken as the hidden layer biases and rearranged into an m × 1 column vector; and the last remaining particle is the RRVFL's regularization factor. Since the particles lie in the range [−1, 1], Abs(C × 10) is used to map the regularization factor to the (0, 10) interval. The RRVFL's output weight β can then be computed from the input weights, the regularization factor and the hidden layer biases obtained in this way.
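A minimal sketch of this decoding step, combined with the regularized closed-form solution for β described earlier, could look as follows (Python/NumPy for illustration only; the variable names and the choice of evaluation split are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_position(pos, m, n):
    """Split a position vector of length m*n + m + 1 into the RRVFL parameters,
    following the layout described for Figure 4."""
    W = pos[:m * n].reshape(m, n)            # input weights: m hidden nodes x n inputs
    B = pos[m * n:m * n + m].reshape(m, 1)   # hidden layer biases
    C = abs(pos[-1] * 10.0)                  # regularization factor mapped into (0, 10)
    return W, B, C

def train_rrvfl(X, Y, W, B, C):
    """Regularized RVFL output weights: beta = (D^T D + I/C)^(-1) D^T Y,
    where D concatenates the enhancement-node outputs and the direct links."""
    H = sigmoid(X @ W.T + B.T)               # (N, m) enhancement-node outputs
    D = np.hstack([H, X])                    # (N, m + n) with direct input links
    I = np.eye(D.shape[1])
    return np.linalg.solve(D.T @ D + I / C, D.T @ Y)

def fitness(pos, m, n, X_tr, Y_tr, X_ev, y_ev):
    """Classification error of the decoded RRVFL; used as the MPA-AO fitness."""
    W, B, C = decode_position(pos, m, n)
    beta = train_rrvfl(X_tr, Y_tr, W, B, C)
    D_ev = np.hstack([sigmoid(X_ev @ W.T + B.T), X_ev])
    pred = np.argmax(D_ev @ beta, axis=1)
    return np.mean(pred != y_ev)             # lower is better
```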
Experiment circumstances The experimental system in this paper is Windows 10, the CPU of the experimental equipment is an Intel(R) Xeon(R) Bronze 3106, the memory is 64 GB, and the GPU is an NVIDIA GeForce RTX 2080Ti. The software is MATLAB 2018b. Dataset selection This paper uses the ACWS dataset, which was first proposed in paper [22]. The pictures in the dataset are closely related to daily life. The dataset consists of 15 kinds of pictures such as Shorts, Top, Sweater, and Dress, and is used here to demonstrate the performance of the proposed model. However, because the dataset is large, using it in full would greatly increase the time and computation cost. This paper therefore selects 10 categories, namely Sweater, Shorts, Jumpsuit, Tee, Jacket, Skirt, Jeans, Dress, Blouse and Coat; 1000 images are selected for every class, so 10,000 clothing images are used as the experimental dataset to verify the algorithm's ability to classify clothing images. The dataset is split 7:3, that is, 7000 images are used as the training set and 3000 images as the testing set. Figure 5 presents examples of the selected categories of the ACWS dataset. To prove the validity and generalization of the model, we also use the Fashion-MNIST dataset, which has 10 classes of images, namely: pullover, dress, shirt, sandal, ankle boot, sneaker, bag, coat, trouser, and t-shirt. The training dataset includes 6000 samples per class and the testing dataset contains 1000 samples per class, so the training dataset has 60,000 samples and the testing dataset has 10,000 samples. Figure 6 shows some examples from the Fashion-MNIST dataset. Algorithm parameter settings To achieve better model performance, the selection of model parameters is particularly important. In this paper, ablation experiments are carried out for each parameter of the DFEB-MARRVFL algorithm, using the selected part of the ACWS dataset to study the classification performance of the algorithm under different parameter values and to select the most suitable parameters for subsequent experiments. The effect of the activation function of RVFL First, we study the influence of different RVFL activation functions on the performance of the algorithm. In this paper, five activation functions are selected for ablation experiments, namely the radbas, tribas, hardlim, sine and sigmoid functions. Ten runs were carried out under each activation function, and the average classification accuracy was taken for comparison. Table 1 shows the experimental results of the proposed model under each activation function; columns 1 to 10 represent the results of the 1st to 10th runs, respectively. Table 1 shows that when the activation function is the sigmoid function, the classification accuracy of the algorithm is very stable and its mean classification accuracy is the best. When the hardlim function is used, the classification accuracy fluctuates slightly and the average accuracy is slightly worse than with the sigmoid function. When the sine, tribas or radbas function is used, the algorithm fluctuates greatly, with classification accuracy ranging from roughly 10% to 80%, which is very unstable, and the average accuracy is significantly poorer than with the sigmoid and hardlim functions. Therefore, we select the sigmoid function for the RVFL in subsequent experiments. Effect of hidden layer nodes of RVFL on the model As the RVFL is a single-hidden-layer neural network, it is particularly important to choose the optimal number of hidden layer nodes to improve its classification performance.
Effect of hidden layer nodes of RVFL on the model Since RVFL is a single-hidden-layer neural network, choosing a suitable number of hidden layer nodes is particularly important for its classification performance. When too few nodes are selected, the classification performance may not meet the requirements; when too many nodes are selected, the amount of computation increases and so does the risk of overfitting. In this paper, 20 settings of the number of hidden layer nodes of RVFL are tested, covering the range [10, 200] in steps of 10; under each setting, 10 experiments were carried out and the average classification accuracy was computed to study the effect of the number of hidden layer nodes on the performance of the model. The experimental results are shown in Table 2; the columns labeled 1 to 10 represent the results of the 1st to 10th runs, respectively. Table 2 shows that when the number of hidden layer nodes is in the interval [10, 140], the classification ability of the model gradually increases with the number of nodes, meaning that in this interval adding hidden layer nodes effectively improves the classification ability of the algorithm. However, when the number of hidden layer nodes exceeds 140, an overfitting phenomenon appears: increasing the number of hidden layer nodes no longer improves the classification ability of the model and, on the contrary, decreases it. Therefore, we select 140 hidden layer nodes for the subsequent experiments. Influence of parameters of the MPA-AO optimization algorithm on the proposed algorithm The population size (number of search agents) and the maximum number of iterations of the MPA-AO algorithm likewise affect its optimization ability. To study the effect of these two parameters on the performance of the algorithm, a combined experiment over both parameters is conducted. Ten values are selected for the population size (5, 10, 15, 20, 25, 30, 35, 40, 45, 50) and ten values for the maximum number of iterations (10, 20, 30, 40, 50, 60, 70, 80, 90, 100), and every combination of population size and maximum number of iterations is tested. The results are shown in Table 3, and Figure 7 intuitively shows how the classification accuracy of the algorithm changes with the number of search agents and the number of iterations. The experimental results show that when the number of iterations is small, the classification accuracy of the algorithm is relatively poor because the optimization algorithm has not yet converged; with a larger number of iterations the optimization algorithm has converged, so the accuracy is correspondingly higher. The essential difference between a small and a large number of iterations is whether the algorithm can converge: with too few iterations the algorithm cannot converge, while once the algorithm has converged, continuing to iterate only wastes resources. When the population size is 15 and the number of iterations is 80, the classification accuracy of the algorithm reaches its optimum. Further increasing the number of search agents and the maximum number of iterations does not effectively improve the classification performance of the algorithm, but increases the amount of computation and consumes more computing resources. Therefore, this paper selects a population size of 15 and a maximum of 80 iterations for the MPA-AO optimization algorithm in the following experiments (a small loop sketch of this two-parameter grid is given below).
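As a sketch of how such a two-parameter ablation can be organized, the loop below evaluates every (population size, iteration budget) pair and records the mean accuracy over repeated runs. The function `run_mpa_ao_rrvfl` is a hypothetical placeholder for the optimizer-plus-classifier pipeline described above (its name and signature are assumptions, not part of the paper); here it is stubbed with a random score so that the scaffolding runs on its own.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

def run_mpa_ao_rrvfl(pop_size, max_iters):
    """Placeholder for one training/evaluation run of the MPA-AO-optimized RRVFL.
    It simply returns a fake accuracy; in practice it would return the testing-set
    accuracy obtained with the given population size and iteration budget."""
    return rng.uniform(0.7, 0.95)

pop_sizes = range(5, 51, 5)             # 5, 10, ..., 50
iteration_budgets = range(10, 101, 10)  # 10, 20, ..., 100
repeats = 10

results = {}
for pop, iters in itertools.product(pop_sizes, iteration_budgets):
    scores = [run_mpa_ao_rrvfl(pop, iters) for _ in range(repeats)]
    results[(pop, iters)] = float(np.mean(scores))

best = max(results, key=results.get)
print("best (population, iterations):", best, "mean accuracy:", round(results[best], 4))
```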
Experimental results and analysis To make the results more convincing, this paper uses four evaluation indicators, Macro-P, Macro-R, Accuracy, and Macro-F1, to assess the performance of the algorithm. Moreover, each algorithm in the subsequent experiments was run ten times and the average values were used for comparison. Table 4 shows the final parameter settings of the algorithm. To demonstrate the improvement in classification ability of the proposed model compared with traditional convolutional neural networks, we selected the ResNet50, ResNet101, DenseNet201, and InceptionV3 networks based on transfer learning as the comparison baselines. These networks are obtained by transferring the weight parameters of networks pre-trained on the ImageNet dataset and fine-tuning their parameters. We compare the evaluation indicators of each network model with those of the algorithm in this paper, as shown in Table 5, and give the corresponding confusion matrices, as shown in Figure 8. From the experimental data, the evaluation indicators of ResNet50, ResNet101, and InceptionV3 are lower than those of the DFEB-MARRVFL algorithm proposed in this paper, and even lower than those of the DenseNet201 network, which reflects that the traditional convolutional neural networks mentioned above rely only on the features output by the last layer of the network. The DenseNet201 network structure can integrate and utilize more low-level features, thereby improving the efficiency of feature propagation and feature utilization. The evaluation indicators of the DFEB-MARRVFL algorithm are 14%-15% higher than those of DenseNet201, which demonstrates the excellent classification performance of the MARRVFL classifier (for reference, a sketch of how the macro-averaged indicators can be computed is given at the end of this subsection). To further verify the classification performance of the proposed DFEB-MARRVFL model and the MARRVFL classifier, we use different classifiers (RVFL, RRVFL, GWORRVFL, ALORRVFL, SSARRVFL, MPARRVFL, AORRVFL, AARRVFL, SARRVFL, GARRVFL) to classify the features extracted by the DFEB structure and compare the evaluation indicators of each algorithm. The results are shown in Table 6. From this table, the performance of the original RVFL used as the classifier is the lowest, and the RRVFL classifier, which introduces a regularization mechanism into RVFL, shows a clear improvement over the original RVFL. However, since it does not resolve the issue of randomly generated hidden layer biases and input weights, its classification performance still falls short of the requirements. Among the RRVFL classifiers optimized by a single optimization algorithm, the GWO, ALO, SSA, and MPA algorithms are not as good as the AO algorithm at improving the parameters of RRVFL. Subsequently, we use the GWO, ALO, SSA, and MPA algorithms to supply the initial search agents for the AO algorithm and apply the resulting improved AO algorithm to optimize the parameters of RRVFL, obtaining the corresponding GARRVFL, AARRVFL, SARRVFL, and MARRVFL classifiers. The experimental results show that the MARRVFL classifier performs the best, which fully demonstrates the excellent classification ability of the MARRVFL classifier proposed in this paper.
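For completeness, this is one conventional way to compute the macro-averaged indicators used above from per-sample predictions. It is a generic sketch based on scikit-learn's `accuracy_score` and `precision_recall_fscore_support` with `average='macro'`, not the authors' own evaluation code, and the toy labels below are invented.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def macro_report(y_true, y_pred):
    """Return Accuracy, Macro-P, Macro-R, and Macro-F1 for a set of predictions."""
    acc = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    return {"Accuracy": acc, "Macro-P": p, "Macro-R": r, "Macro-F1": f1}

# Toy example with 10 classes and 3000 "test" samples
rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=3000)
y_pred = np.where(rng.random(3000) < 0.8, y_true, rng.integers(0, 10, size=3000))
print(macro_report(y_true, y_pred))
```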
Analysis of stability and parameter optimization convergence To verify the stability of the proposed algorithm, our model is compared with the DFEB-RVFL, DFEB-RRVFL, DFEB-GWORRVFL, DFEB-ALORRVFL, DFEB-SSARRVFL, DFEB-MPARRVFL, DFEB-AORRVFL, DFEB-AARRVFL, DFEB-SARRVFL, and DFEB-GARRVFL algorithms. Each algorithm is run ten times, and a box plot is drawn from the experimental results, as shown in Figure 9. The red line represents the median of the data; the higher the red line in the graph, the higher the classification accuracy of that algorithm. The extent of the blue box shows the scatter of the data; the smaller the distance between the quantiles, the more concentrated the obtained classification results and the more stable the classification ability of the algorithm. According to the experimental results, the stability of the DFEB-RVFL and DFEB-RRVFL algorithms, which are not improved by any optimization algorithm, is the worst, because the problem of randomly generated hidden layer biases and input weights has not been solved. The stability of the other algorithms, which use optimization algorithms to optimize the parameters of RRVFL, is improved compared with the first two. The DFEB-MARRVFL proposed by us has the smallest box and the best classification accuracy, which demonstrates its good stability and classification performance. To evaluate the convergence speed and convergence quality of the algorithm during parameter optimization, the proposed algorithm is compared with the DFEB-GWORRVFL, DFEB-ALORRVFL, DFEB-SSARRVFL, DFEB-MPARRVFL, DFEB-AORRVFL, DFEB-AARRVFL, DFEB-SARRVFL, and DFEB-GARRVFL algorithms, and the convergence curves of each algorithm are drawn from the experimental results, as shown in Figure 10. It can be seen from Figure 10 that DFEB-ALORRVFL converges quickly when optimizing the parameters, but its convergence quality is very poor, mainly because it falls into a local optimum. Among the algorithms that use a single optimization algorithm, the DFEB-AORRVFL algorithm has the best convergence, followed by DFEB-MPARRVFL. The DFEB-MARRVFL proposed by us has the best convergence among the nine algorithms, which indicates that the MPA-AO algorithm can better optimize the input weights, hidden layer biases, and regularization coefficient of RRVFL. Algorithm effectiveness and generalization analysis In the experiments in this section, we compare the proposed algorithm with the GLCM-RVFL, LBP-HOG-SVM, and InceptionV3-SRC algorithms, and we also evaluate the proposed algorithm on the Fashion-MNIST dataset against the PCNN-GOARVFL algorithm proposed by Zhou [21], to verify the effectiveness and generalization ability of our algorithm. Tables 7 and 8 show the results of these experiments. It can be seen from Table 7 that the evaluation indicators of the algorithms using conventional feature extraction approaches such as HOG, LBP, and GLCM cannot meet the requirements, mainly because these feature extraction methods can only extract low-level features of clothing images, which results in the poor classification ability of the model. In contrast, the InceptionV3-SRC method, which uses the InceptionV3 convolutional neural network to extract features, performs better than the first two, but it is still 12-13 percentage points worse than the DFEB-MARRVFL model proposed by us.
Table 8 shows the evaluation indicators of the algorithm proposed in this paper and of the PCNN-GOARVFL algorithm on the Fashion-MNIST dataset. It can be seen from Table 8 that the evaluation indicators of the proposed algorithm are 3-5 percentage points better than those of the PCNN-GOARVFL algorithm, indicating that it can effectively classify the Fashion-MNIST dataset and demonstrating the good generalization ability of the algorithm. Conclusion This paper proposed a fashion image classification algorithm based on the DenseNet201 network and an improved regularized RVFL, from which the following conclusions can be drawn: (1) The DenseNet201 feature extraction backbone based on transfer learning can better extract the features of fashion images and improve the efficiency of feature propagation and utilization, thereby improving the classification ability of the algorithm for fashion images. (2) Introducing a regularization mechanism into RVFL can improve the generalization capacity of the algorithm and the classification accuracy of RVFL. Compared with other optimization algorithms, using the MPA algorithm to provide the initial search agents for the AO algorithm allows the RRVFL classifier optimized by the improved AO algorithm to attain better convergence in the process of optimizing the parameters, improving the performance of the RRVFL classifier. The stability and classification performance of the algorithm are improved, and the problem of poor and unstable classification caused by RVFL randomly generating hidden layer biases and input weights is resolved. (3) The DFEB-MARRVFL model proposed in this paper can effectively improve the accuracy of clothing image classification. It has excellent generalization capability, and its performance is better than that of the other fashion image classification algorithms considered. Although the classification accuracy of the DenseNet201 neural network is high, its huge number of parameters makes the delay in practical applications impossible to ignore. Next, we will look for and build a better neural network, so as to obtain better classification performance while reducing the number of model parameters. Highlights (1) We extract the weight parameters of the DenseNet201 network pre-trained on the ImageNet dataset for transfer learning to obtain the initial network model and fine-tune its parameters. The fine-tuned network model is then used as a feature extractor to extract the clothing image features output by the global average pooling layer of the DenseNet201 network. (2) Aiming at the problems of low classification performance and instability of RVFL, we propose to introduce a regularization coefficient into the traditional RVFL and to solve the ill-posedness of the inverse problem by constraining the minimization of the empirical error function, which improves the generalization ability of RVFL and avoids overfitting. The MPA optimization algorithm is then used to provide a suitable initial population for the AO algorithm, and the optimized AO algorithm is used to optimize the input weights, hidden layer biases, and regularization coefficient of RRVFL, thus solving the problems of low classification performance and instability caused by RVFL randomly generating input weights and hidden layer biases. (3) The RRVFL optimized by the MPA-AO algorithm is used to classify the extracted clothing image features. Disclosure statement No potential conflict of interest was reported by the authors.
Funding The work was supported by the National Key
Friction hysteretic behavior of supported atomically thin nanofilms Hysteretic friction behavior has been observed on various 2D nanofilms. However, no unanimous conclusion has yet been drawn as to the exact mechanism or the relative contribution of each mechanism to the observed behavior. Here we report on the hysteretic friction behavior of supported atomically thin nanofilms studied using atomic force microscopy (AFM) experiments and molecular dynamics (MD) simulations. Load dependent friction measurements were conducted on unheated and heated samples of graphene, h-BN, and MoS2 supported by silica substrates. Two diverging friction trends are reported: the unheated samples showed higher friction during unloading than during loading, and the heated samples showed a reversed hysteresis. Further, the friction force increased sub-linearly with normal force for heated samples, compared with unheated samples. Tapping mode AFM suggested that the interaction strength with the substrate was increased by heating. Roughened substrates in the MD simulations that mimicked strong/weak interaction forces reproduced the experimental observations and revealed that the evolution of the real contact area under different interface interaction conditions caused the diverging behaviors. Surface roughness and interaction strength were found to be the key parameters controlling the out-of-plane deformation of atomically thin nanofilms. INTRODUCTION Owing to their two-dimensional (2D) planar structure and strong in-plane covalent bonding, 2D nanofilms such as graphene, h-BN, and MoS2 exhibit ultra-high mechanical strength and intrinsically low interfacial friction that can, in some cases, reach vanishingly low/superlubric friction coefficients [1][2][3]. Thus, 2D nanofilm-based lubricants are poised to make a significant impact, both as solid lubricants in their own right and as boundary friction modifiers in oil-based lubricants [4][5][6]. Focusing on their application as novel solid lubricants, the intrinsic attraction or adhesive properties of 2D nanofilms to solids have been shown to be a significant factor in determining their lubricating properties 7,8. More specifically, the substrate-nanofilm interaction endows these nanofilms with various unique frictional characteristics that macroscopic films do not possess, such as thickness-dependent friction 7, adhesion-dependent negative friction coefficient 9, non-uniform interface-interaction-tuned friction 10, and load-dependent friction hysteresis 11. While these observations have been made, a unifying lubrication mechanism, or some method of determining the relative contributions of the various friction-reducing mechanisms proposed for 2D nanofilms, has not yet been discovered. Major progress in understanding the lubrication mechanisms of 2D nanofilms has been achieved using atomic force microscopy (AFM) 12. In particular, the variation of friction with applied load in AFM experiments has shown non-conventional behavior 8,9,11,13 compared with the trends expected from contact mechanics theory 14. In such cases, 2D nanofilms often exhibit linear increases in friction with applied load 8,11, as well as friction hysteresis 8,9,11,13,15, where the applied normal force is increased and then decreased and the friction force is not the same for a given load during the two segments of the experiment.
Such results indicate that many experimental conditions, such as environment, sliding history, surface preparation, etc., may influence the results obtained from experiments, and thus the lubrication theories developed from them. Currently, two mechanisms have been proposed to explain the observed hysteresis in load dependent friction measurements: the puckering effect 7,11,16; and surface contamination, such as capillary formation from ambient humidity, temperature, and chemical contamination 8,9,13,15,17. Friction hysteresis was first reported for graphene on copper by ref. 11. It was suggested that the relatively weak adhesion between graphene and the copper foil substrate, compared with the stronger adhesive forces between the tip and the graphene, caused the graphene to pucker and inhibited the puckered graphene from fully relaxing during unloading. This suggested that a hysteresis in the contact area during a loading-unloading cycle also results in the hysteresis in friction, which was supported by the work of ref. 9. This proposed mechanism was well-received, as it is based on one of the lubrication mechanisms proposed for layer dependent friction on 2D nanofilms 7. Deng et al. found that when the tip slid against the graphite surface, the topmost graphene layer could even delaminate from the graphite due to a weaker adhesion interface, which resulted in a significant increase in friction as the applied normal load was decreased 9. However, Ye et al. found that the puckering of the graphene film was less correlated with the friction hysteresis, but rather that the shape the water molecules formed between the tip and the graphene dominated the hysteretic friction behavior 13. The role of water was confirmed by Gong et al.'s experiments and simulations, where graphene was found to exhibit higher friction hysteresis in high humidity conditions, and the hysteresis was not observed under dry conditions 8. However, Zhang et al. found that changing the humidity had limited influence on hysteresis, while the environmental contaminants between the tip and the graphene surface tuned the graphene/tip adhesion force, thus leading to different hysteretic friction behaviors 17. Additionally, according to the recent paper of Gong et al., after the surface contamination and water molecules were removed from the graphene sample using ultra-high vacuum (UHV) AFM, the friction hysteresis was still observed until high temperature annealing of the sample was conducted 15. As only a small amount of water/contamination may be present on freshly prepared surfaces that were subjected to low temperature heating and UHV conditions, friction hysteresis may also be an intrinsic property of the nanofilm covered surface itself. Although several hypotheses have been proposed regarding the origin of friction hysteresis for supported 2D nanofilms, no unanimous conclusion has yet been drawn as to the exact mechanism or the relative contribution of each mechanism to the observed behavior 11,13,15,17. Herein, we re-examine the friction hysteretic behavior of three different material combinations of supported 2D nanofilms prepared through mechanical exfoliation, including graphene/Silica, MoS2/Silica, and h-BN/Silica. In each case, we prepare the samples with or without a heating step during the mechanical exfoliation process, which resulted in an increase of the 2D nanofilm/substrate interfacial adhesion on the heated samples.
Load dependent friction measurements revealed that the unheated and heated samples had two distinct friction hysteretic behaviors, which were correlated with the interface interaction between the 2D nanofilm and the substrate. MD simulations were then performed on 2D nanofilm/substrate systems with different interface interactions, confirming the observations from the experiments and also providing atomistic mechanisms of friction reduction for these materials. Characterization of the interfacial interaction between graphene/substrate To study the hysteretic behavior of 2D nanofilms, two different graphene/Silica samples were prepared using the mechanical exfoliation method, as shown in Fig. 1a 18. In the first set of samples, the graphene was exfoliated onto the silica substrate without any heat treatment in the ambient environment. In the second set of samples, the graphene/Silica/scotch tape was heated to ~100 °C for 10 min before exfoliation and removal of the scotch tape 19. No evidence was found that water molecules were intercalated between the graphene and the substrate or between the graphene layers in either sample 20. Additionally, bubbling from intercalated water between the graphene and the substrate was not observed in either sample despite the ambient humidity of the laboratory environment. The scanning areas and the corresponding friction force images of the surfaces of the unheated and heated graphene samples are shown in Supplementary Fig. 1, where the surfaces of the supported graphene were, at first glance, unaffected by heating the sample: both samples appear contaminant free, and similar decreases in the friction forces were observed between the substrate and graphene, as well as when the number of layers of graphene covering the substrate was increased. Previous work 17,21,22 has suggested that periodic stripes having a spacing of 4.3 ± 0.2 nm should be observed when environmental adsorbates are present. Supplementary Fig. 2 shows that such a structure is not present on the graphene sample, suggesting that the influence of environmental adsorbates can be neglected here. AFM tapping mode imaging was employed to determine the interfacial adhesion of the graphene/Silica samples. The phase signal variation over the graphene samples was acquired, providing a map of the effective stiffness of the sample surface and serving as an indicator of the contact state of the embedded interface [23][24][25][26]. The phase variation is illustrated in Fig. 2a, where a lower phase value indicates a lower contact stiffness and thus a weaker interface strength between the graphene and the substrate [27][28][29]. Figure 2b-e shows phase images acquired on several regions of different samples, having varying graphene thickness above the substrate. In each image, the silica substrate was also imaged as a reference point for the phase value. Using the value of the phase for the silica substrate to normalize the results, which was in fact very close to the same value in both samples, the variation in phase with the number of graphene layers covering the substrate could be examined. Figure 2f shows the phase and stiffness differences between the silica substrate and each of the different layers of graphene identified. There was a significantly larger decrease to more negative phase values for the unheated sample compared with the heated sample, suggesting a sharp decrease in the graphene/Silica adhesive interaction 19,30.
Additionally, there was a layer-dependent variation of the phase difference for both heated and unheated samples: a monotonic decrease to more negative phase values was observed as the number of layers increased, suggesting that the adhesive interaction of the graphene with the substrate decreased as more graphene layers were added 31,32. Hysteretic friction behavior of graphene Load dependent friction measurements were then conducted on the unheated and heated graphene/Silica samples to investigate the friction hysteretic behavior. In these measurements, the applied normal force was increased from ~0 nN applied load to a maximum value and then decreased to zero. Images of the surface were acquired at a constant normal force while simultaneously acquiring the lateral force in the forward and reverse scan directions. Before discussing the results on graphene, load dependent friction measurements were conducted on the silica substrates for reference. Supplementary Fig. 3a shows that the friction forces measured on the silica substrate were not strongly impacted by heating the sample. While a direct comparison of the friction forces measured on the silica substrate is difficult to make, owing to possible changes in the tip size/chemistry and the alignment of the laser/sensor between samples, neither sample exhibited significant friction hysteresis, and the slopes of the friction force versus normal force curves were very similar between both samples. Figure 3a, b shows the load dependent friction measurements acquired on the graphene covered regions of the unheated and heated samples, respectively. The friction forces measured on the graphene covered areas were far lower than those measured on the silica substrate (Supplementary Fig. 3). In both samples, the friction forces were observed to decrease with the number of layers of graphene, as expected 7. However, the layer-dependent friction behaviors of the unheated and heated graphene samples were different. For the unheated sample, since the interface between graphene and the substrate was weak, the friction showed an obvious layer-dependence, which is often observed on 2D nanofilms. For the heated sample, due to the improvement of the interface strength, the layer-dependence was greatly suppressed, and the friction on the bilayer was almost the same as that on the trilayer, especially below an applied normal force of 30 nN. Previous work showed a similar phenomenon, where the layer-dependent friction strengthening disappeared on heated graphene because the heating process strengthened the interface between graphene and the underlying substrate 15. Therefore, besides the phase difference in Fig. 2, the analysis of the layer-dependent friction behaviors further convinced us that the heating process did increase the interface strength between graphene and the substrate 19. Comparing the friction on like coverages of graphene shows that the friction forces were much higher on the unheated samples than on the heated samples. The variations in friction force versus normal force during loading were nonlinear for both samples, but their curvatures were different.
For the unheated sample, the slope of the variation in friction force versus normal force, i.e., the friction coefficient, increased with increasing normal force, while for the heated sample the friction coefficient decreased with increasing normal force. The load-dependent friction coefficient was most evident on the monolayer, which was most sensitive to the interface interaction 7. A second difference between the heated and unheated samples is the friction hysteretic behavior. In Fig. 3a, we see that the unheated sample demonstrated friction forces that were higher during unloading than during loading, regardless of the layer number. This is in contrast to the friction forces shown in Fig. 3b for the heated sample, where the friction forces were lower during unloading than during loading. Additionally, the amount of hysteresis depended on the coverage of graphene: a greater amount of hysteresis was observed for fewer (e.g., one) layers on the unheated sample than for a higher number (e.g., three) of layers. Therefore, the diverging hysteretic behavior suggests that the lubrication mechanism is not the same for the two samples, resulting from the different interface interaction between graphene and the substrate. It is noted that the same measurements were conducted for smaller ranges in normal force (Supplementary Fig. 4), confirming that the hysteresis was not a result of tip wear. Hysteretic friction behavior for other 2D nanofilms To explore the universality of the diverging hysteretic behavior for other supported 2D nanofilms, the same heating treatments were performed for exfoliated MoS2 and h-BN samples. Figure 4 shows the topographic images and friction images of the four samples, where the monolayer regions can be readily distinguished. The load dependence of friction on the monolayer regions of the four samples is shown in Fig. 4c, f, i, l. The friction behavior of the MoS2 and h-BN samples was similar to that of the graphene samples. First, the friction on the unheated samples was always larger than that on the heated samples. Second, the same nonlinear variation of the friction force and the normal-force-dependent friction coefficient were observed on the MoS2 and h-BN samples, where the unheated samples always showed a higher friction coefficient under higher normal force, while the friction coefficient of the heated samples exhibited the reverse trend. Third, the same diverging hysteretic behavior for heated versus unheated samples was observed for each nanofilm material. For the unheated sample with relatively weak interface interaction, the friction forces were higher during unloading than during loading, and for the heated sample with stronger interface interaction, the hysteretic behavior was reversed. It is noted that the overall friction on monolayer MoS2 is much larger than that on monolayer graphene, and the overall friction on monolayer h-BN is slightly smaller than that on monolayer graphene, which agrees well with previously published friction results on these materials 33. Thus, we confirmed that the diverging hysteretic behavior of supported 2D nanofilms is a universal property of 2D materials, which is modulated by the interface interaction. MD simulations of hysteretic friction To further understand the underlying mechanism determining the normal-force-dependent friction coefficient and the diverging hysteretic behavior observed on heated and unheated 2D materials, MD simulations of the friction occurring between a silicon-supported graphene layer and a silicon tip were performed. As shown in Fig.
5a, two simulations were conducted, both having a silicon substrate with an average surface roughness of 0.3 nm over a 20 × 20 nm² surface. The two simulations had different interaction strengths, one with a low interaction strength and another with a higher interaction strength, mimicking the change in adhesive strength in the experiments that resulted from the two heat treatments. Unsurprisingly, the graphene layer conformed better to the substrate with the higher interaction strength than to the substrate with the lower interaction strength. While the apparent change in surface roughness is visible in Fig. 5a, AFM measurements of the surface topography in Fig. 1a of the unheated (weak interaction strength) and heated (strong interaction strength) samples did not show a conclusive change in surface roughness, owing to the error associated with characterizing surface roughness (the average roughness, Ra, of the heated and unheated monolayer graphene samples was 0.1211 nm and 0.1208 nm, respectively): e.g., a change of less than 1% in average surface roughness over the scan area is difficult to measure with certainty. Despite the absence of a measurable change, the average surface roughness measured over the 5 × 5 μm² scan frame was comparable to that used in the MD simulations. Figure 5b shows that the simulation with the high interaction energy showed the same friction hysteresis as the heated sample from the experiments, whereas the simulation with the low interaction energy showed the same friction hysteresis as the unheated sample from the experiments. The consistency between the simulation, the experimental results, and the literature on the adhesive properties of heated/unheated samples suggests that examination of the sliding interface may provide further understanding of how the structure of the contact is impacted by the surface roughness, the interaction energy, and the graphene overlayer. We also note that the curvature of the friction versus normal force in Fig. 5b for the high/low interaction energy substrates follows the same trend as the experimental measurements in Figs. 3 and 4. However, with the limited number of loads examined in the simulations, this curvature change is slightly less obvious than in the experimental results, which had significantly more data points. DISCUSSION The proposed mechanisms for layer dependent friction on 2D materials, as well as for the observed friction hysteresis, include the pucker effect 7, enhancement of the interaction between tip-sample atoms (quality of the contact) 16, water meniscus 13, deformation of confined liquid layers, and electron-phonon coupling 34. In the present case, we observe two different friction hysteretic behaviors arising from a change in substrate interaction. As we have not examined velocity dependent friction in the experiments, and the MD simulations did not include water or any other medium on the surface, we focus on understanding the current friction results within the first two proposed mechanisms. In the pucker effect, an increase in contact area between the tip and the 2D material that is dependent on sliding history and load was previously reported 11. Figure 5d shows the number of atoms in contact between the tip and the graphene film in the load dependent friction simulations shown in Fig. 5b. Here, the number of contacting atoms follows the trend in hysteresis: in the low interaction substrate, the number of atoms in contact during unloading is less for the same normal force value than was observed during loading.
The opposite is true for the higher interaction force. However, the low interaction case shows a dramatic decrease in the number of contacting atoms during unloading, compared with the smaller increase in contacting atoms during the unloading simulations in the high interaction substrate case. A closer examination of the contact state was performed to further understand the link between contact size, quality, and friction. We characterized the in-plane stresses and strains created in the graphene layer as the tip slid across the substrate. Figure 5c shows that a larger area of highly strained atoms is observed at the same load/sliding history for the low graphene-substrate interaction than in the high graphene-substrate interaction simulation. This is a result of more gapped regions between the graphene and the rough substrate with low graphene-substrate interaction, as the graphene was less adhered to the surface variations of the underlying substrate. The higher strain/less adherent graphene resulted in a higher effective contact area between the sliding tip and the substrate in the low graphene-substrate interaction simulations, and thus resulted in the observed higher friction. In addition, with the lower graphene-substrate interaction, the effective contact is smaller during loading than during unloading, resulting in lower friction during loading than during unloading. However, the opposite trend was observed for the high graphene-substrate interaction. To better interpret the impact of the amount of strain in the graphene nanofilm, a histogram of the strains in the contacting area between the tip and the graphene was created, shown in Supplementary Fig. 5. This histogram shows a change in the average/center value of the strain as well as a significant change in the width of the distribution of the strain values for the two surface interactions. At 10 nN applied load, a larger number of atoms with high compressive strain (more atoms resisting the motion of sliding) was observed during unloading than during loading for the low interaction case, which also contributed to the higher friction forces observed during unloading compared with loading. However, for the high interaction case, a different trend was observed: a larger number of atoms in the contact experienced tensile strain during unloading. Meanwhile, the strain becomes more narrowly distributed around 0% during unloading for the high interaction case. In several previous works, similar simulations have been performed in which no friction hysteresis was observed in the MD simulations 8,13,35,36. In the simulations contained within this study, the presence of hysteretic friction is a result of the minor roughness of the substrate, compared to the previous studies, which were constructed only with atomic-scale roughness. This roughness allows small portions of the graphene film to be detached and adhere less to the substrate, particularly in the valleys of the substrate roughness that the graphene smooths over. It is in these regions that the interaction between the graphene and the tip can be sufficiently higher than between the graphene and the substrate that a pucker can begin to form. Thus, a small amount of roughness is required for the formation of a pucker, or an increase in the contact area between the tip and the substrate. The interaction strength between the graphene and the substrate then controls how much this layer can lift off the substrate during sliding, as well as how well it conforms to the surface roughness of the substrate.
The observed variation in the number of atoms in contact between the tip and sample/the contact area in Fig. 5d directly follows the hysteretic behavior observed in the experiments, providing a simulation that replicates the observed data contained within this manuscript, as well as that suggested in ref. 11. It is noted that the tip-sample interaction can be affected by the tip material, surface contaminants, and possible intercalated liquid molecules between the tip and the sample 17,20,37,38. However, we used tips of the same kind in all friction experiments, and the ambient conditions were barely changed. Thus, the influence of the tip-sample interaction on the frictional hysteresis behavior was not specifically investigated here, and the tip-sample interaction was likewise fixed in all MD simulations. Finally, the change in strain in the graphene layer suggests that the contact quality, as suggested in ref. 16, is indeed important to the observed friction and hysteretic behavior of 2D materials. The formation of the pucker and the subsequent alignment of graphene-tip atoms to favorable positions during sliding allow for the build-up of strain in the 2D material as the tip slides. However, the surface roughness and the interaction strength between the tip and substrate are two critical parameters that control the amount of strain that can be built up in the 2D material, which has been missing from the literature on the friction behavior of 2D materials and from the proposed friction mechanisms for 2D materials. Thus, the combination of out-of-plane stiffness, the adhesive interaction between the graphene/2D material and the substrate, and the alignment of the tip and 2D material atoms is required for interpreting the friction mechanism of 2D materials. In summary, the friction hysteretic behaviors of supported atomically thin nanofilms were studied using experiments and MD simulations. Load dependent friction measurements were conducted on the unheated and heated 2D nanofilm/Silica samples. Two diverging friction hysteretic behaviors were found, where the unheated sample demonstrated friction forces that were higher during unloading than during loading, while the hysteresis of the heated sample was reversed. Meanwhile, two distinct evolutions of the normal-force-dependent friction coefficient were found, where the unheated/heated sample had an increasing/decreasing friction coefficient during loading. The phase images of the two different samples obtained by AFM tapping mode indicated that the heating process during the mechanical exfoliation preparation strengthened the interface interaction between the 2D material and the substrate, which can potentially affect the frictional behaviors. MD simulations were performed on the 2D nanofilm attached to a rough substrate with weak and strong interface interaction, where the normal-force-dependent friction coefficient and the diverging friction hysteretic behaviors were reproduced. The evolution of the real contact area under different interface interaction conditions is responsible for these unique behaviors. The increased interaction strength between the graphene and the substrate that occurred through heating resulted in a gradual decrease in contact area between the tip and sample during sliding, compared with the unheated sample, where the contact area increased during sliding.
Further, substrate surface roughness beyond atomic-scale roughness is necessary for the formation of a pucker and for the hysteretic behavior observed in the load dependent friction measurements. This increased roughness allowed the flexible graphene/2D lubricant to lift off and change its contact configuration with the sliding AFM tip. The essential role of the interface interaction in the 2D nanofilm/substrate system revealed in this paper provides a knob for tuning the friction behaviors of supported 2D materials. Sample preparation Graphene, MoS2, and h-BN samples were prepared through the mechanical exfoliation method 1,18,19. Silicon wafers with a 300 nm thermally grown oxide (Silicon Valley Microelectronics, Inc.) were used as the substrate. Cleaved pieces were cleaned by ultrasonication in acetone and subsequently in ethanol. Graphite, MoS2, and h-BN flakes (2D Semiconductors, Inc.) were pressed against scotch tape repeatedly. For the unheated sample, the 2D nanofilm was directly transferred onto the silica substrate. For the heated sample, instead of immediately removing the tape to complete the exfoliation, the substrate with the nanofilm-bearing tape attached was heated for 10 min at ~100 °C in air on a hot plate, with the temperature of the hot plate surface monitored using an infrared thermometer. The effect of chemical reactions during the heating process was not considered here, for the following reasons. First, the samples were heated at only 100 °C; according to previous research, these three 2D materials have great chemical stability at temperatures below ~500 °C (graphene), ~325 °C (MoS2), and ~840 °C (h-BN) [39][40][41]. Second, it takes days to weeks for these three 2D materials to react with water or oxygen 42, but in our experiments the samples were only heated for 10 min. Third, unlike a directly exposed 2D material, the mechanical exfoliation was performed after the heating process, which means the 2D nanofilm was protected by the thick island on top of it during the heating process 19. Sample measurements An MFP3D AFM (Asylum Research) was employed to perform the friction, topography, adhesion, and phase measurements in ambient conditions (20-25 °C, relative humidity 10%). In contact mode friction measurements, silicon tips (Nanosensors PPP-CONT) were used, and the normal and lateral force constants were calibrated by the Sader method 43 and the diamagnetic lateral force calibration technique 44, respectively. More specifically, the thermal resonance of the first normal oscillatory mode was used, along with the cantilever plan-view dimensions (length and width), to determine the normal bending stiffness of the cantilever. The stiffness of the tips used in the experiments ranged from 0.25 to 0.35 N/m. A force-distance curve was acquired before the friction measurements, as shown in Supplementary Fig. 6. The linear voltage response of the position sensitive detector (PSD) to the normal displacement of the sample was determined by calculating the slope of a linear fit to the force-distance curve, allowing for the conversion of force to distance. The diamagnetic lateral force calibration then allowed for a determination of the lateral (PSD) sensitivity, or conversion factor from volts to distance, in reference to the normal sensitivity 44. Zero applied normal force was determined as the bending signal measured by the PSD when the tip was far from the sample. Lateral forces are the instantaneous twisting forces recorded during a contact mode scan.
The friction force was determined by calculating the half difference of the trace and retrace lateral forces in a single line. All friction measurements were obtained at a scanning rate of 2 Hz. The scanning area was 2 × 2 μm² for the graphene/Silica samples and 1 × 1 μm² for the MoS2/Silica and h-BN/Silica samples. The reported mean friction forces in the friction force versus normal force measurements were determined by averaging the friction forces over the areas of interest. Tapping mode imaging was also performed using a silicon tip, but one with a higher stiffness of 29.5 N/m (Nanosensors PPP-NCL). The phase difference reported is the phase lag between the cantilever excitation piezo and the measured oscillation of the cantilever. The contact stiffness difference (Δk) can be calculated from the measured phase difference (ϕ) with the equation Δk = k_n · sin(ϕ) · (A_0 / A_sample), where k_n is the stiffness of the cantilever and A_0 and A_sample are the cantilever deflection amplitudes when the cantilever is out of and in contact with the sample, respectively; A_0/A_sample is close to 1 in the experiments 28 (a small numerical sketch of these conversions is given at the end of this section). Molecular dynamics simulations A fully atomistic model was developed to mimic the experimental measurements, consisting of the apex of an AFM tip scanned over a substrate of graphene covering a rough silicon substrate. The in-plane dimensions of the substrate were 20 × 20 nm². The simulation box has periodic boundary conditions in the x and y directions and is non-periodic in the z direction. The rough silicon substrate was modeled as amorphous silicon 45 with an RMS roughness of 0.3 nm. The three bottom layers of atoms in the model substrate were fixed in place throughout the simulation. The model hemispherical tip apex was constructed of silicon and had a radius of 2.5 nm, as illustrated in Supplementary Fig. 7. The three topmost atomic layers of the tip were treated as a rigid body that was subjected to a range of external normal loads varying from 0 to 40 nN. The tip was also connected to a support through a harmonic spring that moved laterally at a constant speed of 1 m/s. A complete friction loop was obtained through a forward and a backward scan; the sliding distance was 8 nm in both the forward and backward scans. A full load dependent simulation consisted of multiple continuous forward and backward scan loops. The load was increased by 10 nN from 0 nN in each scan loop until it reached a maximum of 40 nN, and was then decreased by 10 nN in the following loops. The harmonic spring had a stiffness of 8 N/m in the horizontal directions but did not resist motion in the vertical direction (normal to the graphene surface) 46. A Langevin thermostat was applied to the free atoms in the system to maintain a temperature of 300 K. The interatomic interactions within the tip/substrate and within the graphene layer were described via the Tersoff 47 and the Adaptive Intermolecular Reactive Empirical Bond Order (AIREBO) 48 potentials, respectively. The long range interactions between the tip and the substrate were modeled using the Lennard-Jones (LJ) potential with parameters obtained from the standard mixing rules 49,50. To explore the effect of the heated substrate with stronger adhesion, we also ran simulations with an artificially strong substrate-graphene interaction strength (ε Si−C = 0.01406 eV, double the original interaction strength). The simulations were performed using the LAMMPS simulation software 51.
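As a small numerical sketch of the two conversions described above (the friction force as half the trace-retrace difference, and the contact stiffness difference from the tapping-mode phase), the snippet below applies both formulas to made-up numbers; the array values and variable names are illustrative only and are not measured data from this study.

```python
import numpy as np

# Friction force per scan line: half the difference between trace and retrace lateral forces
trace = np.array([2.1, 2.3, 2.2, 2.4])        # nN, lateral force scanning forward (made-up values)
retrace = np.array([-1.9, -2.1, -2.0, -2.2])  # nN, lateral force scanning backward
friction_per_line = 0.5 * (trace - retrace)
print("mean friction force (nN):", friction_per_line.mean())

# Contact stiffness difference from tapping-mode phase: dk = k_n * sin(phi) * (A0 / A_sample)
k_n = 29.5               # N/m, cantilever stiffness (PPP-NCL, as in the text)
phi = np.deg2rad(12.0)   # measured phase difference (illustrative value)
A0_over_Asample = 1.0    # close to 1 in the experiments
dk = k_n * np.sin(phi) * A0_over_Asample
print("contact stiffness difference (N/m):", round(dk, 2))
```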
DATA AVAILABILITY The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Application of Machine Learning in Material Synthesis and Property Prediction Material innovation plays a very important role in technological progress and industrial development. Traditional experimental exploration and numerical simulation often require considerable time and resources. A new approach is urgently needed to accelerate the discovery and exploration of new materials. Machine learning can greatly reduce computational costs, shorten the development cycle, and improve computational accuracy. It has become one of the most promising research approaches in the process of novel material screening and material property prediction. In recent years, machine learning has been widely used in many fields of research, such as superconductivity, thermoelectrics, photovoltaics, catalysis, and high-entropy alloys. In this review, the basic principles of machine learning are briefly outlined. Several commonly used algorithms in machine learning models and their primary applications are then introduced. The research progress of machine learning in predicting material properties and guiding material synthesis is discussed. Finally, a future outlook on machine learning in the materials science field is presented. Introduction New materials have become the cornerstone of scientific and technological development. Discovering materials with targeted properties, especially nanomaterials, has always been a hotspot in science [1,2]. At present, the research and development of new materials mainly relies on researchers' intuitive judgment of materials and empirical trial-and-error methods, which are not only inefficient but also often require a certain level of experience and luck to obtain the target materials. At the same time, methods based on density functional theory (DFT) are widely used in the research and development of novel materials. Since their initial development, DFT methods have evolved from limited calculations that provide approximate results to increasingly accurate and predictive methods. These methods have made important contributions in a variety of fields, such as materials discovery and design, drug design, solar cells, and hydrolytic materials [3]. The accuracy of these methods, however, is limited in fast calculations. To obtain high-accuracy results, the computational volume often has to be much higher, which is difficult to exploit efficiently in the research and development of new materials. In this context, artificial intelligence (AI) is becoming highly popular with researchers as a means of accelerating the development of innovative materials. A subfield of AI that has grown rapidly in recent years is machine learning (ML). ML applications are built on statistical algorithms, and ML can perform tasks in a manner comparable to researchers [4]. Because of its powerful data processing capability and relatively low research threshold, ML can effectively reduce human and material costs in the process of novel material development and shorten the research and development cycle. By replacing or collaborating with traditional experiments and computational simulations, ML could be employed to analyze material structures and predict material properties, enabling the development of novel functional materials more efficiently and accurately. As a result, ML has become one of the most crucial methods for replacing traditional research and development. In the recent past, researchers in different fields, including computer scientists and experts in AI algorithms, have used this approach extensively, greatly
contributing to the development of ML techniques [5]. ML is now widely utilized in fields such as natural language understanding, non-monotonic reasoning, machine vision, and pattern recognition [6]. The basic principle of ML is to learn (or guess) general patterns from a limited amount of training data and use these patterns to make predictions on unknown data. Figure 1 shows an ML workflow example. ML was used to detect the solubility of C60 in materials science as early as the last century [7]. It is now used to discover novel materials, predict material and molecular properties, study quantum chemistry, and design drugs. The purpose of this review is to offer an overview of the employment of ML in predicting material properties and performance, guiding material synthesis, and projecting models and conclusions. This review not only provides guidance for researchers to synthesize stable and efficient materials, but also inspires their interest in the use of ML in materials research. Data Pre-Processing If ML models are the engines that handle various tasks, data are the fuel that drives the models. A sufficient amount of data is a prerequisite to making the model work. High-quality data enable the model to run effectively. Due to this, large amounts of data are critical to ML [8]. In general, the final ML results are directly affected by the amount and reliability of the data. This is where data pre-processing and feature engineering are beneficial. Data pre-processing and feature engineering could promote the reconstruction of datasets so that computers could more easily understand the physicochemical relationships of materials, detect material properties, and build prediction models [9].
Data Collection In ML, the size and quality of the training dataset employed for learning could significantly affect the accuracy of a predictive model. Therefore, training datasets need to be collected or created carefully. In general, training data can be gathered in three ways. Obtaining data from the published literature is the first method. The data obtained in this way could be more relevant and provide a direction for synthesis and application [10]. Second, high-throughput computations or experiments can be used to obtain data. It should be noted that, in some cases, these data may be incomplete, inconsistent, or even spurious [11]. The third method is to obtain data from open databases available on repository websites. The Materials Genome Initiative, initiated by the United States in 2011, emphasizes the importance of massive data in the development of materials science, which encourages the development of high-quality material databases [12]. With the continuous development of theoretical and experimental research, data generated from experiments and computational simulations, including failure data, have been integrated into databases [13]. These databases are based on the concept of material data sharing, which greatly simplifies the process of obtaining material information. Table 1 introduces some commonly used methods for collecting data from publicly available databases. For instance, Zhou et al.
[14] developed an ML-based approach to predict cathode materials for Zn-ion batteries with high capacity and high voltage. They screened over 130,000 inorganic materials from the Materials Project database and applied a crystal graph convolutional-neural-network-based ML approach with data from the Automatic Flow (AFLOW) database. This resulted in the prediction of approximately 80 cathode materials, 10 of which had been discovered experimentally before and agreed well with the observed measurements. Additionally, approximately 70 new promising candidates were predicted for further experimental validation. The OQMD is a database of DFT-calculated thermodynamic and structural properties of 1,022,603 materials.

Data Cleaning
When collecting raw data, unprocessed datasets are difficult to analyze and sometimes become useless, as they tend to be inconsistent, missing, and noisy. Before using those datasets, their quality must be ensured. Data cleaning is an operation performed on the existing data to remove anomalies and obtain a data collection that is an accurate and unique representation of the mini-world. It involves eliminating errors, resolving inconsistencies, and transforming the data into a uniform format [15]. Data cleaning is an enormous task achieved by smoothing noise, completing missing values, correcting inconsistencies, and identifying outliers in data. The common methods for filling in missing values are as follows: fill in missing values manually; fill in missing values with a global constant; fill in missing values with the average value of the attribute; fill in missing values with the average value of the attribute over samples of the same class as the given tuple; and fill in missing values with the most likely value. The commonly used methods for smoothing noise are binning, regression, and clustering [10]. Binning is employed to handle noisy data. In this approach, the data are sorted and then partitioned into equal-frequency bins, so that each bin contains the same number of values. Regression involves predicting unknown data from known data and fitting them using a function. The two types of regression techniques are linear and multiple linear regression. Linear regression uses a known value to predict an unknown value, fitting the relationship between the two values with a straight line. To reduce outliers, clustering can be implemented. Clustering refers to grouping data points with similar properties into clusters. By categorizing outliers as points outside these clusters, they could be easily identified and minimized in the dataset [16][17][18]. Data cleaning can effectively improve the model's prediction accuracy. Liu et al. [19] discussed the prediction of protein-protein interaction sites using ML-based computational approaches. The authors proposed a method that improves prediction performance by addressing the class imbalance issue in protein-protein interaction site prediction. They operated a data-cleaning procedure to remove marginal targets from majority samples and a post-filtering procedure to reduce false-positive predictions. The proposed method was tested on benchmark datasets and showed competitive performance compared to existing predictors.
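As a concrete illustration of the cleaning steps described above, the minimal sketch below fills missing values with the attribute mean, smooths noise by equal-frequency binning, and flags outliers as points far from their cluster center. It assumes Python with pandas and scikit-learn; the column names and data are hypothetical placeholders, not from any of the cited studies.

```python
# Minimal data-cleaning sketch: imputation, equal-frequency binning, outlier flagging.
# All column names and values are hypothetical, for illustration only.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "band_gap_eV": rng.normal(2.0, 0.5, 100),
    "density_g_cm3": rng.normal(5.0, 1.0, 100),
})
df.loc[::17, "band_gap_eV"] = np.nan               # simulate missing values

# 1. Fill missing values with the attribute mean.
df["band_gap_eV"] = df["band_gap_eV"].fillna(df["band_gap_eV"].mean())

# 2. Smooth noise by equal-frequency binning: replace each value by its bin mean.
df["band_gap_bin"] = pd.qcut(df["band_gap_eV"], q=5)
df["band_gap_smoothed"] = df.groupby("band_gap_bin", observed=True)["band_gap_eV"].transform("mean")

# 3. Flag outliers as points far from the center of their cluster.
features = df[["band_gap_eV", "density_g_cm3"]].to_numpy()
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
dist = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
df["outlier"] = dist > dist.mean() + 2 * dist.std()

print(df[["band_gap_eV", "band_gap_smoothed", "outlier"]].head())
```

The same three operations generalize to real materials datasets; only the imputation rule, the number of bins, and the outlier threshold typically need tuning.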
Feature Engineering
A key part of the data preparation phase in ML is feature engineering. It extracts features (also known as descriptors) from the raw data and transforms the features into a format suitable for ML models. The selection of features is critical for building ML models and could even determine the upper limit of overall model performance [20]. In feature selection, different parameters could be operated as features for chemical and material structures (and their properties), e.g., electronic properties (band gap, dielectric constant, work function, electron density, and electron affinity) and crystal features (translation vectors, fractional coordinates of atoms, radial distribution functions, and Voronoi tessellations of atomic positions). It is worth noting that rational feature selection is often expensive and difficult [11]. In past studies, feature selection has typically had to be performed manually. However, the limitations of manual feature engineering prevented the selection of the most representative features in most cases. Over the last few years, the employment of automated feature engineering has become increasingly widespread. It automatically constructs brand new candidate features from data and selects the most suitable features for model training, which could solve the dilemma faced by manual feature engineering. Wang et al. [21] utilized automated feature engineering for the development of nanomaterials. Automated feature engineering uses deep learning algorithms to automatically develop a set of features that are relevant to the desired output. As a result, non-experts could select features much more easily, which would greatly reduce the use of expertise in training models. The variation in feature engineering in the design of nanomaterials can be observed in Figure 2.
Figure 2. The key characteristic that distinguishes this approach from the first-generation approach is the elimination of human-expert feature engineering, allowing the model to learn directly from raw nanomaterial data. Reproduced with permission from [21].
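To make the feature-selection step above concrete, the following is a minimal sketch in which candidate descriptors are scored against a target property and only the most informative ones are retained. The descriptor names, the target, and the data are hypothetical placeholders, and the scoring criterion (mutual information) is just one of several reasonable choices.

```python
# Minimal descriptor-selection sketch with scikit-learn; all names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_regression

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "mean_electronegativity": rng.uniform(1.0, 3.5, 200),
    "mean_atomic_radius_pm":  rng.uniform(50, 250, 200),
    "valence_electrons":      rng.integers(1, 12, 200).astype(float),
    "lattice_constant_A":     rng.uniform(3.0, 7.0, 200),
})
# Hypothetical target that, by construction, depends mainly on two descriptors.
y = (-1.5 * X["mean_electronegativity"]
     + 0.01 * X["mean_atomic_radius_pm"]
     + rng.normal(0, 0.1, 200))

# Keep the two descriptors that carry the most information about the target.
selector = SelectKBest(mutual_info_regression, k=2).fit(X, y)
print("Selected descriptors:", list(X.columns[selector.get_support()]))
```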
Classification of ML and Algorithms
Once sufficient training data are selected, models can be built for the development of novel materials. Choosing an appropriate algorithm for a training model is essential for making accurate predictions. Based on the type of processed data, ML can be classified as supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. For supervised learning, the input training data are labeled. After optimizing the model with ML, a predictable output value for a new input value could be acquired. In contrast, the input training data are unlabeled in unsupervised learning. Using an algorithm, the unlabeled training set is trained to find potential features. As for semi-supervised learning, the input training data are partially labeled. Reinforcement learning occurs when the training object interacts with the environment, obtaining feedback from the environment and adjusting its strategy to accomplish a specific goal or to maximize the benefit of a behavior [22]. Next, a brief description of several commonly utilized ML algorithms is given.
Shallow Learning
Shallow learning usually has no hidden layer or only one hidden layer [23]. The approaches include decision tree (DT), K-nearest neighbor (KNN), support vector machine (SVM) [24], random forest (RF), and artificial neural network (ANN). Shallow learning has produced satisfactory results in various areas of materials science. In this section, some algorithms for shallow learning are presented, some applications in materials science are summarized, and the ML models used by the researchers are demonstrated.

KNN
The KNN algorithm was first proposed by Cover and Hart [25]. KNN classification is one of the most basic and simplest classification methods. It should be considered for classification studies when little or no prior knowledge of the data distribution is available [26]. The principle of the KNN algorithm is that if most of the K most similar samples in the feature space (i.e., the nearest samples in the feature space) belong to a certain category, the sample also belongs to this category. Figure 3 shows a schematic of a typical KNN algorithm. For an unknown target, when K takes 3, the target is classified into class 1; when K takes 7, the target is classified into class 2. According to this method, the sample's category is determined by its proximity to one or more nearby samples. The KNN algorithm itself is simple and effective, easy to understand, and straightforward to implement. Since it does not require prediction parameters or training, the KNN algorithm is suitable for time classifications, especially for multimodal problems (i.e., objects with multiple categories). Recently, KNN algorithms have been widely utilized in text classification, pattern recognition, image processing, and materials science. Sharma et al. [27] employed the KNN algorithm to predict the dynamic fracture toughness of glass-filled polymer composites. The dynamic modulus of elasticity, aspect ratio, and volume fraction of glass particles were used as independent model parameters. The proposed KNN model predicted the fracture behavior of the composites with an accuracy of 96%. It is also possible to extend their model to predict other material properties.
The drawback of the KNN algorithm is that as the amount of data increases, the computational complexity of KNN increases accordingly. This is because the KNN algorithm needs to process both the training data and the test data for each classification or regression. If there is a large amount of data, the computing power required would be greatly increased. In addition, the randomness of the training data also affects the performance of the KNN algorithm [28].
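A minimal KNN sketch, mirroring the K = 3 versus K = 7 discussion above, is given below. The two synthetic clusters and the query point are placeholders; scikit-learn's KNeighborsClassifier is assumed to be available.

```python
# Minimal KNN sketch: the predicted class of an unknown sample can change with K.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X_class1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
X_class2 = rng.normal(loc=[1.5, 1.5], scale=0.5, size=(20, 2))
X = np.vstack([X_class1, X_class2])
y = np.array([1] * 20 + [2] * 20)

target = np.array([[0.8, 0.8]])     # unknown sample near the class boundary
for k in (3, 7):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(f"K={k}: predicted class {knn.predict(target)[0]}")
```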
DT
A DT is a typical classification method. The earliest DT algorithm was the concept learning system proposed by Hunt [29]. The most influential DT algorithms are ID3 [30] and C4.5 [31], which were proposed by Quinlan in 1986 and 1993, respectively. DTs classify training data by different features, aiming to correctly categorize instances. A DT model consists of internal decision nodes and leaf nodes. Each internal node splits the instance space into two or more subspaces according to a certain discrete function of the input attribute values, and each leaf node is assigned to one class representing the most appropriate target value [32]. Chen et al. [11] presented the structure of a typical DT, as shown in Figure 4. A typical decision tree algorithm consists of three main steps: feature selection, decision tree generation, and pruning. The purpose of pruning is to minimize the structural risk of the model by optimizing the loss function and weighing the model's complexity and accuracy. Liu et al. [33] developed a DT model for predicting the residual tensile strength and modulus of pultruded-fiber-reinforced polymer (FRP) composites. Using an existing database, 746 data points were collected for training. The accuracy of the model was verified experimentally. The significance of all attributes of the input data was also quantitatively analyzed by the model. The proposed DT model provides a new method for predicting the long-term degradation of FRP composites subjected to environmental influences.
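The three DT steps named above (feature selection, tree generation, pruning) can be sketched in a few lines. The example below uses a built-in toy dataset rather than the FRP data of Liu et al. [33], and cost-complexity pruning via ccp_alpha is just one common pruning strategy.

```python
# Minimal DT sketch with cost-complexity pruning; uses a built-in toy dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ccp_alpha > 0 prunes branches, trading model complexity against accuracy.
tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_train, y_train)
print("Tree depth:", tree.get_depth())
print("Test accuracy:", round(tree.score(X_test, y_test), 3))
```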
The RF algorithm consists of multiple DTs. In RFs, each tree casts a unit vote for the most popular class, and combining these votes yields the final classification result. RFs possess high classification accuracy [34]. It would, however, take a great deal of space and time to train an RF with many DTs. Compared with DTs, the calculation costs of RFs would also increase significantly. In this regard, RFs and DTs should be selected based on the actual situation.
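A minimal sketch of the voting ensemble just described is shown below, using scikit-learn's RandomForestClassifier on synthetic data; the dataset and hyperparameters are placeholders rather than values from any cited study.

```python
# Minimal RF sketch: many DTs trained on bootstrap samples, aggregated by voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=8, n_informative=4, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)   # 100 trees
scores = cross_val_score(rf, X, y, cv=5)
print("Mean cross-validated accuracy:", round(scores.mean(), 3))
```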
ANN
The concept of an ANN was introduced by McCulloch and Pitts [35]. An ANN is a complex network structure formed by a large number of nodes (neurons) connected to each other. It is a kind of abstraction, simplification, and simulation of the organization and operating mechanism of the human brain. Each node in an ANN represents a specific output function, i.e., the activation function. Each connection between any two nodes represents a weighted value for the signal passing through that connection, which is equivalent to the memory of the ANN. The network's connection mode, the values of the weights, and the excitation function all have an effect on its output [36]. As a major soft-computing technology, ANNs have been extensively studied and applied in recent decades [37].
The structure of a typical ANN is shown in Figure 5. Its nodes are generally divided into three categories: input, hidden, and output. The input nodes represent the information received from the input data. The output nodes are utilized to store the results of the data processing. The nodes between the input and output nodes are the so-called hidden nodes. Different types of nodes in an ANN are distributed in multiple layers. The nodes on different layers could be connected by lines, which correspond to synapses in neural structures, representing a nonlinear mapping. The learning process of an ANN is to continuously optimize the whole network model by correcting the weights of the nodes in each layer with training data [38].
A variety of ANN models and their variants have been developed. The variants include back-propagation networks, perceptrons, self-organizing maps, Hopfield networks, and Boltzmann machines. ANNs have been applied to drive the synthesis of a wide range of functional materials, such as shape memory alloys [39], hyperelastic materials [40], and high-entropy alloys (HEAs) [41]. Table 2 illustrates the applications of the aforementioned algorithms.
Table 2. Some applications of shallow learning in materials science.

Researchers | Algorithms | Purposes
Sharma et al. [42] | KNN | Predict the fracture toughness of silica-filled epoxy composites.
Wang et al. [45] | SVM | Achieve rapid detection of transformer winding materials.
Martinez et al. [46] | SVM and ANN | Predict the fracture life of martensitic steels under high-temperature creep conditions.
Ahmad et al. [47] | Adaptive boosting, RF, and DT (Figure 6b) | Predict the compressive strength of concrete at high temperatures.
Sun et al. [48] | Gradient boosted regression tree (GBRT) and RF | Evaluate the strength of coal-grout materials.
Samadia et al. [49] | GBRT | Predict the higher heating value (HHV) of biomass materials based on proximate analysis.

Deep Learning
Hinton et al. [52] first proposed the concept of deep learning. The unsupervised greedy layer-by-layer training algorithm based on deep belief nets was designed to solve optimization problems related to deep structures. Similar to an ANN, deep learning is a multilayer neural network [53].
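Before going deeper, the sketch below shows a small feed-forward ANN of the kind discussed above: with a single hidden layer it is "shallow", and adding entries to hidden_layer_sizes moves toward the deeper networks discussed next. The dataset is a built-in example, not a materials dataset, and scikit-learn's MLPClassifier is assumed.

```python
# Minimal feed-forward ANN sketch; one hidden layer is "shallow",
# e.g. hidden_layer_sizes=(64, 64, 64) would make the network deeper.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)          # weight updates work best on scaled inputs
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
ann.fit(scaler.transform(X_train), y_train)
print("Test accuracy:", round(ann.score(scaler.transform(X_test), y_test), 3))
```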
Overview of Deep Learning
Deep learning can be considered a subset of ML. The idea of deep learning is derived from multilayer ANNs. The learning process of deep learning exhibits depth to some extent because of the multilayer structure of ANNs. In each hidden layer, neurons receive input signals from other neurons, combine them with their internal state, and produce output signals. The connections between neurons have weights assigned to them, forming the overall layers of a neural network. The learning process involves adapting the network by adjusting the weights of the connections to minimize output errors. Deep learning, with its self-adapting architecture, reduces the need for feature engineering and could identify and work around defects that may be difficult to detect with other techniques [5]. Instead, the algorithm adjusts itself through continuous learning and independently selects suitable features. This could be viewed as a major advancement in ML. While traditional ML models may be more accurate with small data, deep learning models tend to be more reliable when big data are available. Deep neural networks (DNNs) with multiple hidden layers have a higher learning capacity, allowing them to keep improving in accuracy where traditional models saturate. Although training neural networks is computationally expensive, once trained, deep learning can make very fast predictions. This one-time training cost is outweighed by the speed of subsequent predictions [54]. After years of development, a variety of deep learning models have been produced, mainly including stacked autoencoders [55], deep belief networks (DBNs) [56], deep Boltzmann machines (DBMs) [57], DNNs [58], and convolutional neural networks (CNNs) [59]. Deep learning techniques are widely utilized in speech recognition, visual object recognition, object detection, drug discovery, and genomics [60]. They are also among the fastest-growing and most adaptable techniques ever developed in materials science.
Additionally, deep learning faces the dilemma of how to effectively process large amounts of complex data. In practical applications, building suitable deep learning models is increasingly challenging. Although deep learning is not yet fully mature and has many problems to solve, it has shown a strong learning capability. In the future, deep learning is expected to remain a key research focus in AI.

Applications of Deep Learning
Deep learning has been widely applied in materials science due to its excellent performance. Based on industrial data, Wu et al. [61] investigated an impact energy prediction model for low-carbon steel. A three-layer neural network, an extreme learning machine, and a DNN were compared with different activation functions, structure parameters, and training functions. Bayesian optimization was employed to determine the optimal hyper-parameters of the DNN. The model with the highest performance was applied to investigate the importance of process parameter variables for the impact energy of low-carbon steel. The results showed that the DNN obtained better prediction results than those of a shallow neural network because the multiple hidden layers improved the learning ability of the model. Sun et al. [62] applied deep learning to rapidly predict the photovoltaic properties of organic photovoltaic materials, with a prediction accuracy of up to 91%. Konno et al.
[63] reported a deep learning algorithm for discovering novel superconductors. The prediction accuracy of their ML model for material superconductivity was as high as 62%. Employing the ML model, the authors found two superconductors that were not in the database and, using training data from before 2008, recovered Fe-based high-temperature superconductors that were discovered in 2008. These results pave the way for the discovery of new high-temperature superconductors. Li et al. [64] explored a correlated deep learning framework consisting of three recurrent neural networks (RNNs) to efficiently generate new energetic molecules with high detonation velocity in the low-data regime. They utilized data augmentation by fragment shuffling of 303 energetic compounds to pretrain the RNN and then fine-tuned it using the 303 compounds to produce molecules similar to the energetic compounds. They also employed the simplified molecular-input line-entry system (SMILES) coupled with pretrained knowledge to build an RNN-based prediction model for screening molecules with high detonation velocity. Their strategy performed comparably to transfer learning based on an existing big database. Quantum mechanics calculations confirmed that 35 new molecules have higher detonation velocity and lower synthetic accessibility than the classic explosive hexogen, with three novel molecules comparable to caged China Lake Compound No. 20 in detonation velocity. Zhang et al. [65] utilized generative adversarial networks (GANs) to design metaporous materials for sound absorption (Figure 7a). The researchers trained the GANs using numerically prepared data and successfully developed designs with high-standard broadband absorption performance. The GANs accelerated the design process by hundreds of times, allowing for instantaneous multiple solutions. The GANs also demonstrated the ability to generate creative configurations and rich local features. This work highlighted the potential of ML in guiding the design and optimization process for materials and opened up new possibilities for interdisciplinary research in AI and materials. Unni et al. [66] introduced a deep convolutional mixture density network (MDN) approach for the inverse design of layered photonic structures. The MDN modeled the design parameters as multimodal probability distributions, allowing for convergence in cases of nonuniqueness without sacrificing degenerate solutions. The MDN was applied to the inverse design of two types of multilayer photonic structures consisting of thin films of oxides, which present a challenge for conventional machine learning algorithms due to the large degree of nonuniqueness in their optical properties. The MDN can handle transmission spectra of high complexity and varying illumination conditions. The shape of the probability distributions provides valuable information for postprocessing and prediction uncertainty. The MDN approach offers an effective solution to the inverse design of photonic structures with high degeneracy and spectral complexity.
The use of vision transformers, residual networks (ResNets), and region-based CNNs (R-CNNs) on materials datasets has shown exceptional performance. Huang et al.
[67] proposed a waste materials classification method based on a vision transformer model (Figure 7b). The model overcame CNN limitations by using self-attention mechanisms to allocate weights to different parts of waste images. The vision transformer achieved an accuracy rate of 96.98% by pretraining on ImageNet and fine-tuning on the TrashNet dataset. The trained model can be deployed on a cloud server and accessed through a portable device for real-time waste classification, which is convenient and efficient for resource conservation and recycling. Jiang et al. [68] explored the use of global optimization networks (GLOnets) with the ResNet architecture for the multiobjective and categorical global optimization of photonic devices. The authors demonstrated that these networks, called Res-GLOnets, could be configured to design thin-film stacks consisting of multiple material types. The Res-GLOnets can find the global optimum at faster speeds than conventional algorithms. The authors also showed the utility of their method for complex design tasks, such as designing incandescent light filters. Wang et al. [69] proposed an image detection method based on an improved Faster R-CNN model for wear location and wear mechanism identification (Figure 7c). They trained and tested the model using a wear image dataset produced by a self-made tribometer equipped with an imaging system. The results showed that the proposed method had a detection accuracy of more than 99%. It outperformed edge detection technology and Yolov3 target detection models in wear location and wear mechanism identification. This research contributes to the development of an innovative approach for the online and intelligent wear status detection of machinery components.

Materials Informatics Based on ML
Materials informatics is a study field that focuses on investigating and applying informatics techniques to materials science and engineering. Propelled partly by the Materials Genome Initiative and partly by algorithmic developments and successes of data-driven efforts in other domains, informatics strategies are beginning to take shape within materials science. Informatics strategies give rise to surrogate ML methods that can realize accurate prediction using just historical data instead of experiments or simulations/calculations. This methodology is usually composed of three distinct steps: acquisition of reliable historical data, statistical quantification of information-rich material structures, and mapping between "input" and "output". The commonly used ML algorithms in materials informatics include regression, DT, ANN, and deep learning [70][71][72][73]. To meet the requirements of studies in computational materials informatics, Zhao et al. [74] derived an artificial-intelligence-aided data-driven infrastructure called the Jilin Artificial-intelligence aided Materials-design Integrated Package (JAMIP). The organization of JAMIP abides by the data lifecycle in computational materials informatics, from data generation to collection and learning, as shown in Figure 8. It provides tools for materials production, high-throughput calculations, data extraction and management, and ML-based data mining.
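The three-step surrogate-modelling methodology described above (acquire historical data, quantify each material as a descriptor vector, learn the input-output mapping) can be sketched end to end as follows. This is a minimal illustration, not part of JAMIP or any cited workflow; the descriptors, the target property, and the synthetic table standing in for a database export are all hypothetical.

```python
# Minimal three-step surrogate-modelling sketch; data and descriptors are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Step 1: acquire historical data (in practice, exported from an open repository).
rng = np.random.default_rng(3)
data = pd.DataFrame({
    "mean_electronegativity": rng.uniform(1.0, 3.5, 300),
    "mean_atomic_mass":       rng.uniform(10, 200, 300),
    "n_elements":             rng.integers(2, 5, 300).astype(float),
})
data["formation_energy"] = (-0.8 * data["mean_electronegativity"]
                            + 0.002 * data["mean_atomic_mass"]
                            + rng.normal(0, 0.05, 300))     # hypothetical target

# Step 2: the descriptor matrix ("input") and target property ("output").
X = data[["mean_electronegativity", "mean_atomic_mass", "n_elements"]]
y = data["formation_energy"]

# Step 3: learn the input-output mapping and estimate predictive accuracy.
model = GradientBoostingRegressor(random_state=0)
print("Cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").round(3))
```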
Prediction of Material Properties
ML has gained prominence in recent years in predicting material properties due to its advantages of high generalization ability and fast computational speed. It has been successfully applied to predict the structure, adsorption, electrical, catalytic, energy storage, and thermodynamic properties of materials. The prediction results could even reach the same accuracy as high-fidelity models with low computational costs.
Molecular Properties
In the past, it was very time consuming to predict molecular properties based on high-throughput density functional theory calculations. ML allows fast and accurate prediction of the structure or properties of molecules, compounds, and materials. In materials science, solubility factors, such as Hansen and Hildebrand solubility, are critical parameters for characterizing the physical properties of various substances. Kurotani et al. [76] successfully developed a solubility prediction model with a unique ML method, the so-called in-phase DNN (ip-DNN). This algorithm started with the analysis of input data (including NMR information, refractive index, and density). The solubility was then inferred in a multistep approach by predicting intermediate elements, such as molecular components and molecular descriptors. An intermediate regression model was also utilized to improve the accuracy of the prediction. A website dedicated to the established solubility prediction methods has also been developed, which is available free of charge. Liang et al. [77] proposed a generalized ML method based on ANNs to predict polymer compatibility (the total miscibility of polymers with each other at the molecular scale). The authors built a database by collecting data from scattered literature through natural language processing techniques. By using the proposed method, predictions could be made based on the basic molecular structure of the blended polymers and the blend compositions (as an auxiliary input). This generalized approach yielded some results in illustrating polymer compatibility. A prediction accuracy of no less than 75% was achieved on a dataset containing 1400 entries in their model. Zeng et al. [78] developed an atomic table CNN that could predict the band gap and ground-state energy. The model accuracy exceeded that of standard DFT calculations. Furthermore, this model could accurately predict superconducting transition temperatures and distinguish between superconductors and non-superconductors. With the help of this model, 20 potential superconductor compounds with high superconducting transition temperatures were screened out.

Band Gap
The band gap size not only determines the energy band structure of a material but also affects its electronic structure and optical properties. Recently, researchers have applied ML to forecast the band gap of various materials. Venkatraman [79] developed an algorithm for band gap prediction based on a rule-based ML framework. With descriptors derived from elemental compositions, this model accurately and quickly predicted the band gap of various materials. After testing on two independent sets, this model obtained squared correlations > 0.85, with errors smaller than those of most density functional calculations, improving the material screening performance. Xu et al.
[80] developed an ML model based on support vector regression (SVR) for predicting the band gaps of polymers. They used training data obtained from DFT computations and generated descriptors using Dragon software. After feature selection, the SVR model using 16 key features achieved high accuracy in predicting polymer band gaps. The SVR model with a Gaussian kernel function performed the best, with a determination coefficient (R2) of 0.824 and a root mean square error (RMSE) of 0.485 in leave-one-out cross-validation. The authors also provided correlation analysis and sensitivity analysis to understand the relationship between the selected features and the band gaps of polymers. Several polymer samples with targeted band gaps were designed based on the analysis and validated through DFT calculations and model predictions. Espinosa et al. [81] proposed a vision-based system to predict the electronic band gaps of organic molecules using deep learning techniques. The system employed a multichannel 2D CNN and a 3D CNN to recognize and classify 2D projected images of molecular structures. The training and testing datasets used in the research were derived from the Organic Materials Database (OMDB-GAP1). The results showed that the proposed CNN model achieved a mean absolute error of 0.6780 eV and an RMSE of 0.7673 eV, outperforming other ML methods based on conventional DFT. These findings demonstrate the potential of CNN models in materials science applications using orthogonal image projections of molecules. Wang et al. [82] explored the use of ML techniques to accurately predict the band gaps of semiconductor materials. The authors applied a stacking approach, which combines the outputs of multiple baseline models, to enhance the performance of band gap regression. The effectiveness of different models was tested using a benchmark dataset and a newly established complex database. The results showed that the stacking model had the highest R2 value in both datasets, indicating its superior performance. The improvement percentages of various evaluation metrics for the stacking model compared to the other baseline models range from 3.06% to 33.33%. Overall, the research demonstrated the excellent performance of the stacking approach in band gap regression. On the basis of generalized gradient approximation (GGA) band gap information of crystal structures and materials, Na et al. [83] established an ML method that used a tuple-wise graph neural network (TGNN) algorithm for the accurate band gap prediction of crystalline compounds. The TGNN algorithm showed strong superiority in predicting the band gaps of four different open databases. It achieved better accuracy on 48,835 samples of G0W0 band gaps (G0W0 is a widely used technique in which the self-energy is expressed as the convolution of a noninteracting Green's function (G0) and a screened Coulomb interaction (W0) in the frequency domain) than standard density functional theory, without high computational costs. Moreover, this model could be extended to predict other valuable properties.

Energy Storage Performance
Energy storage is a key step in determining the efficiency, stability, and reliability of power supply systems [84]. Exploring the energy storage performance of materials is critical to energy storage, and ML accelerates the exploration process. Feng et al.
[85] collected over one thousand composite energy storage performance data points from the open literature and utilized ML to analyze them and build a predictive model. The prediction accuracies of the RF, SVM, and neural network models were 84.1%, 80.9%, and 70.6%, respectively. They then added processed visual information data of the composites into the dataset, resulting in improved prediction accuracies of 91.9%, 68.9%, and 81.6% for the three models, respectively. This demonstrated that the dispersion of the filler in the matrix is an important factor affecting the maximum energy storage density of the composite. The authors also analyzed the weights of each descriptor in the RF model and explored the effects of various parameters on the energy storage of the material. Figure 9 shows the logic diagram of their ML models. Yue et al. [86] utilized the packing dielectric constant, packing size, and packing content as descriptors to predict the energy storage density of polymer matrix composites. High-throughput random breakdown simulations were performed on 504 datasets. The simulation results were then applied as an ML database and combined with classical dielectric prediction equations. They experimentally validated the predictions, including the dielectric constant and breakdown strength. This work provides insights into the design and fabrication of polymer matrix composites with enhanced energy density for applications in capacitive energy storage. Ojin et al. [87] built four traditional ML models and two graph neural network models. Using these, the heat capacities of 32,026 structures were predicted with a high-precision deep graph attention network. Additionally, the correlation between heat capacity and structure descriptors was inspected. A total of 22 structures were predicted to have high heat capacity, and the results were further validated by DFT analysis. Through the combination of ML and minimal DFT queries, this study provides a path to accelerating the discovery of new thermal energy storage materials.
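In the spirit of the RF/SVM/neural-network comparison above, the sketch below benchmarks several regressors on one dataset with cross-validation. The descriptors (filler fraction, permittivities, breakdown field), the energy-density target, and the data are hypothetical placeholders, not values from the cited studies.

```python
# Minimal model-comparison sketch for an energy-storage-style regression task.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = pd.DataFrame({
    "filler_fraction":      rng.uniform(0.0, 0.4, 400),
    "filler_permittivity":  rng.uniform(10, 2000, 400),
    "matrix_permittivity":  rng.uniform(2, 12, 400),
    "breakdown_field_MV_m": rng.uniform(100, 700, 400),
})
# Hypothetical energy-density target built from the descriptors plus noise.
y = (0.5e-5 * X["matrix_permittivity"] * X["breakdown_field_MV_m"] ** 2
     + 2.0 * X["filler_fraction"] + rng.normal(0, 0.2, 400))

models = {
    "RF":  RandomForestRegressor(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.3f}")
```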
Structural Health
Structural health monitoring (SHM) utilizes engineering, scientific, and foundational knowledge to prevent damage to property and life. The core of the field of construction informatics is the transmission, processing, and visualization of architectural information, providing effective methods for monitoring structural changes [88,89]. ML provides effective methods for monitoring structural changes. Dang et al. [90] proposed a cloud-based digital twin framework for SHM employing deep learning. The framework consists of physical components, device measurements, and digital models formed by combining different sub-models, including mathematical, finite element, and ML sub-models. The data interactions among the physical structure, digital model, and human interventions were enhanced by using cloud computing infrastructure and a user-friendly web application. The feasibility of the framework was demonstrated through case studies of the damage detection of model bridges and real bridge structures utilizing deep learning algorithms, with a high accuracy of 92%. Dong et al.
[91] discussed the use of the eXtreme gradient boosting (XGBoost) algorithm for predicting concrete electrical resistivity in SHM (Figure 10a). The proposed XGBoost-algorithm-based prediction model considers all potential influencing factors simultaneously. A database of 800 experimental instances was used to train and test the model. The results showed that the XGBoost model achieved satisfactory predictive performance. The study also identified the importance of curing age and cement content in electrical resistivity measurement results. The XGBoost algorithm was chosen for its high performance, ease of use, and better prediction accuracy than other algorithms. The bond effect between the reinforcement and concrete guarantees the combined action of the two materials. This is a critical factor that affects the mechanical properties of reinforced concrete components and structures, e.g., bearing capacity and ductility [92]. Gao et al. [93] developed a new solution for evaluating the bond strength of an FRP using AI-based models. Two hybrid models, the imperialist competitive algorithm (ICA)-ANN and the artificial bee colony (ABC)-ANN, were designed and compared. The results showed that the ICA-ANN model had a higher predictive ability than the ABC-ANN model. The proposed hybrid models can be used as a suitable substitute for empirical models in evaluating FRP bond strength in concrete samples. Li et al. [94] utilized ML approaches to estimate the bond strength between ultra-high-performance concrete (UHPC) and reinforcing bars. A new database was created by integrating data from multiple published works. Nine ML models, including linear models, tree models, and ANNs, were implemented to train bond strength estimators based on the database. The results showed that the ANN and RF models achieved the highest estimation performances, surpassing empirical formulas. The study also analyzed the relative importance of different factors in determining bond strength. Overall, the research provides a data-driven approach to estimating bond strength and contributes to the understanding of bond performance between UHPC and reinforcing bars. Su et al. [95] applied three ML approaches (multiple linear regression, SVM, and ANN) to predict the interfacial bond strength between FRPs and concrete (Figure 10b). They trained these models using two datasets containing experimental results from single-lap shear tests, employed random search and grid search to find the optimal hyperparameters, and analyzed the input variables' contributions using partial dependence plots. They also developed a stacking strategy to improve prediction accuracy. The results showed that the SVM approach had the best accuracy and efficiency. They concluded that ML methods are feasible and efficient for predicting the bond strength of FRP laminates in reinforced concrete structures.

Nanomaterial Toxicity
It has been proven that ML can be used to identify nanomaterial properties and exposure conditions that influence cellular and organism toxicity, thus providing information required for risk assessment and safe-by-design approaches in the development of new nanomaterials [96]. Huang et al.
[97] combined ML with high-throughput in vitro bioassays to develop a model to predict the toxicity of metal oxide nanoparticles to immune cells, as shown in Figure 11. In the training, test, and experimental validation sets, the ML model displayed prediction accuracies of 97%, 96%, and 91%, respectively. ML methods were used to identify features that encode information on immune toxicity. These features are crucial for the scientific design of future experiments and for the accurate depiction of nanotoxicity. According to Gousiadoua et al. [98], advanced ML techniques were applied to create nano quantitative structure-activity relationship (QSAR) tools for modeling the toxicity of metallic and metal oxide nanomaterials, both coated and uncoated, with various core compositions tested on embryonic zebrafish at various dosage concentrations. Based on both computed and experimental descriptors, the scientists identified a set of properties most relevant for assessing nanomaterial toxicity and successfully correlated these properties with zebrafish physiological responses. It was concluded that for the group of metal and metal oxide nanomaterials, the core chemical composition, the concentration, and the properties shaped by the nanomaterial surface and medium composition (such as zeta potential and agglomerate size) have a significant impact on toxicity, even though the ranking of the different variables varies with the analytical method and data model. Generalized nano-QSAR ensemble models offer a promising framework for predicting the toxicity potential of new nanomaterials. Liu et al. [99] presented a meta-analysis of phytosynthesized silver nanoparticles (AgNPs) with heterogeneous features using DTs and RFs. The researchers found that the exposure regime (including time and dose), plant family, and cell type were the most important predictors of cell viability for green AgNPs. In addition, a discussion of the potential effects of major variables (cell assays, inherent nanoparticle properties, and reaction parameters used in biosynthesis) on AgNP-mediated cytotoxicity and model performance was presented to provide a basis for future research. The findings of this study may assist future studies in improving the design of experiments and the development of virtual models or optimizations of green AgNPs for specific applications.
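A minimal sketch of a nanotoxicity classifier of the kind described above is given below: particle descriptors are mapped to a toxic / non-toxic label, and descriptor importances indicate which exposure conditions drive the prediction. All descriptor names, the labeling rule, and the data are hypothetical placeholders, not values from the cited studies.

```python
# Minimal nanotoxicity-classification sketch; descriptors and labels are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = pd.DataFrame({
    "core_size_nm":        rng.uniform(5, 100, 500),
    "zeta_potential_mV":   rng.uniform(-40, 40, 500),
    "dose_ug_per_mL":      rng.uniform(1, 200, 500),
    "agglomerate_size_nm": rng.uniform(20, 800, 500),
})
# Hypothetical rule: high dose combined with small core size reduces cell viability.
toxic = (X["dose_ug_per_mL"] > 100) & (X["core_size_nm"] < 30)
y = (toxic | (rng.uniform(0, 1, 500) < 0.05)).astype(int)   # plus some label noise

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("Cross-validated accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))

clf.fit(X, y)
print(dict(zip(X.columns, clf.feature_importances_.round(3))))
```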
Adsorption Performance of Nanomaterials
Because of their high surface area, ease of functionalization, and affinity toward a wide range of pollutants, nanomaterials are excellent adsorbents [100]. Moosavi et al. [101] applied four machine learning methods to model dye adsorption on 16 activated carbon adsorbents and determined the relationship between adsorption capacity and activated carbon parameters. The results indicated that agro-waste characteristics (pore volume, surface area, pH, and particle size) contributed 50.7% to the adsorption efficiency. Among the agro-waste characteristics, pore volume and surface area were the most important influencing variables, while particle size had a limited impact. With a hypothetical set of approximately 130,000 structures of metal-organic frameworks (MOFs) with methane and carbon dioxide adsorption data at different pressures, Guo et al. [102] established models for estimating gas adsorption capacities using two deep learning algorithms, multilayer perceptrons (MLPs) and long short-term memory (LSTM) networks. The models were evaluated by performing ten iterations of 10-fold cross-validation and 100 holdout validations. The performance of the MLP and LSTM models was similar, with high prediction accuracy. The models that predicted gas adsorption at higher pressures performed better than those that predicted gas adsorption at lower pressures. In particular, the deep learning models were more accurate than RF models reported in the literature when predicting gas adsorption capacities at low pressures. Deep learning algorithms were found to be highly effective in generating models capable of accurately predicting the gas adsorption capacities of MOFs.
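As a rough illustration of the descriptor-to-uptake mapping discussed above, the sketch below trains a small neural-network regressor on synthetic MOF-style descriptors. The descriptor names, the Langmuir-like target formula, and the data are placeholders, not data from the cited studies, and an MLP is only one of the model choices mentioned there.

```python
# Minimal gas-adsorption regression sketch; descriptors and uptake values are synthetic.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = pd.DataFrame({
    "surface_area_m2_g": rng.uniform(200, 6000, 600),
    "pore_volume_cm3_g": rng.uniform(0.1, 3.0, 600),
    "void_fraction":     rng.uniform(0.2, 0.9, 600),
    "pressure_bar":      rng.uniform(0.1, 50, 600),
})
# Hypothetical CO2 uptake (mmol/g), roughly Langmuir-like in pressure.
y = (8 * X["pore_volume_cm3_g"] * X["pressure_bar"] / (5 + X["pressure_bar"])
     + rng.normal(0, 0.3, 600))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0))
model.fit(X_train, y_train)
print("Test R^2:", round(model.score(X_test, y_test), 3))
```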
Accelerated Materials Synthesis and Design
In addition to being widely utilized for predicting material properties, ML also plays a pivotal role in the synthesis of new materials. During the past few years, ML has made significant progress in the exploration of novel materials, such as highly efficient molecular organic light-emitting diodes [103], low thermal hysteresis shape memory alloys [104], and piezoelectric materials with large electrical strain [105]. The use of ML for materials synthesis not only significantly speeds up novel material discovery but also provides insight into the basic composition changes in materials from big data.
Chalcogenide Materials
Chalcogenide materials can be used in a variety of photovoltaic and energy devices, including light-emitting diodes, photodetectors, and batteries. ML has promoted the development of high-performance chalcogenide materials [106]. Li et al. [107] proposed an ML model based on an RF algorithm for predicting the formation of ABX3 and A2BB′X6 compound chalcogenides. With geometric and electrical parameters, the RF classification model reached 96.55% accuracy for ABX3 samples and 91.83% accuracy for A2BB′X6 samples. A total of 241 ABX3 chalcogenides with a 95% probability of formation were filtered from 15,999 candidate compounds, and a total of 1131 A2BB′X6 chalcogenides with a 99% probability of formation were filtered from 417,835 candidate compounds. The method presented in their work could offer valuable guidance for accelerating the discovery of perovskites. Liu et al. [108] used data from 397 ABO3 compounds and nine parameters (e.g., tolerance factor and octahedral factor) as input variables for ML. The gradient-boosted DT obtained by training was selected as the optimal model based on its average accuracy under 10-fold cross-validation. A total of 331 chalcogenides were filtered by the model from 891 data points with a classification accuracy of 94.6%. Omprakash et al. [109] compiled a dataset ranging from organometallic salt chalcogenides to 2D chalcogenides, together with their corresponding band gaps. An ML model for predicting the band gaps of all these types of chalcogenides was then trained using a graph representation learning technique. The model could accurately estimate the band gap within a few milliseconds with an average absolute error of 0.28 eV. Wang et al. [110] applied unsupervised learning to discover quaternary chalcogenide semiconductors (I2-II-IV-X4) and were successful in screening eight of these materials with good photoconversion efficiency despite a data shortage. This method shortens the material screening cycle and facilitates rapid material discovery.
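The probability-threshold screening used for the ABX3 candidates can be sketched in a few lines. The descriptors, labels, and decision rules below are hypothetical (the tolerance and octahedral factors are only named to mirror the discussion above), so the snippet illustrates the screening pattern rather than reproducing the published model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def featurize(n):
    # Hypothetical geometric/electronic descriptors for ABX3 candidates:
    # tolerance factor, octahedral factor, electronegativity difference.
    return np.column_stack([
        rng.uniform(0.7, 1.1, n),
        rng.uniform(0.3, 0.7, n),
        rng.uniform(0.5, 2.5, n),
    ])

# Labeled training compounds (1 = forms the perovskite phase, 0 = does not);
# the labeling rule here is invented for illustration.
X_train = featurize(400)
y_train = ((X_train[:, 0] > 0.8) & (X_train[:, 1] > 0.41)).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

# Screen a large pool of unlabeled candidates and keep those with a predicted
# formation probability of at least 95%, mirroring the threshold filtering above.
candidates = featurize(16000)
proba = clf.predict_proba(candidates)[:, 1]
selected = candidates[proba >= 0.95]
print(f"{len(selected)} of {len(candidates)} candidates pass the 95% threshold")
```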
Catalytic Materials
In traditional experiments, it is difficult to design efficient catalytic materials in a short time because a clear reaction mechanism is required [111]. ML can rapidly extract the relationship between the structure and performance of catalytic materials and effectively expedite the development process of new catalytic materials. Zhang et al. [112] employed a gradient boosting algorithm to build an ML model. The model utilized four key stability and catalytic features of graphene-loaded single-atom catalysts as targets to find catalytic materials suitable for the electro-hydrogenation of nitrogen. With this model, a total of 45 catalytic materials with efficient catalytic performance were successfully screened from 1626 samples. The model could be operated for the rapid screening of other electrocatalysts. Figure 12 illustrates their computational framework. Wei et al. [113] developed an ML model, which was applied in a Bayesian optimization framework to obtain molybdenum disulfide (MoS2) catalysts with stable hydrogen evolution reaction activity. To explore the structure-property relationship of the samples optimized by the ML technique, nine electrochemical and structural characterizations were performed to verify the results, including SEM, TEM, XRD, and XPS. A strong correlation was found between the structure of the optimized MoS2 and its hydrogen evolution reaction performance. Hueffel et al. [114] reported an unsupervised ML workflow that uses only five experimental data points, which could be used to accelerate the recognition of binuclear palladium (Pd) catalysts. Based on their method, some phosphine ligands were successfully predicted and experimentally verified from 348 ligands, including those that had never been synthesized before, which formed binuclear Pd(I) complexes from Pd(0) and Pd(II) species. Their strategy plays an important role in studying the formation mechanisms of Pd catalyst species, as well as the further integration of ML into catalytic research.
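Bayesian optimization loops of the kind used for the MoS2 catalysts follow a simple propose-measure-update cycle. The sketch below assumes a toy objective and hypothetical synthesis variables (temperature, precursor ratio, reaction time); it only illustrates a Gaussian-process surrogate with an upper-confidence-bound acquisition, not the cited workflow itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical search space: candidate synthesis conditions
# (temperature in C, precursor ratio, reaction time in h).
candidates = np.column_stack([
    rng.uniform(150, 250, 500),
    rng.uniform(0.5, 2.0, 500),
    rng.uniform(1, 24, 500),
])

def run_experiment(x):
    # Placeholder for a real measurement of catalytic activity
    # (e.g., negative overpotential, where larger is better).
    t, r, h = x
    return -((t - 200) ** 2 / 2000 + (r - 1.2) ** 2 + (h - 12) ** 2 / 100) + rng.normal(0, 0.05)

# Seed the loop with a handful of measured conditions.
idx = rng.choice(len(candidates), 5, replace=False)
X = candidates[idx]
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                      # sequential experiment budget
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    acquisition = mean + 1.96 * std      # upper confidence bound
    x_next = candidates[np.argmax(acquisition)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

print("best conditions found:", X[np.argmax(y)], "objective:", y.max())
```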
Superconducting Materials
Superconductivity, intrinsically regulated by finite phonon-coupled electron-electron attractions, has aroused decades of intense research interest in condensed matter physics. The development and prediction of upcoming superconducting materials with high critical temperatures are essential in many applications. ML-guided iterative experimentation may outperform standard high-throughput screening for discovering breakthrough materials in high-temperature superconductors [115,116]. Zhang et al. [117] developed an integrated ML model to accurately and robustly predict the critical temperature (Tc) of superconducting materials (Figure 13a). They used open-source materials data, ML models, and data mining methods to explore the correlation between chemical features and Tc values. The integrated model combined three basic algorithms (gradient boosting decision tree, extra trees, and light gradient boosting machine) to improve the prediction accuracy. The model achieved an R2 of 95.9% and an RMSE of 6.3 K. The study also identified the importance of various material features in Tc prediction, with thermal conductivity playing a critical role. The integrated model was used to screen out potential superconducting materials with Tc values beyond 50.0 K. This research provides insights for accelerating the exploration of high-Tc superconductors.
Roter et al. [118] used ML to predict new superconductors and their critical temperatures. They constructed a database of superconductors and their chemical compositions and applied this information to train ML models. They achieved an R2 of approximately 0.93, which was comparable to or higher than similar estimates based on other AI techniques. They also discussed factors that limit learning and suggested possible ways to overcome them. The researchers used both unsupervised and supervised ML techniques, including singular value decomposition and KNN, to improve their models' accuracy. They achieved a classification accuracy of 96.5% and an R2 of approximately 0.93 for predicting critical temperatures. They also employed their models to predict several new superconductors with high critical temperatures. However, the authors noted that incorrect entries in the database can lead to outliers in the predictions. Pereti et al. [119] proposed an ML approach to identify new superconducting materials. They utilized DeepSet technology, which allows them to input the chemical constituents of the compounds without a predetermined ordering (Figure 13b). The method was successful in classifying materials as superconducting and quantifying their critical temperature. The trained neural network was then used to search through a mineralogical database for candidates that might be superconducting. Three materials were selected for experimental characterization, and superconductivity was confirmed in two of them. This was the first time a superconducting material was identified using AI methods. The results demonstrated the effectiveness of the DeepSet network in predicting the critical temperatures of superconducting materials.
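A nearest-neighbour regressor of the kind mentioned above, predicting Tc from composition-derived features, can be prototyped directly. The features and the synthetic Tc values below are invented for illustration, so the snippet only demonstrates the modelling pattern, not the published database or its accuracy.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

# Hypothetical composition-derived features for known superconductors,
# e.g. mean atomic mass, mean valence-electron count, mean electronegativity.
X = np.column_stack([
    rng.uniform(20, 150, 800),
    rng.uniform(2, 12, 800),
    rng.uniform(1.0, 3.5, 800),
])
# Synthetic critical temperatures (K) standing in for measured Tc values.
Tc = np.clip(5 + 0.3 * X[:, 1] ** 2 - 0.05 * X[:, 0] + rng.normal(0, 3, 800), 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, Tc, test_size=0.2, random_state=0)

# KNN regression: a new compound inherits the distance-weighted Tc of its
# nearest neighbours in (scaled) feature space.
model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5, weights="distance"))
model.fit(X_train, y_train)
print(f"held-out R2: {r2_score(y_test, model.predict(X_test)):.3f}")
```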
Nanomaterial Outcome Prediction
Rapid advancements in materials synthesis techniques have led to more and more attention being paid to nanomaterials, including nanocrystals, nanorods, nanoplates, nanoclusters, and nanocrystalline thin films. Materials of this class offer enhanced physical and chemical tunability across a range of systems, including inorganic semiconductors, metals, and molecular crystals. A nanomaterial is defined as a material smaller than 100 nanometers in at least one dimension. Unlike bulk materials, nanomaterials possess different physical and chemical properties due to their unique size and shape. This technology has a broad array of application prospects, including the conversion and storage of energy, the remediation of water, medical treatment, and the storage and processing of data.
Using experimental data, Xie et al. [120] reported the development of an ML-aided method for predicting the crystallization tendency of metal-organic nanocapsules (MONCs). A prediction accuracy of >91% was achieved by using the XGBoost model. Furthermore, they synthesized a set of new crystalline MONCs using the derived features and chemical hypotheses from the XGBoost model. The results of this study demonstrate that ML algorithms can assist chemists in finding the optimal reaction parameters from a large number of experimental parameters more efficiently. Figure 14 shows a schematic representation of the working flow. Pellegrino et al. [121] tuned the TiO2 nanoparticle morphology using hydrothermal treatment. In their work, an experimental design was employed to investigate the influence of relevant process parameters on the synthesis outcome, enabling ML methods to develop predictive models. After validation and training, the models were capable of accurately predicting the synthesis outcome in terms of nanoparticle size, polydispersity, and aspect ratio. They presented a synthesis method that allows the continuous and precise control of nanoparticle morphology. This method affords the possibility to tune the aspect ratio over a large range from 1.4 (perfect truncated bipyramids) to 6 (elongated nanoparticles) and a length from 20 to 140 nm.
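A forward model of synthesis outcome, in the spirit of the morphology prediction described above, maps process parameters to a measured shape descriptor. Everything in the sketch below (the parameter ranges, the synthetic aspect-ratio values, and the example recipe) is hypothetical; it only shows how such a regressor would be trained and queried.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)

# Hypothetical hydrothermal process parameters: temperature (C), duration (h),
# precursor concentration (M), and shape-controller/Ti molar ratio.
X = np.column_stack([
    rng.uniform(140, 240, 300),
    rng.uniform(1, 48, 300),
    rng.uniform(0.05, 1.0, 300),
    rng.uniform(0.0, 2.0, 300),
])
# Synthetic target: nanoparticle aspect ratio in the 1.4-6 range discussed above.
aspect_ratio = np.clip(1.0 + 0.006 * X[:, 0] + 1.2 * X[:, 3] + rng.normal(0, 0.2, 300), 1.4, 6.0)

X_train, X_test, y_train, y_test = train_test_split(X, aspect_ratio, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE on held-out syntheses:", round(mean_absolute_error(y_test, model.predict(X_test)), 3))

# The trained model can be queried before running a synthesis to estimate
# the morphology a given (hypothetical) set of process parameters would produce.
print("predicted aspect ratio for one new recipe:", model.predict([[200, 12, 0.3, 1.0]])[0])
```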
Nanomaterial Synthesis
Nanomaterial synthesis often involves multiple reagents and interdependent experimental conditions. Each experimental variable's contribution to the final product is generally determined through trial and error, along with intuition and experience. The process of identifying the most efficient recipe and reaction conditions is therefore time-consuming, laborious, and resource-intensive [122]. In a recent study, Erick et al. [123] used SVM classification and regression models to predict the synthesis of CsPbBr3 nanosheets with controlled layer thicknesses. The SVM classification was shown to accurately predict the likelihood that a CsPbBr3 synthesis would form a majority population of quantum-confined nanoplatelets. Additionally, SVM regression can be used to determine the average thickness of the synthesized CsPbBr3 nanoplatelets with sub-monolayer accuracy. Epps et al. [124] proposed a method based on ML experiment selection and high-efficiency autonomous flow chemistry. The approach utilized SVM regression to predict the thickness of the nanoplatelets and was shown to be accurate and reliable. Using this method, inorganic perovskite quantum dots (QDs) in flow were synthesized autonomously. By using less than 210 mL of starting solutions and without user selection, this method synthesized precision-tailored QD compositions within 30 h. This would enable the commercialization of these QDs, as well as their integration into various applications. Furthermore, the method could be used for other types of nanomaterials, such as nanorods and nanowires.
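The two-stage SVM scheme described above (first classify whether a recipe yields quantum-confined nanoplatelets, then regress their average thickness) can be mocked up as follows. All descriptors, thresholds, and thickness values are synthetic placeholders; only the classification-then-regression pattern reflects the cited approach.

```python
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)

# Hypothetical synthesis descriptors: ligand/Pb ratio, Cs/Pb ratio,
# injection temperature (C), and antisolvent volume fraction.
X = np.column_stack([
    rng.uniform(0.5, 8.0, 500),
    rng.uniform(0.2, 2.0, 500),
    rng.uniform(20, 120, 500),
    rng.uniform(0.0, 0.8, 500),
])
# Synthetic labels: does the synthesis yield mostly quantum-confined nanoplatelets?
is_platelet = (X[:, 0] > 3.0) & (X[:, 2] < 80)
# Synthetic average thickness (in monolayers) for the platelet-forming recipes.
thickness = 2 + 0.8 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 0.2, 500)

Xc_train, Xc_test, yc_train, yc_test = train_test_split(X, is_platelet, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(Xc_train, yc_train)
print(f"platelet/non-platelet accuracy: {clf.score(Xc_test, yc_test):.3f}")

# Regress thickness only on the subset predicted/known to form nanoplatelets.
Xr_train, Xr_test, yr_train, yr_test = train_test_split(
    X[is_platelet], thickness[is_platelet], test_size=0.2, random_state=0)
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(Xr_train, yr_train)
print(f"mean thickness error (monolayers): {np.abs(reg.predict(Xr_test) - yr_test).mean():.2f}")
```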
Inverse Design of Nanomaterials
As opposed to the direct approach that leads from the chemical space to the desired properties, inverse design starts with the desired properties as the "input" and ends with the chemical space as the "output" [125]. In the field of nanomaterials, the complexity of inverse design is enhanced by the finite dimensions and variety of shapes, resulting in a larger design space [126]. The inverse design of nanomaterials was quite challenging in the past; it can now be explored using interpretable relationships between structure and property generated by ML methods. A new inverse design method for metal nanoparticles based on deep learning was proposed and demonstrated by Wang et al. [127]. In comparison to the least squares method, the calculated results indicated that the inverse design method utilizing the back-propagation network had greater adaptability, a smaller minimum error, and could be adjusted based on S parameters. Inverse design systems based on deep learning neural networks may be applied to the inverse design of nanoparticles of different shapes. In another study, Li et al. [126] demonstrated a novel approach to inverse design using multi-target regression with RFs. A multi-target regression model was used with a precursory forward structure-property prediction to capture the most important characteristics of a single nanoparticle before the problem was inverted and a number of structural features were simultaneously predicted. A general workflow was demonstrated on two nanoparticle datasets, and it has the capacity to rapidly predict relationships between properties and structures for guiding further research and development without the need for additional optimization or high-throughput sampling. He et al. [128] employed a DNN to establish mappings between the far-field spectra/near-field distribution and the dimensional parameters of three different types of plasmonic nanoparticles, including nanospheres, nanorods, and dimers. Through the DNN, both the forward prediction of far-field optical properties and the inverse prediction of nanoparticle dimensional parameters can be accomplished accurately and efficiently. Figure 15 shows the structure of the reported machine learning model for predicting optical properties and designing nanoparticles.
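The multi-target inverse mapping described above can be sketched with a multi-output random forest. The toy spectra and nanorod dimensions below are generated from an invented closed-form response rather than FDTD simulations, so the example only conveys the property-to-structure regression pattern.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(13)

# Hypothetical forward data: nanorod dimensions -> coarse optical response.
# Structure: length (nm) and diameter (nm); property: a few sampled points of a
# far-field extinction spectrum (stand-ins for simulated spectra).
dims = np.column_stack([rng.uniform(20, 120, 1000), rng.uniform(10, 40, 1000)])
aspect = dims[:, 0] / dims[:, 1]
wavelengths = np.linspace(500, 900, 5)
peak = 450 + 95 * aspect[:, None]                       # toy resonance position
spectra = np.exp(-((wavelengths - peak) ** 2) / 5000)   # toy spectral response

# Inverse model: multi-output regression from spectra back to the two dimensions,
# in the spirit of the multi-target RF approach described above.
X_train, X_test, y_train, y_test = train_test_split(spectra, dims, test_size=0.2, random_state=0)
inverse = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

pred = inverse.predict(X_test)
print("mean absolute error (length, diameter) in nm:",
      np.abs(pred - y_test).mean(axis=0).round(1))
```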
Conclusions, Challenges, and Prospects
This review discussed the use of machine learning (ML) in the field of materials science for predicting material properties and guiding material synthesis. The review briefly outlined the basic principles of ML and introduced commonly used algorithms and their applications in material screening and property prediction. It also presented the research progress of ML in predicting material properties and guiding material synthesis. The review suggested that ML can greatly reduce computational costs, shorten the development cycle, and improve computational accuracy, making it a promising research approach in novel materials screening and material property prediction.
It is important to note, however, that the following challenges still exist. Most ML algorithms require large amounts of data to work properly. Even for the simplest problems, thousands of examples are desired. Acquiring an effective dataset is critical for the research and implementation of ML in materials science. However, data in materials science are characterized by high acquisition costs, excessive concentration or dispersion, and a lack of uniform processing standards. A dataset with a large amount of data, a uniform distribution, and matching feature parameters is often extremely difficult to obtain. Although material databases have greatly facilitated researchers' access to data, many published data have not been specified to date. The task of enriching existing databases is challenging. Text mining techniques could be effective in rapidly collecting data scattered in the literature. This approach could greatly enhance existing databases and create specialized databases.
The selection of features significantly affects the accuracy of ML models. Currently, the use of manual feature engineering to filter features is often influenced by the researcher's experience and intuition. This approach may overlook some significant features. In contrast, automated feature engineering automatically constructs new candidate features from the data and selects the most appropriate features for model training, which could effectively solve the current dilemma.
ML methods cannot replace traditional computational and experimental studies. Although ML methods have shown remarkable promise in guiding the synthesis of novel materials and predicting material properties, they are still mostly "black boxes" [108]. The predicted results still need to be experimentally verified, and the underlying physicochemical laws still need to be studied in depth. Therefore, ML can only perform some exploratory tasks at present. With further improvement of theories and methods, however, ML might eventually replace traditional experimental research by providing novel ideas and research methods for the field of materials science. The application of ML in the field of materials science and engineering is only just beginning, and its future potential is vast.
Figure 1. An example of an ML workflow.
Figure 2. Evolution of the ML workflow in nanomaterial discovery and design. (a) First-generation approach. In this paradigm, there are two main steps: feature engineering from raw database to descriptors and model building from descriptors to target model. (b) Second-generation approach. The key characteristic that distinguishes this approach from the first-generation approach is eliminating human-expert feature engineering, which can directly learn from raw nanomaterials. Reproduced with permission from [21].
Figure 3. Schematic of a typical KNN algorithm.
Figure 4. Diagram of a DT. The circles and squares indicate internal nodes and leaf nodes, respectively. Different colors represent different classes. Reproduced with permission from [11].
Figure 5. Diagram of a typical ANN.
Figure 6. (a) A portion of the HEA interaction network with Fruchterman Reingold layout, adapted with permission from [44]. (b) Schematic illustration of an RF structure, adapted with permission from [47]. (c) A multi-layer neural network model layout, adapted with permission from [50].
Figure 7. Some deep learning algorithm structures. (a) Schematic illustration of the design procedures of metaporous materials with GANs, adapted with permission from [65]. (b) Structure of a vision transformer, adapted with permission from [67]. (c) Illustration of the concept of using image identification based on the improved Faster R-CNN model to identify wear, adapted with permission from [69].
The authors of [74] derived an artificial-intelligence-aided data-driven infrastructure called the Jilin Artificial-intelligence aided Materials-design Integrated Package (JAMIP). The organization of JAMIP abides by the data lifecycle in computational materials informatics, from data generation to collection and learning, as shown in Figure 8. It provides tools for materials production, high-throughput calculations, data extraction and management, and ML-based data mining. The authors demonstrated the usefulness of JAMIP in exploring materials informatics in optoelectronic semiconductors, specifically halide perovskites. Hu et al. [75] proposed and developed MaterialsAtlas.org (accessed on 19 August 2023), a web-based materials informatics toolbox. The MaterialsAtlas platform includes tools for chemical validity checks, formation energy and e-above-hull energy checks, property prediction, screening of hypothetical materials, and utility tools. The toolbox lowers the barrier for materials scientists in data-driven exploratory materials discovery.
Figure 8. Overview of the JAMIP code framework. The program comprises three major parts based on the material data's lifecycle: data generation (blue), data collection (yellow), and data learning (green). Reproduced with permission from [74].
Figure 9. Logic diagram of predicting the maximum energy density and exploring the potential effective structure of composites through the ML method, reproduced with permission from [85].
Figure 11. Schematic workflow of data compilation, descriptor generation, machine learning modeling, experimental validation, and mechanism interpretation, reproduced with permission from [97].
Figure 12. Catalyst structures, target properties, and computational framework. (a) Structural representation of three-coordinated and four-coordinated configurations. Letter "M" represents the central metal atom, and letter "C" represents the coordinating atom of M. (b) Target properties for describing the N2 fixation performance of the catalyst. (c) ML screening and descriptor building framework of their work. Reproduced with permission from [112].
Figure 13. (a) Workflow of the integrated model-based ML methods for accurate Tc prediction and new superconductor material mining, adapted with permission from [117]. (b) A schematic layout of the DeepSet architecture, adapted with permission from [119].
Figure 14. Schematic representation of the working flow when machine learning models are incorporated into the prediction of the crystallization propensity of MONCs, with permission from [120].
Figure 15. Structures of machine learning models for predicting optical properties and designing nanoparticles. (a) Far-field and near-field optical data obtained from the finite-difference time-domain (FDTD) simulations were used to train three different machine learning models: far-field spectra and structural information for (i) structure classification, far-field spectra and dimensions for (ii) the spectral DNN, and near-field enhancement maps and dimensions for (iii) the E-field DNN. After training, machine learning models can be used to perform forward prediction and/or inverse design. The solid and dashed red arrows represent the forward prediction and the inverse design process, respectively. (b) Detailed architecture of the three machine learning models in panel (a), with permission from [128].
Table 1. An overview of some databases in materials science.
https://www.ccdc.cam.ac.uk/ (accessed on 17 July 2023): the world's largest database of small-molecule organic and metal-organic crystal structure data, now at over 1.2 million structures.
http://cds.dl.ac.uk/ (accessed on 17 July 2023): a comprehensive collection of crystal structure information for non-organic compounds, including inorganics, ceramics, minerals, and metals; it covers the literature from 1915 to the present and contains over 60,000 entries on the crystal structure of inorganic materials.
Table 2. Some applications of shallow learning in materials science.
Liu et al. [51]: ANN; development of a predictive model for the chloride diffusion coefficient in concrete.
ReadME generation from an OWL ontology describing NLP tools
The paper deals with the generation of ReadME files from an ontology-based description of NLP tools. ReadME files are structured and organised according to properties defined in the ontology. One of the problems is being able to deal with the multilingual generation of texts. To do so, we propose to map the ontology elements to multilingual knowledge defined in a SKOS ontology.
Introduction
A ReadMe file is a simple and short written document that is commonly distributed along with a computer software, forming part of its documentation. It is generally written by the developer and is supposed to contain basic and crucial information that the user reads before installing and running the software. Existing NLP software may range from unstable prototypes to industrial applications. Many of them are developed by researchers, in the framework of temporary projects (training, PhD theses, funded projects). As their use is often restricted to their developers, they do not always meet information technology (IT) requirements in terms of documentation and reusability. This is especially the case for under-resourced languages, whose tools are often developed by researchers and released without standard documentation, or written fully or partly in the developer's native language. Providing a clear ReadMe file is essential for effective software distribution and use: a confusing one could prevent the user from using the software. However, there are no well-established guidelines or good practices for writing a ReadMe. In this paper we propose an ontology-based approach for the generation of ordered and structured ReadMe files for NLP tools. The ontology defines a meta-data model built on a joint study of NLP tool documentation practices and existing meta-data models for language resources (cf. section 2). Translation functions (TFs) for different languages (currently eight) are associated to the ontology properties characterising NLP tools. These TFs are defined within the Simple Knowledge Organization System (SKOS) (cf. section 2.2). The ontology is filled via an on-line platform by NLP experts speaking different languages. Each expert describes the NLP tools processing the languages he speaks (cf. section 3). A ReadMe file is then generated in different languages for each tool described within the ontology (cf. section 3). Figure 1 depicts the whole process of multilingual ReadMe generation.
NLP tools ontology
This work takes place in the framework of the project MultiTal, which aims at making NLP tool descriptions available through an on-line platform containing factual information and verbose descriptions that should ease the installation and use of the considered NLP tools. This project involves numerous NLP experts in diverse languages, currently Arabic, English, French, Hindi, Japanese, Mandarin Chinese, Russian, Ukrainian and Tibetan. Our objective is to take advantage of the NLP experts' knowledge both to retrieve NLP tools in their languages and to generate multilingual ReadMe files for the retrieved NLP tools. A first step to reach this goal is to propose a conceptual model whose elements are as much as possible independent of the language. Then, a lexicalisation for each targeted language is associated with each conceptual element.
Ontology conceptualisation
In order to conceptualise an ontology that structures and standardises the description of NLP tools, we proceeded to a joint study of:
• Documentation for various NLP tools processing the aforementioned languages that have been installed and closely tested;
• A large collection (around ten thousand) of structured ReadMe files in the Markdown format, crawled from GitHub repositories;
• Meta-data models for Language Resources (LR) such as CMDI (Broeder et al., 2012) or the META-SHARE meta-data model ontology (McCrae et al., 2015).
This study gave us guidelines to define bundles of properties sharing a similar semantics, for example, properties referring to the affiliation of the tool (as hasAuthor, hasLaboratory or hasProjet), to its installation or to its usage. We distinguish two levels of meta-data: 1) a mandatory level providing the basic elements that constitute a ReadMe file and 2) a non-mandatory level that contains additional information such as relations to other tools, fields or methods. The latter serve tools' indexation within the on-line platform. Figure 2 details the major bundles of properties that we conceptualised to describe an NLP tool. The processed languages are defined within the bundle Task. Indeed, an NLP tool may have different tasks which may apply to different languages. As our ambition is to propose pragmatic descriptions detailing the possible installation and execution procedures, we particularly focused on the decomposition of these procedures into atomic actions.
Multilingual translation functions
Within the ontology, NLP tools are characterised by their properties. Values allocated to these properties are as much as possible independent of the language (date of creation and last update, developer or license names, operating system information, ...). Hence, what needs to be lexicalised is the semantics of each defined property. Each NLP expert associates to each property a translation function (TF) that formalises the lexical formulation of the property in the language he speaks. TFs are defined once for each language. The amount of work has not exceeded half a day per language to associate TFs to the around eighty properties of the ontology. In order to ensure a clean separation between the conceptual and the lexical layer, TFs are defined within a SKOS ontology. The SKOS ontology structure is automatically created from the OWL ontology. Thus, adding a new language essentially consists in adding within SKOS the TFs in that particular language for each OWL property. Translation functions are of two kinds, with P a property, * a possibly empty set of words, V1 and V2 values of the property P, and @lang an OWL language tag that determines the language in which the property is lexicalised. Below, two examples of translation functions for Japanese that have been associated to the properties authorFirstName and download.
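To make the idea of a translation function concrete, here is a minimal sketch of how per-language templates could be attached to ontology properties and applied to language-independent values. The property names and templates are illustrative only; they are not taken from the actual MultiTal ontology or its SKOS encoding.

```python
# Illustrative translation functions: each OWL property is mapped, per language
# tag, to a template whose slot is filled with the language-independent value.
# "hasLicense" and the template wordings are hypothetical examples.
TRANSLATION_FUNCTIONS = {
    "hasAuthor":  {"en": "Author: {value}",  "fr": "Auteur : {value}"},
    "hasLicense": {"en": "License: {value}", "fr": "Licence : {value}"},
    "download":   {"en": "Download the tool from {value}",
                   "fr": "Télécharger l'outil depuis {value}"},
}

def lexicalise(prop: str, value: str, lang: str) -> str:
    """Render one property/value pair in the requested language."""
    template = TRANSLATION_FUNCTIONS[prop][lang]
    return template.format(value=value)

# Example: the same ontology facts rendered for two target languages.
facts = [("hasAuthor", "J. Smith"), ("download", "https://example.org/tool.zip")]
for lang in ("en", "fr"):
    print("\n".join(lexicalise(p, v, lang) for p, v in facts))
```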
Natural language generation of multilingual ReadMe files
In our framework, each NLP expert finds, installs and uses available NLP tools processing the language he speaks. Then, he describes every tool that runs correctly via an on-line platform connected to the ontology (cf. Figure 1). Elements of description do not only come from an existing ReadMe as, when they exist, they are rarely exhaustive. Hence, experts also gather tool information from the web and while installing and testing each tool. At this step, the OWL ontology is filled and the translation functions of each property are defined within the SKOS ontology.
Our aim is to generate ordered and structured ReadMe files in different languages. To do so, we use natural language generation (NLG) techniques adapted to the Semantic Web, also named ontology verbalisation (Staykova, 2014; Bouayad-Agha et al., 2014; Cojocaru and Trãuşan-Matu, 2015; Keet and Khumalo, 2016). NLG can be divided into several tasks (Reiter and Dale, 2000; Staykova, 2014). Our approach currently includes: content selection, document structuring, knowledge aggregation, and lexicalisation. The use of more advanced tasks such as referring expression generation, linguistic realisation and structure realisation is in our perspectives.
Ontology content selection and structuring
Unlike the majority of ontology verbalisation approaches, we do not intend to verbalise the whole content of the ontology. We simply verbalise the properties and values that characterise pertinent information that has to appear in a ReadMe file. The concerned properties are those which belong to the mandatory level (cf. section 2.1). The structure of ReadMe files is formalised within the ontology. First, ReadMe files are organised in sections based on the bundles of properties defined in the ontology (cf. Figure 2). Within each section, the order of properties is predefined. Both installation and execution procedures are decomposed into their atomic actions. These actions are automatically numbered according to their order of execution (cf. Figure 3). Different installation and execution procedures may exist according to the operating system (Linux, Windows, ...), architecture (32-bit, 64-bit, x86, ...), language platform (JAVA 8, Python 3, ...) and so on. As well, execution procedures depend on the tasks the NLP tool performs and the languages it processes. Thus, each procedure is distinguished and its information grouped under its heading. Moreover, execution procedures are also ordered, as an NLP tool may have to perform tasks in a particular ordered sequence. This structuring is part of the ontology conceptualisation. It consists in defining property and sub-property relations and in associating a sequence number to each property that has to be lexicalised.
Ontology content aggregation and lexicalisation
Following the heuristics proposed in (Androutsopoulos et al., 2014) and (Cojocaru and Trãuşan-Matu, 2015) to obtain concise text, OWL property values are aggregated when they characterise the same object. For example, if an execution procedure (ep_i) has two values for the operating system (e.g., Linux and Mac), then the two values are merged as follows: hasOS(ep_i, Linux) ∧ hasOS(ep_i, Mac) ⇒ hasOS(ep_i, Linux and Mac). The last step consists in property lexicalisation. While a number of approaches rely on ontology elements' names and labels (often in English) to infer a lexicalisation (Bontcheva, 2005; Sun and Mellish, 2006; Williams et al., 2011), in our approach the lexicalisation of properties depends only on their translation functions. During the ontology verbalisation, each targeted language is processed one after the other. The TF of each encountered property for the current language is retrieved and used to lexicalise the property. Property values are considered as variables of the TFs. They are not translated, as we ensure that they are as much as possible independent of the language. Figure 3 gives an example of two installation procedures for the NLP tool Jieba, which processes Chinese. In this example, actions are lexicalised in English. Furthermore, the lexicalised command lines appear in between brackets.
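The aggregation step can be illustrated with a small helper that merges values of the same property attached to the same procedure before lexicalisation. The triples below are made-up examples in the spirit of the hasOS case above.

```python
from collections import defaultdict

def aggregate(triples):
    """Merge values of the same (subject, property) pair into one enumeration,
    so that hasOS(ep1, Linux) and hasOS(ep1, Mac) become hasOS(ep1, 'Linux and Mac')."""
    grouped = defaultdict(list)
    for subject, prop, value in triples:
        grouped[(subject, prop)].append(value)
    merged = []
    for (subject, prop), values in grouped.items():
        if len(values) == 1:
            merged.append((subject, prop, values[0]))
        else:
            merged.append((subject, prop, ", ".join(values[:-1]) + " and " + values[-1]))
    return merged

# Hypothetical ontology facts for one execution procedure.
triples = [("ep1", "hasOS", "Linux"), ("ep1", "hasOS", "Mac"), ("ep1", "hasVersion", "2.1")]
print(aggregate(triples))
# [('ep1', 'hasOS', 'Linux and Mac'), ('ep1', 'hasVersion', '2.1')]
```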
As a result of this generation, all ReadMe files have the same structure, organisation and, as much as possible, level of detail, especially regarding installation and execution procedures, which represent the key information for a tool's usage. The resulting texts are simple, which suits a ReadMe. However, it could be valuable to use more advanced NLG techniques such as referring expression generation, linguistic realisation and structure realisation to produce less simplified natural language texts.
Conclusion
We proposed an ontology-based approach for generating simple, structured and organised ReadMe files in different languages. ReadMe structuring and lexicalisation are guided by the ontology properties and their associated translation functions for the targeted languages. The generated ReadMes are intended to be accessible via an on-line platform. This platform documents, in several languages, NLP tools processing different languages. In the near future, we plan to evaluate the complexity for end-users of different levels of expertise to install and execute NLP tools using our generated ReadMe files. We also hope that, as a side-product, the proposed conceptualisation may provide a starting point to establish guidelines and best practices that NLP tool documentation often lacks, especially for under-resourced languages.
iCD8α cells: living at the edge of the intestinal immune system
Danyvid Olivares-Villagómez and Luc Van Kaer
In order to survive, organisms must distinguish the dangers and growth opportunities that their surrounding environments provide. Food products and microbes are part of these environments and constitute a constant challenge. In humans and most other vertebrates food and microbes have easy access to the inside of the organism by simply entering the gastrointestinal tract. It is here that persistent encounters between the host's immune defenses and outside intruders take place. Although the nature of these encounters varies, there is one common aspect to all of them: the outside environment is separated from the inside tissues by a thin, single layer of epithelial cells known as the intestinal epithelium. Although this monolayer of intestinal epithelial cells (IEC) has a potent barrier function, it does not provide sufficient protection to the organism by itself. This barrier is supported by an extensive network of cells, tissues and organs that interconnect, in one way or another, with the intestinal epithelium. This support system is known as the intestinal mucosal immune system. In between the IEC reside a large number of lymphoid cells known as intraepithelial lymphocytes (IEL) [1]. IEL are at the edge of the mucosal immune system and are therefore considered sentinels and early responders to microbial invaders. IEL constitute a diverse group of immune cells with distinct functions. Many IEL are T lymphocytes of the adaptive immune system that express a T cell receptor (TCR) comprised of either αβ or γδ chains, whereas others lack TCR expression and belong to the innate arm of the immune system. Although extensive research has been focused on defining the functions and developmental origins of TCR+ IEL, little is known about innate-type, TCR− IEL. Recent studies in immunology have focused on subsets of innate lymphoid cells (ILC) [2], and several of these cell types have been identified within the IEL compartment. For example, this compartment contains an ILC population that expresses the natural killer (NK) cell markers NKp46 and NK1.1, and produces the anti-viral and pro-inflammatory cytokine interferon (IFN)-γ [3]. Additionally, our research group recently identified another TCR− IEL population characterized by expression of CD8α homodimers [4]. This novel subset of lymphoid cells, which we have called iCD8α cells, possesses many attributes associated with innate immune effector functions, such as the capacity to produce pro-inflammatory cytokines and chemokines, exhibit cytotoxicity, engulf and kill bacterial pathogens, and present antigens to MHC class II-restricted T cells. Collectively, these recent findings have revealed that the IEL compartment contains multiple populations of both TCR+ and TCR− lymphoid cells with diverse functions. What is the functional relevance of iCD8α cells? Based on their anatomical location, iCD8α cells would be expected to interact with microorganisms present in the lumen of the intestine, especially those microbes that may directly damage the epithelium. Indeed, we reported that iCD8α cells were able to control colonization of the mouse colon by Citrobacter rodentium, a bacterial organism that serves as a model for the human pathogen Escherichia coli [4]. We demonstrated that iCD8α cells are capable of engulfing and killing C. rodentium bacteria ex vivo, raising the possibility that iCD8α cells control pathogenic microbes through this mechanism.
Additionally, iCD8α cells may be involved in the homeostasis of commensal microorganisms. Owing to their antigen-presenting properties, combined with their capacity to engulf bacteria, it is tempting to speculate that iCD8α cells can present antigens to TCR+ IEL and help orchestrate immune responses in the intestinal epithelium. Because of their intimate relationship, iCD8α cells likely engage in reciprocal interactions with IEC. For example, IEC produce the cytokine IL-15, which we have shown is critically important for the development and survival of iCD8α cells [4]. Additionally, IEC express the thymus leukemia (TL) antigen (encoded by the H2-T3 gene), a non-classical MHC class I molecule that functions as a high affinity ligand of the CD8α homodimer [5, 6]. TL expression was previously shown to play an inhibitory role in CD8αα+TCR+ IEL activation [5, 7, 8], and this molecule may similarly influence the effector functions of iCD8α cells. Conversely, our findings showed that iCD8α cells express high amounts of granzymes A and B, suggesting that these cells exhibit cytotoxic properties, which may be directed against IEC. Possible conditions in which this may occur include infection or transformation of IEC. In unpublished studies we have further found that iCD8α cells contribute to the development of innate colitis induced by antibodies against the co-stimulatory molecule CD40. We also identified human equivalents to these cells, which were partially depleted in newborns with necrotizing enterocolitis, a condition mostly seen in premature infants [4]. These findings therefore imply an important contribution of iCD8α cells to anti-microbial immunity and colitis. Living at the edge of the intestinal immune system, iCD8α cells are continuously exposed to foreign substances and microbes. Studies thus far have shown that these cells contribute to providing a first line of defense against microbial pathogens. Future studies will no doubt unveil additional functions of these cells in promoting immune and tissue homeostasis in the delicate microenvironment of the gut mucosa.
Designing Business Models for the Internet of Things
This article investigates challenges pertaining to business model design in the emerging context of the Internet of Things (IOT). The evolution of business perspectives to the IOT is driven by two underlying trends: i) the change of focus from viewing the IOT primarily as a technology platform to viewing it as a business ecosystem; and ii) the shift from focusing on the business model of a firm to designing ecosystem business models. An ecosystem business model is a business model composed of value pillars anchored in ecosystems and focuses on both the firm's method of creating and capturing value as well as any part of the ecosystem's method of creating and capturing value. The article highlights three major challenges of designing ecosystem business models for the IOT, including the diversity of objects, the immaturity of innovation, and the unstructured ecosystems. Diversity refers to the difficulty of designing business models for the IOT due to a multitude of different types of connected objects combined with only modest standardization of interfaces. Immaturity suggests that quintessential IOT technologies and innovations are not yet products and services but a "mess that runs deep". The unstructured ecosystems mean that it is too early to tell who the participants will be and which roles they will have in the evolving ecosystems. The study argues that managers can overcome these challenges by using a business model design tool that takes into account the ecosystemic nature of the IOT. The study concludes by proposing the grounds for a new design tool for ecosystem business models and suggesting that "value design" might be a more appropriate term when talking about business models in ecosystems.
Introduction
According to Gershenfeld and Vasseur (2014), the impressive growth of the Internet in the past two decades is about to be overshadowed as the "things" that surround us start going online. The "Internet of Things" (IOT), a term coined by Kevin Ashton of Procter & Gamble in 1998, has become a new paradigm that views all objects around us connected to the network, providing anyone with "anytime, anywhere" access to information (ITU, 2005; Gomez et al., 2013). The IOT describes the interconnection of objects or "things" for various purposes including identification, communication, sensing, and data collection (Oriwoh et al., 2013). "Things" range from mobile devices to general household objects embedded with capabilities for sensing or communication through the use of technologies such as radio frequency identification (RFID) (Oriwoh et al., 2013; Gomez et al., 2013). The IOT represents the future of computing and communications, and its develop-
New web-based business models being hatched for the Internet of Things are bringing together market players who previously had no business dealings with each other.Through partnerships and acquisitions, […] they have to sort out how they will coordinate their business development efforts with customers and interfaces with other stakeholders." " Theoretical Background In today's networked world, businesses are becoming parts of complex business ecosystems.This complexity increases when transforming from centralized towards decentralized and distributed network structures (Barabasi, 2002;Möller et al., 2005).Different structures emphasize different types of activities in the ecosystem, and a continuously increasing level of complexity calls for new types of value systems (cf.Möller et al., 2005).Muegge (2011) describes business ecosystems as institutions of participation "where organizations and individuals typically self-identify as an ecosystem, both in their own internal discourse and in the brand identity they convey to others".He also points out that a business ecosystem refers to an organization of economic actors whose individual business activities are anchored around a platform, and that a platform is an organization of things. The technological platform forms the core of a business ecosystem (Cusumano & Gawer, 2002).Muegge (2011) defines a platform as a set of technological building blocks and complementary assets that companies and individuals can use and consume to develop complementary products, technologies, and services.Furthermore, Muegge (2013) presents a system of systems view (i.e., an "architecture"), according to which a platform is an organization of things (e.g., technologies and complementary assets), a community is an organization of people, and a business ecosystem is an organization of economic actors.Therefore, the core of an IOT ecosystem refers to the interconnections of the physical world of things with the virtual world of Internet, the software and hardware platforms, as well as the standards commonly used for enabling such interconnection (Mazhelis et al., 2012).Moore (1996) defines a business ecosystem as "an economic community supported by a foundation of interacting organizations and individuals."A business ecosystem includes customers, lead producers, competitors, and other stakeholders.He argues that the leadership (keystone) companies have a strong influence over the co-evolutionary processes.Peltoniemi (2005) refers to systems theory by arguing that "the system is more than the sum of its parts" and reminds us that the operation of the system cannot be understood by studying its parts detached from the entity.She also argues that a socio-economic system such as a business ecosystem is a complex adaptive system, and that its population develops through co-evolution with the greater environment, self-organization and emergence (i.e., the ability and process to create new order), and adaptation to the environment. 
From the business model of a firm to ecosystem business models Since the early 2000s, the concept of "business model" has surged into management vocabulary, and the use of the term has become fashionable (Shafer et al., 2005). It is a powerful concept (Zott & Amit, 2008) and has become of increasing importance since the dot-com era (Demil & Lecocq, 2010). The academic research into business models is underdeveloped, with no commonly accepted view of what the business model should consist of (Morris et al., 2005; Osterwalder et al., 2005; Schweizer, 2005). According to Zott, Amit, and Massa (2011), previous literature has viewed a business model in a multitude of ways, including a statement, a description, a representation, an architecture, a conceptual tool or model, a structural template, a method, a pattern, and a set. Furthermore, they found that the business model is often studied without an explicit definition of the concept. In general, the thinking around business models has changed over the past decade. According to Achtenhagen, Melin, and Naldi (2013), there has been a fundamental change from "what business models are" towards understanding "what business models are for". There seems to be a consensus among scholars that a business model spells out a particular firm's way of doing business (cf. Osterwalder et al., 2005; Rajala & Westerlund, 2008; Casadesus-Masanell & Ricart, 2010; Teece, 2010). For example, Osterwalder, Pigneur, and Tucci (2005) argue that "a business model is the blueprint of how a company does business". Moreover, business models are understood as entities, breakable into components or various modules. Shafer, Smith, and Linder (2005) identify up to 20 different business model components categorized into four main areas, and Osterwalder and Pigneur (2010) discuss the various components as nine pillars. Muegge (2012) uses the components view to provide a method of business model discovery for technology entrepreneurs. Although scholars are unified in their view of the business model as a firm-level construct, they emphasize its systemic nature (Rajala & Westerlund, 2008). For instance, Timmers (1998) describes a business model as the "architecture of the product, service and information flows, including a description of the various business actors and their roles; a description of the potential benefits for the various business actors; and a description of the sources of revenues". The literature on business ecosystems suggests the need for a deeper network view on business models (cf. Carbone, 2009; Muegge, 2013). Existing business model templates and frameworks are adequate when examining the challenges faced by single existing organizations but are less suited to analyzing the interdependent nature of the growth and success of companies that are evolving in the same innovation ecosystem (Weiller & Neely, 2013). Considering the development of the IOT field, it is clear that interdependency due to being connected with other actors through technical and business ties is becoming more and more essential.
Pitfalls of Making Money in the Internet of Things Previous research is nearly silent on the challenges related to monetizing the IOT. Wurster (2014) is among the few to categorize the barriers that prevent companies from moving ahead in terms of making money with the IOT. According to her, the IOT has a major technological impact, which brings about problems for companies. These issues include the challenge of identifying horizontal needs and opportunities, the managerial challenge related to internal team alignment (i.e., matching technology to the objectives of business developers), and the ways to overcome the market maturity problem for novel IOT technology. We extend this view and identify three contemporary challenges of the IOT, comprising the diversity of objects, the immaturity of innovation, and the unstructured ecosystems. These challenges are generated based on a literature review and discussions with experts on the IOT. Relying on Muegge (2011), these challenges focus on the platform, developer community, and business ecosystem spheres of the formation of IOT-based ecosystem business models. Diversity of objects The problem of diversity of objects refers to the difficulty in designing business models for the IOT due to a multitude of different types of connected objects and devices without commonly accepted or emerging standards. The IOT is a network of interconnected objects (Evans, 2011), where everything from toothbrushes and sportswear to refrigerators and cars will have an online presence. For all these different kinds of "things", it will be extremely challenging to standardize the interfaces with which they can connect to the Internet. The diversity of objects brings about another challenge for managers given that there are virtually endless ways of connecting an object, a thing, a business, and a consumer together (Leminen et al., 2012). Therefore, the continuum of possible business models keeps increasing. Whereas recent estimates put forward that there are presently 10 billion connected devices and there will be 50 billion devices by 2020, more than 99 percent of physical objects that may one day join the network are still not connected (Evans, 2011). These estimates suggest that an unprecedented number of objects will be part of the future Internet. There is a need for the emergence of keystones that would shape the IOT business ecosystems through business model innovation (cf. Carbone, 2009). However, presently, it is too early to tell which will be the significant yet evolving ecosystems in the IOT field and which participant(s) will become keystone players within them. Such stakeholders could be, for example, an object/device supplier, a supplier of software infrastructure, a supplier of hosted solutions or smart services, an IOT operator, a value-added service provider or a full service integrator, a data collector/analyzer, or even an (open source) user community (cf. Carbone, 2009). Therefore, instead of focusing on the key stakeholder(s), it may be better to focus on the generation and capture of value in the ecosystems. The unstructured IOT ecosystems result in the need for IOT-specific business model frameworks that help construct and analyze the ecosystem and business model choices and articulate this integrated value for the stakeholders.
Potential Solutions We propose that managers can overcome the previously discussed challenges and be able to design feasible business models for the IOT if they change their focus towards an ecosystem approach of doing business and if they use business model design tools that consider the ecosystem nature of the IOT rather than emphasize an individual company's self-centered objectives. These endeavours are discussed in this section. We suggest that managers need to shift their focus from "the business model of a firm" to "ecosystem business models". However, the term "ecosystem business models" has at least three interpretations in the literature. First, the term can refer to a business model with specific properties - in this case, a business anchored in ecosystem concepts (e.g., the concept of a "green business model" that appeals to ecologically-motivated stakeholders and has specific "green" qualities) (Westerlund, 2013). Second, an ecosystem business model (or category of business models) can be shared by participants of an ecosystem (e.g., the term "fabless semiconductor business model", which implies that all fabless semiconductor firms are more or less the same) (Low & Muegge, 2013). Third, it can refer to a construct at a level of analysis above the firm that explains how the entire ecosystem works towards common goals rather than how the firm-level business works (cf. Battistella et al., 2013). However, the third interpretation usually refers to the ecosystem structure and mechanisms rather than focusing on the ecosystem as a business model (Ritala et al., 2013). Rather than understanding these various interpretations as distinct concepts, this study understands them as different views of the same phenomena. We argue that an ecosystem business model is composed of a set of value pillars (cf. Osterwalder and Pigneur, 2010) anchored in ecosystems, which focus on both the firm's method of creating and capturing value as well as any part of the ecosystem's method of creating and capturing value to the ecosystem. There have been attempts to define the IOT business ecosystem from the platform perspective (cf. Mazhelis et al., 2012), but the present focus of IOT players on fragmented solutions and applications fails to support these efforts. The basic approach towards understanding IOT business models is looking at the value for all actors in the IOT business ecosystem. This approach identifies the value for the actors that enable the IOT platform. Many telecommunications vendors and operators, as well as IOT platform vendors (e.g., machine-to-machine platform vendors), try to articulate the value of the IOT by using this approach to design their business models. However, the resulting business models are often biased toward the vendor and lack drivers for shared value as one of the explicit components.
This study underlines a need to understand integrated value drivers (i.e., shared overall value for an entire IOT ecosystem) rather than fragmented value drivers (i.e., individual actor's value from specific applications or services).Therefore, this study suggests shifting the focus on value creation and value capture in business models from the company level to the ecosystem level.Business model frameworks for the IOT should assume a higher-level perspective to articulate the integrated value of the IOT rather than address the fragmented value drivers.Weill and Vitale (2001) introduce a set of simple schematics intended to provide tools for the design of e-business initiatives.Their "e-business model schematics" include three classes of business model components: participants (firm of interest, customers, suppliers, and allies), relationships, and flows (money, information, product, or service flows). Similarly, Tapscott, Lowy, and Ticoll (2000) suggest a value map for depicting how a business web operates.The value map depicts all key classes of participants (partners, customers, suppliers) and value exchanges between them (tangible and intangible benefits and knowledge).By the same token, Gordijn and Akkermans (2001) propose a conceptual modelling approach, the "e3-value ontology", to define how economic value is created and exchanged within a network of actors. Their ontology puts forward a number of useful valuerelated terms, such as value object and value port.Muegge (2011) argues that the engine driving innovation in an ecosystem is a resource cycle from the platform to the business ecosystem, to the developer community, and back to the platform.He also argues that the developer community is the locus of value creation (innovation) and the business ecosystem is the locus of value capture (innovation commercialization). Lastly, Allee (2000) argues that a "value network" generates economic value through dynamic and complex exchanges between companies, suppliers, strategic partners, community, and customers and users.According to her, these value exchanges can be mapped as flow diagrams showing goods, services, and revenue streams, as well as knowledge flows, and creation of value.Dynamics, which is visible through the value network perspective, is relevant even when describing business models at a company level.For instance, Casadesus-Masanell and Ricart (2010) argue that a business model consists of a set of managerial choices and their consequences.Each choice may result in different outcome; thus, they drive dynamism.Moreover, they summarize three characteristics of a good business model: it is aligned with company goals, it is self-reinforcing (i.e., dynamic and cyclical), and it is robust.These characteristics support business sustainability in ecosystems (cf.Iansiti & Levien, 2002). 
Principles of a Design Tool for Designing Ecosystem Business Models The major deficits in existing business model frameworks, such as the popular business model canvas (cf. Osterwalder & Pigneur, 2010) or any other component-based design tools, include the fact that they focus on the architecture of the business model. They provide "an exploded view", showing the "parts of an engine". However, these frameworks fail to explain the dynamics between the components, or "how the engine works". Because a system cannot be understood by studying its parts detached from the entity, we aim to establish a foundation for a business model tool that considers the ecosystem nature of the IOT and focuses on the action instead of the parts. Previous research has suggested the integration of actors, various resource flows, and value exchange between them to map an ecosystem's operation (cf. Battistella et al., 2013; Ritala et al., 2013). Drawing from the ideas presented by, for example, Allee (2000) on value networks, our principles for designing ecosystem business models build on different value flows and aspects in the IOT ecosystem. The relevant literature shares the view that business models are about value creation and value capture. We argue that managers can design viable IOT business models by taking into consideration a variety of aspects related to these two essential value tasks. First, there are different value drivers in ecosystems. They comprise both individual and shared motivations of diverse participants, and promote the birth of an ecosystem to fulfill a need to generate value, realize innovation, and make money. We anticipate that a focus on shared value drivers is crucial to create a non-biased, win-win ecosystem. Without respect for the objectives of other actors, a long-term relationship cannot be built. However, each separate value driver will also serve as an individual value node's motivational factor. Sustainability, cybersecurity, and improved customer experience are examples of value drivers that different participants may share in an IOT ecosystem. Second, these value nodes include various actors, activities, or (automated) processes that are linked with other nodes to create value. Moreover, these nodes may include autonomous actors, such as smart sensors, preprogrammed machines, and linked intelligence (avatars). Thus, the ecosystem is a compound of different value nodes; in addition to single activities, automated services and processes, individuals, or commercial and nonprofit organizations, these value nodes may be groups of such organizations, networks of organizations, or even groups of networks. In short, there is a significant heterogeneity of value nodes in IOT ecosystems.
Third, value exchanges refer to an exchange of value by different means, resources, knowledge, and information.The value exchange occurs between and within different value nodes in the ecosystem, and exchanges can be described through different value flows.Literature on value networks (e.g., Allee, 2000) describes these flows as tangible and intangible.Fundamentally, these flows show "how the engine works" by exchanging resources, knowledge, money, and information by different means.In other words, they describe the action that takes place in the business ecosystem in order to create and capture value.Value exchanges are crucial, because they also specify how revenues are generated and distributed in the ecosystem. Fourth, not all created value is meaningful from the commercialization point of view.Value extract refers to a part of ecosystem that extracts value; in other words, it shows the meaningful value that can be monetized and the relevant nodes and exchanges that are required for value creation and capture.Value extract is a useful concept because it can help to focus on a relevant portion of the ecosystem; for example, a manager can "zoom in" and "zoom out" of the ecosystem to focus on something that is beneficial from the business point of view.This portion may be single activities, automated processes, individuals, or commercial and nonprofit organizations, or groups of such organizations, networks of organizations, or even groups of networks and value flows between these nodes.Value extract is helpful in defining the core value and its underlying aspects in the ecosystem. Finally, the concept of value design illustrates how value is deliberately created and captured in an ecosystem.That is, value design is an overall architecture that maps the foundational structure of the ecosystem business model.On one hand, it provides boundaries for the ecosystem and describes the whole entity that creates and captures value.On the other hand, it is a sum of the four value pillars and results in a pattern of operation.In this vein, value design is a concept that is quite similar to the concept of business model.The difference is that, whereas a "business model" is typically associated with the business model of a firm, value design can be defined to apply at the ecosystem level.Thus, we argue that "value design" could be better suited to the context of ecosystems than "business model".In addition, we view that different value designs can be categorized, examined, and compared similarly to different types of business model. Figure 1 illustrates the key value pillars, which we anticipate to be better suited for designing business models for ecosystems than the components put forward by previous business model frameworks.We believe that these value pillars serve as a basis for a new type of design tool for ecosystem business models.The actual tool needs further research and could likely be built around the idea of value webs and their related illustrations. 
There are certainly limitations in our research, but this conceptual study is intended to present the first attempt - "a plum pudding model" (tinyurl.com/36x8pv9) - to create a business model design tool for the IOT ecosystem. Although we have not provided an actual tool or its illustration at present, the study established the key pillars of the anticipated tool. Future research should verify these pillars and apply them in practice in order to develop the tool. Therefore, we call for more research on business model frameworks in the emerging IOT context, which is a fruitful field for developing a design tool for ecosystem business models. The IOT field has potential not only to radically change our lives, but also our ways of thinking about networked business. Conclusions This research focused on the challenges of designing business models for the emerging Internet of Things (IOT). The study acknowledged that there are ongoing paradigm shifts towards ecosystem thinking both in the discussion of platforms and in the design of business models. The study highlighted three major problems that prevent companies from designing business models and monetizing the IOT: the diversity of objects, the immaturity of innovation, and the unstructured ecosystems. We argue that managers can overcome these challenges and design successful business models if they focus on the ecosystem approach of doing business and use business model design tools that consider the ecosystem nature of the IOT. We provided grounds for a novel tool for designing ecosystem business models required in the IOT context. The pillars of the tool build on the different aspects of creating and capturing value in the ecosystem. They consist of the drivers, nodes, exchanges, and extracts of value. The pillars are interconnected, and, in contrast to existing business model frameworks, they aim to explain the flows and action of a business model rather than the components of the model. That way, they form the value design, which is a concept comparable to that of a business model. This aim underlines a shift in scholarly and managerial thinking from the business model of a firm towards ecosystem business models, in which every participant's business model depends on the others in the ecosystem.
Our study contributes to managerial understanding of ecosystem business models by different means. First, the study addresses the value pillars that managers should be looking at when designing business models in IOT ecosystems. By identifying value pillars, managers will be able to broaden their views on business model development and procedures from a single-company perspective to a broader, ecosystem context. For the ecosystem to bloom, the business models of different actors and the entire ecosystem should somehow resonate; the pieces of the puzzle should fit together. On one hand, this guarantees that the ecosystem as a whole moves in the same direction, and on the other hand, it guarantees that the business models of different actors are complementary. For example, if one actor wants to streamline its processes, another actor can receive new business by offering new solutions to meet the needs of the first actor. Second, managers may review their existing underlying assumptions on business model design by designing new value nodes and value exchanges in an ecosystem. This change of mindset is important because it allows managers to view business model design - and later receive related benefits - at an ecosystem level instead of the restricted company level. We argue that our vision of a possible business model design tool can be used for IOT-related issues, but is applicable in other emerging ecosystem-seeking structures where technological solutions are not yet ready and where existing industry borders must be crossed, if necessary. Finally, our value pillars enable managers to focus on value opportunities in the emerging IOT ecosystem by understanding key challenges of ecosystem business model design. For academics, this study is important because we call for a major shift in business model research. We argue that business models should not be broken down into a number of unconnected components in the way of the majority of previous business model research. Instead, studies should focus on investigating ecosystem business models and the way these models generate and capture value through different value flows. That way, the concept of business model, which is traditionally associated with a single organization's business model, could be replaced with the term "value design", which is better suited to ecosystems. Figure 1. Key pillars of a business model design tool for IOT ecosystems.
Espada and colleagues (2011) note that physical objects, called "things", are becoming available in digital format. These "virtual objects" are digital elements that have a specific purpose, comprise a series of data, and can perform actions. They integrate with other applications and physical "things", and may require specific business logics (Espada et al., 2011). New technologies are adopted in stages by innovators, early adopters, the early majority, the late majority, and laggards. The major challenge is to advance from early adopters to the early majority, because the business model must allow for "scaling up" the business. The early adopters are willing to tolerate the immaturity of innovation, but the early majority likes to evaluate and buy whole products, including the product, ancillary products, and any related services (Moore, 2006). In addition, Downes and Nunes (2013) argue that big-bang disruption, which is enabled by new digital platforms, such as those underlying the IOT, does not follow the five-step model. Rather, new products are perfected with a few trial users and then are embraced quickly by the vast majority of the market. Again, the innovation must be mature enough for customers to adopt it rapidly. An early ecosystem is an unstructured, chaotic, and open playground for participants. The IOT is still in its infancy, just like the Internet once was. The Internet has been a driver for an incredible richness of rival and complementary business ecosystems that all use the Internet in different ways, such as the ecosystem anchored around Amazon Web Services (AWS), the ecosystem anchored around Google's AdSense platform, the mashup ecosystem enabled by open APIs and open data, or the many business ecosystems anchored around community-developed platforms. About the Authors Mika Westerlund, D.Sc. (Econ), is an Assistant Professor at Carleton University's Sprott School of Business in Ottawa, Canada. He previously held positions as a Postdoctoral Scholar in the Haas School of Business at the University of California Berkeley and in the School of Economics at Aalto University. Mika earned his doctoral degree in Marketing from the Helsinki School of Economics. His doctoral research focused on software firms' business models and his current research interests include open and user innovation, business strategy, and management models in high-tech and service-intensive industries. Seppo Leminen holds positions as Principal Lecturer at the Laurea University of Applied Sciences and Adjunct Professor in the School of Business at Aalto University in Finland. He holds a doctoral degree in Marketing from the Hanken School of Economics and a licentiate degree in Information Technology from the Helsinki University of Technology (now the School of Electrical Engineering at Aalto University). His doctoral research focused on perceived differences and gaps in buyer-seller relationships in the telecommunication industry. His research and consulting interests include living labs, open innovation, value co-creation and capture with users, neuromarketing, relationships, services, and business models in marketing as well as management models in high-tech and service-intensive industries. Mervi Rajahonka, D.Sc. (Econ), is a Researcher at Aalto University's School of Business in Helsinki, Finland. She also holds a Master's degree in Technology from the Helsinki University of Technology and a Master's degree in Law from Helsinki University. Mervi earned her doctoral degree in Logistics from the Department of Information and Service Economy at Aalto University. Her research interests include supply chain management, business models, modularity, processes, and service innovations. Her research has been published in a number of journals in the areas of logistics, services, and operations management.
SEEK: A Framework of Superpixel Learning with CNN Features for Unsupervised Segmentation Supervised semantic segmentation algorithms have been a hot area of exploration recently, but now the attention is being drawn towards completely unsupervised semantic segmentation. In an unsupervised framework, neither the targets nor the ground truth labels are provided to the network. That being said, the network is unaware about any class instance or object present in the given data sample. So, we propose a convolutional neural network (CNN) based architecture for unsupervised segmentation. We used the squeeze and excitation network, due to its peculiar ability to capture the features’ interdependencies, which increases the network’s sensitivity to more salient features. We iteratively enable our CNN architecture to learn the target generated by a graph-based segmentation method, while simultaneously preventing our network from falling into the pit of over-segmentation. Along with this CNN architecture, image enhancement and refinement techniques are exploited to improve the segmentation results. Our proposed algorithm produces improved segmented regions that meet the human level segmentation results. In addition, we evaluate our approach using different metrics to show the quantitative outperformance. Introduction Both semantic and instance segmentation have a well established history in the field of computer vision. For decades they have attracted attention of the researchers. In semantic segmentation, one label is given to all the objects belonging to one class. Whereas instance segmentation dives a bit deeper; it gives different labels to each object in the image even if they belong to the same class. In this paper we focus on semantic segmentation. Image segmentation has applications in health care for detecting diseases or cancer cells [1][2][3][4], in agriculture for weed and crop detection or detecting plant diseases [5][6][7][8][9], in autonomous driving for detecting traffic signals, cars, pedestrians [10][11][12], and in other numerous fields of artificial intelligence (AI) [13]. It also poses a main obstacle in the further advancements of computer vision, that we need to overcome. In the context of supervised segmentation, data are provided in pairs, as both the original image and the pixel level labels are needed. Moreover, the CNNs are always data hungry, so there are never enough data [14,15]. It's even more troublesome in some specific domains (e.g., medical field) where even a few samples are quite hard to obtain. Even if you have a lot of data, labelling them still requires a lot of manpower and manual labor. So, all these problems posed by supervised semantic segmentation can be overcome by unsupervised semantic segmentation [16,17], in which an algorithm is generally required to produce more generalized segmented regions from the given image without any precontextual information. We revisit the grueling task of unsupervised segmentation by analyzing the recently developed algorithms of segmentation. We propose an approach that only requires the image to be segmented, with no additional data needed. In particular, our algorithm uses feature vectors produced by the neural network to make segments on the image, but for that, we need a descriptive feature vector as output, which has all the contextual information of the image. The descriptor vector should be the representative of all the textures, contrasts and regional information around each pixel in the image. 
CNNs are excellent feature extractors; they can even outperform humans in some areas. Each convolution layer learns all the information from the image in their local receptive field from an image or feature map. These combinations are passed through activation functions to infer nonlinear relationships, and large features can be made smaller, with pooling or down sampling, so that they can be seen at once. In this way, CNN efficiently handles the relationship of global receptive fields. There are various structures that can handle features more efficiently than the general CNN structure. In our approach, we also use a CNN architecture which has more representational power than a regular CNN, by explicitly remodeling the interdependencies within the filter channels. Hence, it allows us to extract more feature enriched descriptor vectors from the image. We propose a novel algorithm SEEK (Squeeze and Excitation + Enhancement + K-Means), for entirely unaided and unsupervised segmentation tasks. We summarize our main contributions as follows:  Design of CNN architecture to capture spatially distinct features, so that depth-wise feature maps are not redundant.  Unlike traditional frameworks, no prior exhaustive training is required for making segments. Rather, for each image, we generate pseudo labels and make the CNN learn those labels iteratively.  We introduce a segmentation refinement step using K-means clustering for better spatial contrast and continuity of the predicted segmentation results. Related Work There has been extensive research in the domain of sematic segmentation [18][19][20][21] and each article uses techniques and methods that favors their own targets based on different applications. The most recent ones include bottom-up hierarchical segmentation such as in [22], image reconstruction and segmentation with W-net architecture [23], other algorithms like [24] for post-processing images after getting the initial segmented regions and [25], which includes both pre-processing (Partial Contrast Stretching) and post-processing of the image to get better results. Deep neural networks have proved their worth in many visual recognition tasks which include both fully supervised and weakly supervised approaches for object detection [26,27]. Since the emergence of fully convolutional networks (FCN) [20], the encoder decoder structure has been proven to be the most effective solution for the segmentation problem. Such a structure enables the network to take an image of arbitrary size as an input (encoder) and produces the same size feature representation (decoder) as output. Different versions of FCN have been used in the domain of semantic segmentation [2,21,[28][29][30][31]. Wei et al. [32] proposed a super hierarchy algorithm where the super pixels are generated at multiscale. However, the proposed approach is slow because of CPU implementation. Lei et al. [33] proposed adaptive morphological reconstruction, which filters out useless regional minima and is better in convergence, but it falls behind in the state-of-the-art FCN based techniques. Bosch et al. [34] exploited the segmentation parameter space where highly overand under-segmented hypotheses are generated. Later on, these hypotheses are fed to the framework, where cost is minimized, but hyperparameter tuning is still a problem. Fu et al. [35] gave the idea of a contour guided color palette which combines contour and color cues. 
Modern supervised learning algorithms are data hungry, therefore there is a dire need of generating large scale data to feed data to the network for segmentation tasks. In this context, Xu et al. [36] studied the hierarchical approach for segmentation, which transforms the input hierarchy to the saliency map. Xu et al. [37] combined a neural networks based attention map with the saliency map to generate pseudo-ground truth images. Wang et al. [38] gave the idea of merging superpixels of the homogenous area from the similar land cover by calculating the Wishart energy loss. However, it is a two-staged process and is inherently slow, and relies heavily on initial superpixels generation. Soltaninejad et al. [39] calculated the three types of novel features from superpixels and then built a custom classifier of extremely randomized trees and compared the results with support vector machine (SVM). This study was performed on the brain area in fluid attenuated inversion recovery magnetic resonance imaging ( FLAIR MRI). Recently, Daoud et al. [40] conducted a study on the breast ultrasound images. A twostage superpixels based segmentation was proposed where in the first stage refined outlines of the tumor area were extracted. In the second stage, a new graph cut based method is employed for the segmentation of the superpixels. Zhang et al. [41] exploited the superpixels based data augmentation and obtained some promising results. This study shows that the role of superpixels based methods in both unsupervised and supervised segmentation in a diverse computer vision domain cannot be undermined. Now, as there have been recent advancements in the deep learning realm through the convolutional neural networks, we combined convolutional neural networks with the graph based superpixel method to obtain improved results. Contrast and Texture Enhancement (CTE) The images taken in the real life scenarios can have a lot of noise, low contrast or in some cases, they might even be blurred. Our algorithm grouped pixels with the same color and texture into one segment and the pixels from different objects into separate regions. It regressed in such a way that pixels in one cluster had high similarity index, while the pixels of different regions had a high contrast. So, by applying a series of filters and texture enhancement techniques, we obtained a noise free image. To produce a better quality image, first the image was sharpened so that each object in the image received a distinct boundary. Then, a bilateral filter was applied, which removed the unwanted noise from the image while keeping the edges of the objects sharp. Different neighborhoods of size (n x n) can be used to apply the filter. We chose n = 7 because using greater values produces very severe smoothing and we would lose a lot of useful information. In this way, we obtained the pre-processed image. Ablation experiments which demonstrate the importance of this step's inclusion in the architecture are reported in section 6. Superpixel Extraction A superpixel is a group of pixels which contain pixels that have the same visual properties, like color and intensity. Superpixel extraction algorithms depend upon the local contrast and distance between pixels in the RGB color space of the image. So, from the pre-processed image, we could extract more detailed and distinct P superpixels . Then, in all superpixels, each pixel was given the same semantic label. 
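To make the pre-processing and pre-segmentation steps above concrete, the following is a minimal sketch assuming OpenCV and scikit-image; the sharpening kernel, the bilateral-filter sigmas, and the Felzenszwalb parameters are illustrative assumptions, since the text only specifies the 7 x 7 neighborhood.

```python
import cv2
import numpy as np
from skimage.segmentation import felzenszwalb

def preprocess_and_presegment(image_bgr):
    """Contrast/texture enhancement followed by graph-based superpixels."""
    # Sharpen so that each object receives a more distinct boundary.
    sharpen_kernel = np.array([[0, -1, 0],
                               [-1, 5, -1],
                               [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(image_bgr, -1, sharpen_kernel)

    # Bilateral filter over a 7 x 7 neighborhood: removes noise, keeps edges sharp.
    enhanced = cv2.bilateralFilter(sharpened, d=7, sigmaColor=75, sigmaSpace=75)

    # Felzenszwalb graph-based superpixels act as the pre-segmentation; every
    # pixel inside a superpixel later shares the same pseudo semantic label.
    superpixels = felzenszwalb(enhanced, scale=100, sigma=0.8, min_size=50)
    return enhanced, superpixels
```

In this sketch, `superpixels` is an (H, W) integer map; pixels sharing a value belong to one pre-segmented region.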
The finer the superpixels generated by the algorithm, the fewer the iterations required by the CNN to produce the final segmented image. If there are too many categories (superpixels) generated by the algorithm, the CNN will take more iterations. So, to avoid such a scenario, we used the pre-processed image of Section 3.1 as the input of this block. Many architectures like [42] use the simple linear iterative clustering (SLIC) algorithm [43] for generating superpixels, but in our architecture we used the Felzenszwalb algorithm [44] to produce superpixels, because it uses a graph-based image segmentation method to produce superpixels. Although it makes greedy decisions, its results still satisfy the global properties. It also handles the details in the image well compared to the other algorithms. Moreover, its time complexity is linear and it is faster than the other existing algorithms [45][46][47]. We can consider this step as pre-segmenting the image. In this step, the Felzenszwalb algorithm gives the same semantic labels to the regions where the pixels have similar semantic information. This is because the pixels which have the same semantics usually lie within neighboring regions, so we assign the same semantic labels to pixels which have the same color and texture. Hereinafter, we will refer to this superpixel representation as the pre-segmentation results. Network Architecture The complete network architecture is shown in Figure 1. In this section, we will take the enhanced image of Section 3.1 as an input for our neural network. With this RGB image, we calculated the n-dimensional feature vector by passing it through the N convolutional blocks of our network. Each block consists of SE-ResNet (which will be explained later), followed by batch normalization and ReLU activation. Then, from the feature vector output of the final convolutional block, we extracted the dimensions which had the maximum value. Thus, we obtained the labels from the output feature vector. This aforementioned process is equivalent to the clustering of a feature vector into unique clusters, just like the argmax classification in [42]. We used the custom squeeze and excitation networks (SE-Net) originally proposed by Hu et al. [48] to perform feature recalibration. Among other possible configurations of SE-Net, we decided to incorporate it with the ResNet [49], to obtain a SE-ResNet block, because of its heightened representational power. For ease of notation, we simply call it SE-Block. CNNs, with the help of their convolutional filters, extract the hierarchical information from images. Shallow layers find trivial features from contexts like edges or high frequencies, while deeper layers can detect more abstract features and the geometry of the objects present in the images. Each layer at each step extracts more and more salient features, to solve the task at hand efficiently. Finally, an output feature map is formed by fusing the spatial and channel information of the input data. In each layer, filters will first extract spatial features from input channels, before summing up all the information across all available output channels. Normal convolutional networks weigh up each of the output feature maps equally. Whereas, in SE-Net, each of the output channels is weighted adaptively. To put it simply, we can say that we are adding a single parameter to each channel and giving it a linear shift on the basis of how relevant each channel is.
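The following is a minimal PyTorch sketch of an SE-ResNet-style block as described above. The single 3 x 3 convolution path and the channel count are illustrative assumptions; the squeeze (global average pooling), the two fully connected layers, the channel-wise rescaling, and the identity mapping follow the description in the text.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation on top of a residual 3x3 convolution path."""

    def __init__(self, channels: int, r: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Squeeze: global average pooling compresses each feature map to one value.
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: two FC layers learn a channel-wise importance weight in (0, 1).
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = self.conv(x)
        b, c, _, _ = out.shape
        w = self.excite(self.squeeze(out).view(b, c)).view(b, c, 1, 1)
        # Recalibrate the channels and add the identity mapping
        # (the dotted arrows in Figure 2).
        return x + out * w
```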
All of this is done by obtaining a global understanding of each channel by squeezing the feature maps to a single numeric value using global average pooling (GAP). This results in a vector of size n, where n is equal to the number of filter channels. Then, it is fed to the two fully connected (FC) layers of the neural network, which also output a vector of the same size as the input. This n-dimensional vector can now be used to scale each original output channel based on its importance. The complete architecture of one SE-Block is shown in Figure 2. Inside each SE-Block, in each convolution layer the first value (C) represents the number of input feature maps, the second value represents the kernel size, and the third value represents the number of output channels. The dotted arrows represent the identity mapping, and r is the reduction ratio, which is the same for every block. To be concise, each SE-Block consists of a squeezing operation that extracts all the sensitive information from the feature maps, and an excitation operation that recalibrates the importance of each feature map by calculating the channel-wise dependencies. We only want to capture the most salient features for the argmax classification, to produce the best results, as described earlier. Therefore, we used the SE-Block to explicitly redesign the relationship between the feature maps, in such a way that the output feature maps contain as much contextual information as possible. After each SE-Block, we used batch normalization (BN), followed by a ReLU activation function. We used BN before ReLU to increase the stabilization and to accelerate the training of our network as explained in [50]. K-Means Clustering We used K-means as a tool to remove noise from the final segmented image. After we obtained the final segmented image, there might have still been some unwanted regions or noise present in the results, so we removed them via the K-means algorithm as in [25]. For the K-means algorithm to work, we need to specify the number of clusters (K) we want in our output image. Different algorithms have been developed to find the number of suitable clusters (K) from raw images, as in [51][52][53]. In our case, because of the unsupervised scenario, we do not know in advance how many segmented regions there will be in the final segmented image. So, one way to solve this problem is to count the number of disjointed segmented regions in the final segmented image and assign that value to K. We observed that using this technique, the algorithm further improved the segmentation results. Ablation experiments which demonstrate the importance of K-means are reported in Section 6. Network Training Firstly, we enhanced the image quality using successive techniques of contrast and feature enhancement. For our algorithm, a neighborhood (n x n) of size n = 7 produced the best results. Then, we used the Felzenszwalb algorithm [44], which assigns the same labels to pixels which have similar semantic features, to obtain the pre-segmentation results. Then, we passed the enhanced image through the CNN layers for argmax classification (as explained in Section 3.3). Furthermore, we assigned the pixels which had similar colors, textures and spatial continuity the same semantic labels. We tried to make the output of the network, i.e., the argmax classification results, as close as possible to the pre-segmentation results, and the process was iterated until the desired segmented regions were obtained.
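A sketch of the K-means refinement step described above, assuming scikit-image and scikit-learn. The number of clusters K is taken from the count of disjoint regions in the CNN output, as described; clustering on raw RGB pixel values is an assumption, since the text follows [25] without specifying the exact feature.

```python
import numpy as np
from skimage.measure import label
from sklearn.cluster import KMeans

def refine_with_kmeans(image_rgb, segmentation):
    """Post-process a predicted label map by re-clustering pixels into K groups,
    where K equals the number of disjoint segmented regions."""
    # Count disjoint regions in the predicted label map to choose K automatically.
    k = int(label(segmentation, background=-1).max())

    # Cluster pixel colours into K groups to suppress small noisy regions.
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    refined = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    return refined.reshape(segmentation.shape[:2])
```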
In our case, iterating over an image for T = 63 times produced excellent results. We used ReLU as an activation function in our network, except at the output of the last SE-Block, where we used Softmax activation to incorporate the cross-entropy loss (CE loss). For each SE-Block, we set the reduction ratio at r = 8. We backpropagated the errors and iteratively updated the gradients using a SGD optimizer, where the learning rate was set to 0.01 and the value of momentum used was β = 0.9. Finally, we used K-means clustering to further improve the segmented regions by removing the unwanted regions from the image. For the K-means clustering, the value of the K clusters was chosen by the algorithm itself by calculating the number of unique clusters in the segmented image. We trained our network on an NVIDIA GeForce RTX 2080 Titan, and on average it took about 2.95 seconds to produce the final segmented image, including all the enhancement and refinement stages. Results and Discussion We evaluated our proposed algorithm on the Berkeley Segmentation Dataset (BSDS-500) [54], which is a state of the art benchmark for image segmentation and boundary detection algorithms. The segmentation results of the proposed algorithm are shown in Figure 3 and Figure 4. We also compared the results of our proposed algorithm with both variants of the algorithm in [42] (i.e., using SLIC and Felzenszwalb) in Figure 5 and Figure 6. It can be seen from the figures that our proposed algorithm is able to produce meaningful segmented regions from the raw unprocessed input images. The boundaries of the objects are sharp and correctly defined and one object is assigned one semantic label. Moreover, because of the FCN based architecture of our network, it can process images of multiple resolutions without any modification at all. Algorithms that need a fixed size input and perform image warping, cropping and resizing introduce severe geometrical deformation in the images, which is not suitable for some applications. Performance Assessment To provide a basis of comparison for the performance of our SEEK algorithm, we evaluated the performance of our network on multiple benchmark criteria. In this section, we present the details of our evaluation framework. The BSDS500 dataset contains 200 test, 200 train and 100 validation images. It is being used as a benchmark dataset for several segmentation algorithms like [55][56][57]. The dataset contains very distinct landscapes and sceneries. It also has multiple resolution images, which makes it ideal for testing our algorithm. The corresponding ground truth (GT) segmentation maps of all the images are labelled by different annotators [24,55,58]. A few examples of the multiple annotations for one image are shown in Figure 5 and Figure 6, along with the original images and segmentation results. We evaluate all the images on various metrics one by one. For each image, when corresponding segmentation results are compared with the multiple GT segmentation maps, we obtain multiple values for each metric. We take the mean of those values and retain those mean values as evaluation results. Variation of Information Originally, variation of information (VI) was introduced for the general clustering evaluation [59]. However, it is also being used in the evaluation of segmentation tasks by a lot of benchmarks [23,24].
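Pulling the Network Training details above together, the following is a simplified PyTorch sketch of one possible implementation. Here `model` stands for the SE-Block CNN, the pseudo labels are formed by a majority vote of the current argmax predictions inside each superpixel, and T = 63, SGD with learning rate 0.01 and momentum 0.9 are the values reported above; the exact loss wiring of the original work may differ.

```python
import numpy as np
import torch
import torch.nn as nn

def train_unsupervised(model, image_tensor, superpixel_labels, T=63, lr=0.01, momentum=0.9):
    """Iterative self-training: the per-pixel argmax of the CNN response map is
    pulled towards a per-superpixel majority label with a cross-entropy loss."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    criterion = nn.CrossEntropyLoss()
    sp_ids = superpixel_labels.reshape(-1)

    for _ in range(T):
        optimizer.zero_grad()
        response = model(image_tensor)                          # (1, C, H, W)
        flat = response.squeeze(0).permute(1, 2, 0).reshape(-1, response.shape[1])
        preds = flat.argmax(dim=1).detach().cpu().numpy()       # current label per pixel

        # Pseudo labels: inside each superpixel, take the most frequent prediction.
        targets = preds.copy()
        for sp in np.unique(sp_ids):
            mask = sp_ids == sp
            targets[mask] = np.bincount(preds[mask]).argmax()

        loss = criterion(flat, torch.from_numpy(targets).long().to(flat.device))
        loss.backward()
        optimizer.step()

    return preds.reshape(superpixel_labels.shape)
```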
VI works by measuring the distance between the two segmentations (predicted, ground truth) in terms of their average conditional entropy. If S represents the predicted segmentation result and M denotes the GT label, then H(S|M) and H(M|S) are the conditional entropies of the respective inputs, and I(S, M) represents the mutual information between these two segmentations [60]. VI is defined in Equation (1) as VI(S, M) = H(S|M) + H(M|S) = H(S) + H(M) - 2 I(S, M). Here, a perfect score for this metric would be zero. We can say that, except for VI, a larger value is better for the metrics; for VI, smaller is better. The same conclusion can also be drawn from Table 1, where we can see that our proposed algorithm has the lowest VI. Precision and Recall In the context of segmentation, recall means the ability of a model to label all the pixels of the given instance to some class, and precision can be thought of as a network's ability to only give specific labels to the pixels that actually belong to a class of interest. Precision is defined in Equation (2) as Precision = TP / (TP + FP), and recall is defined in Equation (3) as Recall = TP / (TP + FN). Here, true positives (TP) are the number of pixels that are labelled as positive and are actually also from the positive class. True negatives (TN) are the number of pixels that were labelled as negative and also belong to the negative class. False positives (FP) are the number of pixels which belong to the negative class, but were wrongly predicted as positive by the algorithm. False negatives (FN) are the number of pixels that were wrongly predicted as negative but actually belong to the positive class. While recall represents the ability of a network to find all the relevant data points in a given instance, precision represents the proportion of the pixels that our model says are relevant and are actually relevant. Whether we want high recall or high precision depends upon the application of our model. In the case of unsupervised segmentation, we can see that we need a high recall so that we don't miss any object in the segmented output. Table 1 gives us a comparison of the performance of different setups of our model, along with other algorithms. We can see from Table 1 that all the unsupervised algorithms have a very high recall. Jaccard Index The Jaccard index, also known as intersection over union (IoU), is one of the most widely used metrics in semantic segmentation. This metric is used for gauging the similarity and diversity of input samples. It is used in the evaluation of a lot of state of the art segmentation benchmarks [61][62][63][64]. If we represent our segmentation result by S, and the GT labels by M, then it is defined in Equation (4) as IoU(S, M) = |S ∩ M| / |S ∪ M|. In Table 1, the results are compared with [42]: SE represents our base CNN architecture; (SE + CTE) shows the results of our model with only the CTE-Block as pre-processing unit; (SE + K) represents the results with only the K-means block as post-processing unit; and the last row, SEEK, shows the results of our complete architecture with all pre- and post-processing blocks included. Dice Coefficient The dice coefficient, also referred to as the F1-score, is also a popular metric, along with the Jaccard index. The dice coefficient is somewhat similar to the Jaccard index; both of them are positively correlated. Even though these two metrics are functionally equivalent, their difference emerges when taking the average score over a set of inferences. It is given by Equation (5) as Dice(S, M) = 2 |S ∩ M| / (|S| + |M|). The Jaccard index generally penalizes mistakes made by networks more than the F1-score. It can have a squaring effect on the errors relative to the F1-score.
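As a worked illustration of the formulas above, a small helper that computes precision, recall, IoU, and the Dice/F1 score for a pair of binary masks; the per-class and per-annotation averaging used for Table 1 is left out.

```python
import numpy as np

def binary_segmentation_scores(pred, gt):
    """Precision, recall, IoU (Jaccard) and Dice/F1 for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # correctly predicted positives
    fp = np.logical_and(pred, ~gt).sum()     # predicted positive, actually negative
    fn = np.logical_and(~pred, gt).sum()     # missed positives

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0
    return precision, recall, iou, dice
```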
We can say that the F1-score tends to measure the average performance, while the Jaccard index (IoU) measures the worst case performance of the network. From Table 1, one can also see that the F1-score is always higher than the Jaccard index (IoU) for all the setups, which also proves the above statement. Ablation Study We also performed some ablation experiments to better explain our work and to demonstrate the importance of each block in the proposed network architecture. We performed all ablation experiments on the BSDS-500 dataset using the same GPUs, while keeping the backbone architecture (SE-ResNet) the same. We demonstrated the effect of inclusion and exclusion of the contrast and texture enhancement block (CTE-block) and the K-means clustering block. From Table 1 (third row), we can see that if we remove the contrast and texture enhancement block, then all the metrics are affected. From Figure 7, we can also see that one of the effects of this block is that some parts of the objects get wrongly segmented as BG, e.g., the tail of the insect, the basket and hand of the woman, the antlers of the deer, and the person with the parachute. Moreover, the boundaries of the segmented regions are not sharp. So, we can say that the contrast and texture enhancement block makes the architecture more robust and immune to weak noises in the image. The motive behind adding the K-means block in the algorithm is to further improve the segmentation results by removing the unwanted and wrongly segmented regions from the output of our neural network. The results are shown in Figure 8 visually, and quantitatively in Table 1 (fourth row). It is fairly clear from the results that, by using K-means as a segmentation refinement step, we get better segmentation results. In Figure 8, without the K-means refinement step, multiple segments per object are formed (e.g., the starfish has dark colored spots, the flower is also segmented into two segments, and the same with the bird, as it has two different colored segments), while with the K-means refinement step, one object is given only one label. Conclusion This paper introduces a deep learning based framework for unaided semantic segmentation. The proposed architecture does not need any training data or prior ground truth. It learns to segment the input image by iterating over it repeatedly and assigning specific cluster labels to similar pixels in conjunction, while also updating the parameters of the convolution filters to get even better and more meaningful segmented regions. Moreover, the image enhancement and segmentation refinement blocks of our proposed framework make our algorithm more robust and immune to various noises in the images. Based on our results, we are of the firm belief that our algorithm will be of great help in the domains of computer vision where pixel level labels are hard to obtain and also in the fields where collecting training data for sufficiently large networks is very hard to do. Moreover, because of the SE-Block backbone of our algorithm, it can take input data of any resolution. In that way, it does not produce any geometrical deformations in the images introduced by the warping, resizing and cropping of images that would be done by other algorithms. Lastly, different variants of our network can find intuitive applications in various domains of computer vision and AI.
6,151.6
2020-02-25T00:00:00.000
[ "Computer Science" ]
AI-Driven Wearable Mask-Inspired Self-Healing Sensor Array for Detection and Identification of Volatile Organic Compounds Volatile organic compounds (VOCs) sensor arrays have garnered considerable attention due to their potential to provide real-time information for monitoring pollution levels and personal health concerns associated with VOCs in the ambient environment. Here, an AI-driven wearable mask-inspired self-healing sensor array (MISSA), created using a simplified single-step stacking technique for detecting and identifying VOCs, is presented. This wearable MISSA comprises three vertically placed breathable self-healing gas sensors (BSGS) with linear response behavior, consistent repeatability, and reliable self-healing abilities. For wearable and portable monitoring, the MISSA is combined with a flexible printed circuit board (FPCB) to produce a mobile-compatible wireless system. Due to the distinct layers of the MISSA, it creates exclusive code bars for four distinct VOCs over three concentration levels. This grants precise gas identification and concentration prognoses with excellent accuracies of 99.77% and 98.3%, respectively. The combination of the MISSA with artificial intelligence (AI) suggests its potential as a successful wearable device for long-term daily VOC monitoring and assessment in personal health monitoring scenarios.

Introduction
[3,4] Their high concentrations pose severe environmental repercussions, with prolonged exposure having detrimental effects on various human organs. [5,6] Conversely, low concentrations of VOCs emitted from the human body can carry information about an individual's physiological and pathophysiological state. [9] This has made VOC monitoring increasingly popular as a means of gauging environmental pollution levels, as well as providing vital information about individuals' physiological and pathophysiological states. In order to address the growing demand for effective means of detecting VOCs in an individual's surroundings, the production of wearable VOC sensors has skyrocketed, [10-12] largely driven by the advantages of some conventional materials, such as electrochemical materials and metal-oxide semiconductors with high selectivity and low cost. [13,14] An example of this is the isoprene gas sensor developed by Zhang et al., [15] with a limit of detection (LOD) below five ppb. Despite this, for wearable applications that necessitate continuous monitoring, such as smart wound dressings and electronic skin, these VOC sensors may be subjected to external mechanical forces that can ultimately degrade or disable their performance. [16,17] As a result, there is an immediate need for sensors that can guarantee stability during long-term, uninterrupted monitoring.
Fortunately, the advancements made in the realm of self-healing materials have offered a promising solution for this problem. [20-24] Nevertheless, a key challenge of these sensors is that they often show strong adsorption toward multiple types of VOCs, thereby limiting their selectivity. To combat this, researchers have focused on both the hardware and software domains. [27] For example, the self-healing VOC sensor array demonstrated by Huynh et al. [28] displayed high levels of discrimination, sensitivity, and selectivity despite being subjected to mechanical damage. However, current fabrication methods for producing these arrays typically involve the use of different materials or chemically modified host materials, leading to expensive and complex designs. Thus, there is a great demand for a simpler and easier fabrication process that can produce wearable VOC sensing arrays with self-healing capabilities. [31-34] Despite this, precise identification and concentration prediction of target gases in mixed environments still remain an elusive goal, requiring more precise feature selection and comprehensive visualization. Amidst the current epidemic, masks have become indispensable because of their ability to filter gases. Drawing inspiration from this, we propose a novel self-healing sensor array (MISSA) in a layered form, fabricated with an uncomplicated and resourceful single-step technique, for recognizing VOCs and predicting their concentrations (Figure 1). The self-healing element used for VOC detection, called HDIM, is composed of a mix of prepolymer and MXene and can be dissolved in chloroform to form a conductive ink-like solution. Exploiting this characteristic, a breathable self-healing gas sensor (BSGS) based on HDIM was designed through screen-printing on polyurethane (PU) electrospun film, displaying a low LOD, linear response behavior, consistent repeatability, and beneficial self-healing capabilities. Just like the mask design, three BSGSs were stacked vertically to create the MISSA, which made it possible to form unique code bars for individual VOCs through the selective adsorption of the distinct layers. To offer portable and wearable detection, a smartphone-integrated wireless system was produced, with a flexible printed circuit board (FPCB) to sense the array impedance and transfer the information to a mobile phone via Bluetooth. With the aid of personalized assessment software, the real-time responses of the MISSA to VOCs can be conveniently viewed on the phone screen. Besides, the MISSA exhibited superior gas recognition and concentration prediction capabilities for four varied VOCs via careful feature selection and principal component analysis (PCA)-assisted machine learning (ML), illustrating the outcomes in a comprehensive way. The effective performance of the MISSA underscores its feasibility as a portable tool to accurately identify VOCs for long-term daily health monitoring.
Preparation of the HDIM-Based BSGS
Figure 2a presents the synthetic structure of the HDIM, where MXene is uniformly dispersed in the self-healing prepolymer as a conductive material. The synthesis procedure of the prepolymer is depicted in Figure S1 (Supporting Information), with hydroxyl-terminated polybutadiene (HTPB), 1,10-decanediol (DE), and isophorone diisocyanate (IPDI) as the primary constituents, and dibutyltin dilaurate (DBTDL) as the catalyst. Notably, within the HDIM, HTPB acts as the soft component for controlling flexibility, while DE and IPDI collectively serve as the hard segments. They contribute to hydrogen bonding by forming urea/urethane linkages, impacting the mechanical characteristics and self-healing properties of the material. [35] The absence of significant peaks of the N═C═O stretching bond at 2264 cm⁻¹ in the FTIR spectrum shown in Figure 2b indicates the complete conversion of diisocyanate monomers into urethane bonds, an essential requirement for the formation of hydrogen bonds. [15] After preparing the transparent prepolymer (Figure S2, Supporting Information), a solution of single-layered Ti₃C₂Tₓ MXene with a concentration of 5 mg mL⁻¹ is added proportionally to the prepolymer solution to create a thick mixture, ultimately leading to the production of HDIM (Figure S3, Supporting Information). The X-ray photoelectron spectroscopy (XPS) patterns and elemental mapping in Figure 2c,d revealed the incorporation of Ti and F elements into the HDIM through MXene. [36,37] All of these results demonstrated that MXene was dispersed uniformly within the HDIM. Furthermore, the water contact angle (CA) of ≈104° evidenced the hydrophobicity of the HDIM surface, visible in Figure 2e. The thermogravimetric analysis (TGA) of HDIM indicated its outstanding thermal stability, even at elevated temperatures of up to 240 °C (Figure S4, Supporting Information). It is remarkable that HDIM can be re-dissolved in chloroform to produce an ink-like solution upon request (Figure 2f; Figure S5, Supporting Information). Moreover, this ink-like solution is suitable for the development of printed sensors and displays a good level of electrical conductivity when applied to commonly used substrates such as PU, PTFE, and paper (Figure 2g; Figure S6, Supporting Information). [38] To produce a BSGS with HDIM, the ink-like property was utilized in a screen-printing process (Figure 2h). [41] The PU electrospun film exhibited an ideal hydrophobic quality (Figure S9, Supporting Information), meaning sweat or wound fluids would not affect the performance of the HDIM. Printing procedures used the HDIM ink to create an array with a width of ≈5 mm (Figure 2i; Figure S10, Supporting Information), which enabled cost savings and improved efficiency. Additionally, the BSGS had great air permeability, which ensured skin comfort upon direct contact with the sensors (Figure S11, Supporting Information).
VOCs Sensing Performance of BSGS
The BSGS's electrical property was measured through a current-voltage (I-V) test at room temperature, showing a positive ohmic behavior (Figure S12, Supporting Information). VOCs exert a significant influence on disease diagnosis, playing a crucial role in monitoring various metabolic processes. For instance, the monitoring of alcohol metabolism, aldehyde production, and other gas emissions originating from bacterial activities at wound sites holds the potential to serve as a supplementary diagnostic criterion for assessing wound infection and healing progress. In this context, we consider the following specific VOCs as illustrative examples, namely ethanol, isobutanol, and formaldehyde. Additionally, we examine chloroform, given its serious harmful effects and its ability to dissolve the sensing materials employed. To evaluate the sensing performance of the BSGS, the response of the sensor was evaluated using the following formula:

Response = (R_t − R_0) / R_0

where R_0 represents the initial resistance observed in a dry air environment, and R_t corresponds to the measured resistance in the target gases. The dynamic sensing outcomes of the BSGS at room temperature, with varying concentrations of ethanol ranging from 0.2 to 50 ppm, are illustrated in Figure 3a. As the concentration of ethanol gas increased, the sensor response exhibited a corresponding increase, demonstrating the BSGS's ability to differentiate between different ethanol concentrations. Importantly, as depicted in the inset of Figure 3a, the response of the BSGS exhibited a linear relationship with the concentration of ethanol, simplifying the practical application. The fitting curve revealed a response slope of 0.01934 ppm⁻¹ with a fitting quality r² = 0.99677. At room temperature, the calculated noise value of 0.009315 leads to a theoretical detection limit of 0.04 ppm (for calculation details, see the Supporting Information). The assessment of repeatability is an essential measurement for sensors in actual sensing scenarios. In order to analyze this index, the BSGS was continuously assessed through three cycles. Figure 3b demonstrates that the BSGS responded reliably to the same concentration. The response time, which is measured as the time it takes to reach 90% of the maximum response, was recorded as 160 s for 10 ppm of ethanol. The recovery time, which is the time needed to go back to 10% of the stabilized response in dry air, was measured to be 265 s (Figure S13, Supporting Information). Additionally, the BSGS maintained a stable response to 10 ppm of ethanol over 180 days, without any significant degradation in terms of conductivity and sensitivity (Figure S14, Supporting Information). This emphasizes its strength and reliability.

The primary components of HDIM's sensing capability consist of two crucial aspects: I) the redox reaction occurring between HDIM and the analyte, and II) the swelling effect (Figure 3c). I) The p-type semiconductor sensing ability of Ti₃C₂Tₓ MXene toward adsorbed VOCs is evidenced at room temperature. At this temperature, the oxygen molecules on the surface of HDIM will sequester electrons from HDIM, creating oxygen anions (O₂⁻(ads)). [44] The process can be summarized with the two formulas as follows:

O₂(gas) → O₂(ads)
O₂(ads) + e⁻ → O₂⁻(ads)

II) The introduction of VOC molecules into the HDIM induces a swelling process, which in turn leads to a greater separation among MXene sheets, hindering the electron hopping processes and resulting in an increase in the resistance of the BSGS.
[45] Therefore, the combined effect of these factors contributes to the observed increase in resistance. In addition to ethanol, the BSGS was also assessed for its real-time response toward three other distinct VOCs (Figure 3d), with excellent reproducibility being noted (Figure S15, Supporting Information). Notably, the response speed and amplitude of the BSGS were highest for chloroform among these four VOCs (Figure 3e). This could be attributed to the more potent interaction between the HDIM material and the larger polarity of chloroform. [46,47] In addition, chloroform's capability of dissolving HDIM can result in structural and property changes in the material. These changes may appear in the form of discrepancies in the surface topography, conductivity, and adsorption characteristics of the HDIM, thereby amplifying the VOC detection process and the related response.

Self-Healing and VOCs Sensing Properties of BSGS after Treatment
[50] Consequently, it is essential to include self-healing capabilities in order to improve the longevity and dependability of these devices. To evaluate the self-healing properties, scratches were deliberately inflicted on HDIM surfaces, and the samples were then placed in various temperature conditions. As can be seen in Figure 4a, it took roughly 72 h for the scratches to recover completely at room temperature. In contrast, at 40 °C and 60 °C, the healing time drastically decreased to 60 min and only 10 min, respectively, which shows that HDIM possesses a remarkable self-healing ability that can be further boosted by heating. Dynamic mechanical analysis (DMA) on HDIM disclosed more about its mechanical and self-healing behavior (Figure 4b). At room temperature, the storage modulus (E′) exceeded the loss modulus (E″), which implies that the material behaves like an elastic solid. With higher temperatures, E′ and E″ dropped, demonstrating a transition to a more viscous state in HDIM (Figure S16, Supporting Information). Therefore, the transfer of hydrogen bonds in the urea/PU bonds was sped up, thereby improving the efficacy of self-healing.

The damaged HDIM reestablished its electronic conductivity when the two pieces were brought together (Figure 4c). Moreover, HDIM's conductivity stability during the stretching-relaxing process was noteworthy (Figure S17, Supporting Information), attesting to its potential in daily use. To investigate the BSGS's dependability in practical applications, we compared its VOC sensing performance before and after various treatments. At 10 ppm of ethanol gas, there was little difference between the response of the BSGS repaired from a scratch and its original response (Figure 4d). Additionally, Figure 4e presents the results of the fatigue test, which included stretching and relaxing the BSGS multiple times. Despite the heightened noise level, the response value at a similar response time was barely altered. These results affirm the BSGS's capability to remain resilient under testing circumstances.

VOCs Identification by MISSA
Drawing on the benefits of the gas filtration effect and stackable design exhibited in masks, we have crafted a unique self-healing sensor array, called MISSA, to obtain more information and simplify the detection of different VOCs.
Figure 5a illustrates that the MISSA was simply produced by aligning three equal BSGS units (with the same shape and size) in a vertical line and tightly enclosing them with tape, thus providing a layered effect. In addition, the PU electrospun film, much like the filtration system found in masks, is capable of changing the diffusion rate of gas molecules arriving at layer two and layer three. Figure 5b presents the comprehensive set of response values and their accompanying time patterns shown by each layer of the MISSA for ethanol gas sensing. The absorption properties and obstruction effect caused by the PU film lead to a sequential decrease in the response values and rates across layer one, layer two, and layer three. The outcomes suggest that the MISSA can be proficiently used as a sensor array to create diverse dynamic reactions to the same VOC. Figure 5c portrays the response traits of the MISSA to four VOCs at a concentration of 10 ppm. This unique set of code bars enables an easy examination of particular VOC types, thus permitting efficient classification within the sensor array.

To enable mobile gas surveillance and compact display, we merged the MISSA with an FPCB equipped with Bluetooth technology to develop a portable electronic system, as shown in Figures S18 and S19 (Supporting Information). Moreover, a tailor-made application program was designed to acquire and process the real-time VOC sensing data, which was subsequently displayed on a smartphone. As illustrated in Figure 5d, when the MISSA-based electronic system was placed in an ethanol atmosphere, the smartphone screen showed three real-time response graphs matching the MISSA's three-layer construction. Thus, the smartphone-compatible VOC sensing system, blending the MISSA with portable circuits, features superior sensing performance and holds potential for application as a wearable covering for VOC gas monitoring.

VOCs Analysis and Identification with MISSA
After acquiring dynamic gas response curves for four gas species at concentrations of 0.2, 5, and 10 ppm through the MISSA, 12 distinct features that accurately reflected the respective response curves were extracted by fitting parameters labeled according to the gas type and concentration. This process is essential to protect human health in an environment where VOCs are frequently encountered in a mixture. These parameters include the maximum response value (S_max^i), response time (t_res^i), recovery time (t_rec^i), and the offset (y_off^i), where i represents the layer number of the MISSA (Figure 6a). Each line connects the values of the parameters along their respective axes, representing a single point within the 12-dimensional parameter space. This visualization offers valuable insights into the clustering patterns of gas responses at specific parameters, highlighting the convergence of measurements.
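As a rough illustration of how such features could be extracted from a single dynamic response curve, the sketch below uses simple thresholds rather than the fitting procedure applied in the paper; the variable names are ours, and the 90%/10% criteria follow the response- and recovery-time definitions given earlier.

```python
import numpy as np

def layer_features(t, response, t_gas_off):
    """Extract (S_max, t_res, t_rec, y_off) from one layer's response curve.

    t          : time stamps in seconds
    response   : relative resistance change (R_t - R_0) / R_0 over time
    t_gas_off  : time at which the gas flow is switched back to dry air
    """
    y_off = response[0]                                    # baseline offset before exposure
    s_max = response.max()                                 # maximum response value
    t_res = t[np.argmax(response >= 0.9 * s_max)] - t[0]   # time to reach 90% of the maximum
    after = t >= t_gas_off                                 # recovery phase in dry air
    t_rec = t[after][np.argmax(response[after] <= 0.1 * s_max)] - t_gas_off
    return s_max, t_res, t_rec, y_off

# Repeating this for the three MISSA layers yields the 12-dimensional feature vector
# (S_max^i, t_res^i, t_rec^i, y_off^i for i = 1, 2, 3) that feeds the PCA step below.
```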
To visualize the data more clearly, we employed PCA dimensionality reduction to extract the primary features and transformed the 12-dimensional parameters into a 2D space represented by PC1 and PC2 (Figure 6b). From the left panel of the figure, it can be seen that PC1 (accounting for 66.6% of the variance) and PC2 (accounting for 26.1% of the variance) have the highest variances, and the combination of the two amounts to a variance of ≈92%. This indicates that PCA retains almost all of the original data information and achieves effective dimension reduction. Moreover, the right panel of Figure 6b displays the loading scores of the extracted parameters onto the main principal components. The combination of PC1 and PC2 forms vectors that demonstrate their respective influences on the principal components, as suggested by the gray arrows in Figure 6c. This dimensionality reduction approach enabled the separation of four gas species, each with three different concentrations, into 12 clusters, thus making it applicable for subsequent gas identification and concentration prediction (Figure 6c). K-Nearest Neighbors (kNN) is both a straightforward and resilient method for multi-category problems with small sample sizes. To create the kNN model, the PCA results were utilized as the training data due to their noteworthy efficiency and precision. As seen in Figure 6d, the PCA-assisted kNN produces a clear, discernible decision boundary map for the four VOC gas groups. We used five-fold cross-validation and illustrate the confusion matrix in Figure 6e, which shows the results of 446 test samples on the identification of the four VOC gases. This displays an impressive identification accuracy of 99.77%, further testifying to the usefulness of PCA-assisted kNN in precisely recognizing the four VOCs.

To predict the concentration of gas, a linear regression model was created using data from 0.2 and 10 ppm of ethanol. The 5 ppm data was only used to validate the model's prediction accuracy. The red dashed line in Figure 6f displays the fitted line. The model yielded a high prediction accuracy of 98.3% for ethanol (calculation details in the Supporting Information). The regression surface was then projected into a 2D PC space to display the prediction regions (Figure 6g). The x-axis and y-axis represent PC1 and PC2, and the z-axis corresponds to the ethanol concentration. Notably, the red block within the pink decision region (representing 5 ppm ethanol) matched the data that was not used to train the regression model. This proves the effectiveness of the PCA-assisted ML technique for identifying VOCs and accurately predicting their concentrations.
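A minimal sketch of this PCA-assisted pipeline with scikit-learn is given below; the feature matrix X, the labels y_gas and y_conc, and the choice of k = 5 neighbors are placeholders and not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

# Placeholder data: 446 measurements x 12 extracted features,
# four gas classes and concentrations of 0.2, 5, or 10 ppm
rng = np.random.default_rng(0)
X = rng.random((446, 12))
y_gas = rng.integers(0, 4, size=446)
y_conc = rng.choice([0.2, 5.0, 10.0], size=446)

# 1) Reduce the 12-D feature space to PC1/PC2
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_)

# 2) Gas identification: kNN on the PCA scores with five-fold cross-validation
knn = KNeighborsClassifier(n_neighbors=5)
print("identification accuracy:", cross_val_score(knn, scores, y_gas, cv=5).mean())

# 3) Concentration prediction: linear regression trained on 0.2 and 10 ppm,
#    validated on the held-out 5 ppm measurements
train = np.isin(y_conc, [0.2, 10.0])
reg = LinearRegression().fit(scores[train], y_conc[train])
pred_5ppm = reg.predict(scores[y_conc == 5.0])
```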
Summary and Conclusion
This study designed and created a novel type of MISSA to identify VOCs using an efficient single-step stacking technique. The MISSA consisted of three layers of BSGS, each developed by applying self-healable HDIM to a PU electrospun film through screen-printing technology. The results showed the superiority of the BSGS, with a small detection limit of 0.04 ppm, a range from 0.2 to 50 ppm, consistent performance, and reliable self-healing abilities. Also, similar to the mask design, the MISSA was able to create code bars that distinguished four VOCs at three different concentrations due to selective adsorption in each layer. Additionally, pairing the MISSA with an FPCB in a mobile phone-ready portable device allowed convenient use for wearable monitoring of VOCs. PCA-assisted ML produced distinct decision boundaries, providing 99.77% accuracy in VOC identification. Additionally, 3D regression surface prediction accurately predicted the concentration of ethanol with an accuracy of 98.3%. Considering its remarkable characteristics, the MISSA is believed to be a promising tool for VOC detection, having the potential to act as a dependable and precise monitor for prolonged everyday use in order to protect human health.

Characterization: Fourier-transform infrared (FTIR) spectra were recorded using a Nicolet 6700 FTIR spectrometer (Thermo Scientific). The XPS measurements were performed on the AXIS Ultra DLD (Shimadzu). The elemental mapping was carried out using RISE-MAGNA (TESCAN). The recovery of scratches was observed using the MV3000 microscope from Jiangnan. The DMA measurements were performed on the DMA-Q800 (TA Instruments). The electrospinning machine used was the HZ-12, purchased from Huizhi Electrospinning. The TGA was carried out using the Labsys Evo. The CA measurements were performed using the DSA100 instrument from KRUSS.

Synthesis of HDIM: HTPB (2.1 g, 1 mmol) underwent a vacuum treatment at 80 °C for 2 h to eliminate any residual moisture. Following this, IPDI (467 mg, 2.1 mmol) and DBTDL (5 mg, ≈1600 ppm) were dissolved in THF (10 mL) and added dropwise to the HTPB reaction vessel. The resulting mixture was stirred for 1.5 h under a nitrogen atmosphere to achieve a homogeneous and viscous liquid state. Subsequently, DE (174 mg, 1 mmol) was introduced as a chain extender to the reactor. The mixture was further subjected to an additional 36-h treatment at 80 °C to facilitate the desired reaction. It was then poured into a rectangular Teflon mold and left to slowly evaporate at room temperature overnight. The resulting mixture was dissolved in chloroform to achieve a concentration of 0.1 g mL⁻¹, and 10 mL of the solution was taken and heated at 80 °C while stirring. MXene (5 mL, 5 mg mL⁻¹) was added dropwise to the heated mixture and stirred for 1 h, resulting in a homogeneous viscous liquid. The final mixed solution was poured into a rectangular Teflon mold and left to slowly evaporate at room temperature overnight. Subsequently, the resulting film was dried in a vacuum oven at 80 °C for 24 h to remove residual solvent, yielding a dark HDIM film.
Fabrication of BSGS and MISSA: PU was dissolved in a mixture of DMF and THF (40/60, v/v) at 60 °C to a concentration of 13 wt.%. The prepared PU solution was loaded into a 5-mL syringe and fed through the electrospinning apparatus at a controlled rate of 1.5 mL h⁻¹ using a syringe pump. An applied voltage of 14 kV was utilized, and the distance between the needle tip and the target receiver was set at 10 cm. The resulting PU film was cut into rectangular shapes measuring 1 cm × 1.5 cm. Subsequently, the prepared HDIM was dissolved in chloroform at a concentration of 0.1 g mL⁻¹ and printed onto the cut PU film as ink using a screen-printing technique. After the solvent evaporation, the sensor was secured in place using conductive adhesive tape. To create a MISSA, three breathable self-healing sensors were arranged in a symmetrical manner, with identical appearances positioned at the top and bottom. The edges and bottom of the sensors were sealed using impermeable PVC tape to ensure a complete enclosure.

Gas Sensing System and Resistance Measurements: In order to measure the dynamic resistance variation of the breathable self-healing sensor, a gas-sensing system was established. The prepared sensor was placed within a cylindrical gas-sensing chamber. In order to mitigate the potential impact of humidity, a protective measure was implemented during the testing phase wherein a PU electrospun film was employed to cover the surface of the sensor to be measured. Target gases were appropriately diluted using dry air, and precise control of gas concentrations was achieved by employing accurate digital mass flow controllers (Sevenstar, CS200). The resistance changes of the sensor during gas detection were recorded using a data acquisition module (Keithley, 3706A). Data processing (PCA, kNN, and linear regression) was conducted using Python programs in this study.

Figure 1. Conceptual illustration of MISSA preparation and its use for VOCs analysis and identification.
Figure 2. a) Schematic illustrating the two constituents of HDIM. b) FTIR spectra, c) XPS survey, and d) XRD patterns of HDIM. e) Contact angle of water on HDIM, showing its hydrophobic surface. f) The dissolved HDIM in chloroform forming a conductive ink-like solution. g) The electrical conductivity of HDIM when incorporated into the electrospun PU film. h) Image of the BSGS. i) Standardized array created by screen-printing.
Figure 3. a) Real-time response of the BSGS to ethanol gas; the inset shows the linear correlation of the response values as a function of ethanol concentration. b) Repeatability tests of the BSGS toward different concentrations of ethanol. c) The gas sensing mechanism underlying the HDIM composites, highlighting the pivotal roles of redox reactions and the swelling effect in the sensing process. d) Responses to different VOC gases at concentrations of 0.2, 5, and 10 ppm, including ethanol, isobutanol, formaldehyde, and chloroform. e) Single response waveforms when exposed to 10 ppm concentrations of various VOC gases.
Figure 4.
a) Optical images showing the progressive self-healing process of the scratched HDIM film at room temperature, 40 °C, and 60 °C. Scale bar: 500 μm. b) Dynamic mechanical analysis highlighting the dominance of E′ over E″ across most frequencies. c) Photographs demonstrating the ability of the severed and subsequently self-healed HDIM film to illuminate an LED indicator when connected in series. d) Dynamic response curves to 10 ppm ethanol gas, both in the original state and after scratch healing. e) Dynamic response curves to 10 ppm ethanol gas, both in the original state and after 1000 stretching cycles.
Figure 5. a) (i) Three individual BSGS units for MISSA construction, each possessing identical shape and size; (ii) the front view of the MISSA; (iii) the back view of the MISSA. b) The response values and corresponding time profiles of the MISSA in the context of ethanol gas sensing. c) Bar chart representation showing the response characteristics of the MISSA when exposed to VOCs at a concentration of 10 ppm. d) Schematic diagram of the smartphone-enabled VOC sensing system.
Figure 6. a) Parallel coordinate plots of 12 parameters extracted from all VOC gas response measurements for ethanol, isobutanol, formaldehyde, and chloroform, corresponding to the blue, red, yellow, and green group lines, respectively. b) Principal component loading scores for the 12 extracted parameters. c) PCA scatter plots of four VOC gases at three concentration levels. d) Decision map for the four VOC groups identified by the PCA-assisted kNN method. e) Confusion matrix for four VOC gases based on PCA-assisted kNN classification results; EtOH represents ethanol, I-BUT represents isobutanol, HCHO represents formaldehyde, and CHCl₃ represents chloroform. f) Relationship between predicted and true gas concentrations of ethanol. g) Regression surface for the predicted concentration of ethanol.
5,919.2
2023-10-16T00:00:00.000
[ "Computer Science" ]
A Maximum-Entropy Method to Estimate Discrete Distributions from Samples Ensuring Nonzero Probabilities When constructing discrete (binned) distributions from samples of a data set, applications exist where it is desirable to assure that all bins of the sample distribution have nonzero probability, for example, if the sample distribution is part of a predictive model for which we require returning a response for the entire codomain, or if we use Kullback–Leibler divergence to measure the (dis-)agreement of the sample distribution and the original distribution of the variable, which, in the described case, is inconveniently infinite. Several sample-based distribution estimators exist which assure nonzero bin probability, such as adding one counter to each zero-probability bin of the sample histogram, adding a small probability to the sample pdf, smoothing methods such as Kernel-density smoothing, or Bayesian approaches based on the Dirichlet and Multinomial distribution. Here, we suggest and test an approach based on the Clopper–Pearson method, which makes use of the binomial distribution. Based on the sample distribution, confidence intervals for bin-occupation probability are calculated. The mean of each confidence interval is a strictly positive estimator of the true bin-occupation probability and is convergent with increasing sample size. For small samples, it converges towards a uniform distribution, i.e., the method effectively applies a maximum-entropy approach. We apply this nonzero method and four alternative sample-based distribution estimators to a range of typical distributions (uniform, Dirac, normal, multimodal, and irregular) and measure the effect with Kullback–Leibler divergence. While the performance of each method strongly depends on the distribution type it is applied to, on average, and especially for small sample sizes, the nonzero, the simple "add one counter", and the Bayesian Dirichlet-multinomial model show very similar behavior and perform best. We conclude that, when estimating distributions without an a priori idea of their shape, applying one of these methods is favorable.

Introduction
Suppose a scientist, having gathered extensive data at one site, wants to know whether the same effort is required at each new site, or whether a smaller data set would already have provided essentially the same information. Or imagine an operational weather forecaster working with ensembles of forecasts. Working with ensemble forecasts usually involves handling considerable amounts of data, and the forecaster might be interested to know whether working with a subset of the ensemble is sufficient to capture the essential characteristics of the ensemble. If what the scientist and the forecaster are interested in is expressed by a discrete distribution derived from the data (e.g., the distribution of vegetation classes at a site, or the distribution of forecasted rainfall), then the representativeness of a subset of the data can be evaluated by measuring the (dis-)agreement of a distribution based on a randomly drawn sample ("sample distribution") and the distribution based on the full data set ("full distribution"). One popular measure for this purpose is the Kullback-Leibler divergence [1].
Depending on the particular interest of the user, potential advantages of this measure are that it is nonparametric, which avoids parameter choices influencing the result, and that it measures general agreement of the distributions instead of focusing on particular aspects, e.g., particular moments. For the use cases described above, if the sample distribution is derived from the sample data via the bin-counting (BC) method, which is the most common and probably most intuitive approach, a situation can occur where a particular bin in the sample distribution has zero probability but the corresponding bin in the full distribution has not. From the way the sample distribution was constructed, we know that this disagreement is not due to a fundamental disagreement of the two distributions, but rather that this is a combined effect of sampling variability and limited sample size. However, if we measure the (dis-)agreement of the two distributions via Kullback-Leibler divergence, with the full distribution as the reference, divergence for that bin is infinite, and consequently so is total divergence. This is impractical, as an otherwise possibly good agreement can be overshadowed by a single zero probability. A similar situation occurs if a distribution constructed from a limited data set (e.g., three months of air-temperature measurements) contains zero-probability bins, but from physical considerations we know that values falling into these zero-probability bins can and will occur if we extend the data set by taking more measurements. Assuring nonzero (NZ) probabilities when estimating distributions is a requirement found in many fields of engineering and sciences [2][3][4]. If we stick to BC, this can be achieved either by adjusting the binning to avoid zero probabilities [5][6][7][8][9], or by replacing zero probabilities with suitable alternatives. Often-used approaches to do so are (i) assigning a single count to each empty bin of the sample histogram, (ii) assigning a (typically small) preselected probability mass to each zero probability bin in the sample pdf and renormalizing the pdf afterwards, (iii) spreading probability mass within the pdf by smoothing operations such as Kernel-density smoothing (KDS) [10] (an extensive overview on this topic can be found in Reference [11]), and (iv) assigning a NZ guaranteeing prior in a Bayesian (BAY) framework. Whatever method we apply, desirable properties we may ask for are introducing as little unjustified side information as possible (e.g., assumptions on the shape of the full distribution) and, like the BC estimator, convergence towards the full distribution for large samples. In this context, the aim of this paper is to present a new method of calculating the sample distribution estimate, which meets the mentioned requirements, and to compare it to existing methods. It is related to and draws from approaches to estimate confidence intervals of discrete distributions based on limited samples [12][13][14][15][16][17]. In the remainder of the text, we first introduce the "NZ" method and discuss its properties. Then we apply the NZ method and four alternatives to a range of typical distributions, from which we draw samples of different sizes. We use Kullback-Leibler divergence to measure the agreement of the full and the sample distributions. We discuss the characteristics of each method and their relative performance with a focus on small sample sizes and draw conclusions on the applicability of each method. 
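Before turning to the method, a small sketch (using an arbitrary example distribution, not one from the paper) illustrates the zero-probability problem described above: a single empty bin in the bin-counting estimate makes the Kullback-Leibler divergence infinite.

```python
import numpy as np

def kl_divergence_bits(p, q):
    """D_KL(p || q) in bit; infinite whenever a bin with p > 0 has q = 0."""
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

p = np.array([0.5, 0.25, 0.125, 0.0625, 0.0625])    # "full" distribution (example)
rng = np.random.default_rng(1)
sample = rng.choice(len(p), size=5, p=p)             # small i.i.d. sample

q_bc = np.bincount(sample, minlength=len(p)) / len(sample)   # bin-counting estimate
print(q_bc)                         # small samples very likely leave some bins empty
print(kl_divergence_bits(p, q_bc))  # -> inf as soon as such a bin exists
```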
Method Description
For a variable with discrete distribution p with K bins, and a limited data sample S thereof, of size n, we derive an NZ estimator p̂ for p based on S as follows: For the occurrence probability of each bin β_k (k = 1, . . . , K), we calculate a BC estimator q_k and its confidence interval CI_p,k = [p_k,lower; p_k,upper] on a chosen confidence level (e.g., 95%). Based on the fact that the number of occurrences of a given bin in n repeated trials follows a binomial distribution with parameters n and p_k, there exist several ways to determine a confidence interval for this situation [18]. Several of these methods approximate the binomial distribution with a normal distribution, which is only reasonable for large n, or use other assumptions. To avoid any of these limitations and to keep the method especially useful for cases of small n (here the probability of observing zero-probability bins is the highest), we calculate CI_p,k using the conservative yet exact Clopper-Pearson method [19]. It applies a maximum-likelihood approach to estimate p_k given the sample S of size n. The required conditions for the method to apply are:
• there are only two possible outcomes of each trial,
• the probability of success for each trial is constant, and
• all trials are independent.
In our case, this is assured by distinguishing the two outcomes "the trial falls within the bin or not", keeping the sample constant, and random sampling. In practice, there are two convenient ways to compute the confidence interval CI_p,k. One way is to look it up, for example, in the original paper by Clopper and Pearson [19], where they present graphics of confidence intervals for different sample sizes n, different numbers of observations x, and different confidence levels. The second option is to compute the intervals using the Matlab function [~, CI] = binofit(x, n, alpha) (similar functions exist for R or Python) with 1 − alpha defining the confidence level. This function uses a relation between the binomial and the Beta distribution; for more details see, e.g., Reference [20] (Section 7.3.4) and Appendix A. For each k = 1, . . . , K, the NZ estimate p̂_k is then calculated as the normalized mean value m_k of the confidence interval CI_p,k according to Equation (1):

p̂_k = m_k / Σ_j m_j, with m_k = (p_k,lower + p_k,upper) / 2    (1)

Normalization with the sum of all m_k for k = 1, . . . , K is required to assure that the total sum of probabilities in p̂ equals 1. For this reason, the normalized values of p̂_k can differ a little from the means of the confidence intervals. Two text files with Matlab code (Version 2017b, MathWorks Inc., Natick, MA, USA) of the NZ method and an example application are available as Supplementary Material.

Properties
There are four properties of the NZ estimate p̂_k that are important for our application:
1. Maximum entropy by default: For an increasing number of zero-probability bins in q, p̂ converges towards a uniform distribution. For any zero-probability bin β_k we get q_k = 0, assign the same confidence interval, and, hence, the same NZ estimate. Consequently, estimating p on a size-zero sample results in a uniform distribution p̂ with p̂_k = 1/K for all k = 1, . . . , K, which is a maximum-entropy (or minimum-assumption) estimate. For small samples, the NZ estimate is close to a uniform distribution.
2. Positivity: As probabilities are restricted to the interval [0, 1], and it always holds that p_k,upper > p_k,lower, the mean value of the confidence interval CI_p,k is strictly positive. This also applies to the normalized mean.
This is the main property we were seeking to be guaranteed by p̂_k.
3. Convergence: Since q_k is a consistent estimator (Reference [21], Section 5.2), it converges in probability towards p_k for growing sample size n. Moreover, the ranges of the confidence intervals CI_p,k approach zero with increasing sample size n (Reference [19], Figures 4 and 5) and, hence, the estimates p̂_k converge towards p_k.
4. As described above, due to the normalization in the method, the NZ estimate does not exactly equal the mean of the confidence interval. However, the interval's mean tends towards p_k with growing n and, hence, the normalizing sum in the denominator tends towards one. Consequently, for growing sample size n, the effect of the normalization is of less and less influence.

Illustration of Properties
An illustration of the NZ method and its properties is shown in Figure 1. The first plot, Figure 1a, shows a discrete distribution, constructed for demonstration purposes such that it covers a range of different bin probabilities. Possible outcomes are the six integer values {1, 2, . . . , 6}, where p(1) = 0.51 and all further probabilities are half of the previous, such that p(6) = 0.015. Figure 1b shows a random sample of size one taken from the distribution; here, the sample took the value "1". The BC estimator q for the distribution p for outcomes {1, . . . , 6} is shown with blue bars. Obviously, we encounter the problem of zero-probability bins here. In the same plot, the confidence intervals for the bin-occupation probability based on the Clopper-Pearson method on the 95% confidence level are shown in green. Due to the small sample size, the confidence intervals are almost the same for all outcomes, and so is the NZ estimate for bin-occupation probability shown in red. Altogether, the NZ estimate is close to a uniform distribution, which is the maximum-entropy estimate, except that the bin-occupation probability for the observed outcome "1" is slightly higher than for the others: The NZ estimate of the distribution is p̂ = (0.1737, 0.1653, 0.1653, 0.1653, 0.1653, 0.1653). We can also see that the positivity requirement for bin-occupation probability is met.
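A compact sketch of the NZ estimator is given below. It uses SciPy's Beta quantiles for the exact Clopper-Pearson bounds instead of the Matlab binofit call mentioned above; the function and variable names are ours. For a size-one sample with a single count in the first of six bins, it reproduces the estimate shown in Figure 1b.

```python
import numpy as np
from scipy.stats import beta

def nz_estimate(counts, alpha=0.05):
    """Nonzero distribution estimate from bin counts via Clopper-Pearson interval means."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    # Exact Clopper-Pearson bounds expressed through Beta-distribution quantiles;
    # the np.where handles the x = 0 and x = n edge cases (lower bound 0, upper bound 1)
    lower = np.where(counts > 0, beta.ppf(alpha / 2, counts, n - counts + 1), 0.0)
    upper = np.where(counts < n, beta.ppf(1 - alpha / 2, counts + 1, n - counts), 1.0)
    m = (lower + upper) / 2          # mean of each confidence interval
    return m / m.sum()               # normalize so the probabilities sum to one

print(nz_estimate([1, 0, 0, 0, 0, 0]))
# -> approx. [0.1737, 0.1653, 0.1653, 0.1653, 0.1653, 0.1653]
```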
In Figure 1c,d, BC and NZ estimates of the bin-occupation probability are shown for random samples of size 10 and 100, respectively. For sample size 10, the BC method still yields three zero-probability bins, which are filled by the NZ method. The NZ estimates for this sample still gravitate towards a uniform distribution (red bars) but, due to the increased sample size, to a lesser degree than before. For sample size 100, both the BC and the NZ distribution estimates of bin-occupation probability closely agree with the full distribution, which illustrates the convergence behavior of the NZ method. Compared to the size-10 sample, the Clopper-Pearson confidence intervals for the bin-occupation probabilities have narrowed considerably, and, as a result, the NZ estimates are close to those from BC.

Test Setup
How does the NZ method compare to established distribution estimators that also assure NZ bin-occupation probabilities? We address this question by applying various estimation methods to several types of distributions. In the following, we will explain the experimental setup, the evaluation method, the estimation methods, and the distributions used. We start by taking samples S of size n by i.i.d. picking (random sampling with replacement) from each distribution p. Each estimation method we want to test applies this sample to construct an NZ distribution estimate p̂. The (dis-)agreement of the full distribution with each estimate is measured with the Kullback-Leibler divergence as shown in Equation (2):

D_KL(p ‖ q) = Σ_k p_k log2(p_k / q_k)    (2)

Note that, for our application, the full distribution of the variable is the reference p, since the observations actually occur according to this distribution; the distribution estimate q is derived from the sample and is our assumption about the variable. We chose Kullback-Leibler divergence as it conveniently measures, in a single number, the overall agreement of two distributions, instead of focusing on particular aspects, e.g., particular moments. Kullback-Leibler divergence is also zero if and only if the two distributions are identical, while, for instance, two distributions with identical mean and variance can still differ in higher moments. We tested sample sizes from n = 1 to 150, increasing n in steps of one. We found an upper limit of 150 to be sufficient for two reasons: Firstly, the problem of zero-probability bins due to the combined effect of sampling variability and limited sample size mainly occurs for small sample sizes; secondly, for large samples, the distribution estimates by the tested methods quickly become indistinguishable.
To eliminate effects of sampling variability, we repeated the sampling for each sample size 1000 times, calculated Kullback-Leibler divergence for each and then took the average. As a result, we get mean Kullback-Leibler divergence as a function of sample size, separately for each estimation method and test distribution. The six test distributions are shown in Figure 2. We selected them to cover a wide range of shapes. Please note that two of the distributions, Figure 2b,f, actually contain bins with zero p. It may seem that, in such a case, the application of a distribution estimator assuring NZ p's is inappropriate; however, in our targeted scenarios (e.g., comparison of two distributions via Kullback-Leibler divergence), it is the zero p's due to limited sample size that we need to avoid, while we accept the adverse effect of falsely correcting true zeros. If the existence and location of true-zero bins were known a priori, this knowledge could be easily incorporated in the distribution estimators discussed here to only produce actual NZ p's. Finally, we selected a range of existing distribution estimators to compare to the NZ method:

1. BC: The full probability distribution is estimated by the normalized BC frequencies of the sample taken from the full data set. This method is just added for completeness; as it does not guarantee NZ bin probabilities, its divergences are often infinite, especially for small sample sizes.
2. Add one (AO): With a sample taken from the full distribution, a histogram is constructed. Any empty bin in the histogram is additionally filled with one counter before converting it to a pdf by normalization. The impact of each added counter is therefore dependent on sample size.
3. BAY: This approach to NZ bin-probability estimation places a Dirichlet prior on the distribution of bin probabilities and updates to a posterior distribution in the light of the given sample via a multinomial-likelihood function [22]. We use a flat uniform prior (with the Dirichlet distribution parameter alpha taking a constant value of one over all bins) as a maximum-entropy approach, which can be interpreted as a prior count of one per bin. Since the Dirichlet distribution is a conjugate prior to the multinomial-likelihood function, the posterior again is a Dirichlet distribution with analytically known updated parameters. We take the posterior mean probabilities as the distribution estimate and, for our choice of prior, they correspond to the observed bin counts increased by the prior count of one. Hence, BAY is very similar to AO, with the difference that a count of one is added to all bins instead of only to empty bins; as for AO, the impact of the added counters is dependent on sample size. Like the NZ method, BAY is by default a strictly positive and convergent maximum-entropy estimator (see Section 2.2).
4. Add p (AP): With a sample taken from the full distribution, a histogram is constructed and normalized to yield a pdf. Afterwards, each zero-probability bin is filled with a small probability mass (here: 0.0001) and the entire pdf is then renormalized. Unlike in the "AO" procedure, the impact of each probability mass added is therefore virtually independent of n.
5. KDS: We used the Matlab Kernel density function ksdensity as implemented in Matlab R2017b with a normal kernel function, support limited to [0, 9.001], which is the range of the test distributions, and an iterative adjustment of the bandwidth: Starting from an initially very low value of 0.05, the bandwidth (and with it the degree of smoothing across bins) was increased in 0.001 increments until each bin had NZ probability. We adopted this scheme to avoid unnecessarily strong smoothing while at the same time guaranteeing NZ bin probabilities.
6. NZ: We applied the NZ method as described in Section 2.1.

Results and Discussion
The results of all tests, separately for each test distribution and estimation method, are shown in Figure 3. We will discuss them first individually for each distribution and later summarize the results.
Figure 3. Sample-based distribution estimates are based on bin counting (grey), "Add one counter" (blue), "Bayesian" (green), "Add probability" (orange), "Kernel-density smoothing" (violet), and the "nonzero method" (red). In all plots except (b), the "bincount" line is invisible as its divergence is infinite, and in plot (b) it is invisible as it is zero and almost completely overshadowed by the "addp" line. In plots (b,f), the "addone" line is almost completely overshadowed by the "bayes" line. For better visibility, all y-axes are limited to a maximum divergence of 2 bit, although this limit is sometimes clearly exceeded for small sample sizes.

For the uniform distribution as shown in Figure 2a, the corresponding Kullback-Leibler divergences are shown in Figure 3a. For small sample sizes up to approximately 40, both AP and KDS show very large divergences; AO, BAY, and NZ perform considerably better, with a slight advantage of NZ. This order clearly reflects the methods' different estimation strategies and how capable they are of reproducing a uniform distribution: For small sample sizes, both AP and KDS will maintain "spiky" distribution estimates, while AO, BAY, and NZ gravitate towards uniform-distribution or maximum-entropy estimates. For larger sample sizes, beyond 80, the performance differences among the methods quickly vanish. For the small sample sizes as shown in the figure, the BC approach was still frequently afflicted with zero-probability bins, resulting in infinite divergence.

Quite expectedly, the relative performance of the estimators for the Dirac distribution (Figures 2b and 3b) is almost opposite from the uniform distribution. BC shows zero and AP almost-zero divergence for all sample sizes. The reason is that even a very small sample from a Dirac distribution yields a perfect estimate of the full distribution, and both methods do not interfere much with this estimate (in fact, BC not at all). AO and BAY show almost identical performance; NZ is similar but slightly worse. All of them show high divergences for small samples and a gradual decrease with sample size.
The reason lies in the methods' tendency towards a uniform spreading of probabilities, which is clearly unfavorable if the true distribution is a Dirac. Interestingly, the KDS estimator performs constantly poorly over the entire range of sample sizes, which can be explained by its tendency to locally distribute probability mass around the BC estimate. In particular, as the kernel function was chosen to be normal, the observed divergence of about 0.8 bit corresponds to the divergence of a Dirac and a normal distribution extending over the nine bins covering the codomain. For the narrow normal distribution as shown in Figures 2c and 3c, obviously the normal kernel of KDS is an advantage, such that, for small sample sizes, divergence is smaller than for any other estimator. The performance of AP varies greatly with sample size: For small samples it is poor; for sample sizes beyond about thirty it scores best. AO and BAY are almost identical, NZ is similar to them but shows worse performance; altogether it is the worst estimator. Beyond sample sizes of about 80, all methods perform almost equally well, except for BC, whose divergence is infinite due to the occasional occurrence of zero-probability bins. For the wide normal distribution as shown in Figures 2d and 3d, KDS remains the best estimator except for very small sample sizes. AO and BAY are similar and perform better than the NZ method. AP performs worst for sample sizes smaller than about thirty; for larger samples, NZ performs worst. For the bimodal distribution as shown in Figures 2e and 3e, things look different: Both KDS and AP show poor performance even for large sample sizes; AO, BAY, and NZ are almost indistinguishable, and they perform well even for small sample sizes. Finally, results for the application to the irregular distribution as shown in Figure 2f are shown in Figure 3f. As this distribution shows no pattern in the distribution of probabilities across the value domain, any approach assuming a particular shape or pattern (like KDS) will have difficulties, at least for small sample sizes. This is clearly reflected in the large divergences of KDS. Interestingly, AP also struggles to reproduce the irregular distribution, but not because of the absence of a probability pattern across the value domain, but because filling a bin that has zero probability due to chance with always the same small probability mass, irrespective of the sample size, is here less effective than filling it with an adaptive probability mass as done by AO. AO, BAY, and NZ, again, perform almost equally well and better than the other methods (BC again has infinite divergences).

Summary and Conclusions
We started by describing use cases that involve estimation of discrete distributions with the additional requirement that all bins of the estimated distribution should have NZ probabilities. As the standard BC approach does not guarantee this, we proposed an alternative approach based on the Clopper-Pearson method, which makes use of the binomial distribution. Based on the BC distribution estimate, confidence intervals for bin-occupation probability are calculated. The mean of each confidence interval is a strictly positive estimator of the true bin-occupation probability and is convergent with increasing sample size. For small samples, it converges towards a uniform distribution, i.e., the method effectively applies a maximum-entropy approach.
We compared the capability of this "NZ" method to estimate different distributions (uniform, Dirac, narrow normal, wide normal, bimodal, and irregular) based on i.i.d. samples of different sizes drawn from them. For comparison, we applied four alternative estimators guaranteeing NZ bin probabilities (adding one counter to each empty bin of the sample histogram, a BAY approach applying a Dirichlet prior and a multinomial likelihood function, adding a small probability to the sample pdf, and KDS). We measured the agreement of the distributions and their respective estimates via Kullback-Leibler divergence. The most obvious result is that the relative performance of the estimators strongly depends on whether their estimation strategy matches the shape of the test distribution or not. So if the latter is known (or can be reasonably guessed) a priori, a case-specific choice should be made. However, if this is not the case, it is reasonable to select an estimator that performs, on average, well across all distributions. For the range of distributions tested here, this could be either the straightforward method of adding one counter to each empty bin of a sample histogram, the BAY method, or the NZ method. As could be expected from their design, the first two show almost identical behavior and performance. The NZ method is similar to them in overall performance and in the dependency of its performance on sample size, except that it performs better for close-to-uniform distributions and worse for spiky distributions. Each of the three methods (AO, NZ, and BAY) is straightforward to implement and computationally inexpensive, so from a practical viewpoint, there is no preference for one method or the other. The main differences are in the formal background: the "AO" method lacks a formal justification; the NZ method is based on a statistical/frequentist background, while the BAY method applies a BAY perspective. Although the NZ and the BAY methods are formulated in different formal frameworks, they are in fact very similar (both are maximum-entropy estimators by construction), and so is their performance. Their main differences are that the NZ method applies the binomial distribution to evaluate each bin separately, while the BAY method applies the multinomial distribution simultaneously to all bins. The second difference is that the NZ method uses the normalized mean of the confidence interval of bin probability as the best estimate of bin probability; the BAY method uses the posterior mean. An advantage of the NZ and the BAY methods over the AO method is that, in addition to the distribution estimate, they also provide confidence intervals that offer additional avenues of analysis or conditioning. An additional advantage of the BAY method is that it offers adaptability: if a priori estimates of the distribution shape are available, they can be considered via the choice of the Dirichlet distribution parameter alpha. Overall, users may make a choice according to the formal setting they are most comfortable with.
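For completeness, the two simplest competing strategies and the comparison metric can also be written in a few lines. The following Python sketch of an "add one counter" (AO) estimator, a Dirichlet-prior posterior mean (BAY), and a Kullback-Leibler divergence in bits is an illustrative reading of the descriptions above, not the paper's code; the function names and the default prior parameter alpha = 1 are assumptions.

```python
import numpy as np

def add_one_estimate(counts):
    """AO: add one counter to every empty bin of the histogram, then normalize."""
    counts = np.asarray(counts, dtype=float)
    filled = np.where(counts == 0, 1.0, counts)
    return filled / filled.sum()

def bayes_estimate(counts, alpha=1.0):
    """BAY: posterior mean under a symmetric Dirichlet(alpha) prior and a
    multinomial likelihood, i.e. (k_i + alpha) / (n + K * alpha)."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * counts.size)

def kl_divergence_bits(p_true, p_est):
    """D(p_true || p_est) in bits; infinite if p_est is zero where p_true > 0."""
    p_true, p_est = np.asarray(p_true, dtype=float), np.asarray(p_est, dtype=float)
    mask = p_true > 0
    if np.any(p_est[mask] == 0):
        return np.inf
    return float(np.sum(p_true[mask] * np.log2(p_true[mask] / p_est[mask])))
```

Note that with alpha = 1 the BAY estimate adds a pseudo-count to every bin, not only the empty ones, which is why AO and BAY behave almost identically without being exactly the same estimator; scoring each estimate against the true distribution with kl_divergence_bits reproduces the kind of comparison shown in Figure 3.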
7,593.6
2018-08-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
High speed hybrid silicon evanescent Mach-Zehnder modulator and switch We demonstrate the first high speed silicon evanescent Mach-Zehnder modulator and switch. The modulator utilizes carrier depletion within AlGaInAs quantum wells to obtain a VπL of 2 V-mm and a clear open eye at 10 Gb/s. The switch exhibits a power penalty of 0.5 dB for all ports at 10 Gb/s modulation. ©2008 Optical Society of America OCIS codes: (250.4410) Modulators; (250.5300) Photonic integrated circuits; (250.7360) Waveguide modulators. References and Links 1. A. S. Liu, R. Jones, L. Liao, D. Samara-Rubio, D. Rubin, O. Cohen, R. Nicolaescu, and M. Paniccia, "A high-speed silicon optical modulator based on a metal-oxide semiconductor capacitor," Nature 427, 615-618 (2004). 2. D. Marris-Morini, X. Le Roux, L. Vivien, E. Cassan, D. Pascal, M. Halbwax, S. Maine, S. Laval, J. M. Fedeli, and J. F. Damlencourt, "Optical modulation by carrier depletion in a silicon PIN diode," Opt. Express 14, 10838-10843 (2006). 3. Q. Xu, S. Pradhan, B. Schmidt, J. Shakya, and M. Lipson, "12.5 Gbit/s carrier-injection-based silicon microring silicon modulators," Nature 435, 325-327 (2005). 4. Y.-H. Kuo, H.-W. Chen, and J. E. Bowers, "High speed hybrid silicon evanescent electroabsorption modulator," Opt. Express 16, 9936-9942 (2008). 5. H.-W. Chen, Y.-H. Kuo, and J. E. Bowers, "A Hybrid Silicon-AlGaInAs Phase Modulator," IEEE Photon. Technol. Lett. 23 (to be published). 6. A. W. Fang, E. Lively, Y.-H. Kuo, D. Liang, and J. E. Bowers, "A distributed feedback silicon evanescent laser," Opt. Express 16, 4413-4419 (2008). 7. H. Park, Y.-H. Kuo, A. W. Fang, R. Jones, O. Cohen, M. J. Paniccia, and J. E. Bowers, "A hybrid AlGaInAs-silicon evanescent preamplifier and photodetector," Opt. Express 15, 13539-13546 (2007). 8. H. Ohe, H. Shimizu, and Y. Nakano, "InGaAlAs Multiple-Quantum-Well Optical Phase Modulators Based on Carrier Depletion," IEEE Photon. Technol. Lett. 19, 1616-1618 (2007). 9. D. Liang, E. A. Lucero, and J. E. Bowers, "Highly Efficient Vertical Outgassing Channels for Robust, Void-Free, Low-Temperature Direct Wafer Bonding," The 35th Conference on the Physics and Chemistry of Semiconductor Interfaces, Santa Fe, NM, Jan. 2008. 10. J. Vinchant, J. A. Cavailles, M. Erman, P. Jarry, and M. Renaud, "InP/GaInAsP Guided-Wave Phase Modulators Based on Carrier-induced Effects: Theory and Experiment," IEEE J. Lightwave Technol. 10, 63-70 (1992).
Introduction Recent interest in silicon optical interconnects is driven by the need for high capacity data communication at a relatively low cost. To this end, research on interconnect technology using modulators and switches is being actively pursued. Mach-Zehnder modulators (MZM) using carrier depletion have been reported to have 40 Gb/s operation with 40 V-mm DC drive [1,2]. A ring resonant structure using carrier injection is another approach to implement modulators with compact footprint and narrow optical bandwidth, where speeds up to 12.5 Gb/s were demonstrated with pre-emphasized electrical signals [3]. The tradeoff between modulator efficiency and speed is always an issue for silicon modulators. One new approach to achieve high speed operation while maintaining large modulation efficiency is the hybrid silicon evanescent electroabsorption modulator (EAM) [4], which had 10 dB extinction ratio (ER), 30 nm optical bandwidth, and over 16 GHz modulation bandwidth. Another new approach is to use carrier depletion inside offset multiple quantum wells (MQW) in a hybrid silicon evanescent MZM. This MZM has less wavelength sensitivity and larger optical bandwidth compared to an EAM. Most importantly, the Mach-Zehnder interferometer (MZI) structure can be made into a 2x2 switch, as demonstrated here, and that can be scaled to larger switches, such as 32x32 nonblocking switches. The MZM was reported with a modulation efficiency of 4 V-mm, over 100 nm optical bandwidth, and 28 mW power handling [5], but high speed performance of such an MZM was not demonstrated. In this work, we successfully demonstrated a high speed modulator with 10 Gb/s operation. In addition, by changing the passive waveguide structure of this MZM, a high speed 2x2 switch based on an MZI is implemented and demonstrated on the same platform. Switch arrays are important for interchip and intrachip communication networks, and these elements, together with lasers [6], amplifiers, and photodetectors [7], can be integrated as transmitters or receivers for future optical networks. In order to introduce the necessary phase change inside the Mach-Zehnder interferometer for both the hybrid silicon modulator and switch, a III-V epitaxial wafer with doped QW and separate confinement heterostructure (SCH) layers [8] is bonded to patterned silicon waveguides. For such a structure with carriers inside the MQW, the plasma effect dominates at lower reverse bias levels while the Pockels, Kerr, and quantum confined Stark effects (QCSE) become more obvious at larger electrical fields. The combination of these effects results in better linearity than devices utilizing simple electro-optic effects. Device description The top view of a 500 μm MZM is shown in Fig. 1(a). It has two MMIs, each 6 μm wide and 40 μm long, at the input and output functioning as the splitter and the combiner. By changing the 1x2 MMIs to 2x2 MMIs, a switch based on an MZI was also fabricated on the same chip. Due to the mode mismatch between the passive and hybrid sections, two 60 μm long tapers, on the silicon and III-V sections, are added to minimize reflection and increase coupling efficiency. In addition, we use coplanar waveguides (CPW) as the traveling-wave electrode to achieve high speed performance. The optical image of a fabricated device is shown in Fig. 1(b). A thin layer of silicon nitride, which appears as an orange cross region in Fig. 1(b), is deposited at the end of the process to protect the device from scratches. Fig.
1(c) illustrates the cross section of the hybrid section. The device has a 4 μm cladding width while the QW/SCH layers are under-cut to 2 μm to reduce the device capacitance. Moreover, the two arms of the MZM are electrically isolated by etching down the n-contact layer in between. The silicon waveguides have a height of 0.46 μm, a slab height of 0.19 μm, and a width of 0.94 μm. The devices are fabricated as follows. The III-V epitaxial layers are first bonded to a silicon-on-insulator wafer by using the vertical outgassing channel (VOC) technique with an anneal time of 3 hours at 300°C [9]. The substrate is then removed and ready for III-V mesa fabrication. In order to align the contact metal with the narrow cladding layer, the cladding mesa is formed by using a self-aligned dry etch process. The sample is then dipped into a mixture of H₂O/H₂O₂/H₃PO₄ to create the under-cut in the MQW/SCH layers, while a circular pattern is used as a reference to control the undercut depth. Next, all III-V epitaxial layers are removed on top of the passive regions, and a 1 μm n-metal is deposited to form the ground pad of the CPW. A 5 μm thick polymer is then applied to provide additional mechanical support to the thin bonding layer and to separate the probe metal from the bottom n-metal in order to implement the desired CPW design and reduce parasitic capacitances. Device characteristics The devices were first anti-reflection coated to eliminate the undesired resonance and reduce the reflection on both facets. Two lensed fibers were then used to couple the light in and out of the silicon waveguides. Since there is no difference between the modulator and switch inside the hybrid region, the transmission response of the modulator and one port of the switch should be identical. The experimental results of transmission as a function of reverse bias for different input optical powers are shown in Fig. 2. As can be seen, the lowest modulation efficiency is about 2 V-mm, half of the value reported previously for the same epitaxial structure. The improvement in modulation efficiency is due to the alignment between the III-V crystal facet and the waveguide direction such that the Pockels effect adds to the other sources of phase modulation [10]. All three curves in Fig. 2 have their highest transmission at a bias other than zero due to fabrication imperfections. As shown in Fig. 2, the voltage-length product decreases from 1.95 V-mm to 1 V-mm as the input power increases from -7.5 dBm to 12.5 dBm. The reduction in voltage-length product can be attributed to the excess carriers generated by two-photon absorption (TPA) at higher optical intensity. Meanwhile, the shift of the peak point to higher reverse bias voltages also indicates the existence of the extra carriers, since a stronger electrical field is required to completely deplete the QW/SCH layers. The DC modulation response depends on the input optical power level, but the microwave modulation response is unaffected because the frequency range is much higher than the response time of the carriers (carrier lifetime ~ns) generated by TPA. To reduce this difference, the bandgap and doping of the QW can be redesigned to decrease the carrier lifetime such that the contribution of index change from TPA can be reduced. The extinction ratios (ER) of the MZMs are 12.77 dB, 11.51 dB and 8.64 dB for input powers of 12.5 dBm, 2.5 dBm, and -7.5 dBm, respectively. In addition, the static characteristics of a hybrid silicon switch are also measured as illustrated in Fig.
3. The highest crosstalk is -13 dB and the lowest ER is 12.77 dB for all port configurations. The extinction ratio is limited by the QCSE [5], which is more obvious when the bias voltage is larger than -3 V. By utilizing a push-pull structure in the future, both arms can be driven so that their amplitudes are closer and the destructive interference is closer to zero, which increases the ER and decreases the crosstalk. To investigate the high speed performance of the devices, both the electrical and optical small signal modulation responses were measured using an Agilent 8164A PNA network analyzer and an HP 8703A Lightwave component analyzer. As shown in Fig. 4, the electrical response (black curve) depicts a 3 dBe cutoff frequency at 7.5 GHz while the optical response without any impedance termination has a 3 dBe cutoff frequency at 3.5 GHz (blue curve), which is expected from a lumped RC model simulation (blue dashed curve). The degradation of high speed performance is due to the large reflection from the open end of the CPW electrode. The reflection, however, can be reduced by applying a 25 Ω termination such that the cutoff frequency is increased to 8 GHz, sufficient for 10 Gb/s data transmission. The termination used in this experiment has a built-in inductance, hence causing a resonance at 1.5 GHz as illustrated in Fig. 4. A transmission line characteristic impedance of 20 Ω and an electrical propagation loss of 5 dB/mm at 10 GHz are calculated by extracting the full four-port S-parameters. The MZM was also tested with large signal modulation to characterize its performance for high speed communications. A 2^31-1 pseudorandom bit sequence (PRBS) pattern generator connected to an electrical amplifier is used to provide the drive signal. The device is biased at -3.8 V with a 1.5 V swing while the bias on the other arm is adjusted to achieve the best signal quality. The 10 Gb/s modulated light is then collected by a lensed fiber and amplified with an EDFA followed by a filter to eliminate the ASE noise before the signal is sent to an Agilent digital communication analyzer (DCA). The signal has an ER of 6.3 dB, as shown in Fig. 5, which is smaller than the ER (11.6 dB) measured at DC bias due to the partial voltage drop across the series resistance and cladding layer. The eye is clearly open and sufficient for 10 Gb/s operation. The noisy one level is due to the frequency overshoot at 1.4 GHz as mentioned in the previous section. A 10 Gb/s bit-error-rate (BER) measurement was also performed to explore the sensitivity at all port configurations of the switch. As can be seen in Fig. 6, all power penalties are below 0.5 dB at 10^-9 BER and no error floor is observed. Furthermore, the rise and fall time of the switch is around 50 ps while the drive signal itself has a rise time of around 70 ps as depicted in the inset of Fig. 6. This indicates potential use in a high speed switch or routing network for future high speed optical interconnects.
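As a back-of-the-envelope illustration of how the quoted numbers relate, the following sketch evaluates an ideal Mach-Zehnder transfer function for a given voltage-length product and a simple lumped-RC roll-off. This is not the authors' model: the ideal cos² transfer function, perfect splitting, and the chosen capacitance value are assumptions used only to show the arithmetic.

```python
import numpy as np

# Voltage-length product quoted above and the 500 um (0.5 mm) device length.
V_PI_L = 2.0                 # V*mm
LENGTH_MM = 0.5
V_PI = V_PI_L / LENGTH_MM    # ~4 V for a pi phase shift in this device

def mzi_transmission(voltage, v_pi=V_PI, bias_phase=0.0):
    """Ideal lossless Mach-Zehnder transfer: T = cos^2(pi*V/(2*v_pi) + bias/2)."""
    return np.cos(np.pi * voltage / (2.0 * v_pi) + bias_phase / 2.0) ** 2

def rc_cutoff_hz(r_ohm, c_farad):
    """3 dB cutoff of a lumped RC low-pass: f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * np.pi * r_ohm * c_farad)

# With an assumed ~0.9 pF device capacitance, a 50 ohm source impedance gives a
# cutoff near the unterminated few-GHz value discussed above; a smaller effective
# resistance raises it (the measured improvement with the 25 ohm termination also
# involves the suppressed reflection from the electrode's open end).
print(V_PI, rc_cutoff_hz(50.0, 0.9e-12) / 1e9, rc_cutoff_hz(25.0, 0.9e-12) / 1e9)
```

With VπL = 2 V-mm and the 0.5 mm phase section, a full π phase shift corresponds to roughly 4 V at DC, which puts the drive swings and extinction ratios quoted above in context.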
Conclusion We have demonstrated high speed Mach-Zehnder silicon evanescent modulators and switches utilizing the carrier depletion effect with offset MQWs. The modulator has a VπL of 2 V-mm, a modulation bandwidth of 8 GHz, and a clear open eye diagram at 10 Gb/s with 6.3 dB ER. The hybrid switch exhibits a low power penalty of 0.5 dB for all port configurations for a 10 Gb/s data stream, while a rise and fall time of 50 ps shows the ability for high speed switching. The hybrid silicon evanescent modulators and switches can be used as interconnects and further integrated with other optical devices to achieve a transparent optical communication system. Fig. 1. (a) Top view of a device with a CPW electrode (b) The optical image of the device under a microscope (c) Cross section (along A-A') of the hybrid waveguide. Fig. 2. Modulation efficiency of one port of a silicon hybrid switch with different input powers at 1540 nm. Fig. 3. Transmission of a switch as a function of reverse bias. Fig. 4. Experimental electrical and optical response together with the RC fitted estimation. Fig. 6. BER versus optical received power for all port configurations at 10 Gb/s with a 2^31-1 NRZ PRBS. Inset is the rise and fall time response measured between 10% and 90% with 200 ps/div.
3,089.8
2008-12-08T00:00:00.000
[ "Physics" ]
Radio Polarization of Millisecond Pulsars with Multipolar Magnetic Fields NICER has observed a few millisecond pulsars for which the geometry of the X-ray-emitting hotspots on the neutron star has been analyzed in order to constrain the mass and radius from X-ray light-curve modeling. One example, PSR J0030+0451, has been shown to possibly have significant multipolar magnetic fields at the stellar surface. Using force-free simulations of the magnetosphere structure, it has been shown that the radio, X-ray, and γ-ray light curves can be modeled simultaneously with an appropriate field configuration. An even more stringent test is to compare predictions of the force-free magnetosphere model with observations of radio polarization. This paper attempts to reproduce the radio polarization of PSR J0030+0451 using a force-free magnetospheric solution. As a result of our modeling, we can reproduce certain features of the polarization well. INTRODUCTION A millisecond pulsar (MSP) has an extremely small spin period (P ∼ 1 − 30 ms) and differs from ordinary pulsars in that it has an extremely small spindown rate and exists in a binary system (Lorimer 2008). Their short periods are thought to be caused by accretion of matter from a donor star (Alpar et al. 1982). A pulsar's magnetic pole emits cones of bright radio emission when it rotates rapidly, sweeping around like a lighthouse, and the interval between pulses is as precise as that of an atomic clock. Radiation emitted by pulsars is generally believed to be produced in pair plasma outflow streaming along the dipolar magnetic field lines (Philippov et al. 2020; Melrose et al. 2021; Philippov & Kramer 2022). However, it is still theoretically unclear how pulsars produce such coherent sources of radio waves. Nevertheless, over the past decades, pulsars have proven to be fascinating laboratories for studying fundamental physics. For example: (1) measuring their masses and radii can constrain the equation of state for nuclear matter (Raaijmakers et al. 2019; Bogdanov et al. 2021), (2) measuring their motion can help test general relativity (GR) (Kramer et al. 2006; Kramer & Wex 2009), (3) pulsar timing can be used to detect the gravitational-wave background (Arzoumanian et al. 2018, 2020), and finally (4) they are good probes for studying the local environments and properties of their host galaxies (Coles et al. 2015; Jones et al. 2017). We can get constraints on the emission physics from the pulsar magnetosphere through multiwavelength analysis. For example, NICER has obtained detailed X-ray observations of hot spots present in the pulsar PSR J0030+0451 (Miller et al. 2019; Riley et al. 2019), which has allowed us to constrain its magnetic field geometry (Bilous et al. 2019). Their results showed that the magnetic field is far from a simple dipole but favors multipolar components at the stellar surface. The shape and location of the hot spots observed in thermal X-rays are the footprints of open magnetic field lines. Here, active pair production occurs, which leads to particle bombardment of the stellar surface and production of X-rays. The thermal X-ray light curves of PSR J0030+0451, together with the radio and γ-ray light curves produced in the outer magnetosphere, were successfully modeled by Chen et al. (2020) and Kalapotharakos et al. (2021). This modeling confirmed that a multipolar magnetic field is key in understanding this system and can be further used to constrain the magnetic inclination angle.
The magnetic field in ordinary pulsars has been shown to evolve through the Hall effect and Ohmic dissipation in the crust mediated by free electrons (Cumming et al. 2004; Pons & Geppert 2007; Gourgouliatos & Cumming 2014; Bransgrove et al. 2018), and ambipolar diffusion driven by binary scattering processes in electron-proton-neutron plasma (Goldreich & Reisenegger 1992; Castillo et al. 2017; Bransgrove et al. 2018). Modeling the interior and exterior field consistently is difficult, and in fact, most models for the interior field in the star do not model the magnetosphere and just have some fixed exterior boundary, and vice versa. The dynamical interplay between these regions is crucial to understand some magnetospheric phenomenology. Global magnetohydrodynamic simulations with an initial dipolar magnetic field for ordinary pulsars have shown that after birth the system evolves to a state with significant power in the multipolar components (e.g., Sur et al. 2020). Higher-order multipoles near the surface of the star have been proposed to activate pair production that is supposed to power the radio emission (Gil et al. 2006). However, these multipoles would dissipate within a few million years due to Ohmic decay (Gourgouliatos & Cumming 2014) even if they were generated after an ordinary pulsar is recycled into an MSP. The formation process of MSPs through accretion may also change the magnetic field configuration of MSPs, either through burial of the magnetic field (Romani 1990; Melatos & Phinney 2001; Payne & Melatos 2004), or due to field migration as the neutron star spins up (Ruderman 1991; Chen & Ruderman 1993; Chen et al. 1998). The magnetic field is therefore challenging to model in MSPs. The dipolar field model has been commonly used in various studies, such as determining the magnetic field strength from the dipole spindown and explaining the inverse relationship between spin period and pulse width. However, the presence of non-dipolar magnetic field configurations has an impact on various aspects of pulsar astrophysics, including birth velocities (Radhakrishnan 1984; Bailes 1989) and the interpretation of multi-wavelength magnetospheric emission. Therefore, it is crucial to have a precise representation of the magnetic field when studying MSP-related phenomena.
One of the most important tools to connect the pulsar magnetic field geometry to observations is radio light curve and polarization modeling. In the past, the rotating vector model (RVM) (Radhakrishnan & Cooke 1969) was widely used in this respect. The model assumes that the emission comes from near the magnetic pole, and the polarization is determined by the direction of the magnetic field at the emission point. As the line of sight cuts across the magnetic pole, the plane of linear polarization sweeps a characteristic S-shaped swing over a single pulse period. Radio pulsar mean profiles are typically interpreted according to the hollow cone model (Manchester & Taylor 1977). The main assumptions in this model are the following: (1) the emission is generated in the inner magnetosphere (where the magnetic field B is thought of as a dipole), (2) the emission travels in a straight line, (3) cyclotron absorption may be ignored, and finally (4) the polarization is determined at the emission point. Analytically, the change of the position angle (PA) with respect to the rotation phase (ϕ) of the pulsar is given by tan(PA) = sin α sin ϕ / (sin δ cos α − cos δ sin α cos ϕ), where α is the angle between the magnetic moment axis and the rotation axis while δ is the angle between the rotation axis and the line of sight of the observer. According to the RVM, the PA is determined solely by the projection of the magnetic field on the sky plane (see figure 1). In this convention, the x-axis is along the projection of the angular velocity Ω on the sky plane. Another modification to the RVM was the model presented by Blaskiewicz et al. (1991) (BCW), which incorporates relativistic effects, such as aberration due to the significant corotation component of the plasma velocity, and retardation. The BCW model calculates the position angle by finding the direction of the acceleration of radiating charged particles at the emission point. The relativistic effect causes the polarization profile to lag the intensity profile. The magnitude of the lag of the inflection point of the PA profile in the presence of aberration and retardation is proportional to r em /(cP), where P is the period, r em is the emission radius, and c is the speed of light. Based on the observed lag, the emission radius can be inferred. Although the RVM with a dipolar field successfully explains pulse profiles of many pulsars qualitatively (e.g., Manchester et al. 1975; Johnston et al. 2023) and even quantitatively (e.g., Desvignes et al. 2019), observations of MSPs often show flat, distorted, or even random PA profiles hinting towards possible non-dipolar configurations (Backer et al. 1976). In addition, propagation of radio waves through the magnetospheric pair plasma can further change the polarization properties (Petrova & Lyubarskii 2000; Wang et al. 2010; Beskin & Philippov 2012; Hakobyan et al. 2017; Galishnikova et al. 2020), which is not included in the RVM model. It is important to note that some MSPs have flux densities similar to ordinary pulsars and their radio profiles are only marginally more complex (Kramer et al. 1998). Nonetheless, physical conditions in the magnetospheres of MSPs and their surface magnetic field structure could differ considerably owing to their evolutionary history, which can result in changes to their observed appearance (Philippov & Kramer 2022). In this paper, we are interested in comparing radio polarization with observations for the MSP PSR J0030+0451 taking into account a multipolar magnetic field configuration as obtained in Chen et al.
(2020). Developing a better understanding of the multipolar structure and surface field will help to constrain not only radio emission sites, but also the evolution of the magnetic field. As a first step, we use a modified RVM: we assume that the radio emission is produced as plasma normal modes at a radius r em and then propagates along a straight line, where its PA evolves adiabatically following the magnetic field. The PA then freezes at a distance h from the emission point, where the magnetospheric plasma density has dropped so that the radio waves follow vacuum propagation afterwards. We take into account the aberration effect self-consistently, but neglect other propagation effects for now. With this approach, we systematically obtain PA sky maps and curves with r em and h as free parameters, which we then compare with the observed PA signatures. The paper is organized as follows: in section 2 we discuss the magnetic field model, in section 3 we derive an expression for the PA considering aberration, in section 4 we describe our method, in section 5 we present the results, and finally, in section 6 we discuss conclusions. THE MAGNETOSPHERIC MODEL At the polar caps of a plasma-filled magnetosphere, electric current flows on open field lines. It has been reported that PSR J0030+0451 has a spin period of 4.18 ms with hotspots that are beyond a simple dipole, requiring multipolar magnetic field components (Bilous et al. 2019). We use the field configuration first deduced by Chen et al. (2020), who showed that a quadrudipolar magnetic field can be used to reproduce the light curves in X-rays, gamma rays, and radio waves for PSR J0030+0451. The analytical form of this field in vacuum is the sum of a dipolar component, with dipole moment p = p 0 (0, 0.985, 0.174), and a quadrupolar component whose strength and geometry are specified in terms of the neutron star radius R. The dipole inclination angle is 80° and the quadrupolar component is centered at (0, 0, −0.4R). This exact set of parameters was used in Chen et al. (2020). A 3-dimensional view of the field is shown in figure 2. The field is symmetric about the y-z plane, and the polar caps are located in the southern hemisphere, one of them circular and the other crescent-shaped. The structure of the force-free magnetic field is shown in figure 3. Beyond the light cylinder radius (R LC ), which is located at 20R for PSR J0030, the field lines are open, and opposite polarities are separated by a current sheet (Spitkovsky 2006; Kalapotharakos & Contopoulos 2009; Chen & Beloborodov 2014; Philippov & Spitkovsky 2018; Hakobyan et al. 2019). EFFECT OF ABERRATION In addition to our force-free magnetic field solution, we assume that the outflowing plasma has a Lorentz factor γ = 1/√(1 − v²/c²) ≫ 1 and moves along the magnetic field lines. Let us also assume that the emission comes from a sphere of constant radius (r em ). In the case of emission from one magnetic pole, there is a single point on this sphere where particles beam along the observer direction n at a particular time. In ordinary pulsars, the emission direction is solely dependent on the direction of the magnetic field. But since PSR J0030+0451 is an MSP with a spin period of 4.18 ms, the aberration angle at the emission point is approximately Ωr em /c (∼ 0.3 at r em = 5R), which cannot be neglected compared to the angular size of the emission cone (1/γ). As inferred from aberration-retardation modeling, typical values of the emission radius for MSPs are less than 100 km, which translates to ∼ 0.05 − 0.5 R LC (Rankin et al.
2017). On the other hand, ordinary pulsars emit radiation at a height of ∼ 300 − 1000 km (Gupta & Gangadhara 2003; Dyks et al. 2004; Johnston et al. 2023), where the magnetic field is dipolar. The aberration angle of an ordinary pulsar, for example PSR J1808-0813 with period P = 0.876 s, is Ωr em /c ∼ 1×10⁻⁷, where r em is taken to be ∼ 693 km (Mitra et al. 2023). This is negligibly small compared to the aberration angle of PSR J0030+0451. In this work, we assume 0.1 R LC ≤ r em ≤ 0.5 R LC . The location of the emission point, r em , is determined by demanding the direction of the particle propagation, along which the radiation is beamed, to be along the observed line of sight, n (Wang et al. 2010; Beskin & Philippov 2012). The speed of the outflowing plasma is close to the speed of light, c, resulting in an emission direction of the form n = κb + β, where b = B/B is a unit vector in the direction of the magnetic field, β = Ω × r/c is the corotation velocity in units of c, and κ is a dimensionless constant. Its magnitude can be obtained by demanding that the flow is away from the star and that n • n = 1. Normal modes Normal modes in plasma physics refer to the oscillations and wave patterns that a plasma can sustain due to the interaction between electromagnetic forces and charged particles. Assuming a cold pair plasma, the wave dispersion relation in the rest frame (Philippov & Kramer 2022) involves the plasma frequency ω p , the wave frequency ω, and the components k∥ and k⊥ of the wave vector k parallel and perpendicular to the background magnetic field, respectively. The solution to equation 9 gives three linearly polarized waves: the extraordinary mode (X-mode), having the electric field (E) perpendicular to both the background magnetic field and the k-vector; and two ordinary modes (O-modes), with the E-field in the plane of the k-vector and the background magnetic field. In the pulsar magnetosphere, only radio waves corresponding to plasma normal modes can propagate and escape. The new Position Angle Given that we found the location of the emission point, we next derive the polarization as seen by an observer on Earth. Radio waves are emitted as plasma normal modes as discussed before. In the nearly force-free magnetosphere, the plasma is rotating together with the NS, so the rotation will affect the plasma normal modes. Here we first obtain the normal modes in the corotating frame, then transform to the laboratory frame, because the normal modes are relatively easy to write down in the corotating frame. In the corotating frame, the background electric field is zero and the polarization of the wave modes is determined by the background magnetic field and the wave vector in the same way as in the plasma rest frame (see, e.g., Wang & Lai 2007).
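As an aside, the geometric ingredients described so far can be made concrete in a few lines of Python. The sketch below evaluates the classical RVM position angle quoted in the introduction and solves the beaming condition n = κb + β with |n| = 1 for the aberrated emission direction; a point dipole stands in for the full quadrudipolar force-free field. The function names, the dipole stand-in, and the root-selection rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rvm_position_angle(phase, alpha, delta):
    """Classical RVM PA (radians) versus rotation phase (radians).

    alpha: angle between magnetic and rotation axes; delta: angle between
    rotation axis and the line of sight.
    """
    num = np.sin(alpha) * np.sin(phase)
    den = np.sin(delta) * np.cos(alpha) - np.cos(delta) * np.sin(alpha) * np.cos(phase)
    return np.arctan2(num, den)

def dipole_field(r_vec, p_vec):
    """Point-dipole magnetic field (arbitrary units); a stand-in for the
    quadrudipolar force-free field used in the paper."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (3.0 * np.dot(p_vec, r_hat) * r_hat - p_vec) / r**3

def emission_direction(r_vec, B_vec, omega_vec, c=1.0):
    """Solve n = kappa*b + beta with |n| = 1 for the outgoing photon direction.

    beta = Omega x r / c is the corotation velocity; the root for which the
    flow points away from the star (n . r_hat > 0) is selected.
    """
    b = B_vec / np.linalg.norm(B_vec)
    beta = np.cross(omega_vec, r_vec) / c
    bb = np.dot(b, beta)
    # |kappa*b + beta|^2 = 1  ->  kappa^2 + 2*kappa*bb + |beta|^2 - 1 = 0
    disc = bb * bb - (np.dot(beta, beta) - 1.0)
    r_hat = r_vec / np.linalg.norm(r_vec)
    for kappa in (-bb + np.sqrt(disc), -bb - np.sqrt(disc)):
        n = kappa * b + beta
        if np.dot(n, r_hat) > 0:          # flow must be directed away from the star
            return n
    raise ValueError("no outward-pointing solution at this point")
```

In the limit β → 0 the emission direction reduces to ±b and the polarization follows the projected field, recovering the familiar RVM geometry that the paper uses as its dipole consistency check.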
Transforming back to the laboratory frame, we can obtain the polarization as seen by a distant observer. In what follows, we consider the limit where the refractive index n ≡ ck/ω ≈ 1. Here, the wave vector in the laboratory frame is k and the frequency is ω. The background magnetic field in the laboratory frame is B, and the magnetospheric electric field is E = −β × B, where β = Ω × r/c is the dimensionless rotation velocity. In the corotating frame (we use primed quantities for the corotating frame), the background magnetic field becomes B′, where Γ = 1/√(1 − β²) is the Lorentz factor corresponding to the rotation velocity, and the wave frequency and wave vector become ω′ and k′, respectively. Considering an O mode with electric field E′ w in the plane of the wave vector k′ and the background magnetic field B′, we can write down the electric and magnetic fields in the wave. Now, transforming back to the laboratory frame, we obtain the wave electric field. Plugging equations (13) and (14) into equation (15), and after some algebra, we obtain the transverse wave field, where we have assumed ω = ck, and ⊥ indicates the component perpendicular to the wave vector k. Therefore, choosing a coordinate system such that k is along ẑ, the wave polarization in the laboratory frame follows, and the new position angle including the effect of aberration is given by equation (18). In order to obtain the PA curves, we first construct sky maps of the polarization. Since the force-free solution reaches a steady state in the corotating frame, we just need one snapshot from the force-free simulation to create the all-sky map. As a demonstration of principle, we consider emission coming from a spherical surface with radius r em . This is the point in the pulsar magnetosphere where an outgoing radio wave is assumed to be emitted. Because of the large uncertainties in the emission physics, we consider a large range of emission heights, from close to the surface all the way up to 10 times the radius of the pulsar.
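Since the explicit transformation formulas are not reproduced above, the following sketch spells out the standard special-relativistic field transformation that such a calculation would rely on, plus a simple position-angle evaluation from the field components transverse to the propagation direction (to be evaluated at the freezing point, as described in the next paragraph). The paper's equations (10)-(18) may differ in detail, so treat this as an illustrative stand-in rather than the authors' exact formulas.

```python
import numpy as np

def lorentz_factor(beta_vec):
    return 1.0 / np.sqrt(1.0 - np.dot(beta_vec, beta_vec))

def field_in_corotating_frame(B_lab, beta_vec):
    """Standard Lorentz transformation of the background field into the local
    corotating frame, with the force-free field E = -beta x B in the lab frame:
      B' = Gamma*(B - beta x E) - (Gamma^2/(Gamma+1)) * beta * (beta . B)
    (with this E, the transformed electric field vanishes, as stated above).
    """
    E_lab = -np.cross(beta_vec, B_lab)
    gamma = lorentz_factor(beta_vec)
    return (gamma * (B_lab - np.cross(beta_vec, E_lab))
            - (gamma**2 / (gamma + 1.0)) * beta_vec * np.dot(beta_vec, B_lab))

def position_angle_transverse(B_vec, k_hat):
    """PA of the field projection transverse to the propagation direction.

    This is the simple 'polarization follows the projected field' prescription
    (the RVM-like limit); the paper's equation (18) additionally folds the wave
    field back into the laboratory frame.
    """
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, k_hat)) > 0.99:     # avoid a degenerate reference axis
        ref = np.array([1.0, 0.0, 0.0])
    e1 = np.cross(ref, k_hat)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(k_hat, e1)
    return np.arctan2(np.dot(B_vec, e2), np.dot(B_vec, e1))
```

In an actual sky-map calculation the field would be sampled at the freezing point r = r em + h n, with the rotation of the magnetosphere during the light-travel time over h taken into account, exactly as the text describes next.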
Suppose each emission point has a polar angle θ and an azimuth angle ϕ; the emission direction n is then determined using equation (8). We denote the polar angle of n as θ e and its azimuth angle as ϕ e . After emission, the ray propagates in a straight line, so it will reach an observer with polar angle θ obs = θ e , and the corresponding phase is determined following Bai & Spitkovsky (2010), which takes into account the phase delay due to finite light travel times. This step enables the mapping from the emission point (ϕ, θ) to the observer plane (ϕ obs , θ obs ). For each ray, the polarization first evolves following the background magnetic field, then it freezes at a distance h from the emission point. This height is the distance above r em where the radio wave decouples from the plasma and propagates freely before reaching the observer without changing its polarization. Therefore, we determine the polarization angle at the freezing point r = r em + hn using the local magnetic field, taking into account the rotation of the magnetosphere during the time interval it takes the ray to propagate through a distance h. The polarization angle is then obtained through equation (18): for each ray we put the observer along z and the rotation axis of the NS in the x-z plane, then we find the transverse components of the magnetic field B x , B y and the rotation velocity β x , β y at the freezing point. Note that, numerically, both the numerator and the denominator in equation (18) are zero at the emission point by definition. Therefore, a small but finite h is needed even if we consider the usual RVM limit where the polarization is determined immediately at the emission point. For our sky map, we select a 200 × 200 grid uniformly spaced in θ obs and ϕ obs . A ray originating from the emission surface at (ϕ i , θ j ) carries the information of the PA and maps to a particular grid point (k, l) on the sky map. Once we have our sky map, we can fix the observing angle θ obs and plot the PA as a function of ϕ obs . RESULTS As a first step, we perform a simple test to determine whether we get the usual result for a purely dipolar magnetic field. This is shown in figure 4, where two S-shaped swings from the two magnetic poles are clearly visible. The wider S curve (which gets broken into two parts by the phase wrapping beyond 180°) comes from the magnetic pole farther from the observer, while the narrower one comes from the pole closer to the observer. We also see the effect of aberration, which makes the curves shift towards the right, i.e. slightly towards higher ϕ obs . This effect arises because aberration changes the orientation of the E-vector in the wave and the orientation of the polarization ellipse, which leads to changes in the PA profile. As per the RVM, most polarization observations are used to identify the Ω−µ meridional plane, which is the plane containing the magnetic axis µ and the pulsar rotation axis Ω. In this scenario, the steepest gradient or the inflection point of the PA curve is contained within the Ω−µ plane, and coincides precisely with the midpoint of the pulse profile. Nevertheless, in the observer's frame, as Blaskiewicz et al. (1991) and Dyks et al.
(2004) pointed out, the aberration and retardation effects cause the PA inflection point to appear delayed relative to the midpoint of the pulse profile. The magnitude of this delay is given by C r em /(cP), where C is some constant coefficient (see equation 3). Next, we analyse the force-free magnetic field model for PSR J0030+0451. Since the radio emission comes from the open field lines, we restrict our analysis to only using open field lines to obtain the sky map corresponding to the two magnetic poles. A view of the polar caps from different heights above the magnetic poles is shown in the left panel of figure 5. As the height increases, the poles become more oval-shaped. The sky map at r em = 3R is also shown in the right panel of figure 5 as an example. In figure 6, we plot the PA curves by fixing the viewing angle to θ obs = 54° as reported in Bilous et al. (2019). Several different effects are explored, including the effect of varying the emission radius, varying the freezing height, and examining the effects of aberration. Our main aim is to reproduce the observed PA curves of PSR J0030+0451 at 430 MHz and at 1.4 GHz, as shown in Gentile et al. (2018). When comparing different PA curves, we focus more on their shape than on their exact values. Let us first discuss the PA curve observed at 430 MHz, which is shown in figure 6. It has two distinct parts corresponding to the two different poles (Gentile et al. 2018). The one between ϕ obs = 20°-100° comes from the smaller intensity pulse while the other comes from the higher intensity pulse. The latter demonstrates a flat and distorted PA profile, which hints that non-dipolar magnetic field effects are present. If the emission comes from very close to the surface, for example r em ∼ 3R, we can see only one of the magnetic poles (see figure 5), and a very small part of the PA curve. Very far from the surface (r em > 8R), the PA curve becomes S-like due to the field being dipolar there (see panel c in figure 6). Varying the freezing height (h) while fixing the emission radius also yields similar results. Hence, we identify a region at r em ∼ 5R where the PA becomes flat. This is shown by the blue dots obtained at r em = 5R and h = 0.4R in panel (a) of figure 6. We see that for the higher intensity pulse, our model roughly reproduces the observed PA trend, but there is a significant discrepancy for the low intensity pulse. When either decreasing or increasing h, an abrupt discontinuity is seen at ϕ ∼ 290°, as shown by the squares obtained at r em = 5R and h = 1.0R (panel b). This feature is only present when aberration is taken into account, in which case the E y component becomes close to zero along the trajectory. Without aberration, the curve reverses its direction as shown by the green diamonds in panel (d). On the other hand, when neglecting the effect of aberration and choosing a higher emission radius/freezing height, for example r em = 7R and h > 4R (panel e), or r em = 8R (panel f), it appears that the observed PA trend for both pulses can be more or less reproduced. However, the aberration effect is particularly important for MSPs and should not be neglected. Our failure to reproduce the PA trend for both pulse peaks at 430 MHz when aberration is included may suggest that either other propagation effects need to be taken into account, like GR light bending close to the neutron star (Poutanen 2020) and refraction of the radio waves by the plasma (Beskin & Philippov 2012), or the field configuration obtained by Chen et al.
(2020) is not the actual field of PSR J0030+0451. A few other possible field geometries exist, as shown by Kalapotharakos et al. (2021). Our modeling indicates that radio polarization is a more stringent test that may be able to break the degeneracy and select a more realistic field configuration. Next, we examine the PA curve at 1.4 GHz, which again has two parts corresponding to the two different intensity peaks in its pulse profile (see figure 1 in Gentile et al. 2018). The PA for the lower intensity pulse, shown in figure 7, resembles the tail of an S-shaped curve resulting from a dipolar magnetic field. To explain this, we explore emission radii/freezing heights greater than 5R because the dipolar components of the magnetic field are expected to become dominant over the quadrupolar components at larger distances. As we can see, the PA curve considering aberration and corresponding to r em = 8R and h = 9.9R exhibits a similar S-shaped swing (panel (a) in figure 7). Decreasing the emission radius (panel e) or the freezing height (panel b) makes the PA curve straight. Furthermore, neglecting the effect of aberration with the same set of parameters causes the PA curve to be concave (panel f) and thus different from the observed feature. A comparison of panels (a), (c), and (d) in figure 7 indicates that a higher freezing height is necessary to recover the observed PA. Furthermore, we cannot go lower than r em = 5R because the line of sight of the observer does not cross through this magnetic pole. In order to explain the PA corresponding to the higher intensity peak, which looks non-dipolar, we study emission coming from close to the surface of the NS. The PA is divided into three sections: an initial rise at the beginning, a distorted shape in the middle, and a decreasing tail at the end (figure 8). It turns out that this feature can be explained by varying the freezing height along the pulse. As the polarization is set when the emission decouples from the plasma, i.e. when the density sufficiently drops, decoupling at different distances might occur because of the plasma density distribution across magnetic field lines. The observed features can be recovered at r em = 2R, starting with h = 0.4R for the initial rise, h = 0.1R for the middle, and h = 0.01R for the decreasing tail. The error bars (standard deviation) correspond to a variation of 100% in h, and the PA values are reported as the mean. We note that very different emission radii r em and freezing heights h are needed at the two poles in order to reproduce the observed PA trend at 1.4 GHz. Although the plasma density at the two poles could be different, leading to different r em and h, it may be hard to account for such a large difference. This may again indicate that we either need to consider propagation effects, in particular GR light bending and plasma refraction close to the neutron star, or our field configuration is not the exact field possessed by PSR J0030+0451.
CONCLUSIONS AND DISCUSSIONS Polarization modeling is a powerful tool and a very stringent test to constrain the magnetic field configuration of pulsars. An effective field model must pass all tests, including the radio, X-ray, and gamma-ray light curves, and the radio polarization. In this paper, we compared radio polarization with observations for the MSP PSR J0030+0451 (which has detailed X-ray hotspot modeling) based on a multipolar magnetic field configuration, and we present simple equations for calculating the PA that can be used in parameter-search models, provided all the physics is parameterized using r em and h. We used the force-free configuration obtained by Chen et al. (2020) that can simultaneously produce the radio, X-ray, and gamma-ray light curves. In our models we calculate the polarization of plasma normal modes, taking into account the co-rotational motion of the plasma. We derive an expression for the PA by considering an O-mode instead of calculating the properties of the emission of charged particles in vacuum as described in Blaskiewicz et al. (1991). We modified the RVM to see if the multipolar field configuration could produce the observed radio polarization features. To determine the PA curves, we first created sky maps of the polarization taking into account aberration, and then selected a viewing angle. For PSR J0030+0451, the viewing angle was fixed at θ obs = 54°. We were able to reproduce some of the observed features of the PA swing for both the 430 MHz and 1.4 GHz observations by constraining, in each case, the heights at which the emission is produced. When considering aberration, for 430 MHz, the PA swing of the higher intensity pulse could be explained by emission coming from an effective radius of 5R with h = 0.4R. However, for the lower-intensity pulse, the PA swing could not be reproduced at any chosen emission radius/height. On the other hand, without taking aberration into account, both pulses can be roughly reproduced simultaneously by emission from an effective radius greater than 7R. It is not possible to explain the high and low intensity pulses at 1.4 GHz simultaneously with any single emission radius. Hence, we examined effective emission radii and heights separately for the two pulses to determine whether the observed features were reproducible. Our first finding was that when aberration was not considered, the PA swing for the smaller intensity pulse did not match the data at any radius. Therefore, we considered aberration in all our models for 1.4 GHz. Considering emission from a radius greater than 8R, it is possible to obtain a PA swing for the smaller intensity pulse that resembles the tail of an S-curve. The higher intensity pulse, with a distorted PA swing, could be explained by emission from 2R and three different freezing heights. We note, therefore, that an important piece of physics behind the MSP radio polarization model is the inclusion of the aberration effect, since without it many of the observed characteristics cannot be explained. All our models, however, ignore propagation effects. Radio waves propagate through dense magnetospheric plasma, where the polarization signature first evolves adiabatically following the magnetic field, before becoming permanent as the waves reach a larger distance where the plasma density has dropped (e.g., Petrova & Lyubarskii 2000; Wang et al.
2010; Beskin & Philippov 2012). Effects like wave refraction, cyclotron absorption, and the transition from geometrical optics to vacuum propagation can all influence the observed polarization. These propagation effects will depend on the properties of the magnetospheric plasma, e.g., the plasma density and its radial dependence, the drift motion of plasma particles, and the distribution function of the outgoing plasma. It has also been found that pulsars whose PA profiles are not fitted with the RVM exhibit a much higher fraction of circular polarization than those with linear polarization (e.g., Johnston et al. 2023). Circular polarization is usually considered to be a result of propagation effects in the magnetospheric plasma (e.g., Melrose & Luo 2004). Furthermore, if the emission is produced close to the neutron star surface, GR light bending also needs to be taken into account (e.g., Beloborodov 2002; Poutanen 2020), which we neglected in this work. A complete framework thus needs to include all the ingredients: a self-consistent magnetic field configuration, GR light bending, and propagation effects in the magnetospheric plasma. We plan to develop such a framework for polarization modeling in the future and combine it with multiwavelength light curve fitting. As a result of high-precision observational measurements, we may be able to constrain not only magnetic field configurations and radio emission sites but also magnetospheric plasma properties, including the density, bulk Lorentz factor, etc. This will give us further insights into the properties of pulsar plasma and radio emission physics. The multiwavelength light curve and radio polarization modeling of PSR J0030+0451 suggest that a multipolar magnetic field may be important near the neutron star surface. A new analysis of the NICER data for PSR J0030+0451 to constrain the size and location of its X-ray hot spots shows that there is a multi-modal structure in the posterior surface (Vinciguerra et al. 2023). Given the uncertainty and degeneracy in X-ray fitting, radio polarization may turn out to be an important constraint to help distinguish between different field configurations. Obtaining a good understanding of the field configuration can have implications for other branches of pulsar physics as well, including the interior magnetic field evolution, the formation of gravitational-wave mountains, and the damping of r-modes when considering a superconducting core. These will be investigated in our future works. Figure 1. The rotating NS is located at the center of the XYZ frame with its magnetic moment axis (p) and rotation axis (Ω). Ω lies in the x-z plane. The observer is located on the z-axis, the green dotted line shows the magnetic field line, while the curly black solid line shows a ray propagating towards the observer. Figure 2. The vacuum magnetic field configuration as seen from the meridional view along the x-axis. The left figure shows the closed magnetic field lines with some of the quadrupolar loops while the right figure shows the open dipolar field lines. The rotation axis is along the z-axis and passes through the center of the star. Figure 3. The force-free magnetic field configuration as seen from (left) the meridional view (along the x-axis), and (right) the equatorial view. The green lines represent the closed field lines while the cyan lines represent the open field lines. Also shown in the background is the location of the current sheets. This field configuration was obtained in Chen et al. (2020). Figure 4.
Left panel: the sky map for a vacuum dipolar magnetic field with an inclination angle of α = 80° and emission radius r em = 3R. The color scale represents PA values while the horizontal black solid line represents PSR J0030+0451's viewing angle from Earth. Right panel: the corresponding PA curves with and without aberration, observed from a viewing angle of 79 degrees and a freezing height h = 0.1R. Figure 5. Left panel: the shape of the polar caps at r = 2R (red), r = 3R (blue), r = 5R (green), and r = 7R (violet), as seen by the observer. Close to the surface, one of the polar caps at 2R is crescent-shaped, while the other pole is more circular in nature. With increasing height, the poles become oval-shaped. Right panel: the sky map at r em = 3R with the color scale representing the PA values. Again, the horizontal black line is the line of sight of the observer, set at 54 degrees. Figure 6. A comparison of the different PA curves, represented by the scatter plots, obtained from our models by varying the freezing height (panel b), the emission radius (panel c), and not including the effect of aberration (bottom row) for the same parameters. The black dots show the observation at 430 MHz. Figure 7. Comparison between our models and the observed PA of PSR J0030+0451 at 1.4 GHz corresponding to the smaller intensity peak in its pulse profile.
8,199.8
2024-02-18T00:00:00.000
[ "Physics" ]
Corona With Lyme: A Long COVID Case Study The longevity of the coronavirus disease 2019 (COVID-19) pandemic has necessitated continued discussion about the long-term impacts of SARS-CoV-2 infection. Many who develop an acute COVID-19 infection will later face a constellation of enduring symptoms of varying severity, otherwise known as long COVID. As the pandemic reaches its inevitable endemicity, the long COVID patient population will undoubtedly grow and require improved recognition and management. The case presented describes the three-year arc of a previously healthy 26-year-old female medical student from initial infection and induction of long COVID symptomatology to near-total remission of the disease. In doing so, the course of this unique post-viral illness and the trials and errors of myriad treatment options will be chronicled, thereby contributing to the continued demand for understanding this mystifying disease. Introduction As the coronavirus disease 2019 (COVID-19) global pandemic enters its third year, its persistence will undoubtedly result in a sustained rise in the population of patients suffering from its unique post-viral illness syndrome, long COVID. Current estimates of the prevalence of long COVID suggest nearly half of all hospitalized patients and a third of all non-hospitalized patients who are infected with SARS-CoV-2 will endure long-term sequelae regardless of a symptomatic or asymptomatic initial infection [1]. The World Health Organization provides a consistent definition for these long-term sequelae included under the long COVID umbrella: "Post COVID-19 condition occurs in individuals with a history of probable or confirmed SARS CoV-2 infection, usually 3 months from the onset of COVID-19 with symptoms and that last for at least 2 months and cannot be explained by an alternative diagnosis. Common symptoms include fatigue, shortness of breath, cognitive dysfunction but also others and generally have an impact on everyday functioning. Symptoms may be new onset following initial recovery from an acute COVID-19 episode or persist from the initial illness. Symptoms may also fluctuate or relapse over time" [2]. The pathophysiology behind long COVID remains primarily theoretical in its understanding, although recent data have identified promising correlations for further investigation, including specific immunoglobulin signatures, viral titers, and/or autoantibodies associated with the development of the syndrome [3]. The overarching presumption of long COVID resulting from a hyperinflammatory state induced by SARS-CoV-2 infection has spurred a series of discussions on how this mechanism results in symptomatic injury. Such theories include direct neurovascular damage from the virus itself, embolic damage from the hypercoagulable states commonly induced by acute COVID-19 infection, microbiota dysfunction, or secondary autoimmune destruction brought about by this excessive inflammatory response. Comparisons are also commonly drawn between long COVID and several poorly understood chronic conditions with coinciding symptoms, such as myalgic encephalomyelitis and/or chronic fatigue syndrome (ME/CFS), postural orthostatic tachycardia syndrome (POTS) and/or dysautonomia, and mast cell activation syndrome (MCAS) [4]. Despite this increase in awareness and well-intended discussion, millions of patients currently suffer or may soon suffer from long COVID and require more immediate attention from physicians throughout the spectrum of medical practice.
Superficially, the deteriorations in the health of these patients have already resulted in significant decreases in productivity and/or quality of life while simultaneously exacerbating the inequities that plague our healthcare systems. More specifically, the "long COVID diagnosis" is laden with deeper controversy, as many "chronic-fatigue-like" syndromes are often met with skepticism by misinformed healthcare providers. Therefore, it is imperative to understand and legitimize both the pathophysiology and patient experience of long COVID while advocating for continued investment in research into viable treatment options. The following case report details an otherwise healthy and health-literate medical student's journey with long COVID from the pandemic's earliest stages to the present day. It intends to contribute to the demand for knowledge and understanding of this unique post-viral illness as a genuine and far-reaching medical syndrome. It will also seek to denote individual successes and failures in recognizing and treating long COVID, including misdiagnoses surrounding a concurrent acute Lyme disease infection, as well as outline future directions in its management. Case Presentation A previously healthy 26-year-old female medical student living in New York, NY, was one of the first of her colleagues to be symptomatically infected with the then-novel coronavirus, SARS-CoV-2, in March 2020. Although COVID-19 testing was not available at the time to non-hospitalized patients, the presumed diagnosis of acute COVID-19 infection was confirmed in April 2020 during volunteer antibody testing and blood plasma donation. The patient's only significant past medical history consisted of acquired hypothyroidism, which has remained controlled since the institution of levothyroxine in 2018. Her initial infection resulted in two weeks of acute symptoms, followed by spontaneous recovery. Week one consisted of classic "flu-like symptoms," including fevers, chills, dry cough, headaches, mild dyspnea, and moderate fatigue with myalgias. At the start of week two, the first week's symptoms subsided and were replaced with anosmia and ageusia, with complete resolution in four to five days thereafter. The patient remained asymptomatic until mid-July 2020, when she abruptly began experiencing the symptoms that eventually contributed to her long COVID diagnosis. At onset, the patient described a sudden and unprovoked burning sensation starting from her forehead and radiating through her entire scalp and down her neck. The sensation persisted for several hours before eventually dissipating and being followed by intense frontotemporal headaches, chest tightness with some dyspnea, palpitations and tachycardia with anxiety, dizziness with episodes of near-syncope on sitting or standing, and blurred vision. In the days following, the aforementioned symptoms would resume upon waking each morning and persist throughout the day. They were eventually coupled with significant fatigue, mild cognitive impairment or "brain fog" with impaired focus and memory recall, loss of appetite, diarrhea, uncharacteristic heat intolerance, and diffuse myalgias. The final symptom to appear during this cascade was severe, right-sided shoulder joint and/or muscle pain with radiation to the neck and right upper extremity, which led the patient to seek treatment.
Due to her history of hypothyroidism and concerns of exogenous thyrotoxicosis secondary to levothyroxine treatment, the patient initially sought care from endocrinology but was found to be euthyroid. She was later directed to primary care for the management of her right shoulder pain, at which she endorsed uncharacteristically extensive time spent outdoors in wooded areas of northern New Jersey prior to symptom onset due to the social distancing protocols of the time. A comprehensive workup was then completed with additional considerations for autoimmune or infectious etiologies. This workup included a complete blood count (CBC), comprehensive metabolic panel (CMP), lipid panel, hemoglobin A1c, erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), rheumatoid factor, antinuclear antibodies (ANA) with reflex, vitamin B12, vitamin D, and a Lyme disease antibody panel. All results were unremarkable except for the Lyme disease antibody panel, which contained two positive bands (41 KD IgG and 23 KD IgM) and led the patient to be referred to infectious disease for further workup. Despite the lack of the minimum five positive bands on the antibody panel to support a Lyme disease diagnosis, the patient's clinical presentation led primary care and infectious disease to agree on initiating treatment. In early August 2020, she completed a two-week regimen of doxycycline 100 mg twice a day for treatment of a perceived early-disseminated Lyme disease infection. At this time, she was also able to more easily obtain outpatient COVID-19 testing and was found to still be both polymerase chain reaction (PCR)-positive and antibody-positive for the virus in July 2020 and September 2020. (Of note, COVID-19 titer quantification was not available to the general public at this time.) Following the completion of the doxycycline regimen, the patient noted no improvement in her symptoms other than a mild reduction in her right shoulder pain. Furthermore, at treatment completion, she endorsed worsening blurred vision with the introduction of bilateral floaters, worsening of palpitations with tachycardia and/or anxiety, worsened heat intolerance, and an increased frequency of near-syncopal events, which all interfered with all physical activity. Her headaches remained near-constant and began alternating between tension-like and left-sided, migraine-like presentations with concurrent nausea and left-sided allodynia of the scalp. In the weeks thereafter, these symptoms were then coupled with new-onset sleep disturbances with insomnia and intermittent night terrors. In addition, the patient's previously recovered olfactory sense was replaced with strong phantosmia. Per the patient, this caused the smells of otherwise benign foods, perfumes, or body odors to be swapped with foul smells of "burning rubber" or "rotting meat." Lastly, the patient endorsed new-onset oligomenorrhea, with her menstrual cycles ranging from 60-70 days since her initial infection and her longest and most current cycle at the time lasting 82 days. As her symptoms continually failed to improve, the patient sought care from a variety of specialists throughout the final months of 2020. Infectious disease ruled out several tick-borne illnesses through negative results of Babesia, Anaplasma, and repeat Lyme disease serology, as well as obtained negative Giardia, Entamoeba histolytica, and Cryptosporidium sampling. 
Neurology completed an extensive workup of her headaches including brain magnetic resonance imaging (MRI) without contrast but offered no findings. She was initially diagnosed with migraine-like headaches but found no benefit from common abortive therapies, including triptans and calcitonin gene-related peptide (CGRP) antagonists. She was eventually rediagnosed with a novel "post-COVID headache" and found some relief with treatment like that for tension-type headaches, including Excedrin or Fioricet as needed for pain and prochlorperazine for nausea. Endocrinology performed a more extensive workup for fatigue and oligomenorrhea, including a repeat CBC, CMP, lipid panel, glycosylated hemoglobin (HbA1C), thyroid function panel, vitamin D level, and urinalysis, as well as morning cortisol, fasting insulin, follicle-stimulating hormone (FSH), luteinizing hormone (LH), estradiol, testosterone, dehydroepiandrosterone sulfate (DHEA-S), prolactin, iron panel with total iron binding capacity (TIBC) and ferritin, and vitamin B12 with folate. They also had the patient complete a 14-day continuous blood glucose monitoring protocol by placing a FreeStyle Libre 14-day device (Abbott Laboratories, Chicago, IL) subcutaneously. By January 2021, a diagnosis of polycystic ovarian syndrome (PCOS) with persistent nocturnal hypoglycemia was reached based on results from the continuous glucose monitoring, a positive LH:FSH ratio of 1.77, and her documented oligomenorrhea. Metformin escalation treatment was initiated alongside diet modifications, which led to a moderate reduction in cycle length and morning hypoglycemic symptoms. In February 2021, cross-conferencing between endocrinology, primary care, and neuromuscular medicine led to a formal diagnosis of post-acute COVID-19 syndrome or long COVID, and provided a formal referral to a comprehensive post-COVID care center. Upon evaluation at the post-COVID care center, no residual pulmonary deficits were noted on chest imaging and pulmonary function tests (PFTs), but a new-onset right bundle branch block with occasional premature ventricular contractions was noted on multiple ECGs. However, after a negative echocardiography (ECHO), exercise stress test, and sleep study, no intervention was indicated. With supervision from her primary care provider, post-COVID care team, and medical school faculty, the patient simultaneously attempted her own self-rehabilitation. Through extensive trial and error and anecdotal evidence from programs designed for POTS rehabilitation, she constructed an exercise program using horizontally designed cardio equipment (e.g., reclined stationary bicycles and rowing machines) with extensive rest periods before, during, and after exercise. She also incorporated graded compression stockings and breathing exercises into both exercise and daily activities to lessen episodes of palpitations and/or tachycardia. In addition, she sought osteopathic manipulative treatment (OMT) through her faculty, which provided a moderate reduction in the severity and frequency of associated myalgias, headaches, and scalp allodynia. A variety of anxiety reduction techniques and sleep hygiene improvements were also employed with limited success in reducing her fatigue and brain fog. By November 2020, the patient's symptoms began to gradually improve, with notable reductions in palpitations, anxiety, and fogginess, but with a compensatory increase in fatigue and new-onset constipation and hair loss. 
The symptom pattern switched from a near-constant symptom presentation to a cyclic presentation with clear triggers associated with physical or mental overexertion; however, the threshold for "symptom relapse" increased over time. In January 2021, the patient completed the two-dose Moderna mRNA COVID-19 vaccination course and noted moderate improvements in her fatigue and brain fog in the first 24 hours following her first dose. Her headaches and scalp allodynia eventually receded as well nearly six months later in August 2021. To date, the patient identifies as having made a full recovery from long COVID and only endorses residual blurred vision managed with a stronger vision prescription and reduced but continued oligomenorrhea and hypoglycemic sensitivity managed with diet, exercise, and metformin treatment. She has since been reinfected twice with SARS-CoV-2 in July 2022 and January 2023 and received molnupiravir for the first of these two reinfections but spontaneously recovered both times without relapse of any of her long COVID symptoms. Discussion The case presented details the uniquely subjective and varied disease process of long COVID, thereby continuing the need for physicians to evaluate it as a diagnosis of exclusion. However, the anecdotal evidence detailed here of this eventual diagnosis suggests a phasic approach to its progression. The "prephase" or "Phase 0" consists of the initial SARS-CoV-2 infection, with either an asymptomatic presentation or classic COVID-19 symptoms, i.e., flu-like symptoms with or without loss of taste and smell. "Phase 1" represents the acute hyperinflammatory response associated with the development of "hyperactive" long COVID symptoms. In the case presented and other anecdotal accounts, the symptoms typically developed within weeks to months following the initial infection; however, they can also present as a continuation of specific symptoms from the initial infection. Cardinal symptoms may include headaches, palpitations with or without tachycardia, unprovoked anxiety, dizziness and/or near-syncope, blurred vision, brain fog with or without memory deficits, diarrhea and/or increased gastric motility, losses of appetite, heat intolerance, acute myalgias, and new-onset phantosmia. "Phase 2" represents a perceived reduction in hyperinflammation with the emergence of "hypoactive" long COVID symptoms. As presented in this case, these symptoms appear several months following the first phase, with clear shifts from hyperactive to hypoactive symptoms in several body systems. Such shifts can include a compensatory increase in fatigue and a decrease in anxiety and/or palpitations, a decrease in gastric motility and/or constipation, telogen effluvium, and more diffuse or generalized myalgias. The final phase, "Phase 3," consists of a gradual recession of some "hypoactive" symptoms and the emergence of seemingly permanent deficits. The switch between the second and third phases is more obscure in practice and can occur over several months to years with intermittent periods of symptom relapse. At this point, the prognosis of one's auto-recovery versus a limited recovery may be clearer to both patients and providers and can guide futility assessments of symptomatic treatments or interventions. A subjective visualization of the interplay of these phases is provided in Figure 1 below. FIGURE 1: A subjective visualization of the perceived phases of long COVID symptomology, based on the case presented. 
Although long COVID shares myriad symptoms with other well-known post-viral illnesses, the illness itself exists as a distinct diagnosis with unique symptoms, including phantosmia and a potentially unpredictable onset depending on one's transition from "Phase 0" to "Phase 1." In addition, SARS-CoV-2 post-viral shedding commonly occurs for greater periods of time in the long COVID population [5]. This persistent PCR-positivity may correlate with the onset of long COVID symptoms and thereby have the potential as a predictive or diagnostic value in this patient population. Nevertheless, long COVID will remain a clinical diagnosis following an exclusionary workup for the foreseeable future. The greatest improvements in a patient's long COVID prognosis seemingly come from allowing for the passage of time and support of an extended healing period. Concepts such as "radical rest" or "pacing" have entered the long COVID conversation as both a preventative and restorative concept to allow for ample healing and prevent symptom exacerbations, regardless of the presence of post-exertional malaise [6]. These techniques should be considered in conjunction with mRNA-based COVID-19 vaccination administration, which is believed to be both preventative and therapeutic in the long COVID setting [7]. Thereafter lies utility in treating secondary illness that may have been exacerbated by SARS-CoV-2 infection or associated deconditioning and encouraging a variety of lifestyle changes to adapt to this otherwise chronic illness. The benefit of this account lies not only in its detailed recollection of symptomology but in its availability of trial-and-error data in self-studied treatment options for clinician reference. Although the aforementioned reliance on the passage of time and vaccination proved most beneficial, several symptomatic treatments were also endorsed by the patient. These include graded horizontal exercise therapy like that employed in the POTS patient population [8], as well as breathing techniques and graded compression stocking usage during exercise, daily activities, and periods of symptom exacerbation. (Of note, in this case, the patient's compression stocking usage only provided temporary benefit and over time became "overly compressive" and seemingly contributed to several symptom re-exacerbations.) Multiple sessions of OMT were effective in reducing the patient's myalgias and any headaches present at the time of treatment and provided useful meditative and stretching strategies for at-home pain reduction. Abortive therapies for tension-type headaches with nausea (i.e., Excedrin, Fioricet, and prochlorperazine) in combination with increases in caffeine and salt consumption with increased fluid intake helped managed associated headache pain, orthostatic-related palpitations, and some aspects of fatigue and brain fog. The patient was also able to access several digital interventions with moderate symptom alleviation, including self-guided smell retraining for phantosmia and a clinical trial evaluating therapeutic gaming for mild cognitive impairment (NCT04843930). Lastly, an increased prioritization of her mental health through cognitive behavioral therapy and a strong network of peers within and outside of medicine, as well as sleep hygiene through improvements in her pre-sleep environment and reduction of nocturnal hypoglycemic episodes both made small but recognizable differences in her fatigue and cognition. 
Several ineffective interventions were noted by the patient, with the most prominent being traditional exercise rehabilitation (i.e., vertical cardio training via treadmill or elliptical, or strength training with excessive straining), as this would reliably exacerbate her fatigue to varying degrees. In addition, not only were traditional abortive therapies for migraine ineffective in managing her symptoms, but the introduction of a CGRP antagonist resulted in a moderate increase in fatigue and brain fog for the duration of treatment. Although likely unrelated, it is also worth noting that her doxycycline regimen for presumed early disseminated Lyme disease contributed to a subjective worsening of blurred vision, heat intolerance, and near-syncopal events. Finally, the patient attempted additional self-treatment following anecdotal recommendations of a variety of supplements, including vitamin C, vitamin D, vitamin B12, a generalized Bcomplex vitamin, n-acetyl cysteine (NAC), and turmeric extract. None of these supplements provided her with any discernable benefit and NAC led to a subjective increase in headache frequency and duration. A strong patient-provider relationship is essential to the success of comprehensive long COVID treatment. Many patients are still routinely dismissed by the medical community over concerns of malingering or lack of knowledge about management options. The patient presented here is health literate and has access to seemingly high-quality healthcare, yet endorsed several instances where her symptoms were ignored, misdiagnosed, or misattributed to perceived anxiety by several clinicians. Because long COVID exists as a clinical, exclusionary diagnosis with limited ability to differentiate the effects of the syndrome versus deconditioning, it is critical for all healthcare providers to recognize and adequately treat it to avoid further deconditioning and reductions in quality of life [9]. Furthermore, considering the plausible increase in depression and suicide risk in the long COVID population [10], these patients deserve attentive and evidence-based treatment regardless of the multifactorial causes of their illness. Engagement with cognitive behavioral therapy and online support communities has already shown some benefit in the reduction of psychosomatic symptom burden for these patients [11,12]. Additional educational initiatives for patients and providers, as well as advocacy for accommodations for those with more significant impairments, are necessary for the systemic support of this population. Although the subjective nature of this case report limits its impact, it may still serve as a launch pad for both improved diagnostic understanding of long COVID and research into more targeted treatment options. Such options may include examining correlations between long COVID and all metabolic syndromes, including PCOS, and the subsequent therapeutic benefit of improved insulin control through metformin treatment [13]. Exploring the greater interplay of long COVID and autoimmunity both in general and with respect to estrogen modulation may support causational theoretical models linking the illness with POTS, ME/CFS, MCAS, and other post-viral illness syndromes [14]. Through such models, targeted anti-inflammatory or immunomodulatory treatments can also be trialed to ideally prevent or reduce long COVID sequelae in a more controlled manner. 
The influence of co-infections, such as the Lyme disease infection described here, or other accounts of Epstein-Barr virus or other herpesviruses infection or reactivation [4], should also be granted further consideration in the syndrome's pathogenesis and management. Finally, the triumphs in anti-viral therapy for acute COVID-19 infection may have not only potentially protected this patient from symptom relapse following re-infection but are highly favorable avenues for long COVID treatment investigation [15]. Conclusions As the COVID-19 pandemic is reduced to endemicity, those suffering from long COVID will continue their emergence as a distinct patient population in need of improved diagnosis and management. In the case of this 26-year-old female, the unfortunate early timing of her illness left her unable to seek comprehensive long COVID care, as it did not yet exist. However, as scientific bodies mount ample evidence of the existence and impact of the long COVID syndrome, it is imperative for clinicians to adapt and improve the delivery of care accordingly. Such initiatives should include support for the creation of additional post-COVID care centers with an understanding of "pacing" and POTS or dysautonomia rehabilitation. Logistically, improving the recognition of long COVID beyond these post-COVID centers would conceivably reduce excess expenditure on unnecessary examinations while guiding patients earlier in their disease to more beneficial resources, such as physical therapy, occupational therapy, and/or OMT. Ultimately, the legitimization of long COVID as a clear and present potential consequence of an acute COVID-19 infection through the continual collection of anecdotal and statistical data will inevitably improve the delivery of care by bestowing it with the significance it deserves as a chronic but manageable illness. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: A preliminary overview of this case study was previously presented as a virtual poster at the American Academy of Family Physicians' 2022 FMX Conference. The authorship of both the poster and case report are identical; the poster served as an introductory presentation to this far more extensive and up-to-date case report submitted here. The poster can be found here: https://www.researchgate.net/publication/364154743_Corona_with_Lyme_An_Overview_of_A_Long_COVID_Case_Study? channel=doi&linkId=633ccaa2769781354ebc8723&showFulltext=true#fullTextFileContent.
5,369
2023-03-01T00:00:00.000
[ "Physics" ]
Investment Portfolio Optimization in Infrastructure Stocks Using the Mean-VaR Risk Tolerance Model Infrastructure plays a crucial role in economic development and the achievement of Sustainable Development Goals (SDGs), with investment being a key activity supporting this. Investment involves the allocation of assets with the expectation of gaining profit with minimal risk, making the selection of optimal investment portfolios crucial for investors. Therefore, the aim of this research is to identify the optimal portfolio in infrastructure stocks using the Mean-VaR model. Through portfolio analysis, this study addresses two main issues: determining the optimal allocation for each infrastructure stock and formulating an optimal stock investment portfolio while minimizing risk and maximizing return. The methodology employed in this research is the Mean-VaR approach, which combines the advantages of Value at Risk (VaR) in risk measurement with consideration of return expectations. The findings indicate that eight infrastructure stocks meet the criteria for forming an optimal portfolio. The proportion of each stock in the optimal portfolio is as follows: ISAT (2.74%), TLKM (33.894%), JSMR (3.343%), BALI (0.102%), IPCC (5.044%), KEEN (14.792%), PTPW (25.863%), and AKRA (14.219%). The results of this study can serve as a foundation for better investment decision-making. Introduction The presence of infrastructure, as a collection of physical and non-physical facilities serving society, plays a central role in supporting economic growth. In line with this view, adequate infrastructure development is seen as an essential strategy in improving the quality of human resources. Besides being a driver of economic growth, infrastructure also focuses on achieving Sustainable Development Goals (SDGs). SDGs set targets for building resilient infrastructure, promoting inclusive and sustainable industrialization, and fostering innovation. In this context, sustainable investment in the infrastructure and innovation sector plays a crucial role as an integral element of one of the 17 Global Goals in the 2030 Sustainable Development Agenda. The importance of adopting an integrated approach is key to the success of achieving all these goals. Based on the previous explanation, public interest in investment is increasing as investment provides flexibility for use both in the short and long term. Investment activities involve the placement of money or capital in a company or project with the hope of gaining profit within a certain period. To achieve optimal investment goals, especially in facing market fluctuations, it is important for investors to form a portfolio that not only maximizes expected returns but also manages risks at an acceptable level (Asthana & Ahmed, 2023). Success in managing this portfolio requires a careful approach and consideration of factors such as asset diversity, balanced allocation, and appropriate risk management strategies. Portfolios are formed as a step to reduce investment risk by combining multiple assets (Deng et al., 2021). The principles of portfolio optimization and diversification play a significant role in the development and understanding of financial markets (Vereshchaka, 2021). Portfolio selection can create a combination that maximizes expected returns according to the accepted level of risk (Hu et al., 2021). 
To achieve optimal investment goals, especially in the face of market fluctuations, it is crucial for investors to construct a portfolio that not only maximizes expected returns but also maintains risk at an acceptable level. Success in managing this portfolio requires a careful approach and consideration of factors such as asset diversification, balanced allocation, and appropriate risk management strategies. Thus, the formation of an optimal investment portfolio becomes a crucial step toward achieving long-term financial success and minimizing potential risks. Therefore, the formation of an optimal portfolio becomes a crucial step in designing an investment strategy. Portfolios are constructed as a measure to reduce risk in investments by combining multiple assets. The principles of portfolio optimization and diversification play a significant role in the development and understanding of financial markets. Portfolio selection can form a combination that maximizes expected returns according to the accepted level of risk (Hu et al., 2021). The determination of weights to achieve an optimal portfolio has involved several researchers using Mean-Variance optimization. However, it is worth acknowledging that traditional approaches like Mean-Variance Optimization have shortcomings, particularly in the use of variance as a risk parameter, which is often questioned (Salsabilla et al., 2023). Therefore, research in this field is increasingly highlighting the use of alternative methods such as Value at Risk (VaR). VaR is a crucial risk measure, defined as the estimate of the maximum loss that can occur over a specific period at a certain confidence level. Although widely applied to estimate financial risk, VaR has the advantage of providing a more comprehensive overview of risk by identifying the percentiles of the distribution of losses or gains, without focusing on every loss that exceeds the level (Lesmana et al., 2019). Several studies are relevant to this research. Liu et al. (2021) discussed portfolio selection with uncertain returns based on Value at Risk. Behera et al. (2023) discussed portfolio optimization using Mean-VaR and developed a Machine Learning model for predictive modeling. Gharaibeh (2019) discussed portfolio optimization on infrastructure sub-index returns in Jordan using CVaR. Based on the descriptions above, this research examines optimization conducted with the Mean Value-at-Risk (Mean-VaR) model approach to determine the optimal selection of company stocks for constructing a portfolio with minimal risk and maximum return. The results of this study are expected to provide considerations for investment decision-making for investors, especially in the stocks analyzed in this research. 
Investment Investment is an individual's commitment to allocate owned assets with the goal of obtaining benefits from the allocation in the future (Balamurugan & Sivanesan, 2022). Investment should have a specific goal, allowing the determination of a timeframe to align with suitable products. One of the benefits of investment is the potential for asset or capital growth, as it can generate higher profits. Investments are divided into two types: real assets and financial assets. Real assets are usually tangible assets, such as land, machinery, gold, or houses. Meanwhile, financial assets include stocks, deposits, and mutual funds (Feruza, 2023). The selection of capital placement to be invested can be in various types; therefore, sufficient knowledge is needed to analyze the risks and benefits of which investment type is good to buy or sell. Stock Return In investing, individuals aim to achieve rewards after allocating their capital to a particular stock. Return serves as the reward for investors who bear the risk of their investment. Stocks offer investors the potential for significant returns in a short period, but these returns are proportionate to the associated risks (Liu et al., 2021). Stock return can be calculated using the following formula, $R_{i,t} = \frac{P_{i,t} - P_{i,t-1}}{P_{i,t-1}}$ (1), where $R_{i,t}$ is the return of stock $i$ at time $t$, $P_{i,t}$ is the price of stock $i$ at time $t$, and $P_{i,t-1}$ is the price of stock $i$ at time $t-1$. Furthermore, the expected value of the return can be determined from the stock returns using the following formula, $E[R_i] = \mu_i = \frac{1}{k}\sum_{t=1}^{k} R_{i,t}$ (2), where $k$ is the number of periods used. The variance and covariance of returns can be calculated using the following equations, $\sigma_i^2 = \frac{1}{k-1}\sum_{t=1}^{k}\left(R_{i,t} - \mu_i\right)^2$ (3) and $\sigma_{ij} = \frac{1}{k-1}\sum_{t=1}^{k}\left(R_{i,t} - \mu_i\right)\left(R_{j,t} - \mu_j\right)$ (4), where $\sigma_i^2$ is the variance of stock $i$, $\sigma_{ij}$ is the covariance between stocks $i$ and $j$, and $\mu_i$ and $\mu_j$ are the expected returns of stocks $i$ and $j$. Burr (4P) Distribution The Burr distribution is one of the significant non-negative continuous probability distributions with fat tails (Shakil & Kibria, 2020). The Burr distribution is typically used to depict statistical characteristics that are not uniform and is widely applied in financial risk assessment and insurance (Xia et al., 2023). Log-Logistic (3P) Distribution The Log-Logistic distribution is one of the significant continuous probability distributions with heavy tails, determined by scale and shape parameters (Muse et al., 2021). The Log-Logistic distribution has a probability density function and a hazard function that resemble those of the Log-Normal distribution but with heavier tails, supporting more accurate inferences. In the standard parametrisation with shape $\beta$, scale $\alpha$ and location $\gamma$, the probability density function and cumulative distribution function of the three-parameter Log-Logistic distribution are $f(x) = \frac{(\beta/\alpha)\left((x-\gamma)/\alpha\right)^{\beta-1}}{\left(1 + \left((x-\gamma)/\alpha\right)^{\beta}\right)^{2}}$ and $F(x) = \left(1 + \left(\frac{x-\gamma}{\alpha}\right)^{-\beta}\right)^{-1}$ for $x > \gamma$. From the density, the expectation and variance follow as $E[X] = \gamma + \frac{\alpha\,(\pi/\beta)}{\sin(\pi/\beta)}$ (for $\beta > 1$) and $\mathrm{Var}[X] = \alpha^{2}\left(\frac{2\pi/\beta}{\sin(2\pi/\beta)} - \left(\frac{\pi/\beta}{\sin(\pi/\beta)}\right)^{2}\right)$ (for $\beta > 2$). Portfolio Portfolios are collections of all the assets owned by an investor. While measuring the return and risk for individual assets is important, for portfolio managers, understanding the return and risk of the entire set of assets in the portfolio is crucial. In the modern era, the approach to portfolio construction has shifted from traditional portfolio approaches to modern portfolio approaches. Traditional portfolio approaches involve diversifying the portfolio by randomly selecting assets, whereas modern portfolio approaches involve analytically forming portfolios using statistics and mathematics (Kumar & Shahid, 2023). 
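To make equations (1)-(4) concrete, the minimal NumPy sketch below computes returns, expected returns, and the covariance matrix from a small price table. The price numbers and the three-stock setup are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical daily closing prices (rows = trading days, columns = stocks);
# illustrative numbers only, not the study's Yahoo Finance data.
prices = np.array([
    [1000.0, 520.0, 310.0],
    [1010.0, 515.0, 312.0],
    [1005.0, 530.0, 309.0],
    [1020.0, 528.0, 315.0],
    [1015.0, 535.0, 318.0],
    [1030.0, 540.0, 320.0],
])

# Equation (1): R_{i,t} = (P_{i,t} - P_{i,t-1}) / P_{i,t-1}
returns = prices[1:] / prices[:-1] - 1.0

# Equation (2): expected return as the sample mean over k periods
expected_returns = returns.mean(axis=0)

# Equations (3)-(4): sample variances and covariances of the returns
cov_matrix = np.cov(returns, rowvar=False)   # N x N covariance matrix
variances = np.diag(cov_matrix)              # variances on the diagonal

print("expected returns:", np.round(expected_returns, 5))
print("variances:", np.round(variances, 6))
print("covariance matrix:\n", np.round(cov_matrix, 6))
```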
Algebraically, the return of an investment portfolio $r_p$, consisting of N risky assets, is expressed as the weighted sum of the returns of each asset in the portfolio, as shown in equation (13) (Sukono et al., 2017), $r_p = \sum_{i=1}^{N} w_i r_i$ (13). Using a matrix approach, the portfolio return in equation (13) can be expressed as in equation (14), $r_p = \mathbf{w}^{T}\mathbf{r}$ (14). Based on equation (13), the average return of the investment portfolio can be determined as shown in equation (15), $\mu_p = E[r_p] = \sum_{i=1}^{N} w_i \mu_i = \mathbf{w}^{T}\boldsymbol{\mu}$ (15), where $\mu_i$ is the average return of stock $i$ (with N being the number of analyzed stocks) and $\mathbf{w}^{T}$ is the transpose of the weight vector. If $\mathbf{r}$, $\boldsymbol{\mu}$, $\mathbf{w}$ and $\mathbf{e}$ denote the vector of stock returns, the mean vector, the vector of weights, and the unit vector, respectively, they are defined as $\mathbf{r} = (r_1, \dots, r_N)^{T}$, $\boldsymbol{\mu} = (\mu_1, \dots, \mu_N)^{T}$, $\mathbf{w} = (w_1, \dots, w_N)^{T}$ and $\mathbf{e} = (1, \dots, 1)^{T}$ (16). Referring to equation (13), the variance of the investment portfolio return can be formulated as in equation (17), $\sigma_p^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} w_i w_j \sigma_{ij} = \mathbf{w}^{T}\boldsymbol{\Sigma}\mathbf{w}$ (17), where $\sigma_{ij}$ is the covariance between stocks $i$ and $j$, and $\sigma_p = \sqrt{\sigma_p^2}$ is called the portfolio standard deviation. Furthermore, let $\boldsymbol{\Sigma}$ and $\mathbf{I}$ respectively denote the covariance matrix and the identity matrix, as expressed in equation (18). Optimization Portfolio Investment with Mean-VaR The estimation of Value at Risk depends on the probability distribution of the asset returns or of the investment portfolio. The Value at Risk (VaR) of an investment portfolio with weight vector $\mathbf{w}$, denoted $\mathrm{VaR}_p$, is calculated using equation (20), $\mathrm{VaR}_p = W_0\left(z_{\alpha}\sigma_p - \mu_p\right)$ (20), where $W_0$ is the fund allocated in the formation of the investment portfolio and $z_{\alpha}$ is the $(1-\alpha)$ quantile of the standard normal distribution for a given significance level $\alpha$. Typically, the significance level is $\alpha = 5\%$. If the risk level is measured using Value at Risk (VaR), the optimization problem becomes the Mean-Value-at-Risk portfolio optimization expressed in equation (21) (Lesmana et al., 2019), $\max_{\mathbf{w}}\ \left\{ \tau\,\mu_p - \mathrm{VaR}_p \right\}$ subject to $\mathbf{w}^{T}\mathbf{e} = 1$ (21), where $\tau \geq 0$ is the risk tolerance factor. In searching for a solution to the Mean-Value-at-Risk portfolio optimization problem with a risk tolerance factor as in equation (21), there are two approaches to determine the optimal weights: (a) the risk tolerance factor approach and (b) the Lagrange multiplier approach. The Lagrange function combines the objective in equation (21) with the budget constraint $\mathbf{w}^{T}\mathbf{e} = 1$ (equation (22)). The optimal weight values can be obtained using the risk tolerance factor approach; the resulting weight vector is the solution to the Mean-Value-at-Risk portfolio optimization problem for a given risk tolerance factor, denoted $\mathbf{w}$ as in equation (23). In addition to the risk tolerance factor approach, the Lagrange multiplier approach yields an equivalent solution (equation (24)), which involves $\boldsymbol{\Sigma}^{-1}$, the inverse of the covariance matrix. Materials In this research, the objects used are daily closing prices of infrastructure-sector stocks listed on the Indonesia Stock Exchange. The period used is from December 1st, 2021, to December 1st, 2023. The data are secondary data obtained from Yahoo Finance. The tools used in this research are Microsoft Excel and EasyFit. 
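As a rough numerical illustration of the Mean-VaR problem, the sketch below maximises tau*mu_p - VaR_p under a full-investment constraint with SciPy's SLSQP solver instead of the closed-form weights of equations (23)-(24). The expected-return vector, covariance matrix, long-only bounds, 95% quantile, and the exact scaling of the risk-tolerance term are assumptions for illustration, not the study's estimates.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs (not the study's estimates) for three assets.
mu = np.array([0.0009, 0.0012, 0.0007])          # expected daily returns
Sigma = np.array([[4.0e-4, 1.0e-4, 0.5e-4],
                  [1.0e-4, 6.0e-4, 0.8e-4],
                  [0.5e-4, 0.8e-4, 3.0e-4]])     # covariance matrix
W0 = 1.0                                          # normalised capital
z_alpha = 1.645                                   # 95% standard-normal quantile


def portfolio_var(w):
    """Portfolio VaR as in equation (20): W0 * (z_alpha * sigma_p - mu_p)."""
    mu_p = w @ mu
    sigma_p = np.sqrt(w @ Sigma @ w)
    return W0 * (z_alpha * sigma_p - mu_p)


def optimal_weights(tau):
    """Maximise tau * mu_p - VaR_p subject to sum(w) = 1 (long-only assumed)."""
    n = len(mu)
    objective = lambda w: -(tau * (w @ mu) - portfolio_var(w))
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    result = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                      bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x


for tau in (1.0, 6.8):
    w = optimal_weights(tau)
    ratio = (w @ mu) / portfolio_var(w)
    print(f"tau={tau}: weights={np.round(w, 4)}, "
          f"VaR={portfolio_var(w):.6f}, return/VaR ratio={ratio:.6f}")
```

Sweeping tau over a grid and keeping the weights with the highest ratio of expected return to VaR mirrors the selection logic that leads the study to tau = 6.8.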
Methods 1) Calculate the stock return values using equation (1). Next, test the distribution model of the stock returns using EasyFit. Stocks for which the hypothesized distribution is rejected are excluded from the calculation. 2) Calculate the expected value and variance of the stock returns based on their fitted distributions, using the mean and variance formulas of each distribution (equations (7) and (8) for the Burr (4P) distribution and equations (11) and (12) for the Log-Logistic (3P) distribution). Stocks with negative expected returns are excluded from the calculation. Next, calculate the covariance of returns using equation (4), and determine the expected value and covariance of the portfolio in vector form as in equation (16). 3) Optimize the portfolio using Mean-VaR. Return of Stocks The first step is to calculate the stock returns using equation (1). After obtaining the stock returns, the next step is to test the distribution model of the returns for each stock. The distribution model test is conducted using the Anderson-Darling test at a significance level of 1%, with the following hypotheses: $H_0$: stock returns follow the assumed distribution; $H_1$: stock returns do not follow the assumed distribution. The Anderson-Darling test is conducted using EasyFit. The results can be seen in Table 1. Based on Table 1, there are 21 stocks for which the null hypothesis ($H_0$) is not rejected, namely ISAT, TLKM, JSMR, BALI, etc. Expected Value, Variance, and Covariance of Stock Returns Based on the previous results, 21 stocks follow either the Burr (4P) distribution or the Log-Logistic (3P) distribution. The first step is to calculate the expected return and variance using the formulas for each distribution. Any non-positive expected return values are excluded from the calculation. The results can be seen in Table 2. Portfolio Optimization using Mean-VaR The computation of the weight for each stock ($w_i$), the anticipated return, and the portfolio's VaR using Microsoft Excel yielded the outcomes detailed in Table 4. The relationship between the expected portfolio return and the risk level, or the efficient frontier, is depicted in Figure 1. Next, the ratio between the expected return and VaR is calculated, and the results are presented in Table 5. The optimization graph of the ratio against VaR is presented in Figure 2. Based on Figure 2, it can be observed that the ratio between the expected return and the portfolio VaR continues to increase within the examined risk tolerance interval. The highest ratio between the expected return and the portfolio VaR is 0.073448, attained at a risk tolerance value of 6.8. Thus, the optimal portfolio using the Mean-VaR model is obtained when the risk tolerance factor equals 6.8. Conclusion There are 8 stocks that meet the criteria for forming an optimal portfolio, with the proportion of each stock as follows: ISAT at 2.74%, TLKM at 33.894%, JSMR at 3.343%, BALI at 0.102%, IPCC at 5.044%, KEEN at 14.792%, PTPW at 25.863%, and AKRA at 14.219%. The optimal portfolio with Mean-VaR is obtained when the highest ratio value is achieved. In this study, the optimal portfolio is generated when the risk tolerance is 6.8, where the VaR risk level is 0.014261 and the expected return is 0.00105. 
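The fitting-and-screening step described above can be prototyped with open-source tools in place of EasyFit. The sketch below fits SciPy's burr12 (two shape parameters plus location and scale, i.e. four parameters) and fisk (log-logistic: one shape plus location and scale) families to a simulated return series and reports the implied mean and variance; the simulated data and the use of the Kolmogorov-Smirnov test as a stand-in for the Anderson-Darling test run in EasyFit are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated daily returns standing in for one infrastructure stock
# (the study uses actual closing-price returns from Yahoo Finance).
returns = rng.standard_t(df=5, size=480) * 0.015 + 0.0005

candidates = {
    "Burr (4P)": stats.burr12,        # shapes c, d plus loc and scale
    "Log-Logistic (3P)": stats.fisk,  # shape c plus loc and scale
}

for name, dist in candidates.items():
    params = dist.fit(returns)        # maximum-likelihood fit
    # Goodness-of-fit check; the KS test is used here as a stand-in for
    # the Anderson-Darling test that the paper runs in EasyFit.
    pvalue = stats.kstest(returns, dist.name, args=params).pvalue
    fitted = dist(*params)
    print(f"{name}: p-value={pvalue:.3f}, "
          f"mean={fitted.mean():.5f}, variance={fitted.var():.6f}")
    if pvalue < 0.01:
        print(f"  -> {name} would be rejected at the 1% level")
```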
Figure 1: Efficient frontier of the portfolio. Figure 2: Optimal portfolio (ratio of expected return to portfolio VaR). Table 1: Anderson-Darling test of stock returns. Table 2: Expected return and variance of stock returns; based on Table 2, only 8 stocks meet the criteria for portfolio formation, and their covariances are presented in Table 3. Table 3: Variance-covariance matrix of stock returns. Table 4: Portfolio optimization results.
3,535
2024-04-23T00:00:00.000
[ "Engineering", "Economics", "Business" ]
Speaking truth to power: Exploring a Ministry’s evaluation department through evaluators’ and policymakers’ eyes ‘Evidence-based’ development policy has caused impact evaluations to prioritise accountability over addressing processual learning questions. Moreover, evaluation scholarship is dominated by surveys, whereas qualitative research remains scant. This article traces one particular evaluation, within the independent Evaluation Department of the Dutch Ministry of Foreign Affairs. It asks, ‘How do evaluators and policymakers interact and what adjustments follow from the illustrative evaluation?’ It used participant observations, documents and interviews with policymakers and evaluators. An in-depth thematic analysis resulted in a typology of evaluator roles: (1) knowledge broker, (2) facilitator, (3) archive, (4) truth-revealing and (5) critical voice. Finally, policymakers and managers adjusted in three ways: symbolic, instrumental and empowerment. These results imply that if evaluators deliberate a suitable role, they (1) increase their partial understandings of the programme under scrutiny and the involved stakeholders, and (2) enhance the potential of synergies in collective learning to emerge in an evaluation team and the broader institution. Background and problem statement A major buzzword in current International Development practice and academia is 'evidencebasedness' (White and Raitzer, 2017). Following discussions of aid effectiveness of the 1990s and 2000s, a shared recognition has emerged among scientists and professionals that learning and accountability should be central concerns (Doucouliagos and Paldam, 2008;Easterly, 2007). Banerjee and Duflo (2011) famously pioneered these concerns in their experimental poverty research. One straightforward way in which actors and institutions in the International Development sector attempt to be (more) evidence-based is through evaluation of policies and programmes. However, the rise of 'evidence-based' Development Cooperation policy has caused evaluations to overemphasise accountability at the cost of learning (Kogen, 2018). This tension, between learning (i.e. reflecting on past programmes in hopes of improving these) and accountability (showing the ways in which taxpayer money is spent), is commonly referred to as the 'dual purpose' of evaluation. What is more, it is found that the goal of accountability often overshadows learning purposes of evaluations (Bjørkdahl et al., 2017). One reason for this is that quantitative studies, such as randomised controlled trials (RCTs), can demonstrate direct impacts of programmes, while the benefits of qualitative research focused on policy learning are much less easily measurable and interpretable; these unfold over time and emerge from complex factors and stakeholders interacting in the process of programme implementation (Slade et al., 2020). As such, quantitative evaluations tend to focus on accountability between donors, implementing organisations and beneficiaries, overlooking the learning purpose that evaluations also intend to serve (Kogen, 2018). Rarely is the eventual uptake of lessons, drawn from evaluations, analysed. Furthermore, many studies in the policy and learning realm are survey-based. This means that current scholarship lacks detailed processual descriptions of learning processes between individuals. Moreover, many studies focus on cases where learning did happen, which skews our perception of policy adjustment (Moyson et al., 2017). 
This relates to so-called 'survivorship bias', where many studies in the evaluation realm analyse cases in which an evaluation led to a (desired) policy change, but instances where nothing happened, or an undesired change occurred, are rarely studied. Survivorship bias is widespread in, but not limited to, the world of business advice; stories of commercial success (either of individuals or businesses) are often distorted by ignoring all those who dropped out of college, or business ideas that never made it. Taleb (2010) refers to these unstudied cases of failure as 'silent evidence'. Similarly, one could make the argument that participants in evaluations are often times the 'usual suspects', creating further bias by leaving the 'unusual suspects' out of sight (see also Ware, 2014). In a recent special issue on policy success and policy failure, Dunlop (2017) calls attention to the importance of studying failure: Compared to the large volume of publications on 'good practices' and 'best practices', far less scholarly attention has been paid to 'bad practices' or 'worst practices' despite their widespread prevalence. As a result, public officials have failed to learn valuable lessons from these experiences. (p. 4) As Dunlop states, analysing cases where learning did not happen (or policy failures) is important, not least because failures may prove a breeding ground for learning, according to May (1992): Cases involving policy failure are useful to consider since failure serves as a trigger for considering policy redesign and as a potential occasion for policy learning. One of the basic tenets of the organisational learning literature is that dissatisfaction with program performance serves as a stimulus for a search for alternative ways of doing business . . . Policy successes might be said to provide a stronger basis for learning by making it possible to trace conditions for success. However, dissatisfaction serves as a stronger stimulus for a search for new ideas than success. (p. 341) In short, policy learning scholarship is dominated by survey-based research and its focus on policy success skews our perception of policy learning. A recent study by Pattyn and Bouterse (2020) stresses the importance of focusing on interactions between policymakers and evaluations in learning processes. They find that engaging policymakers in the evaluation design increases evaluation use (Pattyn and Bouterse, 2020). Finally, Barbrook-Johnson et al. (2020) show that views of evaluators influence evaluation practice. For instance, the variety of backgrounds that evaluators come from lead to different conceptions of what constitutes an evaluation in the first place (Barbrook-Johnson et al., 2020). Hence, this study asks the question, 'How do evaluators and policymakers interact and what, if any, adjustments follow from the illustrative evaluation?' This study focuses on learning (rather than accountability), using a mix of qualitative methods. It is focused on the position of evaluators and their interaction with policymakers. Finally, it analyses the adjustments made by policymakers and their managers, by following an illustrative evaluation as-it-happened. Because the study's data collection took place as the evaluation process unfolded, the subsequent policy changes were not yet known. In this way, the study avoided the tendency of focusing on usual suspects and stories of successful policy change. 
In short, this article aims to address the following knowledge gaps: ○ Addressing the lack of processual qualitative studies in policy learning scholarship by researching the interactions between evaluators and policymakers, and ○ Refocusing attention from accountability to institutional learning by analysing the follow-up of an unfolding evaluation process. Theoretical framework In order to situate this study within current policy evaluation scholarship, this section will first discuss institutional learning. Second, it provides an overview of existing evaluation uses, a metric used to analyse learning. Third and finally, it sheds a light on the positions of policymakers and evaluators. Institutional learning An important source for understanding policy change and learning is Hall's 1993 article 'Policy Paradigms, Social Learning and the State'. Hall distinguishes between three potential ways in which states change policies. A first-order change refers to changing levels of existing instruments (e.g. tax rates increase by x%). A second-order change involves the changing of instruments themselves (e.g. providing tax cuts instead of subsidies). A third-order change appears when the overarching goals, or paradigm, of policies change (e.g. moving from a Keynesian paradigm to monetarism). These third-order changes happen rarely and are often the result of political and societal contestation (Hall, 1993). This framework is useful because it sheds light on the spheres of influence and dimensions of change of policymakers and evaluators. They are visualised in the policymakers' sphere of influence in the conceptual scheme. For instance, evaluation departments often suggest rethinking of strategies behind policies, but the extent to which that is possible depends, in part, on the credibility and consistency of the status quo paradigm vis-à-vis an alternative one. Hall's conceptualisation of change and learning is used to analyse policy evaluation outcomes in this study. Evaluation use Government-commissioned evaluations are expected to not only serve accountability, but also stimulate institutional learning. As such, practitioners are 'utilization-focused', implying that evaluations are constructed with a specific user in mind and valued according to their usefulness (Patton, 2011: 315). Evaluation use is an often-used indicator for learning and Bouterse (2016) finds a total of five types of evaluation uses (see Table 1). Especially instrumental, conceptual and empowerment use are relevant, for this is when learning takes place (Bouterse, 2016). In order to understand the variety of ways in which evaluations may be used, it is important to take a closer look at their users (policymakers) and creators (evaluators). Table 1 (excerpt). Empowerment use: evaluation helps people to change their work and address the issues they are facing; learning might take place (Fetterman, 1994). Source: adapted from Bouterse (2016: 12); the underlying thesis was available for research use, as stipulated in the 'License to inclusion and publication of a Bachelor or Master thesis in the Leiden University Student Repository' (Leiden University, 2012). Policymakers and evaluators It is advisable to analyse policymakers and evaluators at the individual level, since they are best positioned to describe their own changes in learning. In a recent study, Schmidt-Abbey et al. (2020: 205) call for an increased need to focus on evaluators themselves, given their 'embeddedness within an evaluand'. Grob (1992) studied policymakers and evaluators, which according to him sometimes appear to be worlds apart. He characterises evaluators as critical and concerned, and eager to make a difference, yet often ending up frustrated when their 
findings are ignored or misused. Policymakers, on the contrary, complain that evaluations are too long, published too late or at times irrelevant (Grob, 1992). Policymakers and evaluators therefore have separate spheres of influence (see Figure 1). Nonetheless, Pattyn and Bouterse (2020) show that their interaction may result in improved uptake of evaluation lessons. What is more, increased cooperation (e.g. developing a research question together, holding regular feedback interviews) between policymakers and evaluators may benefit learning through a process called developmental evaluation, or adaptive evaluation (Patton, 2011: 305). Hence, it is worthwhile to study the interaction between evaluators and policymakers, visualised in Figure 1 (conceptual scheme of the evaluation process; authors' construction). Conceptual scheme: Key concepts and operational definitions The conceptual scheme in Figure 1 guides the analysis of this study by highlighting its key concepts and relationships, showing an evaluation process. It will be used to structure the analysis of the study when presenting its results. Given the variety of contextual factors at play, it will be impossible to establish a causal relationship, hence the exploratory nature of this study. Nonetheless, a number of key concepts will be disentangled, and their relationships analysed. The main concepts of this study are evaluation, evaluandum (object of evaluation) and adjustment. On one hand, the study aims to analyse the position of the evaluator and their interactions with policymakers. This part of the study finds itself in the evaluators' sphere of influence. On the other hand, it analyses the interactive learning process of policymakers and evaluators by tracing the managerial adjustments following the illustrative evaluation. Research setting Empirical data collection took place within the Dutch Ministry of Foreign Affairs' Evaluation Department. This is a relevant research setting for three reasons: First, carrying out research here ensured access to rich qualitative data (e.g. Terms of Reference and interviews) which improved the robustness of the study. Second, the Evaluation Department is one of the first government evaluation units (founded in the 1970s) of development aid, resulting in a long tradition of evaluation expertise and a high level of 'maturity' (Pattyn and Bouterse, 2020). As such, the Netherlands has a strong evaluation culture (Dahler-Larsen and Boodhoo, 2019). Third and finally, the setting provides the researcher with the opportunity of studying evaluation and policymaking 'as it occurs', increasing the ecological validity of the study. As the day-to-day business of policymaking is included in the analysis, the study paints a rich description of learning processes. This study's units of analysis include evaluations, evaluators and policymakers. The units of observation are employees of the Evaluation Department, policymakers of the Ministry of Foreign Affairs and evaluation reports. Data collection To answer the question of this research, the following data sources were used: three evaluation reports (ranging from development cooperation to foreign trade and international relations-themed studies), semi-structured interviews with evaluators and policymakers (N = 38) and meeting minutes as well as participant observations in six stakeholder meetings, where 
evaluation outcomes were discussed. Data collection took place in the period September-December 2019. For the semi-structured interviews, an interview guide was used to collect views and experiences of policymakers and evaluators, based on the evaluator sphere of influence of the conceptual scheme. Questions included 'In what discipline were you [evaluator] trained?' and 'What goal(s) do you [evaluator] try to achieve by carrying out/supervising evaluations?', while the interviews investigating the policymaker-evaluator nexus included, for instance, 'How do you [policymaker] estimate the influence of evaluation recommendations on policymaking generally?'. The goal of these observations and interviews was to move past 'official recordings' of actions, such as policy letters, and shed light on learning as experienced by individuals. Besides empirical data, this study makes use of existing literature and policy documents. The illustrative evaluation process, used to study learning specifically, concerns the publishing of and response to the report 'Less Pretension, More Realism' (Directie Internationaal Onderzoek en Beleidsevaluatie, 2019a). It is referred to as the 'illustrative evaluation' for the remainder of the article. Using a snowball sampling technique, interviews were held with evaluators and policymakers, including the author of the policy response and the director of the respective policy department (Directie Internationaal Onderzoek en Beleidsevaluatie, 2019b). All interview transcripts, meeting minutes and documents were uploaded to Atlas.ti, coded using two cycles (starting with hypothesis coding, ending with evaluation coding) and subsequently thematically analysed. A detailed overview of the collected data can be found in Supplementary Table S1. Limitations and data quality This section briefly lists potential limitations and assesses the data quality. The study cannot infer causality, as there is no way of establishing a counterfactual, that is, what would have happened in a given situation if there had not been an evaluation. Moreover, it must be emphasised that the adjustments that follow evaluations are not per se due to the evaluation; an evaluation's input serves as one of many sources for policymaking and programme design. To decrease selection bias among interviewees, all employees of the evaluation department were interviewed and posed the same questions to increase replicability in different thematic fields, or in other locations (Bryman, 2012; LeCompte and Goetz, 1982). Focusing on one evaluation department provides limited external validity (Bryman, 2012; LeCompte and Goetz, 1982). In this study, data collection took place in a mature evaluation setting. With decades of experience, this department has built a strong reputation and extensive knowledge of past, current and future programmes. Hence, the study's findings and recommendations may not be generalised to just any evaluation setting, but may prove relevant for other mature evaluation contexts. Results This section presents the main results of the analysis along two spheres of influence of the conceptual scheme. The model also portrays the illustrative evaluation process. First, it presents the position of evaluators and, second, it illustrates policymakers' adjustments in response to the illustrative evaluation. A full overview of the variety of data collected (interviews, participant observations and documents) for this study can be found in Supplementary Table S1. Speaking truth? 
Evaluators play different roles and are uniquely positioned In the semi-structured interviews with policymakers and evaluators, respondents were asked about their perceptions of evaluators. A number of themes recurred in the interviews surrounding questions about their perceived impact as well as their position within the Ministry. A number of assumptions and views surrounding what evaluators ought to do, or not do, became apparent. For instance, several respondents indicated the evaluation department is too academic, as it desires to be 'the expert'. As one policymaker put it, 'The evaluation department has the tendency to want to come up with new methods, and first becoming experts in a domain rather than using existing material and moving ahead' (policymaker, interviewee 30, 2019). Interestingly, respondents held contrasting views about how critical evaluators should be. Several respondents indicated evaluators need to be more critical, as the evaluation department is precisely the department that can afford to do so, because its reputation and budget is strong. As such, it should not shy away from writing critical reports. It differs from consultancy and nongovernmental organisation (NGO)-based research: It suffers less from positive bias, which arises when evaluators over-report positive findings (or even exclude negative ones), in order to uphold a good relationship with the organisation funding the evaluation. Other respondents, on the contrary, urged evaluators to strike a more diplomatic tone: 'Evaluators need to avoid "attacking" policymakers by writing more diplomatically. Though there is a risk of writing too diplomatically; this requires pedagogic skills' (evaluator, interviewee 27, 2019). Furthermore, several notions of the relationship between policymakers and evaluators surfaced from the interviews. A recurring concern among policymakers and evaluators alike was the apparent divide in understanding of each other's context: It is important for evaluators to understand the limits (in terms of workload, political sensitivity) of policymakers, and what their spheres of influence are. For instance, a recommendation to increase capacity is applauded by employees, but at the same time, they cannot decide to hire people themselves. (Policymaker, interviewee 30, 2019) Besides their perceived lack of understanding, there is certainly a sense of appreciation for each other's work: Policymakers speak highly of evaluators and acknowledge their independent position: I also tell them (fellow policymakers, red.) to, when in doubt, ask IOB (the evaluation department) for advice, they can be seen as neutral experts, and their advice only sharpens conclusions we as policymakers draw about an evaluation. The reputation of IOB is high, both in the Netherlands and abroad. (Policymaker, interviewee 30, 2019) Finally, the expert status is recognised by policymakers, who indicate there is a recent desire to improve monitoring and evaluation (M&E) capacity in several departments: 'At the same time, I think now, there's more desire for having ex-evaluators in policy departments, because evaluators have time, unlike policymakers, to get really deeply informed with a topic, which means they become almost experts' (policymaker, interviewee 30, 2019). In summary, respondents hold a variety of views regarding the position of evaluators. On the basis of the interview data presented above, it was found that various, and at times contrasting, functions were attributed to the evaluation department. 
To this end, a typology was created of roles, characteristics, outcomes and a discussion of their advantages and disadvantages. This typology is presented in Table 2. These various roles are within the sphere of influence of the evaluator and may therefore serve as a deliberation tool. If evaluators are conscious of their respective roles, within the team and institution, they become more aware of their acquired understandings and of the partiality and potential complementarity of these. The specific implications of the typology will be discussed in the 'Implications and recommendations for M&E practitioners' section. Finally, the interview data comprised many views of the interactions between policymakers and evaluators. This policymaker-evaluator nexus, where varying types of evaluation use surfaced, and hence learning may take place, will be discussed in the next section.

Speaking truth to power? Policymakers and managers adjust in various ways

This section presents the results of interviews conducted with policymakers and evaluators, as well as a document analysis (i.e. the evaluation report and policy response letter), all pertaining to one illustrative evaluation trajectory. Three different types of evaluation use (symbolic, instrumental and empowerment) were found and will be discussed below.

Symbolic. Evaluators found that the achievement and sustainability of results had been impaired by high levels of fragmentation: funding was spread too thinly across various small and geographically distant activities. The policy response letter of the Cabinet (signed by the Minister of Development Cooperation and Trade) recognised this recommendation. The Ministry asserted it has started limiting the number of activities, as more focus will increase the quality of Dutch efforts in development cooperation (Directie Internationaal Onderzoek en Beleidsevaluatie, 2019b). During the interviews, several policymakers indicated that this lesson is not new: fragmentation had been a recurring issue in development cooperation spending. However, two policymakers did point out that the document helped policymakers to 'make their case' better for reducing fragmentation, vis-à-vis their managers, but also towards implementing organisations like NGOs. As such, the evaluation is used as substantiation for the ongoing fragmentation discussion within the Ministry.

Instrumental. In response to recommendations, a number of tangible actions have been taken: first, the establishment of an internal working group for defragmentation efforts as well as for exploring alternatives to tendering, which includes important so-called 'change agents' within the Ministry. The goal of this group is to investigate the existing bottlenecks in defragmentation efforts and to find the best way to reduce the number of activities of departments by about 30 per cent. It was one of the first instances in which a dedicated working group was established after an evaluation, thus setting the stage for a 'learning team' in which collective learning could come to full fruition.

Empowerment. Evaluators find an overemphasis on accountability vis-à-vis learning in current M&E efforts. The use of standardised indicators is justified, but its dominance damages the use of M&E for learning purposes. Policymakers face pressure from Parliament to report results. As a consequence, result frameworks developed in advance hardly suit the changing and fragile contexts in which programmes take place.
In this way, both NGOs and the Ministry are not incentivised to reflect and learn, or to report negative results, either fearing the loss of funding or facing parliamentary criticism (Directie Internationaal Onderzoek en Beleidsevaluatie, 2019a). The Cabinet acknowledges that monitoring and evaluation should be given more attention across the board. Hence, it promises to increase the capacity for M&E staff as well as training current employees, both within the Ministry and at embassies (Directie Internationaal Onderzoek en Beleidsevaluatie, 2019b). In interviews, policymakers recognise the tentative rising interest in M&E across the Ministry. There appears to be more room to do something around 'lessons learnt' and M&E. One policymaker thought that, on one hand, external pressures, like politicians asking for transparency about results, drive this development. On the other hand, she observed an internal drive to organise M&E better, although this differs per subject and level: 'At the activity level, there is a lot of opportunity for change and amendment. It gets trickier at higher levels, where political wishes may run counter to lessons we learn about effectiveness' (policymaker, interviewee 31, 2019). Summarising, the results of the illustrative evaluation trajectory showed a variety of adjustments and interactions between policymakers and evaluators. Symbolic (evaluation is used as substantiation in internal discussions about fragmentation), instrumental (the goal of 30% activity reduction and establishment of a working group) and empowerment (call to increase staff capacity in the Ministry) uses of evaluations were found. These findings are presented in Table 3, adapted from Bouterse (2016), which recaps evaluation uses and presents an illustration from this evaluation trajectory.

Implications and recommendations for M&E practitioners

This penultimate section takes the study's key findings and, based on their implications, formulates a number of recommendations to M&E practitioners. A snapshot of these findings, implications and recommendations can be found in Table 4. As reported in the 'Results' section, two key findings were distilled from the study's data. First, evaluators play different roles and are uniquely positioned. The typology of roles, presented in Table 2, gives an idea of these roles, typical characteristics and corresponding products. This is not the first study to challenge the idea of evaluators as singularly oriented to research methods and models. Skolits et al. (2009) find that evaluators take on a wide variety of demands and recommend a more 'situational' perspective on the role of the evaluator. They find that the expected evaluation activities, their particular demands and required products (e.g. types of deliverables) warrant careful consideration of roles when recruiting evaluation team members (Skolits et al., 2009). Therefore, this study recommends deliberation of required roles at the very outset of an evaluation trajectory. However, role deliberation is by no means definitive. Evaluators may, where possible, take on multiple roles throughout an evaluation trajectory. Verwoerd et al. (2020) find that combining the role of evaluator and facilitator, for instance, resulted in an evaluation that better matched the project under scrutiny.
This flexibility in roles can provide an evaluation with emergent qualities, where adjustments can be made in response to the needs of policymakers, (external) researchers or changing political realities (Verwoerd et al., 2020). Hence the benefit of an evaluation trajectory with emergent qualities, one that allows evaluators to change roles when necessary. Furthermore, the study found that evaluators are deemed independent and seen as having time to get deeply involved in a project. Grob (2012) shows that while decisions in policymaking are never made by one person or organisational entity, evaluators have a unique position because of their independent and helpful reputation. What is more, the nature of their work allows evaluators to build their knowledge, since they have the time to get deeply acquainted with programmes under scrutiny, as well as with state-of-the-art research of 'what works' (Tourmen et al., 2021). Their unique, independent position, as well as the time they have to build a strong basis of knowledge, implies that their added value lies in acting as knowledge brokers while recognising the partiality of their own knowledge and the need for knowledge exchange with others. Second, three types of managerial adjustments were found when analysing the illustrative evaluation trajectory: symbolic, instrumental and empowerment. A more detailed overview of these adjustments was presented in Table 3. Managers may use evaluations in a symbolic way, for instance, to substantiate an already ongoing discussion. This entails a risk of evaluators being pressured to report previously held beliefs (Pleger and Sager, 2018). However, Pleger and Sager find that these influences are not necessarily negative, but may also be positive. They offer three differentiating questions to evaluators to discern the type of influence at hand: Is the attempt to influence conscious or unknowing (awareness)? Is the reason for the influence self-interest or an attempt to improve the quality of the evaluation (intention)? And finally, is the influence in accordance with scientific standards (accordance)? (Pleger and Sager, 2018). Hence, evaluators need to know how to distinguish positive from negative external influences to manage these effectively. Furthermore, Bourgeois and Whynot (2018) assert that instrumental use of evaluations, by managers specifically, increases as actionable recommendations are included. Therefore, this study recommends that evaluators do just that. Finally, existing studies corroborate this study's finding that the empowerment use of an evaluation may be promising; Donaldson (2017) argues that empowerment evaluation has always prioritised stakeholder involvement, as well as stimulated evaluation capacity (not only of evaluators, but also within policy departments and implementing organisations), leading to increased use of evaluations (Donaldson, 2017). Hence, this study recommends that evaluators proactively manage stakeholders (e.g. by involving them in the trajectory from the outset) and maintain (in)formal contact with policymakers to increase and maintain sensitivity to their context.

Discussion

This section highlights key contributions of the study and subsequently outlines potential avenues for future research.
Table 4. Findings, implications and recommendations (columns: Finding; Implication; Recommendation).

Main finding I: Evaluators play different roles and are uniquely positioned
- Finding: Typology of roles.
  Implication: A range of activities and demands leads to different roles to be assumed (Skolits et al., 2009).
  Recommendation: Reflect on the necessary roles that should be fulfilled in an evaluation team; how can these be complementary?
  Implication: When evaluators take on multiple roles, for example as facilitators and evaluators, this enhanced understanding of the evaluated programme and the involved stakeholders (Verwoerd et al., 2020).
  Recommendation: Incorporate emergent qualities (where different roles can be assumed throughout the trajectory) and potential for knowledge synergies.
- Finding: Neutral, independent and reputable status.
  Implication: Evaluators' independent reputation makes them valuable and makes them stand out from other professionals involved in policymaking (Grob, 2012).
  Recommendation: Added value of having time and credibility to act as knowledge broker while being conscious of the boundaries of knowledge linked to evaluators' respective roles.
- Finding: Time to get deeply involved in a topic.
  Implication: Evaluators theorise and build experience because they have time to do so (Tourmen et al., 2021).

Main finding II: Policymakers and managers adjust in various ways
- Finding: Symbolic use (evaluation was used as substantiation in ongoing discussions).
  Implication: Pressure to report previously held beliefs and external influence can be negative and positive (Pleger and Sager, 2018).
  Recommendation: Know how to discern different influences to effectively manage these.
- Finding: Instrumental use (working group and 30% reduction in activities).
  Implication: Instrumental use by managers increases as recommendations are included (Bourgeois and Whynot, 2018).
  Recommendation: Write down actionable recommendations and validate these with stakeholders.
- Finding: Empowerment use (more attention and capacity for M&E).
  Implication: Empowerment evaluation is a promising tool for (evaluation) capacity building (Donaldson, 2017).
  Recommendation: Maintain sensitivity to the policymaker context by including stakeholders proactively.

The first key result, that evaluators play different roles, was summarised in Table 2. The idea of evaluator roles is not new; indeed, Skolits et al. (2009) previously defined evaluator roles on the basis of their demands. However, an empirical basis for this conceptualisation was lacking to date. This study has provided this empirical basis, by distilling roles from a mix of empirical data sources, ranging from semi-structured interviews to participant observations. Of these roles, that of knowledge broker is expected to be most effective, since evaluators' added value lies with their time to get familiar with programmes, as well as their independent reputation (Grob, 2012; Tourmen et al., 2021). Ridde (2007: 1020) sees the evaluator as knowledge broker as '... an intermediary between the worlds of research and action'. The interviewed evaluators of this study indeed previously worked in academia or in policy departments and implementing organisations, such as NGOs. This mix of backgrounds, and unique position between research and action, requires careful consideration of the roles required in particular evaluation teams. This study finds that a mix of evaluator roles, as well as incorporating emergent qualities in evaluation trajectories, where roles may switch, increases understanding of the programme under scrutiny. Second, three types of managerial adjustments were found in response to the illustrative evaluation trajectory. Certainly, the Ministry has, in response to this evaluation, taken concrete actions, summarised in Table 3.
Examples of these include the postponement of a parliamentary debate (the Ministry wanted to await evaluation findings and lessons before publishing the new subsidy framework), the target to reduce activities by about 30 per cent and the goal of increasing M&E capacity and cutting the number of activities per policymaker. Simultaneously, these illustrations highlight three types of evaluation use (see Table 1), corresponding with Bouterse's (2016) overview of evaluation uses: symbolic, instrumental and empowerment use of evaluations. Interestingly, the illustrative evaluation process shows resemblances with Hall's (1993) fundamental framework for policy change. For instance, the goal of reducing fragmentation, to which policy departments have responded by initiating 30 per cent cuts in activities, portrays a first-order change, a mere decrease in the level or 'setting' of an instrument. Furthermore, the suggestion to move away from tendering as a method of contracting implementing organisations portrays a second-order change, the changing of instruments. Finally, the typical recommendation for evidence-based programmes hints at a paradigmatic change. This also shows that the lively debate surrounding 'what works', in which evaluators have a role to play as knowledge brokers, is alive and well. As Hall (1993) points out, in the realm of first- and second-order change, there is room for expert judgement. The paradigm, however, provides the context in which potential adjustments are made. Paradigms are not directly amenable to change because they refer to reigning worldviews and are the result of political contestations, determining, for instance, who is deemed an expert. Although evaluators can hardly influence the dominant paradigm, a government can look to evaluation departments for inspiration and input about alternative, perhaps better, paradigms than the status quo. In this combination, of looking back and reflecting, but also offering alternative ways of thinking and acting, lies the worth of an evaluation department. In conclusion, we believe this study's empirically based typology of evaluator roles constitutes a novel contribution to policy learning scholarship. These roles call for careful consideration of evaluation teams and the incorporation of emergent qualities in evaluation trajectories. The role of knowledge broker is promising, since evaluators' time and reputable status give them credibility and extensive insight into programmes. Managers adjust to evaluations in various ways. Yet, evaluators are equipped to respond to potential pressures by knowing how to discern positive from negative influences, as well as by engaging proactively with stakeholders. Finally, the study addresses an ongoing methodological gap in the evaluation literature identified by Moyson et al. (2017). Using a mix of qualitative methods, and analysing an evaluation as it happened, the study presents unprecedented insights into evaluation processes within a Ministry. Several suggestions for future research arise from this article. A replication study could be executed in another context, for instance, in the Ministry of Foreign Affairs of another country, which may have different organisational structures, or within another Dutch Ministry. It would be interesting to analyse whether follow-up and learning work through similar mechanisms in other policy areas. Future studies could incorporate elements of systems thinking and institutional analyses to discern bottlenecks and path dependencies in policy learning.
Furthermore, in terms of methodology, future research could use time series methods to analyse whether evaluations' recommendations stick in the long term, or comparative studies to analyse the follow-up of several evaluations, instead of one illustrative evaluation. Finally, future studies could dig deeper into the enabling circumstances for learning, in order to move closer to the ideal of 'evidence-based' policymaking. An example research question could be, 'What factors incentivise, or constrain, policymakers to learn from evaluation?' There's a lot to learn.
COVID-19 is spatial: Ensuring that mobile Big Data is used for social good

The mobility restrictions related to the COVID-19 pandemic have resulted in the biggest disruption to individual mobilities in modern times. The crisis is clearly spatial in nature, and examining the geographical aspect is important in understanding the broad implications of the pandemic. The avalanche of mobile Big Data makes it possible to study the spatial effects of the crisis with spatiotemporal detail at the national and global scales. However, the current crisis also highlights serious limitations in the readiness to take advantage of mobile Big Data for social good, both within and beyond the interests of the health sector. We propose two strategic pathways for the future use of mobile Big Data for societal impact assessment, addressing access to both raw mobile Big Data and aggregated data products. Both pathways require careful consideration of privacy issues, harmonized and transparent methodologies, and attention to the representativeness, reliability and continuity of data. The goal is to be better prepared to use mobile Big Data in future crises.

This crisis is spatial

The current COVID-19 pandemic highlights the strong spatial dynamics of crises. The virus outbreak, mitigation measures to contain it and societal impacts all take place across geography. Hot spots, quarantine, closed borders, video-conferencing, and social distancing are all profoundly about distance, separation, and space. In short, the COVID-19 crisis is spatial and therefore our responses must also be spatial. We already see changes in the mobilities and sociospatial behavior of individuals and societies. Countries continue to restrict border crossings, ban international travel and implement national and regional containment measures to address local outbreaks. Governments have taken drastic measures to limit the usual daily mobility of people by temporarily closing factories, schools, retail shops, restaurants, and recreational facilities. People are strongly advised or even required to work from home, and all social gatherings and face-to-face social interactions, both professional and leisure, have been and continue to be banned in many places. In short, the response to the COVID-19 pandemic is the biggest disruption to individual mobilities in modern times. Or, as Oliver et al. (2020) argue, the measures to fight the virus have not been as much pharmaceutical as they have been geographical. Studying human mobility, individual movements in space and time, has been part of the human geography agenda since Torsten Hägerstrand in the 1960s (Hägerstrand, 1970). What is different now is the scale and scope of available spatial data. Namely, the avalanche of data on individual activity spaces (geographic areas where people conduct their social activities) for entire populations collected by our mobile devices. Mobile Big Data provides a ready means to study the spread of the virus, understand the changes in people's daily interactions and mobilities, and track the recovery process. In short, population-wide data on individual activity spaces has ready application during the pandemic: understanding the spread of the virus, evaluating adherence to restrictions and analyzing the broader societal impacts of these policies.
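As an illustration of what 'individual activity spaces' can look like computationally, the minimal sketch below derives two simple indicators, the number of distinct places visited per day and the radius of gyration, from device location records. The record format, values and field order are hypothetical placeholders for illustration only, not any operator's actual data model.

```python
# Minimal sketch: simple activity-space indicators from device location records.
# The record layout (user, day, lat, lon) and the values are hypothetical.
import math
from collections import defaultdict

records = [  # synthetic example records
    ("u1", "2020-03-01", 60.17, 24.94),
    ("u1", "2020-03-01", 60.20, 24.96),
    ("u1", "2020-03-01", 60.17, 24.94),
]

def radius_of_gyration(points):
    """Root-mean-square distance (km) of visited points from their centroid."""
    lat_c = sum(p[0] for p in points) / len(points)
    lon_c = sum(p[1] for p in points) / len(points)
    km_per_deg = 111.32  # equirectangular approximation is adequate at city scale
    sq = [
        ((p[0] - lat_c) * km_per_deg) ** 2
        + ((p[1] - lon_c) * km_per_deg * math.cos(math.radians(lat_c))) ** 2
        for p in points
    ]
    return math.sqrt(sum(sq) / len(sq))

by_user_day = defaultdict(list)
for user, day, lat, lon in records:
    by_user_day[(user, day)].append((lat, lon))

for (user, day), pts in by_user_day.items():
    print(user, day, "distinct places:", len(set(pts)),
          "radius of gyration (km): %.2f" % radius_of_gyration(pts))
```

In practice such indicators would be computed in-house by the data holder and only released in aggregated form, a point returned to below.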
Our advocacy for the use of these data, however, is tempered both by our experiences in recent months with the limitations of using mobile Big Data and by our unease with the power of these same data to track, surveil and discipline social behavior at the scale of entire populations. The question we pose here is: How can we use mobile Big Data for social good, while also protecting society from social harm? By social good, we mean improvements in the quality of life for the general population rather than individuals or sub-segments. To do so we outline lessons learned from Estonia and Finland, as well as the practices of corporations more globally.

Pre-COVID-19 mobile Big Data research

Mobile Big Data refers to all Big Data with spatial (geographic location of the event) and temporal (time specification of the event) information. These data reveal the behavior of people in space and time via the proxy of unique technological entities, e.g. smart or mobile devices such as mobile phones, public transit cards or sport watches, as well as applications used on these devices such as Twitter, Facebook or Google. The ever-growing share of people carrying digital devices provides data that allows tracking of the spatial flows of dynamic populations (Shoval, 2007), but also the activity spaces of individuals across a range of spatial and temporal scales (Järv et al., 2018). These data include call detail records collected by mobile network operators as well as data from mobile operating systems (e.g., Android or iOS) that collect significantly denser spatial and temporal data (via GPS and other signals) and are only available to the developers of these systems (e.g., Google or Apple). Geographically located posts on social media platforms such as Twitter and Instagram are a third example of mobile Big Data about people's activities and attention through their content creation and curation practices (Poorthuis et al., 2019) across space (Toivonen et al., 2019). Finally, there are thousands of mobile applications with location-based features, e.g. weather forecast providers such as The Weather Channel, sports apps such as theScore, ride-sharing platforms and food delivery companies that collect data on the location of individuals more sporadically or surreptitiously. The rollout of 5G networks and the internet of things opens further opportunities for mobile Big Data production and collection, including the means and opportunity to monitor a population's location continuously. Analysis with these data has provided insights on a wide variety of social phenomena and socio-spatial processes, including crisis situations. Examples include analyses of population mobility and commuting (Ahas et al., 2015; Järv et al., 2012), detecting functional economic regions (Novak et al., 2013; OECD, 2020), the provision of and accessibility to state services (Järv et al., 2018), identifying migration flows (Kamenjuk et al., 2017) and cross-border mobility (Silm et al., 2020a), analyzing (in)equity between population groups and spatial segregation (Mooses et al., 2016; Shelton et al., 2015; Silm et al., 2018), supporting transport solutions (Positium, 2019) and environmental management (Heikinheimo et al., 2020; Poom et al., 2017), characterizing tourist behavior (Campagna et al., 2015; Raun et al., 2016; Saluveer et al., 2020), or reflecting the lived experiences of people in case of disruptions (Shelton et al., 2014). Much of this research is conducted in countries where access to mobile Big Data has been relatively easy.
For example, in Estonia (Silm et al., 2020b), the opportunities afforded by mobile Big Data were already recognized in the mid-2000s and applied to public planning, administration (Ahas and Mark, 2005) and tourism monitoring (Ahas et al., 2007). Mobile Big Data have also been used in health research to study how virus transmission is mediated by human mobility as well as the impact of accessibility on healthcare. For example, Wesolowski et al. (2012) tied the interregional spread of malaria to human travel in Kenya, and Finger et al. (2015) showed how mass gatherings became hotspots for cholera outbreaks in Senegal. Bengtsson et al. (2015) used mobile phone data to improve predictions on the spatial evolution of the Haiti cholera epidemic, and Wesolowski et al. (2015) applied similar mobile phone data to map the uptake of preventive healthcare in Kenya. Kraemer et al. (2018) showed that virus transmission models that incorporated social media data resulted in similar epidemiological inferences as traditional models. In short, mobile Big Data can help us better understand the spatial dimensions of social and health phenomena.

COVID-19 highlights the challenges of mobile Big Data

Given this history of research with mobile Big Data, it is not surprising that a number of projects have worked to apply this knowledge to the COVID-19 pandemic. These include how the virus spreads (Chang et al., 2020), the efficiency of mobility restrictions (Kraemer et al., 2020), and the social acceptance of restriction measures (Statistics Estonia, 2020). However, the current crisis also highlights serious limitations in our readiness to use mobile Big Data and do so responsibly (Benton et al., 2017; Zook et al., 2017). For example, despite the long research tradition in Estonia, mobile Big Data was not accessible to researchers during the COVID-19 pandemic because of ongoing discussions between mobile network operators and the data protection agency. These deliberations focused on differing interpretations of Estonia's Electronic Communications Act and the lack of clarity on a new EU ePrivacy Regulation. This meant that the previous well-functioning collaboration between network operators and researchers was no longer operating, rendering raw data inaccessible. Instead, the Estonian state, in collaboration with mobile network operators, developed an ad hoc solution to monitor the population's daily mobility, albeit at a relatively high level of aggregation, to track how well people followed instructions to avoid unnecessary mobility (Statistics Estonia, 2020). This was done via quickly developed methodological guidelines designed by the Estonian state and the data intelligence company Positium, applied by the network operators with undocumented methodological details. As a result, there was no space for different data aggregation (needed for more sophisticated analysis) or for longer-term follow-up of the situation. In sum, the Estonian case highlights how the lack of legal clarity around using mobile Big Data (held by private companies) can result in less useful applications than otherwise might be the case. On the other hand, in Finland, access to mobile phone data has remained rather limited due to a strict interpretation of privacy-related legislation. While mobile network operators have collaborated with researchers and statistics officials, the scale was exploratory rather than operational.
Recently, however, a major mobile network operator, Telia, developed an aggregated and anonymized data product allowing mobility analysis at the scale of the entire population. When the COVID-19 pandemic started, the existence of this ready-made data product gave governmental officials and researchers quick access to data to uncover the changing mobility flows brought about by closing the borders of the capital region and instructing citizens to avoid visiting secondary homes (Järv et al., 2020a; Kotavaara et al., 2020). However, the relatively simple data product did not leverage or allow access to the individual-level raw data necessary to create custom spatial and categorical aggregations. Moreover, because Telia's preconstructed data products were designed to answer specific questions, they could not always address the new questions resulting from COVID-19. Further complicating the application of these data products was that the methodology behind them was not transparent enough to understand fully how the resulting values were derived. Thus, even when access to mobile Big Data is available, it may not be structured in ways that fit the specific needs that arise during a crisis. In addition to lessons from working with national mobile network operators, it is also useful to understand how some global companies deployed their mobile Big Data capabilities. Large platform companies such as Apple, Google or Facebook produced ad hoc data products and visualizations of mobility during COVID-19. This involved local reports based on the aggregated data of customers, including the use of travel modes or visits to various types of places (Apple, 2020; Google, 2020), or population maps for disease forecasting and prevention (Facebook, 2020). However, because the methodologies behind these ad hoc data products were "black boxed" (Pasquale, 2015), it is difficult to evaluate their usefulness or potential for further use. Basic questions such as which population groups were represented remained unknown. As Google (2020) noted, their reports "...shouldn't be used for medical diagnostic, prognostic, or treatment purposes. It also isn't intended to be used for guidance on personal travel plans." In a very real sense, application of mobile Big Data from these platforms was limited to insiders rather than officials or citizens seeking to identify hot spots or conduct contact tracing. This echoes the experiences in Estonia and Finland, and aptly illustrates boyd and Crawford's (2012) observation of how Big Data creates "new digital divides". This also results in very different analyses (profit vs. social good) and creates methodological disharmony ("black box" vs. open science) in processing and publishing results. In short, the semi-opaque methodologies of mobility data products from private companies frustrate efforts to use these data to create applications targeted at the public good. Moreover, the lack of transparency about methods exacerbates privacy and surveillance concerns, a particularly important point given platform- or government-led actions for population control during the containment phase of the COVID-19 crisis (see Kitchin, 2020).
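The aggregated products discussed above typically report mobility as a percentage change relative to a pre-crisis baseline for each region and day. A minimal sketch of that calculation is given below; the region names, trip counts and baseline definition are invented for illustration and do not reproduce any provider's actual methodology.

```python
# Minimal sketch: percent change in daily trips relative to a pre-crisis baseline.
# Region names, counts and the baseline definition are synthetic assumptions.
baseline = {"Uusimaa": 1_200_000, "Pirkanmaa": 340_000}   # e.g. median daily trips, Jan-Feb
observed = {                                               # daily trips during restrictions
    "2020-03-30": {"Uusimaa": 640_000, "Pirkanmaa": 250_000},
    "2020-03-31": {"Uusimaa": 655_000, "Pirkanmaa": 248_000},
}

def percent_change(day_counts, baseline):
    """Relative change (%) of each region's trip count against its baseline."""
    return {
        region: round(100.0 * (count - baseline[region]) / baseline[region], 1)
        for region, count in day_counts.items()
    }

for day, counts in observed.items():
    print(day, percent_change(counts, baseline))
# 2020-03-30 {'Uusimaa': -46.7, 'Pirkanmaa': -26.5}
```

The simplicity of the calculation underlines the argument made above: the hard part is not the arithmetic but the undocumented choices behind the baseline, the aggregation units and the population represented.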
Improving mobile Big Data systems to promote social good

These examples of mobile Big Data use during the COVID-19 pandemic demonstrate the need to re-evaluate the public-private relationship with mobile Big Data, particularly those associated with individual-level mobilities. If we have to accept the production, collection, and monetizing of personal digital footprints in the Age of Surveillance Capitalism 1 (Zuboff, 2019), how might we also increase the social good of these data? As outlined here, current practices are occasional, ad hoc and opaque, organizationally fragile, and lack an overall strategic approach to key questions around privacy and surveillance. Towards this goal, we propose two strategic pathways to apply mobile Big Data for social good. Just as the COVID-19 pandemic is spatial, so too are many other important social phenomena including gentrification, segregation and accessibility, and understanding differences in mobility can provide welcome insight. First, we call for transparent and sound mobile Big Data products that provide relevant, up-to-date longitudinal data on the mobility patterns of dynamic populations. To help increase their usefulness, data products should be transparent about their production methodology, and ensure easy access and stability. While much of the data in statistical offices are transparent, accessible and stable, they are less useful for studying the mobility and activity spaces of people, especially in fast-changing phenomena like the COVID-19 pandemic. Instead, the dynamics of mobility are more easily studied via mobile Big Data that are mostly collected and processed by private companies. Not surprisingly, in recent months, there have been several calls for privately owned large-scale mobile Big Data to be shared for public health purposes (Buckee, 2020; Ienca and Vayena, 2020; Oliver et al., 2020). While we agree with these calls, we would extend them and argue that the availability of these types of data should extend beyond the needs of the health sector and this particular COVID-19 pandemic. Of course, given the sensitivity of mobile Big Data, respecting personal privacy is paramount. Possible approaches might include in-house aggregation by the data providers and testing for de-anonymization before sharing data products with strict accessibility rules. Products should be developed, tested, and used during normal times (i.e. non-crisis situations), to provide a base for their quick application when needed. To facilitate international comparisons and analysis, data products should use coordinated methodologies and joint data access platforms such as those offered by Eurostat, the EU-level statistical office. Second, building from the idea of ready-made, aggregated data products, we also see the need to develop trustworthy platforms for the collaborative use of raw individual-level data. Secured and privacy-respectful access to near real-time raw data is needed for developing and testing sound methodologies for the abovementioned data products. This would help bridge the Big Data digital divide, enable scientific innovation, and offer needed flexibility in responding to unanticipated questions on changing locations and mobilities in case of crises. Bottom-up initiatives of data donations and individual data control like MyData 2 are useful, but do not yet solve the problem. These initiatives tend to involve people with higher knowledge, energy, and capacity to manage their personal data and generally miss more marginalized groups, resulting in biased conclusions about society. Models for allowing vetted researchers to work with anonymized individual-level data at firewalled data centers include the US Census Bureau's Research Data Center or the research services of Statistics Finland/Statistics Estonia.
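One simple way to operationalise the 'in-house aggregation ... and testing for de-anonymization' mentioned under the first pathway is to aggregate individual trips into origin-destination flows and suppress any cell below a minimum count, a k-anonymity style rule. The sketch below illustrates the idea only; the threshold, zone names and trip list are assumptions, not a documented provider practice, and real rules would typically count distinct persons rather than trips.

```python
# Minimal sketch: aggregate individual trips into origin-destination (OD) flows and
# suppress small cells before release. Zones, trips and the threshold are hypothetical.
from collections import Counter

K = 10  # minimum cell count required for release; real rules would count distinct persons

trips = ([("zone_A", "zone_B")] * 42
         + [("zone_A", "zone_C")] * 7
         + [("zone_B", "zone_C")] * 15)

od_counts = Counter(trips)

released = {od: n for od, n in od_counts.items() if n >= K}
suppressed = [od for od, n in od_counts.items() if n < K]

print("released flows:", released)
print("suppressed flows (count below k=%d):" % K, suppressed)
```

Thresholding of this kind is only a baseline safeguard; as argued above, it should be combined with documented methodology and independent testing of re-identification risk before any product is shared.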
Incentivizing or compelling corporations to contribute data would be challenging but might be achieved via social responsibility programs, or via legislation that requires data contributions as a condition of being allowed to operate within a legal jurisdiction. To be clear, we do not view this as simple to achieve, particularly as we weigh what kind of institution might best fill this role. National Libraries? Academies of Science? United Nations? An independent non-profit with representation from stakeholder communities (users, governments, business, etc.) akin to ICANN 3 ? And this is but the first of many questions. How might any of these institutions avoid capture by powerful players? Equally, how is "social good" defined and operationalized in practice when granting access to researchers or state actors? While these questions remain to be answered, we argue that addressing them via public debates and academic discourses will leave us better prepared for the next crisis even if progress on these two pathways falls short.

Four axioms for moving forward

Summing up, there are important lessons to take from the current pandemic about the challenges of accessing and making useful applications of mobile Big Data. While we have sketched out two pathways forward, we recognize that these are not the only options available. Therefore we will end this commentary by sketching out four axioms we believe to be fundamental in creating a common framework for gathering and using mobile Big Data. First, we need harmonized and representative data about human mobility for better crisis preparedness and social good in general. While the ad hoc analysis strategies in Estonia and Finland have been rather satisfactory in the case of the COVID-19 pandemic, they also suffered from a limited ability to address specific actions and questions. Second, methodological transparency about mobile Big Data products (particularly those coming from private companies) is vital for open societies and for capacity building. The present trend in which "corporate secrecy expands as the privacy of human beings contracts" (Pasquale, 2015: 26) must be countered so that mobile data is used for social good rather than simply corporate profit. Third, access to mobile Big Data to develop feasible methodologies and baseline knowledge for public decision-making is needed before the next crisis occurs. As our examples outline, solving data access issues can provide new opportunities for increasing the expertise and capacity of researchers working on human mobility and other socio-spatial phenomena. It is vital that the related developments and discussions happen in "normal" times rather than under the high-pressure and compressed timelines of a crisis. Fourth, and most relevant of all, is recognizing the fundamental spatiality of the current COVID-19 crisis and of crises more generally. The COVID-19 pandemic (and every other social phenomenon) has deep and important spatial dimensions that spatial data can help us better understand and address in ways that promote social good. The challenge, of course, is doing so responsibly (Zook et al., 2017) via sound and transparent methods and collaborations across trustworthy platforms that do not normalize a lack of spatial privacy.
From Insulating PMMA Polymer to Conjugated Double Bond Behavior: Green Chemistry as a Novel Approach to Fabricate Small Band Gap Polymers

Dye-doped polymer films of poly(methyl methacrylate) (PMMA) have been prepared with the use of the conventional solution cast technique. A natural dye has been extracted from the environmentally friendly material of green tea (GT) leaves. Well-defined Fourier transform infrared (FTIR) spectra for the GT extract were observed, showing absorption bands at 3401 cm−1, 1628 cm−1, and 1029 cm−1, corresponding to O–H/N–H, C=O, and C–O groups, respectively. The shift and decrease in the intensity of the FTIR bands in the doped PMMA sample have been investigated to confirm the complex formation between the GT dye and the PMMA polymer. Different types of electronic transition could be seen in the absorption spectra of the dye-doped samples. For the PMMA sample incorporated with 28 mL of GT dye, a distinguishable, intense peak around 670 nm appeared, which opens new frontiers in the green chemistry field that are particularly suitable for laser technology and optoelectronic applications. The main result of this study is that doping the PMMA polymer with green tea dye produced a strong absorption peak around 670 nm in the visible range. The absorption edge was found to be shifted towards lower photon energy for the doped samples. The optical dielectric loss and Tauc's model were used to estimate the optical band gaps of the samples and to specify the transition types between the valence band (VB) and conduction band (CB), respectively. A small band gap of around 2.6 eV for the dye-doped PMMA films was observed. From both scientific and engineering viewpoints, this topic is important and relevant. The increased amorphous character of the doped samples was evidenced by the increase in Urbach energy. The Urbach energy has been correlated with the X-ray diffraction (XRD) analysis to display the structure-property relationships.

Introduction

Polymer materials are broadly used in photonic device fabrication. Dye-doped polymers have grown to be very popular for their diverse advantages. Moreover, they can be used in linear and nonlinear photonic devices [1]. Recent studies reveal that lasers created out of such dye-doped polymers have several applications in sophisticated nanoscale lasers, optical telecommunication devices, and novel chip-integrated photonic biosensors [2]. The dye-doped polymers are also known as unique photoconverters. Based on their structure, they can absorb and emit light in the visible and near-infrared (NIR) regions of the electromagnetic spectrum [3]. Poly(methyl methacrylate) (PMMA) is a high-strength, commercially available, amorphous thermoplastic polymer. PMMA exhibits prominent mechanical, dimensional, and thermal stabilities, as well as a high optical transparency with a relatively low glass transition temperature [4,5]. PMMA is resistant and stable in acid and alkaline media, owing to its rigid behaviour [6]. It is well reported that the optical characterization of solid polymer films is crucial to obtain knowledge regarding their energy gap, refractive index, and dielectric constant, which are vital for various optical applications [7]. Dye-doped polymeric materials that exhibit suitable optical properties are found to be promising candidates for applications in solar cells, photonic devices, optical fibres, laser media, and electronic sensors [8,9].
Natural and synthetic dyes are compounds of great interest as they play a significant role in our everyday life [10]. Dye-doped polymers are considered to be potential materials in optoelectronics, particularly in device fabrication, with employment in organic light emitting diodes (OLEDs), liquid crystal (LC) displays, quantum electronics, electroluminescence, solar cells, and energy storage [10,11]. Triphenylmethane, azo, anthraquinone, perylene, and indigoid dyes are the more interesting among the large number of dye categories [10]. Several dye-doped polymers were reported in previous studies. A maximum absorption peak at around 564 nm for the PMMA polymer doped with the well-known rhodamine B/chloranilic acid (Rho B/CHA) has been observed in [8]; a band gap of 3.1 eV was achieved after γ-irradiation. Hamdy et al. [6] used methylene blue (MB) as a doping dye material, and a distinguishable peak at around 654 nm was achieved in their study. Sun et al. [12] studied phenanthrenequinone (PQ)-doped PMMA as a photopolymer material for fast response in optoelectronics applications. In photonic networks, a fast, uncomplicated and economical fabrication process is required to achieve a successful application of solid-state dye lasers that can reliably produce a large number of lasers with tunable wavelengths, configurable at almost any time [2]. In this study, a natural dye, extracted from green tea (GT) leaves, was used as the doping dye. It is well known that tea derived from Camellia sinensis leaves is the most widely consumed drink globally. It can be classified, in accordance with the level of oxidation, into three major types: green (unoxidized), oolong (partially oxidized), and black (fully oxidized) tea [13]. Previous studies confirmed from high pressure liquid chromatography (HPLC) observations that theanine, theobromine, gallic acid, gallocatechin, caffeine, epigallocatechin, catechin, epicatechin, epigallocatechin gallate, gallocatechin gallate, epicatechin gallate, and catechin gallate are the major components of GT extracts [13,14], which contain a very large number of OH/NH functional groups and conjugated double bonds. Thus, the dye of green tea holds many conjugated and functional groups, which are found to be considerably important in dye-doped polymer preparation. An intensive and extensive survey of previous studies reveals that most dye-doped polymers do not exhibit absorption peaks at high wavelengths. The primary objective of the present study is to fabricate a dye-doped polymer with an absorption peak at a high wavelength, using a natural dye obtained from environmentally friendly materials. The results can also provide more knowledge in the field of dye-doped polymers. To the best of our knowledge, our findings reveal the suitability of the dye-doped PMMA polymer for photonics and solar cell applications due to its small band gap.

Preparation of Dye-Doped PMMA Solid Polymeric Films

The PMMA polymeric material used in this study was supplied by Sigma-Aldrich (Saint Louis, MO, USA). The well-known solution casting technique was used to prepare the dye-doped PMMA polymer films. First, 1 g of PMMA powder was dissolved in 30 mL of acetone at room temperature. The mixture was then stirred using a magnetic stirrer for approximately 4 h. Natural colorant tea extract was derived from green tea leaves.
For this purpose, 30 g of green tea leaves was added to 60 mL of tetrahydrofuran (THF) solvent at 60 °C for 3 h, without exposing the solution to direct sunlight. The solution was left to cool down to room temperature. Whatman filter paper (Whatman 41, cat. No. 1441, Maidstone, UK) with a pore size of 20 µm was then used to remove the residues. Then, 14 mL and 28 mL of GT extract solution were added to the homogeneous PMMA solutions and continuously stirred for 5 h. The solutions were cast into different Petri dishes and dried at room temperature to form the films. The thickness of the films, which ranged from 120 to 121 µm, was controlled by casting the same amount of PMMA. Prior to optical characterization, the films were kept in a desiccator with blue silica gel for further drying. The samples were coded as GT 0, GT 14, and GT 28 for PMMA incorporated with 0, 14, and 28 mL of extracted GT solution, respectively. Figure 1 shows the flowchart of the experimental work undertaken.

UV-VIS Measurement

The optical absorption spectra of the solid polymer films have been collected using an ultraviolet-visible near-infrared (UV-VIS-NIR) spectrophotometer (Jasco SLM-468, Tokyo, Japan) in the absorbance mode.

FTIR and X-ray Diffraction Analysis

The complex formation between the GT extract and the PMMA polymer was investigated using Fourier transform infrared (FTIR) spectroscopy. The FTIR spectra were collected using a Thermo Fisher Scientific (Waltham, MA, USA) Nicolet iS10 FTIR spectrophotometer in the wavenumber region 400-4000 cm−1 with a resolution of 2 cm−1. The X-ray diffraction (XRD) patterns were recorded at room temperature using an X-ray diffractometer (NL-7602 EA PANalytical B.V., Almelo, The Netherlands) with an operating voltage and current of 40 kV and 45 mA, respectively. The samples were scanned with a monochromatic X-ray beam of wavelength λ = 1.5406 Å and glancing angles of 5° ≤ 2θ ≤ 90° with a step size of 0.05°. The required experimental techniques for sample characterization are shown in Figure 1.

Figure 2 shows the FTIR spectrum of the GT extract solution. Recent studies have shown great interest in the use of natural dyes. This is a result of the fact that they are recognized as being environmentally friendly, along with having other properties, such as deodorizing, being lower in toxicity, and showing anti-allergenic, anti-bacterial, and anti-cancer properties [15,16]. An intense broad band appearing at 3401 cm−1 is attributed to the N-H and O-H stretching modes of polyphenols [17,18]. A strong band at 1628 cm−1 can also be assigned to the C=C stretch in the aromatic ring and the C=O stretch in polyphenols [18,19]. The C-H and O-H stretches in alkanes and carboxylic acid have been found to appear at 2917 and 2848 cm−1, respectively [18]. The C-O stretching in amino acid has also caused a band at 1029 cm−1 [18,19].
Earlier studies have established that the FTIR bands of tea extracts containing polyphenols appear at 3388 cm−1, 1636 cm−1, and 1039 cm−1, which are attributed to the O-H/N-H, C=C, and C-O-C stretching vibrations, respectively [18-21]. Therefore, from the IR spectrum, one can observe that carboxylic acid, polyphenols, and amino acid are the main functional groups in the green tea sample. The FTIR spectra of the pure PMMA polar polymer and of PMMA doped with 28 mL of extracted GT solution are shown in Figures 3 and 4. FTIR spectroscopy has long been recognized as a powerful tool for the elucidation of structural information. The position, intensity, and shape of vibrational bands are useful in clarifying conformational and environmental changes of polymers at the molecular level [22]. It is well established that functional groups in organic compounds have absorptions which are characteristic not only in position, but also in intensity [23]. The strong band appearing at 1726 cm−1 in the spectrum of the pure PMMA sample (see Figure 3) can be attributed to the carbonyl (C=O) group [22] and shifts to 1712 cm−1 with lower intensity and a broad character in the doped PMMA sample (see Figure 4).
Thus, the shift in peak position, decrease in intensity, and broadening of the peak due to the C=O group in the doped PMMA sample clearly indicate the miscibility between the PMMA and the GT extract solution. The FTIR bands appearing from 950-481 cm−1 in Figures 3 and 4 are due to the bending of C-H [24]. The peak at 2935 cm−1 (Figure 3) can be ascribed to -CH stretching and shifts to 2943 cm−1 in the doped PMMA sample (Figure 4), and a new peak at 2847 cm−1 appeared, which is attributed to carboxylic acid groups of the GT extract solution (see Figure 2) [18]. The band appearing at 3433 cm−1 in the FTIR spectrum of pure PMMA is related to the N-H stretching vibration [25,26], and shifts to 3432 cm−1 with a significant decrease in intensity, as depicted in Figure 4. The considerable change in intensity of the N-H band is evidence for a large amount of N-H functional groups in the GT extract. Additionally, the FTIR spectrum of the GT extract solution (see Figure 2) shows the existence of an N-H group with strong intensity at 3401 cm−1. Thus, the decrease in intensity of the IR band at 3432 cm−1 can be ascribed to the complex formation between the GT extract solution and the PMMA polymer. The FTIR spectrum of pure PMMA obtained in the present work is very similar to that reported by Soman and Kelkar [27]. The shifting in the FTIR bands and the decrease in intensity is evidence for the occurrence of miscibility between the PMMA polymer and the GT extract solution.
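As a small numerical illustration of the kind of band comparison described above, the sketch below locates the maximum of a band inside a wavenumber window for two spectra and reports the peak shift and intensity change. The spectra here are synthetic Lorentzian placeholders, not the measured PMMA data; only the procedure is illustrated.

```python
# Minimal sketch: locate a band maximum in a wavenumber window for two FTIR spectra
# and report the peak shift and intensity change. The spectra are synthetic placeholders.
import numpy as np

wavenumber = np.arange(1650.0, 1801.0, 1.0)                 # cm^-1

def lorentzian(x, x0, w, a):
    return a / (1.0 + ((x - x0) / w) ** 2)

pure_pmma = lorentzian(wavenumber, 1726.0, 8.0, 1.00)       # strong C=O band (assumed shape)
doped_pmma = lorentzian(wavenumber, 1712.0, 14.0, 0.55)     # shifted, weaker, broader (assumed)

def band_max(x, y, lo, hi):
    """Return (position, height) of the maximum of y within [lo, hi]."""
    mask = (x >= lo) & (x <= hi)
    i = np.argmax(y[mask])
    return x[mask][i], y[mask][i]

p0, a0 = band_max(wavenumber, pure_pmma, 1650, 1800)
p1, a1 = band_max(wavenumber, doped_pmma, 1650, 1800)
print(f"C=O band: {p0:.0f} -> {p1:.0f} cm^-1, intensity {a0:.2f} -> {a1:.2f}")
```

Applied to measured spectra, such a comparison simply quantifies the shift and intensity loss that the text above interprets as evidence of complexation.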
Figure 5 shows the absorption spectra of pure PMMA and the doped PMMA samples. Here, from the absorption spectra of the doped samples, it is possible to observe almost all of the different types of electronic transition. The absorption of light or photon energy, in the UV and visible regions, by polymeric materials involves the σ, π, and n-orbital electrons being promoted from the ground state to higher energy states that are described by molecular orbitals [28]. The electronic transitions involved in the ultraviolet region, 160-260 nm, can be ascribed to the n→σ* transition [27], while π→π* and n→π* transitions require relatively low energy and, hence, occur at higher wavelengths, as shown in Figure 5. The absorption peaks observed at high wavelengths, 400-700 nm, for the doped PMMA samples are related to the existence of π electrons [28-30]. Similar absorption spectra, for GT extracted in ethyl acetate solvent, have been reported [31]. It is well established that conjugated systems comprising alternating double bonds are considered to be a central class of materials for optoelectronic device applications due to their π-excessive nature [32]. The shift towards longer wavelengths indicates the small band gaps of the doped samples [33]. It was reported that strong shifts towards longer wavelengths can be attributed to the existence of π-delocalization along the polymer chain. This postulation is further supported by the absence of absorption peaks in the absorption spectrum of the pure PMMA polymer [32]. The source of π-delocalization in the doped samples is found to be related to the structure of the extracted GT solution containing polyphenols, amino acids, alkaloids, proteins, glucides, minerals, volatile compounds, and trace elements [14]. Polyphenols comprise the most interesting group of GT leaf components [34]. The most determined chemicals or molecular structures of the components of the extracted GT solution can be found elsewhere [13,14,34,35]. Earlier studies confirmed that the extracted GT solution contains adequate conjugated double bonds, hydroxyl (OH) and carboxylic (C=O) groups, polyphenols, and polyphenol conjugates, which are convenient for the formation of complexes with the functional (polar) groups of polymeric materials [13,14,34-36]. The results of FTIR clearly showed the complex formation between the GT dye and the PMMA polymer (see Figure 3). Dye-doped PMMA as a polymer optical waveguide has received considerable attention for its usage in optoelectronic devices and optical components, owing to its low cost and volume productivity [37]. Figure 6 shows the absorption spectra of pure PMMA and dye-doped PMMA samples at longer wavelengths. One can see from the figure that the GT 28 sample exhibits a distinct and intense peak at 670 nm, which reveals its suitability for photonics and optoelectronics applications. Utilization of a natural dye is the novelty of this study in comparison to previous studies of other researchers. Furthermore, the intensity of the peak (3.460) is higher than those reported in previous studies for dye-doped PMMA polymer.
Previous studies have confirmed the promising role of dye-doped polymer films in erasable/rewritable optical discs developed for optical data systems, and a considerable number of patents have reported combinations of polymer and dye for optical data storage [38].
Absorption and Absorption Coefficient Study

Figure 7 presents the variation of the absorption coefficient with photon energy for the pure and doped PMMA samples. Investigation of the absorption edge is significant for interpreting the changes that occur in the electronic structure of doped materials [39]. It is obvious from the spectra that, upon addition of the extracted GT solution to the pure PMMA sample, the absorption edge shifts towards lower photon energies. The absorption edge is the region in which an electron is excited from a lower energy state to a higher energy state by an incident photon. The optical absorption coefficient was obtained from the transmittance and reflectance spectra of the films by applying the relationship of [40], where t, T, and R are the thickness, transmittance, and reflectance of the sample, respectively. The slow rise of the absorption coefficient with increasing photon energy indicates the amorphous nature of the samples [41].
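A commonly used expression relating the absorption coefficient to the thickness, transmittance, and reflectance, offered here only as a plausible sketch of the relation cited from [40] (the exact form adopted there may differ), is

\[ \alpha \;=\; \frac{1}{t}\,\ln\!\left[\frac{(1-R)^{2}}{T}\right], \]

which captures the expected behavior that α grows as the transmitted fraction T of a film of thickness t decreases.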
The estimated values of the absorption edge for the samples were obtained from the intersection of the extrapolation of the linear part of the absorption coefficient with the photon energy axis (see Figure 7). The results are tabulated in Table 1, which shows a wide shift of the absorption edge from 4.9 eV for pure PMMA to 2.66 eV for PMMA incorporated with 28 mL of GT solution. This reveals the small band gap nature of the doped samples.

Band Gap Study

The absorption coefficient (α) and the optical band gap (E_g) are expected to be related to each other through the well-known Tauc relationship (Equation (2)), αhυ = A(hυ − E_g)^γ [42,43], where A is an energy-independent constant and E_g is the optical band gap. The optical band gap energy can be determined by applying Equation (2) to the observed UV-VIS spectra of the samples. Furthermore, the nature of the electronic transition can be determined by specifying the value of γ. For direct transitions, γ takes the value 1/2 or 3/2, whereas γ is equal to 2 or 3 for indirect transitions, depending on whether they are allowed or forbidden, respectively [44]. In general, insulators/semiconductors are classified into two types of materials: direct and indirect band gap.
In direct band gap materials, the valence band maximum (VBM) and the conduction band minimum (CBM) coincide at the same zero crystal momentum point (i.e., wave vector k = 0) [45]. In this case, γ takes the value 1/2. In some materials, when the quantum selection rule does not allow the direct transition between the VBM and the CBM, the transition is called a forbidden direct transition and γ = 3/2. An indirect electron transition occurs when the VBM and the CBM do not lie at the same wave vector. In this case, absorption or emission of phonon energy is always associated with the electron transition from the VB to the CB in order to supply the required crystal momentum [46]. To estimate the energy band gap from the plots of (αhυ)^(1/γ) versus photon energy hυ, it is necessary to extrapolate the linear portion of the curve to its intersection with the photon energy axis (x-axis), as shown in Figures 8-10. As a consequence, it is difficult to decide the dominant type of electronic transition in the samples. Earlier studies revealed that the value of γ can be obtained by an analytical differentiation method, which was generally found to be imprecise [43,47]. For this purpose, d(ln(α))/d(hυ) versus photon energy (hυ) is plotted and a maximum peak is obtained; a perpendicular line from the maximum peak to the photon energy axis is then drawn to obtain the E_g value. The value of γ is then estimated from the slope of the ln(αhυ) versus ln(hυ − E_g) curve. This procedure is time-consuming and not precise [43,47]. In this work, optical dielectric loss and Tauc's model were used to estimate the optical band gap and the electronic transition types, respectively. This is related to the fact that the optical dielectric function depends closely on the material's band structure. At the same time, investigations of the optical dielectric function using UV-VIS spectroscopy are considerably useful in predicting the overall band structure of materials [48]. Recent studies have confirmed that the imaginary part of the optical dielectric function, ε″, can be used to describe the electronic transitions between occupied and unoccupied states [49-51]. The optical dielectric loss spectra obtained for the pure and doped PMMA samples are shown in Figure 11. All the samples exhibit a linear behavior at higher photon energies, and the imaginary part is related to the absorption coefficient [52]. The optical band gap obtained from the optical dielectric loss (see Figure 11) is almost equal to that estimated from Tauc's model (see Figure 10) for the doped samples. On the other hand, for the pure PMMA sample, the optical band gap estimated from Tauc's model (see Figure 8) is approximately 5.04 eV, which is close to the value of 4.97 eV obtained from the optical dielectric loss plot (see Figure 11). Thus, the type of electronic transition is the allowed direct transition for the pure PMMA sample and the forbidden direct transition for the doped samples. Consequently, these results show that the complex optical dielectric function can be used successfully for studying the band structure and estimating optical band gaps.
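The extrapolation step described above is straightforward to script. The following is a minimal sketch (not the authors' code) of how a Tauc-type band gap could be read off numerically: it assumes arrays of photon energy and absorption coefficient are already available, takes a user-chosen linear window, fits a straight line to (αhυ)^(1/γ), and returns its intercept with the energy axis.

import numpy as np

def tauc_band_gap(h_nu, alpha, gamma=0.5, fit_window=(3.0, 4.5)):
    """Estimate an optical band gap by linear extrapolation of (alpha*h_nu)**(1/gamma).

    h_nu       : photon energy values in eV (1-D array)
    alpha      : absorption coefficient at each photon energy (same shape)
    gamma      : 1/2 (allowed direct), 3/2 (forbidden direct), 2 or 3 (indirect)
    fit_window : (low, high) photon-energy range, in eV, assumed to be linear
    """
    h_nu = np.asarray(h_nu, dtype=float)
    y = (np.asarray(alpha, dtype=float) * h_nu) ** (1.0 / gamma)

    # Restrict the fit to the user-chosen linear region of the Tauc plot.
    mask = (h_nu >= fit_window[0]) & (h_nu <= fit_window[1])
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)

    # The extrapolated line crosses y = 0 at the band gap energy.
    return -intercept / slope

# Hypothetical usage with synthetic data (illustration only):
if __name__ == "__main__":
    e = np.linspace(2.0, 5.5, 200)
    alpha = np.sqrt(np.clip(e - 3.2, 0, None)) / e   # fabricated direct-allowed curve, Eg = 3.2 eV
    print(round(tauc_band_gap(e, alpha, gamma=0.5, fit_window=(3.6, 5.0)), 2))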
Reducing the optical band gap from 5.04 eV for the pure PMMA sample to 2.6 eV for the doped PMMA (GT 28) sample reveals that the extracted GT solution can modify the electronic structure of the host PMMA polymer, in particular the energy states between the valence and conduction bands. The band gap achieved for the dye-doped PMMA (GT 28) sample is smaller than the recently reported band gap of Alq3 (2.83 eV), a material that has gained great popularity among researchers owing to its wide applications in photo-detectors, photovoltaic cells, flat and flexible colour displays, and organic light-emitting diodes (OLEDs) [53].
Urbach Energy and Materials Structure

It has been established that the Urbach energy can be used to investigate the structure of polymeric materials through the detection of defect levels within the forbidden band gap [36]. The Urbach tail width was estimated through the relation α = α_0 exp(hυ/E_t) [54,55], where α_0 is a constant and E_t is the Urbach tail, which refers to the width of the band tails of the localized states. E_t can be determined from the reciprocal of the slope of the straight lines obtained from plots of ln(α) versus photon energy hυ (see Figure 12). The value of E_t determined for the pure PMMA sample is 157 meV, while it increases to 298 meV for the doped PMMA sample (GT 28). This increase of the Urbach energy can be attributed to the increased amorphous character of the dye-doped PMMA samples. Larger energy tails indicate the creation of disorder and imperfection in the band structure of the host material [56]. Prasher et al. have also confirmed that an increase of the Urbach energy is an indication of an increase of the amorphous fraction [57]. Figure 13 shows the XRD patterns of the pure (GT 0) and dye-doped (GT 28) PMMA samples. It is evident from the figure that the PMMA polymer exhibits two broad peaks. The broad peaks appearing around 2θ = 30° and 2θ = 43° reveal the amorphous structure of the pure PMMA polymer [58]. The disappearance of the broad peaks in the GT 28 sample reveals the amorphousness of the sample. From the Urbach energy study and the XRD analysis, it is understood that the structure of the materials and their optical electronic properties are strongly correlated.
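The reciprocal-slope rule quoted above follows directly from taking the logarithm of the exponential tail, using the same symbols as above:

\[ \ln\alpha = \ln\alpha_{0} + \frac{h\upsilon}{E_{t}} \quad\Longrightarrow\quad E_{t} = \left[\frac{d(\ln\alpha)}{d(h\upsilon)}\right]^{-1}, \]

so a straight-line fit of ln(α) versus hυ in Figure 12 yields E_t as the inverse of the fitted slope.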
The XRD results confirm that the samples are transformed into a completely amorphous phase after the addition of the extracted GT solution, and the obtained Urbach energy values strongly support the XRD results.

Conclusions

In this work, FTIR spectroscopy was used to investigate the miscibility of green tea (GT) dye and PMMA polymer. In the FTIR spectrum of the GT extract, obvious absorption bands at 3401 cm−1, 1628 cm−1, and 1029 cm−1, corresponding to O-H/N-H, C=O, and C-O groups, respectively, were observed. The shifting of the FTIR bands and the reduction in their intensity in the doped PMMA sample confirm complex formation between the host PMMA polymer and the GT dye. The results of this study are promising and reveal the possibility of modifying the insulating wide band gap PMMA polymer into a conjugated small band gap PMMA by the addition of extracted GT solution, which is an environmentally friendly material. The absorption edge was found to be 4.9 eV for pure PMMA and shifted to 2.61 eV for the dye-doped PMMA (GT 28) sample. This reveals that the wide band gap of PMMA was reduced to a narrow energy band gap. Such a noticeable decrease in the optical band gap of PMMA upon the addition of extracted GT solution makes it possible to consider this work as a basis for modifying other polar polymers to meet particular needs. Modified polar polymers with a small band gap and good film formation are crucial for solving problems, such as lifetime, cost, and flexibility, associated with conjugated polymers. The Urbach energy was found to increase from 157 meV for pure PMMA to 298 meV for the dye-doped PMMA (GT 28) sample. This increase was attributed to the dominance of the amorphous phase in the dye-doped PMMA samples, as supported by the XRD results.
10,182.2
2017-11-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
A Novel Dual-Mode Dual Type Hysteresis Schmitt Trigger and its Applications using a Single Differential Voltage Current Conveyor Transconductance Amplifier

This paper introduces a novel Schmitt trigger circuit that can operate in two modes, voltage mode and trans-impedance mode, using a sole Differential Voltage Current Conveyor Transconductance Amplifier (DVCCTA) within the same topology with external grounded resistors. The suggested designs enable dual-type hysteresis (clockwise (CW) and counter-clockwise (CCW)) simultaneously within the same circuit topology. Additionally, the proposed design includes the unique ability to control the threshold levels through the transconductance parameter (g_m) of the DVCCTA via a grounded resistor. The proposed Schmitt trigger is extended to a square/triangular waveform generator and a pulse width modulator to illustrate the utility of the given Schmitt trigger circuit. All the proposed designs are appropriate for IC integration because only grounded passive components are used. Moreover, the design offers independent control of the oscillation frequency through a grounded capacitor, which eliminates most of the parasitics and improves the circuit's noise immunity. The maximum absolute deviation of the output amplitude is observed to be less than 0.062 % (for CW mode) and 0.038 % (for CCW mode), while for the threshold voltages it is below 0.528 % (CW) and 0.321 % (CCW), respectively, for temperature variations of 0-100 °C. The DVCCTA is realized with 20 MOS transistors using 0.18 µm TSMC CMOS technology parameters, and the workability of the proposed design is verified through PSPICE. Additionally, Monte Carlo simulations, temperature-dependent variations, non-ideal analysis, a schematic layout with post-layout simulation, and experimental results using IC AD844 are presented to validate the proposed design. The simulated responses correlate with the theoretical predictions.

Nov dvonivojski histerezni Schmittov prožilec in njegova uporaba z uporabo enojnega diferencialnega napetostnega tokovnega transkonduktančnega ojačevalnika

1 Introduction

The present world of the electronics environment is augmented with filters, rectifiers, amplifiers, A/D converters, comparators, oscillators, and many more signal-processing circuits. Among these, a comparator circuit accompanied by positive feedback is known as a Schmitt trigger [1], which plays a vital role in both the analog and digital domains. A Schmitt trigger circuit transforms any irregularly shaped input signal into a square waveform and is commonly used to improve a circuit's immunity to noise. In addition, it is an essential block used in distinct applications such as square waveform generators [2], versatile modulators [3], relaxation oscillators [4], function generators [5], monostable multivibrators [6], pulse width modulators [7], switching power supplies [8], etc. Initially, the Schmitt trigger was presented with a traditional Op-Amp and passive components [1], but such realizations suffer from a finite gain-bandwidth product, high power dissipation, a low slew rate, a limited dynamic range, etc.
[9]. As an attractive strategy to overcome the constraints of the conventional Op-Amp [10], various current-mode analog active blocks (AAB) have been reported in the literature, namely the second-generation current conveyor (CCII) [11], third-generation current conveyor (CCIII) [12], operational transconductance amplifier (OTA) [13], operational trans-resistance amplifier (OTRA) [14], differential voltage current conveyor (DVCC) [15], dual-X current conveyor (DXCCII) [16], dual-X current conveyor transconductance amplifier (DXCCTA) [17], and many more. Among these current-mode active blocks, the DVCCTA is chosen here for its prominent feature of electronically adjustable transconductance in comparison with the Op-Amp. The usage of the DVCCTA extends across the field of signal processing for designing various circuits, namely analog filters [18]-[19], oscillators [20]-[21], simulators [22], and many others.

Numerous Schmitt trigger circuit implementations using distinct current-mode AABs have been reported in the literature. Some circuits based on the CCII are discussed in [23]-[27], but these implementations require either a higher number of active or passive elements or are incapable of providing a dual-type hysteresis mode of operation. In [27], a non-inverting Schmitt trigger is demonstrated, employing only a CCII and three passive components. This circuit functions as a zero-voltage comparator and is capable of adjusting the threshold voltage levels. A Schmitt trigger with independent current control of amplitude and frequency, using two OTAs along with two grounded resistors, is cited in [28]. In [29], a current-input dual-hysteresis-mode OTRA Schmitt trigger is presented with the possibility of changing its type of hysteresis with the help of a switch; however, it uses a floating resistor, which is not an advisable feature for IC implementation. An improved circuit design using the DXCCTA is given in [30] without any passive elements. This configuration reduces the influence of temperature variation on the output amplitude levels; however, it does not offer the capability to demonstrate both CW and CCW hysteresis. Another prominent configuration with a DVCC and two grounded resistors [31] has the capability to adjust the hysteresis by varying the value of a resistor. Two more designs, with a differential difference current conveyor (DDCC) and a current differencing transconductance amplifier (CDTA) with two resistors, are demonstrated in [32] and [33] but are unable to exhibit dual-type hysteresis. Aside from the above-mentioned circuits, dual-type hysteresis and independent, electronic control of the threshold and amplitude levels are viable through designs based on the current follower differential input transconductance amplifier (CFDITA) [34], current differencing buffered amplifier (CDBA) [35], and current controlled current differencing transconductance amplifier (CCCDTA) [36], but the circuit in [34] is unable to give both types of hysteresis simultaneously. The Schmitt trigger circuit mentioned in [37], which uses a single voltage differencing transconductance amplifier (VDTA) and one resistor, is unable to exhibit dual-type hysteresis, but it does have the ability to independently control the output amplitude levels. The circuit cited in [38] uses a single second-generation current controlled current conveyor (CCCII), a capacitor, and two resistors for generating a square waveform with a maximum power consumption of 600 µW, but it is unable to exhibit dual-type hysteresis. Square wave generators based on a second-generation
differential current conveyor (DCII), two resistors, and a sole capacitor in [39] and [40] have the advantage of reducing noise caused by parasitics through the use of a grounded capacitor. A recent proposal introduces a waveform generator utilizing commercially available ICs along with an extra-X second-generation current conveyor (EXCCII) prototype [42] and five passive components; that circuit benefits from the capability of independently controlling the oscillation frequency via passive elements. Also, an attempt was made to design a Schmitt trigger with a second-generation voltage conveyor (VCII) [43] comprising two active blocks and five resistors, exhibiting average power consumptions of 328 µW and 365 µW for transitions at the non-inverting and inverting mode outputs, respectively. Table 1 presents a comparative study with the earlier described Schmitt trigger circuits, which is summarized in detail in the comparison section.

Henceforth, this paper presents a Schmitt trigger with dual-type hysteresis within the same topology, available in two modes, specifically voltage and trans-impedance mode, with a small number of grounded passive components, and its application as a square/triangular wave generator and pulse width modulator. The design employs an AAB named DVCCTA for the Schmitt trigger operation. Additionally, the proposed design comes with the attractive feature of the availability of both modes (voltage and transimpedance) within the same circuit topology. Notably, the topologies provide the benefit of independent control of the threshold levels and the oscillation frequency. Simulation results using PSPICE with a CMOS-based DVCCTA and experimental verification using IC AD844 are examined to authenticate the theory. Also, the feasibility of the Schmitt trigger for different types of input voltages, temperature dependence, non-ideal analysis, and Monte Carlo analysis is illustrated.

The remaining sections of the paper are structured as follows. Section 2 discusses the circuit representation and analysis of the basic building block DVCCTA and the proposed dual-mode (voltage mode and transimpedance mode) Schmitt trigger. Section 3 focuses on how the proposed circuits can be extended to applications as a square/triangular waveform generator and a pulse width modulator. Section 4 gives the effect of non-ideal current and voltage transfer gains on the performance of the proposed Schmitt trigger. Section 5 presents the functional verification of the proposed circuits through simulation results. Section 6 gives further validation of the proposed designs through experimental analysis, followed by a comparative analysis with existing models in Section 7. Finally, Section 8 addresses the conclusion.

Circuit Representation and Analysis

2.1 DVCCTA

The DVCCTA, a simple active block, was first given in [44]; it is a combination of a DVCC [15], which resides at the input stage, and an OTA [13], which remains at the output stage. Fig. 1 depicts the hierarchical block, while Fig. 2 presents the commercially available IC AD844 implementation and the CMOS implementation [49] of the DVCCTA, respectively. The characteristic equations in (1) define the analog block, where g_m is the transconductance of the DVCCTA; this parameter is electronically tunable via an external biasing current (I_B), as described by equation (2).
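Written out for reference, the port relations underlying equation (1), assembled from the relations quoted in the derivations below (V_X = V_Y1 − V_Y2, I_Z+ = I_X, I_O− = −g_m V_Z+), and a commonly assumed bias dependence for equation (2), are:

\[ I_{Y1}=I_{Y2}=0,\qquad V_X = V_{Y1}-V_{Y2},\qquad I_{Z+}=I_X,\qquad I_{O-}=-g_m V_{Z+}, \]
\[ g_m \propto \sqrt{I_B},\qquad \text{e.g. } g_m=\sqrt{\mu_n C_{ox}\,(W/L)\,I_B}\ \text{for a CMOS OTA output stage (assumed form).} \]

The square-root bias dependence is consistent with the electronic tunability of g_m through I_B reported later (I_B = 60 µA giving g_m ≈ 0.9961 mS), but the exact constant should be taken as an assumption rather than the authors' equation.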
DVCCTA-based Voltage-Mode Schmitt Trigger

The proposed voltage-mode Schmitt trigger uses a single DVCCTA and only one grounded external resistor, with inputs V_in1 and V_in2 and output V_out, as shown in Fig. 3. With the selection of the input, dual-type hysteresis operation, namely CW and CCW, can be obtained, as disclosed below.

CCW Schmitt trigger: To enable the CCW mode of operation, the input V_in1 is driven through the Y1 terminal of the DVCCTA while the Y2 terminal is kept grounded. The output V_out is taken at the O terminal of the DVCCTA. Depending upon the input signal level, the square wave output saturates either at the positive saturation level +V_sat or at the negative saturation level −V_sat. From the routine analysis of the design, adopting the port relation of the DVCCTA (V_X = V_Y1 − V_Y2) with V_Y2 = 0 gives V_X = V_Y1 = V_in1, and equation (2) can be rewritten accordingly. The currents passing through the terminals Z+ and O− are equal (I_Z+ = I_O−) because of the short-circuit connection. From the port relations (I_Z+ = I_X; I_O− = −g_m V_Z+) given in equation (1), a further relation can be written. Using equations (4) and (5), and noting from the circuit that V_Z+ = V_out, the input voltage (V_in1) is expressed in terms of V_out, where g_m is taken as 1/R_m. The upper threshold voltage (V_TH) is calculated with the assumption that the initial value of the output is at −V_sat. As V_in1 increases from zero, V_out remains at −V_sat until V_in1 reaches V_TH. When the condition V_in1 > V_TH is satisfied, the output level changes from −V_sat to +V_sat. Subsequently, the lower threshold voltage (V_TL) is obtained. The output level (+V_sat) is maintained for input V_in1 > V_TL. The corresponding value of the hysteresis is then calculated.

CW Schmitt trigger: This mode of operation is controlled by V_in2 through the inverting Y2 terminal of the DVCCTA, while V_in1 is grounded. The O terminal provides the output V_out, as shown in Fig. 3. The circuit analysis is the same as in the CCW mode, and the voltage V_in2 can be expressed as in equation (10). The hysteresis operation is observed to be the reverse of the CCW operation, with the assumption that the initial value of the output is at +V_sat. Therefore, V_TH and V_TL are observed to be the same as in equations (7) and (8), respectively.

Transimpedance-mode Schmitt trigger: The transimpedance-mode Schmitt trigger circuit is realized using the same topology as in Fig. 3 by adding external grounded resistors at the Y terminals, with a current input (I_in). Dual-type hysteresis is available within the same topology with the selection of the input as either I_in1 or I_in2, as disclosed in this section. The CCW transimpedance-mode Schmitt trigger is enabled through the input I_in1 at the Y1 terminal of the DVCCTA when the Y2 terminal is grounded, and the output V_out is observed at the O terminal of the DVCCTA, as shown in Fig. 4. The principle of operation is the same as for the voltage-mode Schmitt trigger, considering the initial value of the output to be at −V_sat. By considering the ideal characteristics from equation (1), the input current (I_in1) is obtained, where g_m is taken as 1/R_m.
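To make the CCW voltage-mode relations above concrete, the following is one plausible reading (a sketch only, assuming the single grounded resistor R is connected at the X terminal and writing g_m = 1/R_m, consistent with the port relations quoted above; the exact signs depend on the actual connection in Fig. 3):

\[ I_X=\frac{V_{in1}}{R},\qquad I_{Z+}=I_X=I_{O-}=-g_m V_{out}\;\Rightarrow\; V_{in1}=-\frac{R}{R_m}V_{out}, \]
\[ V_{TH}=+\frac{R}{R_m}V_{sat},\qquad V_{TL}=-\frac{R}{R_m}V_{sat},\qquad \Delta V = V_{TH}-V_{TL}=\frac{2R}{R_m}V_{sat}. \]

With the values used later in the simulations (R = 500 Ω, R_m = 1 kΩ), this sketch places the thresholds at ±0.5 V_sat, which is in line with the reported controllability of the threshold levels through R_m.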
The upper and lower threshold currents (I_TH, I_TL) can then be expressed accordingly. The CW transimpedance-mode Schmitt trigger is enabled through the input I_in2 at the Y2 terminal of the DVCCTA while the other input terminal Y1 is grounded. The O terminal provides the output V_out, and the hysteresis operation is observed to be the reverse of the CCW type. Likewise, the input current (I_in2) is given in equation (14), and I_TH, I_TL are the same as in the CCW case, since the initial value of the output is assumed to settle at +V_sat.

3.1 Square/Triangular Waveform Generator: To illustrate the usefulness and practical application of the introduced work, a Schmitt trigger-based square/triangular waveform generator is depicted in Fig. 5. The waveform generator incorporates the proposed voltage-mode Schmitt trigger design along with an integrator built from a CFOA, a resistor, and a capacitor. The proposed scheme adopts only grounded passive components; in particular, the grounded capacitor reduces the level of parasitics introduced during fabrication. The circuit produces a square wave at the V_out1 terminal and a triangular wave at the V_out2 terminal. The Schmitt trigger saturates either at +V_sat or at −V_sat. Initially, assuming that the square-wave output V_out1 is at +V_sat, this voltage makes the capacitor C_0 charge with current I_Z, so that V_out2 increases linearly with a positive slope until V_TH is reached. Subsequently, the square waveform output changes to −V_sat, which makes the capacitor C_0 discharge, and V_out2 decreases linearly with a negative slope until it reaches the V_TL of V_out1. The charging and discharging intervals of the capacitor follow from these relationships. With V_TH and V_TL originating from equations (7) and (8), the time period (T = T_1 + T_2) of the waveform, and subsequently the frequency, can be computed.

3.2 Pulse Width Modulator (PWM): The PWM scheme is extensively used in voltage regulation, communication systems, power conversion control circuits, ADCs, instrumentation systems, and digital audio [45]-[48]. In this technique, the pulse width of the modulated output is altered according to the voltage level of the input modulating signal. A PWM output signal is most often produced by comparing a modulating signal with a carrier waveform such as a triangular or sawtooth waveform. The proposed design is well suited for realizing the pulse width modulator displayed in Fig. 6, which consists of a single DVCCTA acting as a comparator and a resistor. The modulating signal V_in is applied through input terminal Y2, the voltage V_C across the Y1 terminal operates as the carrier signal, and the required PWM output V_out is taken at the Z terminal of the DVCCTA. The feasible saturation levels of V_out are +V_sat and −V_sat. As V_C increases from zero and moves towards the input modulating signal V_in, the PWM output voltage remains at −V_sat; once V_C reaches V_in and the condition V_C > V_in is satisfied, V_out changes its state from −V_sat to +V_sat. This level is maintained until the carrier voltage decreases and the condition V_C < V_in is satisfied, at which point the PWM output state changes back to −V_sat. Employing the terminal characteristics given in equation (1), the current through the X terminal can be written, and since I_Z+ = I_X and I_O− = −g_m V_Z from equation (1), I_Z+ and I_O− are equated because of the short-circuit connection, with g_m considered as 1/R_m.
By making use of equations (22) and (23), the output voltage V_out of the PWM is determined.

Non-ideal Analysis

To determine the non-ideal response of the proposed Schmitt trigger, various non-idealities of the DVCCTA are included. The tracking errors in equation (25) show the deviation from the ideal DVCCTA properties. Here, α represents the current transfer gain, β1 and β2 denote the voltage transfer gains, and γ represents the transconductance gain. The numerical relations between α and the current tracking error (ξ_i), and between β and the voltage tracking error (ξ_v), can be expressed accordingly. Further analysis of the proposed circuits using equation (25) is as follows. For CCW operation, taking V_X = β1 V_in1 and V_Y2 = 0, equation (3) can be rewritten; also considering I_Z+ = αI_X and I_O− = −γ g_m V_Z+, equation (5) can be rewritten. From equations (27) and (28), the input voltage V_in1 is expressed with the non-ideal gains included. For the CW mode, the circuit analysis is similar to the CCW mode, and the voltage V_in2 can be expressed according to equation (10).

Simulation Results

The proposed voltage-mode Schmitt trigger design illustrated in Fig. 3 is examined with both CMOS-based and IC AD844-based DVCCTA realizations using PSPICE with 0.18 µm CMOS technology parameters from TSMC. The passive attributes are selected as R = 500 Ω and R_m = 1 kΩ with a 50 Hz sinusoidal input voltage of amplitude ±8 V. Fig. 7 depicts the simulated input and output characteristics for the CFOA (current feedback operational amplifier)-based implementation of the proposed design. In addition, the transient response of the proposed Schmitt trigger circuit utilizing the CMOS implementation is illustrated in Fig. 8. The CMOS-based DVCCTA is biased with a supply voltage of V_DD = −V_SS = 1.4 V, V_B = −0.4 V, and I_B = 60 µA (g_m = 0.9961 mS), with R = 500 Ω; here, g_m is calculated according to equation (2). The aspect ratios of the MOS transistors are provided in Table 2. Besides, the input and output characteristics of the proposed transimpedance-mode Schmitt trigger, with a 50 Hz sinusoidal current input (I_in) of ±2 mA amplitude and R = R_1 = R_2 = 1 kΩ, R_m = 10 kΩ, are shown in Fig. 9 and Fig. 10. Furthermore, to check the proposed design's workability at higher frequencies, a 5 MHz sinusoidal voltage waveform with amplitude ±8 V is applied to the voltage-mode Schmitt trigger design. Fig. 11 shows that the amplitude levels are not distorted at higher frequencies, which further confirms the capability of the Schmitt trigger circuits over a wide frequency range. The CCW Schmitt trigger exhibits a −3 dB bandwidth at approximately 12.86 MHz, while the CW mode shows one at around 10.82 MHz. In order to evaluate the temperature stability of the proposed Schmitt triggers, the output (V_out) is observed at different temperatures, specifically 27 °C, 50 °C, 75 °C, and 100 °C. As shown in Fig. 12, the amplitude and the threshold levels of the square wave are not adversely affected by temperature variations. To further quantify the extent of the deviation in the amplitude and threshold levels, it is checked for temperature variations of 0-100 °C. Notably, the findings from Figs. 13 and 14 reveal that the maximum absolute deviation in output amplitude remains below 0.0625 % (for the CW Schmitt trigger) and 0.0381 % (for the CCW Schmitt trigger), while for the threshold voltages the magnitude is less than 0.5 % (for both CW and CCW modes). Moreover, the stability of the output amplitude levels through Monte Carlo analysis at temperatures of 27, 50, 75, and 100 °C, considering over 200 random points with a 5 % tolerance in resistor values, is depicted in Fig. 15. Finally, the circuit in Fig. 6 is used for the generation of the PWM signal, and its output is depicted in Fig. 24 with the selection of R = 1 kΩ and R_m = 10 kΩ and an input voltage V_in of a 50 Hz sinusoid with an amplitude of 8 V_pp. The carrier waveform V_C, of about 500 Hz, is set to be a triangular wave. It is obvious that the pulse width of V_out is modulated according to the input modulating sinusoidal signal.
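As a purely behavioral illustration of the PWM operation just described (not a transistor-level model of the DVCCTA; the comparator action and the 50 Hz / 500 Hz signal choice are taken from the text, while the sampling rate, carrier amplitude, and saturation level are assumed), a short numerical sketch is:

import numpy as np

# Behavioral sketch of the described PWM: the output sits at -Vsat while the
# triangular carrier Vc is below the modulating signal Vin, and at +Vsat otherwise.
fs = 100_000                      # assumed sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)    # one 50 Hz modulating period

v_sat = 1.4                                        # assumed saturation level, V
v_in = 4.0 * np.sin(2 * np.pi * 50 * t)            # 50 Hz modulating signal (8 Vpp)
# 500 Hz triangular carrier spanning the same +/-4 V range (assumed amplitude)
carrier = 4.0 * (2 * np.abs(2 * ((500 * t) % 1) - 1) - 1)

v_out = np.where(carrier > v_in, v_sat, -v_sat)    # comparator with saturation

# The duty cycle over the window tracks the modulating signal level.
print(f"mean duty cycle: {np.mean(v_out > 0):.2f}")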
Conclusion

A novel dual-mode Schmitt trigger employing a single DVCCTA, together with its application to a square/triangular wave generator (constructed using an additional CFOA, a grounded capacitor, and a grounded resistor, which form an integrator) and a pulse width modulator (PWM) within the same topology, is presented. The proposed design offers two modes, specifically the voltage and transimpedance modes, where the CW and CCW types of operation are acquired within the same topology on the basis of the selection of the input. It uses only grounded passive elements and a single CMOS-based DVCCTA, which makes it suitable for IC integration. Additionally, independent control of the threshold levels is available. Tunability through grounded components is a prominent feature of the design: the operating frequency can be adjusted using a grounded capacitor, which reduces the level of parasitics and makes the proposed design insensitive to noise. The highest absolute deviation of the output amplitude and the threshold voltage is less than 0.062 % and 0.528 %, respectively, over temperature variations ranging from 0-100 °C. The design brings dual-mode, dual-type hysteresis operation, an excellent operational frequency range, and insensitivity to temperature variations. Monte Carlo simulations, non-ideal analysis, and experimental results, as well as the schematic layout with post-layout simulation results, are presented to justify the considered structure. The unique characteristics of the proposed designs make them applicable to bio-medical and other signal processing applications, and they can be extended to the design of relaxation oscillators, versatile modulators, monostable multivibrators, etc.

Fig. 17 depicts the simulated responses of both the CW and CCW structures of the proposed voltage-mode circuit for a triangular wave input with a frequency of 50 Hz and an amplitude of ±4 V. The output is observed to be a square wave regardless of the input waveform type. This key characteristic demonstrates the versatility and feasibility of the circuit, as it can effectively process and convert different types of input signals.

Figure 3: Proposed voltage mode Schmitt trigger circuit
Figure 5: Square/triangular wave generator
Figure 16: Monte Carlo simulations of output amplitude for the CW mode Schmitt trigger at (a) 27 °C, (b) 50 °C, (c) 75 °C, (d) 100 °C
Figure 17: Triangular wave V_in and V_out waveforms, (a) CW, (b) CCW
From Figs. 18 and 19, it is observed that the threshold levels of the proposed IC AD844-based voltage-mode and transimpedance-mode Schmitt trigger circuits can be electronically controlled by adjusting the transconductance parameter (g_m) through the relationship g_m = 1/R_m, without disturbing the output amplitude. Figs. 20 and 21 illustrate the theoretical and simulated threshold voltages against the variation of R_m for the dual-type voltage-mode Schmitt trigger, respectively. It is observed that the simulated threshold values agree well with the theoretical anticipation. Overall, the analysis highlights the controllability of the threshold levels through R_m.

Figure 18: DC transfer characteristic of the voltage mode Schmitt trigger for different R_m
Figure 19: DC transfer characteristic of the transimpedance mode Schmitt trigger for different R_m
Figure 22: The operating frequency variation with changes in C_o
Figure 27: Modulating input V_in and PWM output waveform
Table 3: Summary of key attributes of the proposed Schmitt trigger circuits
5,465.6
2024-05-27T00:00:00.000
[ "Engineering", "Physics" ]
Ultra-Short-Term Photovoltaic Power Prediction Model Based on the Localized Emotion Reconstruction Emotional Neural Network

Due to the intermittency and randomness of photovoltaic (PV) power, the prediction accuracy of traditional data-driven PV power prediction models is difficult to improve. A prediction model based on the localized emotion reconstruction emotional neural network (LERENN) is proposed, which is motivated by chaos theory and the neuropsychological theory of emotion. Firstly, the chaotic nonlinear dynamics approach is used to draw out the hidden characteristics of the PV power time series, and the single-step cyclic rolling localized prediction mechanism is derived. Secondly, in order to establish the correlation between the prediction model and the specific characteristics of the PV power time series, the extended signal and emotional parameters are reconstructed on a relatively certain local basis. Finally, the proposed prediction model is trained and tested for single-step and three-step prediction using actual measured data. Compared with prediction models based on the long short-term memory (LSTM) neural network, the limbic-based artificial emotional neural network (LiAENN), the back propagation neural network (BPNN), and the persistence model (PM), numerical results show that the proposed prediction model achieves better accuracy and better detection of ramp events for different weather conditions when using only PV power data.

Introduction

In response to the need to reduce carbon emissions caused by fossil fuels, and following the trend of global environmental protection, photovoltaic (PV) generation has been widely used as one of the environmentally friendly power generation alternatives. However, PV power shows high intermittency and randomness due to the impacts of various meteorological factors, which hinders the development of grid-connected PV power systems [1]. Ultra-short-term PV power prediction is considered for intra-hour prediction. The accurate prediction of PV power from a few seconds to one hour ahead is important to assure grid quality and stability and can effectively help the grid to perform power smoothing [2]. Therefore, an effective and accurate prediction model for PV power is of great importance. Physical methods and statistical methods can be used for ultra-short-term PV power prediction [3,4]. Physical methods are based on physical equations describing the laws of solar radiation and the operation of PV modules, as well as detailed data from numerical weather prediction (NWP) [5]. The cloud-image-based prediction method, among the physical methods, can achieve high-precision ultra-short-term PV power prediction by monitoring cloud movement [6-8]. The physical method does not require a lot of historical data, but it is difficult to simulate some extreme weather conditions. So far, emotional neural networks (ENNs) have not been applied to the field of PV power prediction. In this paper, an ultra-short-term PV power prediction model with localized emotion reconstruction in the LiAENN is proposed, which is combined with the idea of phase space reconstruction in chaotic time series analysis. The contributions of this paper are as follows: (a) Chaos theory is combined for the first time with the neuropsychological theory of emotion to improve the LiAENN-based model; the proposed LERENN-based prediction model provides a new direction for ultra-short-term PV power prediction.
(b) By mining the hidden information of the PV power time series and deriving the single-step cyclic rolling localized prediction mechanism, the influence of human subjective factors in the prediction process can be reduced. (c) Reconstructing the extended signal and emotional parameters according to the derived single-step rolling localized prediction mechanism makes the correlation between the prediction model and the characteristics of the PV power time series more accurate, which can further improve prediction accuracy.

The Neuropsychological Aspect of Emotion

Emotion plays an essential role in the human cognition and perception process, which happens in the limbic system. Figure 1 shows the schematic diagram of the interaction of the amygdala with other brain systems in the limbic system. The limbic system commonly includes the amygdala, orbitofrontal cortex (OFC), thalamus, sensory cortex, hypothalamus, and hippocampus. It can be clearly seen that the amygdala is highly connected with other limbic system components, such as the thalamus, sensory cortex, and OFC. The amygdala is responsible for dealing with emotional stimuli, which come from two pathways: one is directly transmitted by the thalamus, which is short and inaccurate; the other is derived from the sensory cortex, which is long but accurate. The important position of the amygdala in emotional processing indicates that it is the centerpiece of neuroeconomic decision-making. In the LiAENN of Figure 2, the amygdala and OFC modules are expanded into two layers with two hidden neurons and a single output neuron, using biases and an activation function f, to introduce the anxiety-confidence emotional states into the network [30]. The emotional stimuli P_q (q = 1, ..., n), as the input patterns, enter the thalamus and then go to the sensory cortex. The amygdala receives the input information from the sensory cortex. The thalamus also maps the expanded signal P_{n+1}, extracted from the input information, directly to the amygdala. The emotional output of the amygdala is Ea. The OFC produces the emotional output Eo, which inhibits the inaccurate emotional response of the amygdala and determines the final emotional output E. The dashed lines in Figures 1 and 2 represent the feedback effects of the resultant emotional response. In the LiAENN, the target value of the input pattern, T, which controls the feedback effects, is employed to adjust the amygdala weights v and the OFC weights w. Additionally, to control the effects of using targets, the network uses a decay rate to simulate the forgetting behavior of the amygdala. The LiAENN is trained with the anxious confident decayed brain emotional learning (ACDBEL) rules, wherein the added anxiety-confidence emotional states and their attentional effects are used in the learning of the amygdala. The emotionally derived concepts in the LiAENN provide a new direction for its application to the ultra-short-term prediction of PV power.

The Limitations of the Expanded Signal

The existence of the short path not only allows the model to react faster to a wide range of stimuli, but also provides another pathway for emotional learning if the long path is damaged. However, inappropriate expanded signals also bring greater challenges to the prediction application, such as interfering with other input information or leading to redundancy of information. The two-layer architecture of the LiAENN has two short paths, which increases the inaccuracy of the information transmitted through the short paths.
The expanded signal, which is transmitted through the short path, is usually calculated using a nonlinear function such as a mean operator or a max operator. A mean operator represents the average value of the input signals and is used to simulate the average trend of the input signals. A max operator, which is the maximum value of the input signals, is chosen to simulate the expanded signal in most ENNs. Although the applications of ENNs are becoming more and more mature, it is still not clearly established whether the expanded signal should follow a uniform rule or be chosen based on the particular application. However, the precise definition of the extended signal is critical to the accuracy of the prediction. In this paper, the expanded signal is chosen according to the chaotic characteristics of the PV power time series and the prediction mechanism.

The Limitations of the Emotional Parameters

The LiAENN is distinctive because more emotional concepts are involved in the emotional computing. The added emotional parameters, referring to anxiety and confidence, more closely mimic the attentional behavior of human learning. The confidence and anxiety variables are influenced by the perceived objects. Emotional psychology theory holds that a new learning task will bring a high initial anxiety level and a low confidence level, whereas proficiency gained through practice will lead to a lower level of anxiety and a higher level of confidence. Confidence makes the previous update occupy a dominant position. Anxiety has an enhancing effect on the latest errors, which effectively slows down the learning of new tasks. Hence, anxiety can be seen as a feature of attention focusing on learning about new and "interesting" data. The choice of these data should be considered in terms of the interaction mechanism between emotion and attention. The amygdala is responsible for attentional behavior, which can eliminate interfering items from the desired target objects to obtain salience. The interaction mechanism between emotion and attention can be summarized as follows: attention is the first step in emotion processing, and, conversely, emotional function helps to guide attention to a great extent. The important role of the amygdala in attention and memory suggests that quick, low-level, automatic emotional responses are derived from the most important stimuli associated with survival [31]. Hence, these new and "interesting" data should be determined according to the specific application. In the face detection and emotion recognition experiments, the new and "interesting" data are sourced from the average value of the global input signals, whose goal is to mimic trends in human emotional judgments and preferences based on general impressions, rather than precise details of the perceived objects. The fluctuation of wind power is mainly reflected in hourly fluctuations, whereas PV power shows stronger fluctuations within a few minutes. Therefore, for ultra-short-term PV power prediction, the global average of the input signals does not provide the most direct stimulus to the emotional learning of the network. It is especially important to pick out the most prediction-relevant information from the input signals. The improvement of the LiAENN therefore concentrates on tracking the detailed information of the input signals, which is crucial for ultra-short-term PV power prediction. The construction of the emotional parameters is given a relatively certain local basis using chaos theory.
Chaotic Time Series Analysis If the behavior of the observed time series data is chaotic, it can be assumed that the behavior follows a certain deterministic law in a high-dimensional phase space. Considering the chaotic characteristics of PV power helps to better explore the relationship between emotion and attention. The key point of this approach lies in the phase space reconstruction of the dynamics, which aims to map the historical time series into a high-dimensional phase space and then extract and restore the original law. The original law is a kind of trajectory in the high-dimensional space, which is called the chaotic attractor [32]. For a chaotic system, the phase space is defined as a vector space R^m, where each point is represented by an m-dimensional vector r(t) = [r_1(t), r_2(t), ..., r_m(t)], where t is the index of the time series and m is the dimension of the vector space. According to Takens' embedding theorem, the value of r(t) and its related components r_1(t), r_2(t), ..., r_m(t) are unknown in the chaotic system. However, the evolution of any component of the system can be determined by the other components interacting with it, so the information of these related components is implicit in the development of any single component. This means that if a single quantity or variable x(t) can be observed from a chaotic system, the chaotic attractor can be recovered from the reconstructed dynamics X(t) = [x(t), x(t + τ), x(t + 2τ), ...] after a certain time delay τ, which is geometrically similar to the original attractor. Therefore, the reconstructed phase space X(t) → X(t + τ) can be used to reflect the unknown dynamics of the actual system r(t) → r(t + τ) [33]. The future value of the system at time t + τ can then be determined through a nonlinear function f : R^m → R^m, which describes the system; the arrows here denote mappings between the corresponding spaces. Thus, although the PV power time series appears random, its deterministic behavior can be described in the embedded phase space. The first step in reconstructing the PV power chaotic time series into phase space points is to determine the embedding dimension m and the delay time τ based on the embedding theorem. Due to its small computational burden and strong anti-noise performance, the C-C method is used to calculate the phase space reconstruction parameters [23]. It is a time-delay window technique based on the time series itself. The delay time τ is obtained by multiplying the delay amount l by the sampling time ∆t. Taking into account the discreteness of the sampled data, we use the delay amount l instead of the delay time τ. First, the correlation integral of the PV power time series x(i) (i = 1, 2, ..., N) is defined, where N is the length of the time series, M is the number of delay vectors, r_d (r_d > 0) is the spatial distance threshold, and H(a) is the Heaviside step function, i.e., H(a) = 0 if a < 0 and H(a) = 1 otherwise. X_e and X_f are random point vectors of the PV power output time series in the reconstructed phase space, and the infinity norm is used as the distance between X_e and X_f. The BDS (Brock-Dechert-Scheinkman) statistic is applied to obtain an appropriate estimation of m and r_d; r_d is chosen as r_d = d·σ/2, where d ∈ {1, 2, 3, 4} and σ is the standard deviation of the time series. Second, the PV power output test statistics are computed.
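As a concrete illustration of the delay embedding underlying these statistics, the following sketch constructs the delay vectors from a scalar series for a given delay amount l (in samples) and embedding dimension m. It is a generic Takens-style embedding, not the authors' code, and the example parameters are purely illustrative.

```python
import numpy as np

def delay_embed(x: np.ndarray, m: int, l: int) -> np.ndarray:
    """Return the matrix of delay vectors X(t) = [x(t), x(t+l), ..., x(t+(m-1)l)]."""
    n_vectors = len(x) - (m - 1) * l
    if n_vectors <= 0:
        raise ValueError("time series too short for the chosen m and l")
    return np.column_stack([x[i * l : i * l + n_vectors] for i in range(m)])

# Example with illustrative parameters (the paper determines m and l with the C-C method)
x = np.sin(0.1 * np.arange(500)) + 0.05 * np.random.randn(500)
X = delay_embed(x, m=5, l=12)
print(X.shape)  # (number of phase space points, m)
```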
Considering the limited sequence length and the possible relationships among the time series data points, we divide the PV power output time series x(i) into l sub-sequences of length N/l. We define CC = 1 as an intermediate variable. The system test statistic S(l) can then be found when N is large enough (in theory, as N approaches infinity). The first zero crossing of S(l) is selected as the optimal delay amount l_opt for the phase space reconstruction of the PV power output time series. Next, the difference between the maximum and minimum values of S(l) over r_d is taken for the same m and l, and ∆S(l) is defined as the average of this difference over the different dimensions m. On account of the finite length and noise of the time series data, S(l) may not reach a zero crossing; in that case, the first local minimum of ∆S(l) is chosen to determine the optimal delay amount l_opt of the phase space reconstruction. A further statistic S_cor(l) is then defined, and its global minimum corresponds to the optimal estimate l* of the average trajectory cycle, from which the best embedding dimension m_opt is obtained. The PV power time series x(i) can then be embedded in the m-dimensional space by constructing the delay vectors X. The Single-Step Cyclic Scrolling Localized Prediction Mechanism The single-step cyclic scrolling localized prediction mechanism [34] is described as follows: x(t_1) is assumed to be the first power value to be predicted and x(t_0) is a known quantity. When the power x(t_1) is to be predicted at time t_1, a correspondence between X(t_0) and x(t_1) holds, where the arrow represents the corresponding relationship between input and output when the model predicts the power value x(t_1) at time t_1. The delay vector X(t_0) is imported into the trained model and the single-step prediction is performed, so the predicted power x(t_1)_pre at time t_1 is obtained. When the power at time t_2 is to be predicted, considering that the actual PV power x(t_1)_real at time t_1 is by then available, x(t_1)_real is added to the last position of the phase space vector X(t_1) (i.e., x(t_1) = x(t_1)_real). Based on the phase space reconstruction, a new chaotic phase space based on x(t_1) is constructed. The delay vector X(t_1) is then imported into the trained model, and the power x(t_2) at time t_2 can be predicted. This realizes the effect of rolling forward one step, and repeating the cycle realizes the ultra-short-term prediction for every moment of the future day. The pattern-target samples extracted from the PV power chaotic time series are shown in Table 1, and the whole prediction mechanism is shown in Figure 3. It is worth noting that each update forms a new set of chaotic phase space points, and the only unknown value in the actual prediction process is the last phase space point of each delay vector, which can be defined as the prediction center point. The mapping relationship is thereby more precise. This prediction mechanism ensures that the model can be adjusted by the pattern-target samples and that the next predicted value is not affected by the previous predicted value, which avoids the problem of error accumulation in rolling prediction. A. Expanded Signal Based on the aforementioned analysis, the actual prediction process can be described as a localized rolling prediction between a single delay vector with m input components and one output component.
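A minimal sketch of this single-step cyclic rolling behavior is given below: after every one-step forecast, the newly measured value (not the forecast) is appended to the history before the next delay vector is formed, which is why prediction errors do not accumulate. The function and variable names are ours, and `model_predict` stands in for the trained prediction model.

```python
import numpy as np

def rolling_one_step_forecast(model_predict, history, new_measurements, m, l):
    """
    Single-step cyclic rolling prediction (illustrative sketch, not the authors' code).
    model_predict    : callable mapping a delay vector of length m to one scalar forecast.
    history          : observed values up to the current time.
    new_measurements : actual values that become available one by one after each step;
                       they (not the forecasts) are appended to the history.
    """
    history = list(history)
    forecasts = []
    for x_real in new_measurements:
        # Delay vector ending at the most recent *measured* sample
        last = len(history) - 1
        X_t = np.array([history[last - (m - 1 - i) * l] for i in range(m)])
        forecasts.append(model_predict(X_t))   # predict the next value
        history.append(x_real)                  # roll forward with the real measurement
    return forecasts
```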
The mapping relationship shown in Table 1 indicates that the prediction center point is critical to the predicted value, and it is therefore chosen as the expanded signal. B. Emotional Parameters The motivation for modifying the emotional parameters is the human cognitive process when facing new learning tasks. For the ultra-short-term prediction of PV power, the delay vector X is a time-dependent sequence transformed from the initial observation x(i) by stretching and folding. According to the prediction mechanism and the embedding theorem, the network tracks one pattern at a time. The anxiety level is affected by each pattern-target sample that is exposed to the network, and the effect of each component of a single pattern on the anxiety level increases with time. The predicted value is determined by x(t_j) to a large extent. Based on the above analysis, the emotional parameters can be modeled within the network configuration by paying attention to the details of each pattern-target sample instead of the general impression. The anxiety coefficient and the confidence coefficient are denoted by µ and k, respectively. Their initial values are set to "1" and "0", respectively, which means that a new learning task, such as the first iteration, needs more attention to be devoted to the learning of the prediction model. With the deepening of learning, i.e., the increase of iteration steps, the decrease of the anxiety level means that the derivative of the error of the training patterns is valued less and less by the network. Conversely, increasing attention is attached, via the confidence level, to the changes that previous iterations have made to the weights. Therefore, the minimization of the error brings about a high level of confidence and a low level of anxiety. Anxiety and confidence maintain a balance of attention between the previous iteration and the subsequent iteration. The anxiety coefficient µ(t_j) at each time, the err feedback of each pattern-target sample at each time, the final anxiety coefficient at the ς-th iteration, and the confidence coefficient at the ς-th iteration are defined in Equations (13)-(16), where µ_0 is the value of the anxiety coefficient at the first iteration. After the localized emotion reconstruction, the prediction model is trained to capture the functional relationship among the given phase space points. Finally, the weights and biases of the trained model are maintained to predict the future values of the phase space points. The future values of the time series are obtained when the unknown phase space points are predicted. Only the amygdala involves the emotional states, and the above process is presented in Figure 4. Feed Forward Computations For the amygdala, the detailed steps are as follows: i. Input Layer to Hidden Layer In Figure 4, the delay vector X(t_j) (j = 0, 1, ..., M-1) enters the amygdala as the n inputs P_q (q = 1, ..., n) coming from the sensory cortex; meanwhile, the expanded signal P_{n+1} enters the amygdala as an additional input. For the amygdala, h_i is the i-th (i = 1, 2) neuron in the hidden layer. ba^1_i is the i-th bias neuron in the hidden layer, which is set to "+1". Ea_hi is the weighted sum of the inputs to the i-th neuron in the hidden layer, which can be expressed as in Equation (17). f^1_a is the activation function of the hidden layer.
Ea_i is the activated value of the i-th neuron, i.e., the final output of the hidden layer, which can be expressed as in Equation (18). v^1_{q,i} is the amygdala weight associated with the connection between the q-th neuron in the input layer and the i-th neuron in the hidden layer. ii. Hidden Layer to Output Layer Similarly, the output value of the output layer in the amygdala is calculated, where v^2_{i,1} (i = 1, 2) is the amygdala weight in the output layer located between the i-th neuron in the hidden layer and the output neuron, ba^2_1 is the bias neuron in the output layer, and f^2_a is the activation function of the output layer. In the same way, the output Eo of the OFC can be obtained, and the final output E can then be calculated. Backward Learning Computations The backward learning computations are aimed at updating the learning weights of the amygdala and the OFC, similarly to the error back-propagation algorithm. As can be seen from Figure 2, the output error of the amygdala is err, as shown in Equation (22), where T is the target value, err is a result of the feed-forward computations, and Ea is the output of the amygdala. The aim of the training process is to minimize this error over the training patterns. For the output layer neuron, a quantity called the error signal, represented by ∆J_a, is defined, and an analogous error signal is defined for the first hidden neuron. The learning weights of the first hidden neuron are then updated accordingly. In particular, due to the expanded signal from the thalamus, µ and k are updated based on Equations (13)-(16) at each iteration, η is the learning coefficient, and γ is the decay rate in the amygdala learning rule. The weight v^2_{1,1} and the bias ba^2_1 are adjusted in the same manner. The updating between the second hidden neuron and the output neuron is similar to the backward learning computations of the OFC, so the details are not repeated here. The Ultra-Short-Term PV Power Prediction Framework Based on the Localized Emotion Reconstruction Emotional Neural Network After collecting the PV power time series data, the prediction can be implemented with the following steps: (a) After data normalization, the phase space reconstruction of the obtained PV power time series is performed. (b) Construct the localized emotion reconstruction emotional neural network (LERENN)-based model; the overall frame structure of the prediction model, especially the number of input nodes and output nodes, is determined based on the data matrix of the phase space point vectors. Additionally, the initial values of the emotional parameters are set. (c) Import the phase space points into the model; the proposed model is trained with the pattern-target pairs to capture the functional relationships among the given phase space points. The total training process includes the feed-forward computations, the emotional parameter settings, and the backward learning computations. Among them, the setting of the emotional parameters is carried out in accordance with Section 2.3.3. (d) The weights and biases of the trained model are maintained to predict the future values of the phase space points. (e) Repeat the above steps to perform the prediction. The corresponding prediction process of PV power based on the proposed model is shown in Figure 5.
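For orientation, a minimal numerical sketch of the amygdala feed-forward path described above is given below: two hidden neurons, bias inputs fixed to +1, and the expanded signal appended to the input pattern. The tanh activations, the random weights and the final combination E = Ea - Eo (as in generic brain-emotional-learning models) are assumptions for illustration only and are not taken verbatim from the paper.

```python
import numpy as np

def amygdala_forward(P, v1, v2, f=np.tanh):
    """
    Feed-forward pass of a two-hidden-neuron amygdala module (illustrative sketch).
    P  : input vector of length n+1 (the n pattern components plus the expanded signal).
    v1 : weight matrix of shape (2, n+2), each hidden neuron sees the inputs plus a +1 bias.
    v2 : weight vector of shape (3,), the output neuron sees 2 hidden activations plus a +1 bias.
    """
    x = np.append(P, 1.0)                 # append the bias input ba^1 = +1
    hidden = f(v1 @ x)                    # Ea_i, activated hidden outputs
    out = f(v2 @ np.append(hidden, 1.0))  # Ea, output of the amygdala
    return out

rng = np.random.default_rng(0)
P = rng.uniform(0, 1, size=6)              # 5 pattern components + 1 expanded signal
Ea = amygdala_forward(P, rng.normal(size=(2, 7)), rng.normal(size=3))
Eo = 0.1                                   # stand-in for the OFC output
E = Ea - Eo                                # assumed BEL-style combination of Ea and Eo
print(E)
```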
Description of Dataset The grid-connected PV power station built by the National Institute of Standards and Technology (NIST) in Gaithersburg, MD campus can provide the high-resolution, low uncertainty, comprehensive PV output power data for extended, continuous time periods. There is a single inverter at the station that is connected to the local grid via the NIST campus grid [35]. In this paper, the data of 70 days in the third quarter of 2015 were selected for simulation. Sampling was done daily from 6:00 am to 7:00 pm every 5 min, and 157 sampling points were included in one day set. In order to obtain an appropriate prediction accuracy with an affordable computation burden, historical data of 62 days were used as the training dataset, and 8 days of data under different weather conditions were chosen as the forecasting dataset. The training dataset includes different weather conditions, and all the dataset only includes historical PV power data. To reflect the prediction performance of the proposed model, the selected forecasting dataset include 2 sunny days, 2 cloudy days, 2 overcast days, and 2 abrupt weather days like sunny to cloudy and cloudy to sunny weather [36]. For this dataset, the ultra-short-term PV power prediction was carried out with the step length of 5 min. To quantify errors, the mean absolute percentage error (MAPE) and the root-mean-squared error (RMSE) were used as the main two metrics. In particular, MAPE and RMSE are defined as follows: where P s p and P s a are the s-th value in the predicted time series and the actual series of measured PV power, respectively, and N denotes the number of samples in test set. However, since in some extreme weather conditions or at certain points in time, the actual PV power may fall to zero, the sum of squares due to error (SSE) defined by Equation (32) was used to represent the error in PV power prediction. The above three evaluation metrics give the prediction information of point-wise error, however, they are not sufficient to distinguish the prediction behavior between different prediction methods. In the variability of PV power, repercussions from large ramping events are of primary concern. Hence it is useful to use ramp metric to quantify the ability of prediction methods to capture the ramp events. In this paper, we use the Ramp score proposed by Vallance et al [37] as another metric. The Ramp metric is defined as follows: where SD(T(t)) and SD(R(t)) are the slopes of the test series and real series ramps, respectively, and the t max and t min are the bounds of the period to be predicted. Benchmark Models for Numerical Comparison For comparison, the proposed model was compared with a persistence model (PM) [38] commonly used as a benchmark model for ultra-short-term PV power prediction. In addition, the performance of the LSTM-based model, LiAENN-based model and the BPNN-based model were also compared to the proposed model. It is noteworthy to mention that for a fair comparison, the setting of key parameters was tested in the search of optimum values. For the proposed model, the statistic curve obtained with the C-C method is shown in Figure 6. As can be seen from Figure 6, since S(l) has no zero crossing, the first local minimum value of ∆S(l) can be chosen to determine optimal delay amount l opt in the time series phase space reconstruction. Determine the global minimum value of Scor(l), which corresponds to the average trajectory cycle optimal estimate l * . From it, we have l opt = 12 and l * = 36. 
We then calculate the optimal dimension m_opt = 5 via Equation (8). For the decay rate γ, due to the sensitivity of the PV power chaotic system to initial values, γ should be set to a relatively small value. Values were taken at intervals of 0.05 within the range 0 to 1, and each training was repeated 10 times. γ = 0 is unreliable, meaning that the model barely learns new pattern-target samples during each training iteration. With the continuous increase of γ, the range of error jumps increases and, eventually, the performance of the model becomes unstable. When γ = 0.55, the training fails. The range of γ was therefore constrained to 0 to 0.05 with a step size of 0.01, and finally the value 0.01 was adopted as the optimum decay rate. For the BPNN-based model, we chose a BPNN with a three-layer network structure; the logsig function is used as the transfer function of the hidden layer, and the purelin function is used as the transfer function of the output layer. The weights and thresholds of the network are initialized with the rand function. The number of neurons in the hidden layer is determined by trial according to the empirical formula [39], namely l = sqrt(G + H) + a, where G, l and H are the numbers of neurons in the input layer, the hidden layer and the output layer, respectively, and a is a constant between 0 and 10. As can be seen from Table 2, the best architecture of the BPNN-based model for PV power prediction is 5-11-1 (5 inputs, 11 hidden neurons, and 1 output). Table 3 lists the final parameters of the successfully trained models, including the BPNN-based model, the LiAENN-based model, and the proposed model. Numerical Results and Analysis The simulations were carried out to test the performance of the proposed model and compare it with the benchmark models. Training and testing of the prediction models were implemented in MATLAB. For a fairer comparison, each model was run 30 times independently. Figure 7 shows the prediction results of PV power under five typical weather conditions. It is clear from Figure 7b,c that the five prediction models coincide well with the actual values in sunny weather. It can be seen from Figure 7b that the actual power curve is not completely smooth, so the prediction curves of the models deviate to different degrees throughout the prediction interval. Between 6:00 am and 7:00 am and between 3:00 pm and 7:00 pm, the prediction results of the LiAENN-based model and the BPNN-based model both show significant deviations, the BPNN-based model being the most significant. The prediction results of the PM and the proposed model are relatively close. Overall, the prediction curve of the proposed model is closer to the actual curve. However, the prediction errors of the LSTM-based model, the PM and the proposed model are mainly concentrated in the stages of steep rise and fall of power. In order to further compare the prediction performance of these three models, the prediction curve of the stage with large power fluctuation between 11:00 and 12:00 was enlarged. From the partial enlarged drawing, it can be seen that each prediction model lags somewhat when tracking the PV output. During the power climbing phase, the predicted value is generally lower than the actual value, and during the power decline phase, the predicted value is generally higher than the actual value. The strong inertia effect of the PM over short periods makes the misalignment between its predicted curve and the actual curve the most obvious.
Compared with the proposed model, the prediction error of the PM increases significantly. Compared with the LSTM-based model, the proposed model predicts better at the power inflection points and can detect ramp events better. The PV output power curve in Figure 7c is smoother than that of the first sunny day. The large prediction deviations of the benchmark models appear near the peak value. Combining the two sunny test days, the proposed model outperforms all of the benchmark models in sunny weather. In abrupt weather, the clouds change suddenly, and the PV power suddenly rises or falls with large fluctuations. The prediction results of each model fluctuate to a large extent. In the power smoothing phase, each model coincides well with the actual values. In the stage of large power fluctuations, as shown during 11:00 am to 4:00 pm in Figure 7a and 9:00 am to 11:00 am in Figure 7f, both the LiAENN-based model and the BPNN-based model have large prediction errors. From the partial enlarged drawings, it can be seen that when the power rises and falls sharply, the prediction curve of the LSTM-based model is smoother than that of the proposed model, and its ability to detect ramp events is poor. Although the prediction results of the PM can reflect the overall trend of PV power, due to the inertia effect of the PM, when the PV power rises and falls sharply, especially at the inflection points, its tracking is obviously inferior to that of the proposed model. The proposed model can still track the original power curve well, although its prediction curve shows some fluctuations. This shows that reconstructing the chaotic phase space to extract the original PV power information, together with reconstructing the expanded signal and emotional parameters, makes the model more sensitive to abrupt changes and fluctuations of PV power. On cloudy days, affected by the random behavior of the clouds, the power fluctuates greatly while the PV output is large, and the prediction performance of every model is at its worst in cloudy weather. From Figure 7d,e and the partial enlarged drawings, it can be seen that the proposed model still outperforms all of the benchmark models, and the BPNN-based model performs the worst. This shows that the proposed model successfully eliminates large prediction errors, especially when the PV power fluctuates sharply. On overcast days, the PV output is low, and the PV power fluctuation is relatively small as the cloud cover is relatively uniform. From Figure 7g,h, it can be clearly seen that the predicted values of the four models are generally smaller than the actual values in the power climbing stage and generally larger than the actual values in the power downhill stage. The prediction deviations are mainly concentrated near the peak and valley points; the BPNN-based model is the worst, followed by the LiAENN-based model. From the partial enlarged drawings, the prediction curves of the proposed model, the LSTM-based model and the PM are, overall, close to each other. The fact that the prediction curve of the proposed model is closer to the actual curve means that the proposed model can improve the prediction accuracy of PV output on overcast days, although the accuracy gain is limited and there is still room for improvement. To closely compare the effectiveness of the proposed model and the benchmark models, the prediction errors of the different models under different weather conditions are summarized in Table 4.
As can be seen from Table 4, the prediction performance of the models differs least in sunny weather. The proposed model generally outperforms all of the benchmark models under the different weather conditions, except for individual metrics that are slightly higher than those of the LSTM-based model and the PM, which are shown in bold font in the table. The averages of the four metrics over the various weather conditions are summarized in Table 5. (Note: the definitions of the abbreviations in Table 5 are the same as those in Table 4.) As a further comparison, the distributions of relative error for the proposed model and the benchmark models over the 8-day period are depicted in Figure 8. The percentage of the relative error is divided into 10 bins, and the reduction in prediction error is highlighted in the figure. The largest proportion of the reduction in prediction errors associated with the proposed model lies in the first bin; compared with the LSTM-based model, the PM, the LiAENN-based model and the BPNN-based model, it shows improvements of 9.24%, 5.34%, 14.89% and 20.39%, respectively. This result validates the effectiveness of the LERENN-based model in reducing large prediction errors. At present, the minimum time resolution of power dispatching is 15 min. To verify the prediction performance more comprehensively, three-step prediction of the first, second, fourth, sixth and seventh test days was implemented. In order to analyze the performance of each model for the three-step prediction, the prediction errors of the models under the different typical weather conditions are given in Table 6. As can be seen from Table 6, except on overcast days, the proposed prediction model has individual metrics slightly higher than those of the LSTM-based model. Overall, the proposed prediction model has the highest prediction accuracy, and it can still detect ramp events well. Comparing Tables 4 and 6, the prediction accuracy of all five models deteriorates as the number of prediction steps increases, although the degree of deterioration differs between the models. Compared with single-step prediction, in three-step prediction the mean RMSE values of the proposed model, the LSTM-based model, the PM, the LiAENN-based model, and the BPNN-based model increase by 17.30%, 19.67%, 21.32%, 20.29% and 23.98%, respectively. Overall, the prediction performance of the BPNN-based model is the worst. The proposed model is less affected by the increase in prediction steps; namely, the proposed model can improve the prediction accuracy, and it remains robust to power fluctuations and weather changes. (Note: the definitions of the abbreviations in Table 6 are the same as those in Table 4.) Conclusions In this paper, a prediction model based on the localized emotion reconstruction emotional neural network for ultra-short-term prediction of PV power was proposed. Based on chaotic time series analysis, the chaotic phase space reconstruction method was used to draw out the hidden characteristics of the PV power time series, and the single-step cyclic rolling localized prediction mechanism was derived. The expanded signal and emotional parameters were determined by the reconstructed phase space points, which gives them a relatively sound local foundation. Compared to the BPNN-based model, the additional emotionally derived concepts in the neural network make the learning of the model more intelligent.
Compared to the LiAENN-based model, the reconstructed emotional parameters and expanded signals based on the chaotic time series analysis make the model pay more attention to tracking each input pattern and picking out the most useful information of the input pattern, with the result that the mapping relationship is more precise. Compared with the LSTM-based model, the combination of chaos theory and emotion theory gives the proposed model a stronger ability to predict ramp events. Simulation results validate that the proposed model has a certain adaptability under different weather conditions. In a real-world application, however, the utility company may argue that five-minute power prediction is of limited use, since smoothing the power quality is not an easy task. Beyond point-wise accuracy, other metrics should therefore be considered in the next step to provide more comprehensive information in PV power prediction. In addition, meteorological factors such as solar radiation intensity and the aerosol index can be used as new model inputs to further correct the prediction results. All of the above are useful for future research on the smoothing control strategy for the power output of a grid-connected PV generation system, using the prediction results combined with an energy storage system.
9,014.6
2020-06-01T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Association between quality domains and health care spending across physician networks One of the more fundamental health policy questions is the relationship between health care quality and spending. A better understanding of these relationships is needed to inform health systems interventions aimed at increasing quality and efficiency of care. We measured 65 validated quality indicators (QI) across Ontario physician networks. QIs were aggregated into domains representing six dimensions of care: screening and prevention, evidence-based medications, hospital-community transitions (7-day post-discharge visit with a primary care physician; 30-day post-discharge visit with a primary care physician and specialist), potentially avoidable hospitalizations and emergency department (ED) visits, potentially avoidable readmissions and unplanned returns to the ED, and poor cancer end of life care. Each domain rate was computed as a weighted average of QI rates, weighting by network population at risk. We also measured overall and sector-specific per capita healthcare network spending. We evaluated the associations between domain rates, and between domain rates and spending using weighted correlations, weighting by network population at risk, using an ecological design. All indicators were measured using Ontario health administrative databases. Large variations were seen in timely hospital-community transitions and potentially avoidable hospitalizations. Networks with timely hospital-community transitions had lower rates of avoidable admissions and readmissions (r = -0.89, -0.58, respectively). Higher physician spending, especially outpatient primary care spending, was associated with lower rates of avoidable hospitalizations (r = -0.83) and higher rates of timely hospital-community transitions (r = 0.81) and moderately associated with lower readmission rates (r = -0.46). Investment in effective primary care services may help reduce burden on the acute care sector and associated expenditures. Introduction Achieving high-value health care requires simultaneously improving population health, improving the individual's experience of care, and reducing per capita costs of care. [1] The Triple Aim framework developed by the Institute for Healthcare Improvement recognizes that these components are interdependent, requiring a balanced focus on improving the quality and efficiency of services. Organizations often find it challenging to improve patient quality of care and health outcomes even with sufficient resources. [2] If we are to achieve a high-value health care system, we must understand how spending and quality are related in order to know where increased spending is likely to improve quality, but also where savings are possible without adversely affecting, and preferably improving, quality. We used naturally-occurring Ontario multispecialty physician networks as our unit of performance measurement. [3] These virtual networks reflect groups of primary care and specialist physicians who are associated by virtue of sharing care for a common set of patients and admitting patients to the same hospital so that the networks mimic the populations served by Accountable Care Organizations (ACOs). They are small enough to detect meaningful variations in rates of processes and outcomes but large enough to have relatively stable rates over time. The characteristics of these networks, panel size, physician supply and assignment mechanism have been previously described. 
[3] With the passage of Ontario's Patients First Act in 2016, responsibility for planning and performance improvement for the primary health care system will devolve to smaller regional levels to better address the unique health care needs of the province's diverse urban, rural and remote communities. Much of the groundwork for this localized planning was undertaken by two of the authors using Ontario health administrative data and our physician network patient assignment mechanism. In a Chartbook, we reported the performance of Ontario multispecialty physician networks on 65 quality indicators that reflect health care delivery in primary, specialty, acute, and longterm care, as well as timely transitions from the hospital or emergency department (ED) to the community. [4] The indicators chosen are amenable to intervention, measureable across the continuum of care from population health to primary care to tertiary care, and based on validated definitions derived from Ontario health administrative databases. While the Chartbook reported wide variability in quality indicators across physician networks, associations between quality and spending were not investigated. The current study examines the association between health care quality and spending across physician networks within Ontario's universal health care system. We also assessed associations between overall and outpatient primary care spending and potentially avoidable admissions, readmissions, and timely hospital-community transitions since previous work has shown associations between high rates of primary care supply and lower rates of mortality and hospitalizations for ambulatory care sensitive conditions. [5][6][7][8][9] Physician networks A total of 77 networks, serving 98.5% of the population, were included in the analyses. Two children's hospital networks were excluded from non-paediatric indicators, the psychiatric hospital network was excluded from non-mental health indicators, and one remote network was excluded from all indicators due to extremely small population size. In this ecological study, the unit of analysis was the physician network since this is the natural functional and organizational locus of accountability for population-based care, as networks comprise large groups of physicians that share patients, and are therefore more conducive to system interventions and accountability than are individual physicians or practices. Quality indicators and quality domains Quality Indicators (QIs) were based on events occurring during the two-year period between April 1, 2010 and March 31, 2012. Details on the definitions, data sources, diagnostic and procedure billing codes as well as the clinical guidelines used in the development of each indicator are reported in the Chartbook. [4] Timely transitions were measured as the percentage of patients with a follow-up visit to a primary care physician or relevant specialist within seven days of discharge, and shared care as follow-up visits with both a primary care physician and a relevant specialist within 30 days of discharge. Timely hospital-community transitions can result in fewer medical errors, improved communication between care providers, and improved health promoting behaviors at home. 
[10][11][12][13] QIs were aggregated into six quality domains or clinical composites of screening and prevention, evidence-based medications, timely hospital-community transitions, potentially avoidable hospitalizations and emergency department (ED) visits, potentially avoidable readmissions and unplanned returns to the ED, and poor cancer end of life (EOL) care. [4] Domain rates for each physician network were calculated as the weighted average of the constituent indicator rates, weighting each by its denominator, the target population, as in other studies. [14] Rates of screening and poor cancer end-of-life care were not adjusted since they apply to the entire target population. Rates of hospitalization and readmissions were fully risk-adjusted using previously validated methods. [15][16][17][18][19][20] The remaining rates were indirectly standardized for age and sex. Health care spending Costs of insured health care services were computed based on standardized provincial prices to reflect resources used. [21] Costs were those paid by Ontario's Ministry of Health and Long-Term Care; patient out-of-pocket costs were not included. Mean per capita costs were calculated for each network over a two-year period (2010-2011), adjusted for age and sex, annualized, and expressed in 2011 Canadian dollars. Health care spending was computed for hospital, physician (overall and separately for primary care physicians and specialists), prescription (for those over age 65 years), and long-term care sector. Spending for outpatient primary care services was computed based on primary care physician claims for office visits, seeing patients in long-term care facilities, home visits and consultations through phone calls. In exploratory analyses, we decomposed primary care outpatient spending per capita into comprehensive primary care physician full time equivalents (FTEs) per capita (primary care supply) and outpatient primary care billings per primary care physician (primary care intensity). [22] Network characteristics We explored network characteristics of rurality and marginalization to determine their association with healthcare quality. Network rurality was measured using the Rurality Index of Ontario (RIO), which accounts for population size and travel time, to categorize networks as urban (RIO 0-9), nonurban (RIO 10-39) and remote (RIO ! 40). [23] Population marginalization was measured using a census-based, empirically derived, theoretically-informed tool. [24] Briefly, marginalization is a process that creates inequalities along multiple axes of social differentiation. We report two dimensions, material deprivation (education, lone-parent families, receipt of government transfer payments, unemployment, low-income status, and dwellings in need of major repair) and dependency (proportion of the population aged 65 and older, dependency ratio, and proportion of population not participating in the labour force) to capture different aspects of marginalization. Both were calculated at the level of the census dissemination area, neighbourhoods with populations between 400 and 700 people. Data sources Residents' records were linked using unique, anonymized, encrypted identifiers across multiple Ontario health administrative databases containing information on all publicly insured, medically necessary hospital and physician services. 
Databases included the Discharge Abstract Database for hospital admissions, ICU admissions, procedures and transfers, which includes the most responsible diagnosis for length of stay, secondary diagnosis codes, comorbidities present upon admission, complications occurring during the hospital stay, and the attending physician identifier; the Ontario Mental Health Reporting System database for admissions to mental health-designated hospital beds; the National Ambulatory Care Reporting System for ED visits; the Ontario Health Insurance Plan (OHIP) for physician billings, which includes diagnosis codes, procedures, and location of visit; the Ontario Drug Benefits for outpatient drug prescriptions for those over age 65 years; the Ontario Marginalization Index for multiple dimensions of marginalization in urban and rural Ontario; the Registered Persons Database for patient demographic information and deaths; and the Institute for Clinical Evaluative Sciences Physician Database, which contains yearly information on all physicians in Ontario. Analysis We report median and 10th and 90th percentile quality domain rates, weighted by target network populations. We considered a domain rate to have low variability if the ratio of the weighted 90th to 10th percentile across networks was less than 1.25, moderate variability if this ratio was between 1.25 and 2.0, and high variability if this ratio was greater than 2.0. The associations between quality of care and per capita population costs were evaluated using Pearson correlation coefficients, weighting by physician network denominators. For each domain, we computed the intraclass correlation coefficient (ICC) using multilevel logistic regression models, with the response to the individual quality indicators as the dependent variable, adjusting for patient risk factors and individual quality indicators as fixed effects, and including random effects for physician networks to account for the clustered nature of the data, since patients are nested within networks. [25] Since the domain ICCs were small, there was negligible attenuation of the correlations between domain rates. [25] All analyses were performed using SAS version 9.3. Research ethics approval was obtained from the institutional review board at Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada. Results Individual quality indicators, domain rates and their variations across physician networks are reported in Table 1. The quality indicator rates and their variations across physician networks were discussed extensively in the Chartbook, so we provide a brief overview only. [4] Rates of prescribing of evidence-based medications were very good, with little variation across networks. Rates of receipt of recommended screening and preventive care were good except for HbA1c testing for those with diabetes. Timely hospital-community transitions demonstrated moderate to high variability across physician networks. About half of patients discharged from hospital with a cardiac condition or pediatric asthma, and one-third of those with a psychiatric or non-cardiac chronic condition, were seen by a physician within seven days. Rates of shared care were low. The highest rates of readmission and return to the ED after discharge are reported in Table 1. Correlation coefficients between the quality domain rates are reported in Table 2, and the relationships are displayed in Fig 1. Many relationships were as expected, such as strong associations between rates of avoidable admissions, readmissions and poor EOL care.
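The correlations quoted in the results that follow are population-weighted Pearson coefficients; a minimal sketch of that computation, with made-up numbers rather than the study data, is given below.

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Pearson correlation between x and y, weighted by network population w."""
    x, y, w = (np.asarray(a, float) for a in (x, y, w))
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

# Hypothetical example: domain rates and populations for five networks
transitions = [0.42, 0.55, 0.38, 0.61, 0.47]       # timely-transition rates
admissions  = [0.031, 0.024, 0.035, 0.019, 0.028]  # avoidable admission rates
population  = [90_000, 240_000, 60_000, 310_000, 150_000]
print(weighted_pearson(transitions, admissions, population))
```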
However, we also found that rates of timely hospital-community transitions were inversely associated with rates of admissions (r = -0.89), readmissions (r = -0.58) and poor EOL care (r = -0.52) ( Table 2, Fig 1). Networks with higher rates of physician spending had lower rates of avoidable admissions; the strongest association, however, was with outpatient primary care physician spending (r = -0.83) ( Table 3, Fig 2). Networks with higher outpatient primary care spending also had lower readmission rates (r = -0.46) and more timely hospital-community transitions (r = 0.81) ( Table 3, Figs 2 and 3). Networks with higher rates of timely hospital-community transitions had lower spending on prescription drugs and long-term care (r = -0.56 and -0.61, respectively). As expected, there were strong relationships between spending and hospital admission rates; however, we found little association between spending and rates of prescribing of evidence-based medications, and screening and prevention. In exploratory analyses, we found that primary care ambulatory spending per capita was more highly related to primary care intensity (r = 0.77) than to primary care supply(r = 0.20). Furthermore, primary care intensity was also associated with lower admission and readmission rates (r = -0.64 and r = -0.32, respectively) and higher rates of timely transitions (r = 0.64), whereas overall primary care supply was unrelated to these domains. Rurality, dependency and material deprivation were associated with higher rates of potentially avoidable admissions and readmissions, and inversely associated with timely hospitalcommunity transitions, as expected. Greater dependency and material deprivation were also associated with poor EOL care (Table 4). Discussion We found that physician networks with higher rates of timely hospital-community transitions had lower rates of potentially avoidable readmissions, and avoidable admissions. We found that outpatient primary care spending was strongly associated with higher rates of timely hospital-community transitions and lower rates of avoidable admissions, and moderately associated with lower readmissions rates. In addition, timely transitions were related to lower spending on pharmaceuticals and long-term care. The costs and savings associated with quality improvements are multifaceted and complex in nature, and are spread out across stakeholders and across time. Systems need to ensure that healthcare providers have the incentives and support to implement quality improvement initiatives that span sectors. [2] Policies that encourage a link between cost and quality in only one sector, like primary care or hospitals, are unlikely to be successful in realizing those savings, which has stimulated the need for cross-sectoral integrated networks. High quality, lower cost care has been achieved by large U.S. multispecialty physician group practices through the redesign of care to meet the needs of chronic disease patients by strengthening primary care, implementing chronic disease management programs, and integration of care. [26][27][28][29] The U.S. is experimenting with promising initiatives in integrated delivery systems such as Accountable Care Organizations (ACOs), groups of providers that are accountable for the quality of care of a defined population and collectively share in the savings of more efficient delivery of services. 
[30] There is evidence that such reforms may contribute to increasing quality while slowing spending growth, although there are many challenges to achieving these objectives. [31][32][33][34] While these formal associations are uncommon in Canada, health care providers form informal networks, such as those in our study, based on sharing patients and, often, information. [3,35] The finding that primary care outpatient spending was associated with lower preventable hospital care is consistent with recent findings from ACOs showing better cost performance with primary care-run ACOs. [36] Best practices recommend seeing a primary care physician shortly after discharge to allow for monitoring and evaluating patients' progress during this vulnerable and high-risk period. [37] Care coordination in the primary care setting has been identified as a key strategy to improve the effectiveness, efficiency and safety of the health care system, and includes improved transitions of care, communicating and knowledge sharing, monitoring and follow-up, and assessing patients' needs and goals. [38] Improving transitions through pre-discharge interventions (patient education, discharge planning), post-discharge interventions (timely follow-up), and provider continuity may reduce 30-day readmissions. [37,39] Other work has found that hospitalizations for ambulatory care sensitive conditions might be prevented if outpatient care were provided in an effective and timely manner in an ambulatory care setting. [40,41] This study suggests that the effect of primary care on outcomes may be driven more by primary care intensity than by primary care supply (headcounts), thereby extending the findings of Starfield et al. and underscoring the need to identify what aspects of primary care practice lead to better outcomes. [5][6][7][8] Current indicators are crude measures of primary care performance, and others have suggested that these traditional quality improvement indices may not be useful for identifying changes in quality or variations in outcomes. [14,42] As more meaningful measures are developed, there will be a need to assess the relationship of the new measures to the outcomes we examined. In addition, one-size-fits-all measures may not be appropriate for all patients, and there is a need to align measures with patient goals and preferences. For example, tight diabetes control in a frail elder may increase the risk of adverse outcomes, and some cancer screening measures may not be appropriate in those with limited life expectancy. This resonates with many primary care physicians who are suspicious of linear disease-specific targets when their patients are highly complex and often make idiosyncratic choices. Investing in primary care may require increased time spent conversing with patients, especially those with multimorbidity, which may not improve technical quality but can motivate patients to make better decisions about their health, adhere to treatment plans, increase use of outpatient interdisciplinary team care, and increase communication among physicians. [43][44][45] Additionally, our findings suggest that increased investment in primary care may be needed to optimize individual and population health outcomes. [46] Prior research on the relationship between health care quality and overall spending has produced inconsistent results.
[47][48][49][50][51][52][53] The Commonwealth Fund reported widespread variability across US Hospital Referral Regions (HRRs) and suggested that better access to care was associated with higher quality of care and better patient outcomes. [54,55] A systematic review that appraised the evidence for an association between health care costs and quality among 61 USbased studies reported that associations were inconsistent, and that the impact of spending on quality was small to moderate. [56] It concluded that future studies should focus on which types of spending are most effective in improving quality. A large US longitudinal cohort study showed large, persistent variations in health care quality and spending across HRRs but found that higher spending regions had neither better quality of care nor increased survival. [57,58] In contrast, a similarly designed longitudinal cohort study in Canada showed that higher spending Ontario hospitals had lower mortality and readmission rates, and higher quality of care. [59] Our study has a number of strengths. The study is population based and is unique in its breadth of indicators evaluated and their associations with sector-specific spending. We investigated the association between quality and spending across Ontario physician networks, which reflect populations of patients that share physicians similarly to US Accountable Care Organizations (ACOs) and are, therefore, a potential locus of accountability for chronic disease care. Several limitations should be considered. The study design is ecological so that causal relationships cannot be inferred. This study was meant to reveal patterns and not to demonstrate causality; such associations would need to be confirmed in longitudinal cohort studies using the individual patient as the unit of analysis. This study may be generalizable to the Canadian universal health care system, but these relationships may differ in other countries' health care systems. As in all observational studies, residual confounding due to unmeasured patient risk factors could have influenced the results. We also could not investigate patient experience of care. Reducing spending without decreasing quality involves targeting poor hospital-community coordination, wasteful spending, and ineffective care through programs that provide incentives for value-based care provision, such as bundled payments and integrated health care systems, which encourage coordination and integration, and more aggressively targeting preventable hospitalizations by bolstering primary and ambulatory care. [60] Preventing hospital admissions and readmissions, improving continuity of care and managing health care spending are complex issues requiring multi-faceted care and action from all levels of the health care system. Strengthening outpatient primary care and developing integrated models of primary care that extend beyond the medical home to the medical neighborhood with linkages between population health and community services are key elements to optimizing patient health and reducing health care costs. Future research should focus on studying the effects of timely transitions on reducing adverse events using longitudinal cohort studies.
4,683
2018-04-03T00:00:00.000
[ "Economics", "Medicine" ]
Integrated, Speckle-Based Displacement Measurement for Lateral Scanning White Light Interferometry Lateral scanning white light interferometry (LSWLI) is a promising technique for high-resolution topography measurements on moving surfaces. To achieve resolutions typically associated with white light interferometry, accurate information on the lateral displacement of the measured surface is essential. Since the uncertainty requirement for a respective displacement measurement is currently not known, Monte Carlo simulations of LSWLI measurements are carried out at first to assess the impact of the displacement uncertainty on the topography measurement. The simulation shows that the uncertainty of the displacement measurement has a larger influence on the total height uncertainty than the uncertainty of the displacing motion itself. Secondly, a sufficiently precise displacement measurement by means of digital speckle correlation (DSC) is proposed that is fully integrated into the field of view of the interferometer. In contrast to externally applied displacement measurement systems, the integrated combination of DSC with LSWLI needs no synchronization and calibration, and it is applicable for translatory as well as rotatory scans. To demonstrate the findings, an LSWLI setup with integrated DSC measurements is realized and tested on a rotating cylindrical object with a surface made of a linear encoder strip. Motivation Rising demands regarding the quality of optically smooth surfaces of consumer goods and industrial intermediate products necessitate metrology that is able to quantify the topography of these surfaces in a quick and accurate manner. Systems capable of inprocess measurements are especially interesting for manufacturers, as early detection of defects reduces production costs [1,2]. For delicate surfaces, such as optical components or highly reflective functional and decorative surfaces, a contactless method for topography measurement is desired. In addition, many manufacturing processes involve continuously moving, rotating materials or tools, such as the rolling of sheet metal. Therefore, a strong demand for precise topography measurements on curved surfaces of continuously rotating objects exists. State of the Art Due to its areal measurement capabilities and low measurement uncertainties of <1 nm, white light interferometry (WLI) has become one of the standard techniques for topography measurement. The term "white light interferometry" itself was coined in the 1970s by Fluornoy et al., who applied the principle in film thickness gauging [3]. WLI or coherence scanning interferometry, as it is referred to in DIN EN ISO 25178 [4], first appeared in its today most commonly implemented form as vertical scanning white light interferometry (VSWLI) in the late 1980s [5][6][7]. Comprehensive information on VSWLI can be found in the review papers of de Groot and Wyant [8,9]. VSWLI requires the objects to stand still during vertical scanning. Therefore, it is not usable for measurements on continuously moving objects, such as the rollers for sheet metal production. For such measurements, lateral scanning white light interferometry (LSWLI) can be used. LSWLI is a variant of white light interferometry that was first described by Olszak [10]. It uses a straight or curved scan path to record lateral and axial spatial information in one single motion. The consequence of the lateral scanning is that LSWLI requires a relative lateral movement between optics and object. 
This puts LSWLI at an advantage for in-process measurements on continuously moving objects. LSWLI was originally applied on planar objects with straight, translatory scan motions [10]. To enable the measurement of cylindrical rollers for sheet metal production, LSWLI has recently been advanced to also work for rotatory scanning motions by taking the curvature of the scan path into account for the topography calculation [11]. The topographical height h_12 between two surface points i = 1, 2 on a translatory scan path can be calculated from the lateral positions x_i in the field of view, at which each point intersects the plane of zero optical path length difference between the light paths of the interferometer, and the common tilt angle Θ of the surface points' scan paths. Both terms, the lateral positions x_i and the tilt of the scan path Θ, are extractable from the scan's interference signal, the so-called correlogram. The lateral positions x_i correspond to the maxima positions of the correlogram's envelopes, and the tilt angles Θ to its fringe frequency, as demonstrated by Munteanu [12]. As recently shown by Behrends et al. in [11], the height equation for rotatory scan paths incorporates the trigonometric properties of the rotatory (circular) scan path by considering the changing local tangent surface angles Θ_1 and Θ_2 of the two surface points at the positions x_1 and x_2 to be compared; the resulting closed-form expression in terms of cos(Θ_1), cos(Θ_2) and sin(Θ_2) is given in [11]. In both translatory and rotatory LSWLI, all quantities needed for the height calculation are extracted from the correlogram. Therefore, the uncertainty of h_12 depends strongly on the accuracy of the correlogram reconstruction. Figure 1 depicts the recording and correlogram reconstruction process for rotatory LSWLI. The correlograms of the surface points are reconstructed from an image series, which is recorded with a known temporal frequency. What is unknown, however, is the displacement of the observed surface between the images according to the lateral movement of the object surface through the camera's field of view (FOV). Since the lateral displacement is crucial for the correlogram reconstruction, accurate tracking of the surface movement during the measurement process is necessary. A fundamental design requirement for a displacement measurement system for (translatory and rotatory) LSWLI is the displacement accuracy necessary to reconstruct the correlograms from the recordings and to enable surface topography measurements with a minimal measurement uncertainty. In the ideal case, the effect of the displacement measurement uncertainty on the total topography measurement uncertainty should be negligible in comparison with other uncertainty components. While the VSWLI technology can be assumed to operate close to the ideal case, the influence of displacement is considerably larger in LSWLI. This is the price for LSWLI's advantageous capability of measuring on surfaces in motion. For this article, the limit for this disadvantage of real LSWLI due to the position uncertainty is based on internal project requirements, stating that the height uncertainty should not be more than twice the height uncertainty of an ideally scanning (LS)WLI without positioning uncertainty but with the same recording and evaluation methods. Another design requirement concerns the combination of the displacement measurement system and the LSWLI in conjunction with the moving object surface.
A fundamental design requirement for a displacement measurement system for (translatory and rotatory) LSWLI is the necessary displacement accuracy to reconstruct the correlograms from the recordings and to enable surface topography measurements with a minimal measurement uncertainty. Considering the ideal case, the effect of the displacement measurement uncertainty on the total topography measurement uncertainty should be negligible in comparison with other uncertainty components. While the VSWLI technology can be assumed to operate close to the ideal case, the influence of displacement is considerably larger in LSWLI. This is the price for the LSWLI's advantageous capability of measuring on surfaces in motion. For this article, the limit for the disadvantage of the real LSWLI due to this position uncertainty is based on internal project requirements, stating that the height uncertainties should not be higher than twice the height uncertainty for an ideally scanning (LS)WLI without positioning uncertainty, but with the same recording and evaluation methods. Another design requirement concerns the combination of the displacement measurement system and the LSWLI in conjunction with the moving object surface. In order to be flexibly usable in different applications, the displacement measurement system should be independent from the data provided by the movement stage itself. Finally, a displacement measurement is desirable that works on both translatory and rotatory scan motions. The importance of accurate positioning has been recognized since the inception of LSWLI. Olszak [10] noticed washing out of contours due to deviations in scanning speed, which was assumed to be at a constant speed of 1 pixel per frame. Munteanu [12] used a translation stage with an accuracy of ±0.2 µm/mm, which was also set to a scanning speed of 1 pixel per frame. With this setup, a height uncertainty of 40 nm on an 8.69 µm high step height was achieved. Vibrations in the translation stage were claimed as a main contributor to the uncertainty. Guo et al. [13] combined an LSWLI-based measurement system with a nano-measuring machine with an accuracy of 0.1 nm that tracked the position of the scanned object along the x- and z-axis. The scanning speed was set to 1-4 pixels per frame. Behrends et al.
[11] used rotatory LSWLI with a circumferential displacement of 1 pixel per frame, which was assumed to be constant. As a result, the positioning uncertainty of the movement stages in the reported studies, if mentioned, ranges from several hundred down to a tenth of a nanometer. However, a quantitative assessment of the influence of the positioning uncertainty on the topography height uncertainty is missing. The displacement of the object is either assumed with a constant rate or is measured with additional external sensors on the movement stage. Therefore, the displacement measurement is currently not independent from the movement stage. In particular, all realized displacement measurement systems are for translatory LSWLI and are not transferable to rotatory LSWLI. A universal displacement measurement system for both translatory and rotatory LSWLI is missing. Aim and Structure The aim is to propose an integrated displacement measurement system for LSWLI that enables precise topography measurements on continuously rotating objects. At first, the influence of the measurement uncertainty of a displacement measurement system on the topography measurement uncertainty is determined. Then, a displacement measurement with sufficient precision is realized by means of digital speckle correlation (DSC), which is integrated in a dedicated region of interest of the same camera that records the WLI signal. The displacement measurement works completely independent from the control of the rotational movement, is in perfect sync and alignment with the WLI signal, and does not require any unit conversions or calibrations to enable a precise reconstruction of the correlogram. Note that the approach is applicable for rotatory and translatory LSWLI topography measurements alike. The measurement principle of the merged measurement system is introduced at the beginning of Section 2. The principles of both the DSC and the LSWLI sub-system are also explained briefly, including the used signal evaluation methods. Furthermore, a Monte Carlo simulation for the measurement chain is described to assess the uncertainty of the topography measurement resulting from the displacement measurement uncertainty and the noise in the interferometer signal. In the last part of Section 2, the considered experimental setup is explained. The results of the Monte Carlo simulation and an experimental validation with rotatory LSWLI measurements on a cylindrical test object are presented and discussed in Section 3. The conclusions of the article are presented together with an outlook in Section 4. Measurement Principles The measurement principle of the whole system is based on a DSC displacement measurement system integrated into the optics of an LSWLI setup by dividing the camera's field of view into two regions of interests (ROI), a smaller one for DSC, and a larger one for WLI. Integrating both systems into one optical path and using the same camera chip for signal recording almost eliminates the influence of timing and deviations caused by sudden accelerations, as displacement and topography are measured synchronously. Using the same sensor also eliminates alignment issues between the LSWLI and the displacement measurement, as both are recorded on the same pixel grid. A third advantage of using the same sensor for both measurements is that the displacement is measured in the unit 'pixels', which are of the same scale as the WLI data. 
There is no need to convert or calibrate the systems in order to work together, which makes the whole system a flexible topography measurement system for both translatory and rotatory moving objects. A schematic sketch of the whole measurement system is illustrated in Figure 2. The optical setup is based on a standard WLI, with a light emitting diode as light source and a Mirau objective creating the interference effect. The ROI for the DSC measurement is illuminated with an infrared laser diode. Both ROIs are captured with one digital camera with a global shutter. In order to optically separate the scattered light from both light sources, optical filters are inserted in the light path before the camera. Digital Speckle Correlation (DSC) After recording a scan, the first step of the signal processing is determining the object displacement between consecutive frames. This information is needed to reconstruct the correlograms of the moving surface points from the image series. For this purpose, one section of the camera sensor is used for displacement measurements by means of digital speckle correlation (DSC), see Figure 2c, which enables robust, highly precise displacement measurements [14,15]. The underlying principle of DSC is digital image correlation. Assuming a rigid surface, the translatory shift between two images can be determined by finding the maximum of the cross-correlation function. The image correlation algorithm used for this article is 'efficient subpixel registration by cross correlation' written and published by Guizar-Sicairos et al. [16]. This algorithm is optimized regarding the computation time and memory usage, and it is able to evaluate the displacement with subpixel resolution. According to Zhang et al. [17], digital image correlation will succeed as long as the displacement is smaller than the evaluation window. The complete DSC ROI is used here as an evaluation window, which means the DSC system will not restrict the measurement capabilities, as its displacement measurement range is larger than the largest surface displacements with which LSWLI is still possible.
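The Guizar-Sicairos algorithm cited above is also the basis of scikit-image's phase_cross_correlation, so a minimal sketch of the per-frame shift estimation could look like the following (the upsample factor is an illustrative choice, not the authors' setting):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def dsc_shift(ref_roi, cur_roi, upsample_factor=100):
    """Estimate the rigid shift (in pixels) between two speckle images of the
    DSC region of interest by cross-correlation with subpixel refinement;
    upsample_factor=100 resolves shifts down to 1/100 pixel."""
    shift, error, phasediff = phase_cross_correlation(
        np.asarray(ref_roi, dtype=float),
        np.asarray(cur_roi, dtype=float),
        upsample_factor=upsample_factor,
    )
    return shift  # (shift_y, shift_x) of cur_roi relative to ref_roi
```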
Digital image correlation works best with images that contain clearly defined features. In the field of surface topography, these features could be for example scratches, tool marks or intentional marks or edges. The measurement system in this article is intended to be applicable on smooth and thus practically featureless surfaces with a surface roughness in the range of 2 nm < Sq < 10 nm. To ensure that the displacement measurement works regardless of the surface texture or the WLI light signal, the white light in the DSC-ROI is blocked using an optical filter in front of the camera and the image correlation is conducted on an infrared-laser-illuminated surface image, which contains speckles. Speckles appear in the observer plane due to interference of laser rays, which were reflected by a surface with nano-scale roughness. A central requirement is that the change of the evaluated speckle pattern in the two successive images remains negligibly small. According to Goodman [18], the average size of speckles depends on the aperture of the optics. While the axial size is proportional to NA⁻², the lateral size is proportional to NA⁻¹, meaning the speckle is much larger axially than laterally. In a typical LSWLI setup, the lateral movement of the surface is usually more than 50× larger than the axial height change due to the tilted scan path. Therefore, the DSC displacement measurement method for the intended application in this article is not restricted by changes of the speckle pattern. White Light Interferometry (WLI) The larger of the two ROIs on the specimen's surface is reserved for the LSWLI topography measurement of the specimen, see Figure 2b. A crucial aspect of the WLI setup is the position and the orientation of the WLI sensor in relation to the scan path. This aspect is illustrated in Figure 3. The width of the interference fringe structure (correlogram) in the field of view depends on the maximum height difference observed over the complete scan path for each surface element. The fringes appear in a fixed distance range from the objective and are centered around the zero-th order fringe, which marks the zero optical path length difference of the Mirau objective. The (vertical) distance range of the fringes depends on the coherence length of the used white light.
For a circular, rotatory scan path this means that the fringes would appear to be broader the closer the WLI is positioned to the apex of the scan path. The apex of the scan path is here defined as the position where the scan path is perpendicular to the observation direction of the WLI optics. A consequence of the fixed vertical range given by the fringes in combination with the lateral scanning motion is that a compromise between measurement range and measurement resolution must be made: On the one hand, measuring close to the apex enables a finer sampling of the correlogram in terms of pixels per fringe, because of the broader fringes (see Figure 3a). On the other hand, measuring further from the apex extends the measurement range since the vertical change of the scan path within the field of view is larger (see Figure 3c). In practice, an interference region spanning about one third to a half of the field of view in scan direction (Figure 3d) has been proven to be a well-fitting compromise between range and resolution for most measurement tasks. Also, to avoid additional post-measurement image rotation operations, the specimen should move as parallel to the sensor's pixel rows as possible. To obtain the topographical height h_12 between two surface points 1 and 2, two quantities per point need to be extracted from the correlogram: the local surface tangent angle and the position of the maximum of the envelope, see Equation (2). Point 1 is generally chosen as the first measurement point without limiting generality. The local surface tangent angle can be calculated from the frequencies of the correlogram using Munteanu's method [12], as demonstrated by [11]. The frequency is evaluated using a continuous wavelet transform with Morse wavelets. The used Matlab (The MathWorks, Natick, MA, USA) implementation of the continuous wavelet transform is based on Olhede and Walden's work on generalized Morse wavelets [19]. The position of the maximum of the correlogram's envelope is determined in two steps. First, the envelope is calculated by applying a moving root mean square (rms) averaging algorithm on the raw correlogram. In the next step, the position of the envelope's maximum is determined with a Gaussian fit. For the present article, the standard nonlinear least squares approach of Matlab's fitting toolbox is applied. Experimental Setup The experimental setup is realized according to the principle shown in Figure 2a. A light emitting diode with a central wavelength of 520 nm is used as the illumination source for the WLI. The speckle illumination is achieved by directing an 850 nm laser diode with a line generating lens at the DSC region of interest. The scattered light from the surface is imaged through a 10×/0.3 Mirau objective on a 2.3 MP CMOS camera. The camera is equipped with a Sony IMX174 sensor, which has a well depth of 32,406 e⁻ sampled at 8 bit. The quantum efficiency is rated at 76% at 525 nm wavelength. The surface observed by a camera pixel under the total magnification factor of the setup was determined to be 0.557 µm/pixel. All pixel values stated in this article can be converted using this conversion factor. The captured surface is divided into two ROIs by two optical filters in front of the camera. The speckles are removed from the WLI-ROI with a short-pass filter with a cut-off wavelength of 750 nm. As the interference fringes are disrupting the speckle pattern, the fringes are removed from the DSC ROI using a 650 nm long-pass filter.
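Returning to the correlogram evaluation described before the experimental setup, a minimal sketch of the envelope-maximum step could look like the following (moving RMS plus Gaussian fit; the window size and fit range are illustrative assumptions, and SciPy's least-squares fit stands in for the Matlab fitting toolbox):

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope_maximum(correlogram, window=15):
    """Locate the envelope maximum of a raw correlogram: a moving RMS over a
    sliding window approximates the envelope, and a Gaussian fit around the
    coarse maximum returns the sub-sample peak position (in samples)."""
    sig = np.asarray(correlogram, dtype=float)
    sig = sig - sig.mean()
    # moving root-mean-square as envelope estimate
    rms = np.sqrt(np.convolve(sig**2, np.ones(window) / window, mode="same"))
    coarse = int(np.argmax(rms))
    lo, hi = max(coarse - 3 * window, 0), min(coarse + 3 * window, len(rms))
    x = np.arange(lo, hi, dtype=float)

    def gauss(x, a, mu, sigma, off):
        return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + off

    p0 = (rms[coarse], float(coarse), float(window), float(rms.min()))
    popt, _ = curve_fit(gauss, x, rms[lo:hi], p0=p0, maxfev=10000)
    return popt[1]  # envelope maximum position in samples
```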
The WLI-ROI occupies an area of 1620 × 1200 pixels = 902.34 × 668.4 µm² on the camera chip and the DSC-ROI occupies the remaining 300 × 1200 pixels = 167.1 µm × 668.4 µm of sensor area. The measurement object is moved by a rotation stage with a rated unidirectional repeatability of u(α_motion,stage) = ±3.5 µrad. To ensure the highest possible correlogram sampling density, the stage is set to a constant rotation speed that is equivalent to an apparent surface speed of one pixel per frame. The image recording is triggered using the serial connection of the rotation stage for relaying the trigger signal sent by a Matlab script. The measurement object for experimental validation is a section of a linear encoder strip that has been fastened on a cylinder section resulting in a surface radius of 65.3 mm. The topography has been referenced with a commercial VSWLI. The VSWLI used is from the company GBS mbH, Ilmenau, Germany, of the type smart WLI with a 10×/0.3 and a 50×/0.55 Mirau objective. With its 0.5× tube and the 50× objective it captures an area of 365.7 × 228.5 µm² per FOV and 1797.2 × 1122.9 µm² with the 10× objective. It has a vertical scanning range of 400 µm. The linear encoder strip's surface consists of alternating smooth and rough regions, manufactured by etching. The encoder strip is chosen because its smooth regions are similar to the surfaces the LSWLI is intended to be used for, e.g., the rolled sheet metal and the rollers mentioned in Section 1. The alternation between smooth and rough regions allows for easy visual confirmation that the lateral displacement is correctly measured and used in the evaluation. Based on the object's radius, the rotation stage's angular positioning uncertainty can be converted to pixels, yielding u(x_motion,stage) = ±0.065 pixels. The experimental setup and the measurement object are depicted in Figure 4. It is a direct realization of the schematic setup illustrated in Figure 2a. Uncertainty of Rotatory Lateral Scanning White Light Interferometry Uncertainty propagation of h_12 The uncertainty of the height h_12 as derived from Equation (2) for rotatory LSWLI measurements, including the covariances between x_i and Θ_i, reads

u²(h_12) = Σ_{i=1,2} [ (∂h_12/∂x_i)² · u²(x_i) + (∂h_12/∂Θ_i)² · u²(Θ_i) + 2 · (∂h_12/∂x_i) · (∂h_12/∂Θ_i) · cov(x_i, Θ_i) ]. (3)

Since the covariance terms cannot be assumed as zero, because both quantities are obtained from the same correlogram signals, the uncertainty u(h_12) is not estimated analytically or semi-analytically by using Equation (3) but numerically with a Monte Carlo simulation of the signal processing chain using synthetic correlograms. Signal model The core of the Monte Carlo simulation for the estimation of u(h_12) is an idealized correlogram signal model including the sources of uncertainty.
The correlogram signal model is the undisturbed light intensity I(z_j), following the known model for WLI [20],

I(z_j) = I_0 · {1 + exp[−((z_j − z_ref)/l_c)²] · cos[(4π/λ_0) · (z_j − z_ref)]}, (4)

where I_0 is the signal amplitude, l_c is the coherence length, λ_0 is the central wavelength of the white light illumination, and z_ref is the height at which the optical path length difference is zero. In rotatory LSWLI, the vertical coordinates z_j change during the lateral scan, which follows a curved path and therefore combines the movements in x- and z-direction. For a circularly curved scan path, the geometric relationship is given by Equation (5) below, and every x_j, j = 1, ..., n_corr, represents one position in the correlogram. While the correlogram positions x_j are confined to the coordinate system of the field of view, the camera itself has its own offset position with respect to the apex of the rotatory scan path, which is taken into account by simple addition of the offset x_offset to the x-coordinates of the correlogram, yielding the x-coordinate of the scan path x'_j = x_j + x_offset. The z-coordinates of the simulated cylinder observed in the field of view are calculated from the absolute coordinates x'_j of the scan path and the radius r using the geometric relationship

z_j = r − sqrt(r² − x'_j²). (5)

Firstly, to include apparent irregularities of the scan motion, e.g., due to a jittery rotation stage or due to fluctuations in the triggering of the camera, each x_j is superposed with additive white Gaussian noise that has a mean value of zero and the standard deviation u(x_motion). With the resulting x-positions, the intensity signal is calculated according to Equations (4) and (5). As the second source of uncertainty, the image noise originating from the camera and natural light fluctuations is modeled by superposing the calculated intensity signal I(z_j) with additive white Gaussian noise that has a zero mean and a standard deviation u(I). Thirdly, the uncertainty u(x_measurement) of the displacement measurement is included by drawing the intensity values of the correlogram signal at random positions with the mean value x_j and the standard deviation u(x_measurement) using a Gaussian distribution. Finally, the effect of the sampling resolution due to the pixel grid of the camera is realized by rounding the sampling positions x_j to full pixels. The resulting correlogram, which now contains the uncertainty contributions u(I) from the image noise, u(x_motion) from the positioning and u(x_measurement) from the displacement measurement, is ready for the subsequent signal evaluation in the Monte Carlo simulation. Note further assumptions that are included in the signal model: - In the lateral direction, the influence of the varying surface gradient due to the curvature of the scan path on the imaging of the surface onto the camera pixels is negligible. Example: If the surface is tilted 0° at one edge of the sensor and 5° on the other, the pixel at the 5° edge records 100.38% of the area captured at the 0° edge. - The change in intensity due to Lambertian reflection at different angles is deemed insignificant for the observed angle range. - The surface moves strictly in x-direction. There is no movement in y-direction, perpendicular to the scan direction. - The ideal correlograms are always centered in the field of view. Monte Carlo simulation setup During the Monte Carlo simulation, synthetic correlograms with preset uncertainty terms are calculated and evaluated by the signal processing methods described in Section 2.1.2.
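A minimal sketch of one synthetic correlogram following this signal model could look like the following (the coherence length, the amplitude handling and the exact way the three noise terms enter are assumptions of this sketch, not the authors' implementation):

```python
import numpy as np

def synth_correlogram(n_corr=600, px=0.557e-6, r=65.3e-3, x_offset=1.2e-3,
                      lam0=520e-9, l_c=6e-6, amp=80.0,
                      u_x_motion=0.0, u_x_meas=0.0, u_I=4.0, rng=None):
    """One synthetic rotatory-LSWLI correlogram: Gaussian envelope of width l_c
    times a cosine fringe (Equation (4)), sampled along a circular scan path of
    radius r at apex distance x_offset (Equation (5)). u_x_motion and u_x_meas
    are given in pixels, u_I in counts. Returns the pixel-rounded measured
    sample positions and the noisy intensity values."""
    rng = np.random.default_rng() if rng is None else rng
    j = np.arange(n_corr, dtype=float)
    x_true = (j + rng.normal(0.0, u_x_motion, n_corr)) * px    # jittery scan motion
    x_meas = np.round(j + rng.normal(0.0, u_x_meas, n_corr))   # measured positions on the pixel grid
    x_abs = x_true + x_offset
    z = r - np.sqrt(r**2 - x_abs**2)                           # circular scan path, Eq. (5)
    z_ref = r - np.sqrt(r**2 - (x_offset + 0.5 * n_corr * px) ** 2)
    intensity = amp * (1.0 + np.exp(-((z - z_ref) / l_c) ** 2)
                       * np.cos(4.0 * np.pi * (z - z_ref) / lam0))  # Eq. (4)
    return x_meas, intensity + rng.normal(0.0, u_I, n_corr)
```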
The simulated object is an ideal cylinder with a radius of r = 65.3 mm, which is equal to the object's radius used in the experiment. The correlograms are synthesized with a maximum amplitude of 80 counts, mimicking the experimental setup. The added intensity noise has an uncertainty of u(I) = 4 counts, which is also comparable to the noise levels observed in the experiment. The simulated sampling of the correlogram is carried out with a resolution of 1 pixel. The simulation is set to run 5000 repetitions with randomized uncertainty terms with a range of u(x motion ) = 0.1 . . . 0.4 pixel and u(x measurement ) = 0.02 . . . 0.05 pixel, each with the other uncertainty term set to zero. The physical size of a pixel is set to 0.557 µm, based on the camera and magnification of the experimental setup. Each combination of uncertainties was simulated for 14 apex distances ranging from 0.1-2.5 mm with regard to the edge of the field of view closest to the apex. Monte Carlo Simulation The height uncertainty u(h 12 ) calculated with the Monte Carlo simulation is presented in Figure 5 for different levels of the motion uncertainty u(x motion ) and the displacement measurement uncertainty u(x measurement ) over the distance of the field of view to the apex of the circular scan path. The top diagram in Figure 5 shows the influence of the displacement measurement uncertainty u(x measurement ) on the height uncertainty u(h 12 ) without any motion uncertainty u(x motion ). The bottom graph shows the estimated height uncertainty for the opposite case, no u(x measurement ) but various levels of u(x motion ). For both simulated cases, the standard deviation of height increases with an increasing apex distance. As discussed in [11], the measurement uncertainty and the measurement range are connected in LSWLI. There is always a compromise to be made between the two, which is why measuring at a minimal apex distance with low height uncertainty is not always the optimal position for all measurement tasks, as sometimes a bigger measurement range is required. The simulation results show that there is a lower uncertainty limit determined by image noise (dark grey in Figure 5), which in reality is unavoidable, but can be minimized by a careful setup of the optics and the camera. As stated in Section 1.2, a requirement for an appropriate displacement measurement system set for this article is that the total height uncertainty should not exceed the light grey areas in the figures, marking a total height uncertainty twice the intensity noise contribution. In the case presented at the top of Figure 5, showing the influence of the displacement measurement, the set uncertainty limit is only fulfilled for all considered apex distances with u(x measurement ) ≤ 0.02 pixels. Displacement measurement uncertainties beyond 0.05 pixels are only usable at very low apex distances, which are impracticable due to the low measuring range. The graph for u(x measurement ) = 0.04 pixels intersects the upper border of the set uncertainty limit at an apex distance of 1.5 mm, which marks the upper limit for compromising the height uncertainty for measurement range in this article. In the case presented at the bottom of Figure 5, the same uncertainty limit can be fulfilled by height measurements with u(x motion ) ≤ 0.2 pixels. Comparing the two simulated cases, it can be concluded that the displacement measurement uncertainty has a significantly larger impact on the height uncertainty than the motion uncertainty. 
In particular, it is essential that the proposed DSC displacement measurement system to be integrated into the LSWLI setup has an uncertainty u(x_measurement) significantly below 0.04 pixels. Figure 5. Resulting height uncertainty due to various levels of displacement measurement uncertainty and motion uncertainty. The dark grey area marks the uncertainty resulting from the intensity noise only. The light grey area marks the set limit of total height uncertainty equal to twice the intensity noise contribution. Top: The blue lines represent total height uncertainties, which are caused by varying amounts of displacement measurement uncertainty. Bottom: The red lines depict height uncertainties, which are caused by varying amounts of motion uncertainty. Experiment The enhanced LSWLI system with an integrated DSC displacement measurement system is tested on the measurement object introduced in Section 2.1. The sensor field of view is manually adjusted to an apex distance which results in an interference region of 300 pixels length in scan direction. For the measured object, that means an apex distance of about 1.2 mm. In Section 3.2.1, the experimentally achieved displacement uncertainty of the DSC displacement measurement system is investigated. In Section 3.2.2, the obtained displacement values are applied to the LSWLI recordings in order to investigate the influence of the displacement measurement on the topography height uncertainty, enabling comparison between experiment and simulation. Displacement Measurement The laser speckles of the DSC ROI were evaluated according to the method described in Section 2.1.1 to investigate the displacement measurement uncertainty u(x) achievable with the experimental rotatory LSWLI setup introduced in Section 2.1.3. Before using the DSC measurement system as an aid for an actual LSWLI topography measurement, its capabilities were tested by carrying out standalone displacement measurements on a piezo linear stage with a linear position repeatability of ±1 nm. For this, the WLI illumination was turned off; otherwise the setup of the LSWLI system is the same as in the topography measurement application.
The stage was moved to 50 positions, spaced 557 nm (the size of the surface area imaged at a single pixel) apart, and at each position 100 DSC images were taken to statistically reduce the influence of vibrations. One hundred displacement measurement series were synthesized from the recorded 5000 images, whereby one measurement series consists of one displacement result for each of the 50 positions. The target displacement, which is 1 pixel for all displacements, was subtracted from all measured displacements, yielding the residual displacements for each single measurement. After filtering out outliers using an outlier-removal algorithm applying the Grubbs test (based on [21]), the standard deviation of the residual displacements is calculated. It represents the empirical displacement measurement uncertainty of the DSC displacement measurement system, which reads u(x_measurement) = 0.02 pixel (see Table 1). Converted to nanometers using the pixel size magnified through the setup's optics, this yields u(x_measurement) = 11.14 nm, which seems low at this magnification, but is plausible since the interrogation windows used for correlation have a large size of 1200 × 100 pixels, which provides a lot of correlatable data to calculate statistically sound displacement values. Assuming no influence of u(x_motion) is present, the integrated DSC system works well enough to fulfil the set uncertainty limit according to the simulation results for rotatory LSWLI measurements at all simulated apex distances. For topography measurements, a rotation stage for the movement of the measurement object is used, which has a unidirectional repeatability larger than that of the piezo linear stage. For these reasons, the measurement scan is subjected to both u(x_measurement) and u(x_motion). The motion uncertainty may even be increased if the object is not mounted perfectly centered, resulting in an unknown eccentric movement of the observed surface. Also, the object surface contains deviations such as waviness, which causes an apparent deviation in circumferential velocity due to local changes in object radius. The measured mean displacements are 0.9748 pixels/frame in x-direction and 0.003 pixels/frame in y-direction. As the y-displacement is well below the resolution of both the DSC system and the correlogram reconstruction algorithm, it is considered negligible for this study. Table 1 gives the standard deviation of displacement measurements for 3000 images taken for topography measurements. With a value of 0.26 pixels it is more than 10 times larger than the uncertainty obtained from the test on the piezo linear stage. As the illumination conditions of the DSC-ROI are the same in both DSC measurements, the displacement uncertainty of the rotatory LSWLI measurement can be mainly attributed to the influence of motion uncertainty. The displacement uncertainty calculated for the rotatory LSWLI measurement is a result of both u(x_measurement) and u(x_motion). As the recording conditions for the DSC did not change, it is expected that u(x_measurement) = 0.02 pixels is also the case in the rotatory LSWLI measurement. This marks the lower limit for the height uncertainty, reading u(h_12)_low = 11 nm at the apex distance of 1.2 mm in the simulation (cf. Figure 5, top).
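A minimal sketch of this statistical evaluation of the standalone test could look like the following (a simple z-score criterion stands in here for the Grubbs test of [21]; all names and thresholds are illustrative):

```python
import numpy as np

def displacement_uncertainty(measured_px, target_px=1.0, z_max=3.0):
    """Empirical DSC displacement uncertainty from repeated measurements:
    subtract the target displacement, discard outliers with a z-score
    criterion, and return the standard deviation of the residuals (pixels)."""
    res = np.asarray(measured_px, dtype=float) - target_px
    keep = np.abs(res - res.mean()) <= z_max * res.std(ddof=1)
    return float(res[keep].std(ddof=1))

# e.g. apply to the 100 synthesized series of 50 one-pixel steps; multiply the
# result by 0.557 um/pixel to express it as a length on the surface.
```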
A simulation with coupled uncertainties using the parameters u(x_measurement) = 0.02 pixels in combination with u(x_motion) between 0.25-0.27 pixels at a distance of x_offset = 1.2 mm yields an estimation for the height uncertainty in the range of u(h_12) = 21-24 nm. This is outside the predefined uncertainty limit for the apex distance x_offset = 1.2 mm, which is u(h_12)_limit = 16.7 nm according to the simulation. This means that the positioning conditions were not optimal in the experiment. As the height uncertainty of the measurement cannot be determined from the displacement measurements alone, an evaluation of the topography results will give insights into the height uncertainty achieved experimentally. Topography To showcase the influence of the displacement data on the calculation of the topography, the recording of the WLI ROI is evaluated with the DSC displacement data. The topography is evaluated with the same settings according to the methods described in Section 2. Figure 6 depicts a measured topography calculated using the DSC displacement measurement on the left and the topography of the same object recorded with the VSWLI with the 50× objective as a reference on the right. Note that the VSWLI topography shown in Figure 6 was captured without stitching. The curvature of the surface had to be subtracted from the VSWLI for the depiction in the right half of Figure 6 and the calculation following after. The curvature was removed using a parabolic fit over an averaged profile in scan direction. Additionally, LSWLI and VSWLI measurements were leveled using a linear fit over averaged profiles in both x- and y-direction. In both resulting topographies, the characteristic features of the measurement object are visible. The object surface is made up of smooth stripes surrounded by a rougher region, which was manufactured with an etching process. While the VSWLI was able to capture both the rough and the smooth part, the LSWLI only provided topography data on the smooth part. This is due to the different measurement ranges of the two measurement devices. The VSWLI has a range of a few hundred µm and is able to capture both surface parts. The LSWLI has a range of about 15 µm and could therefore only capture the smooth regions. However, there are spots in the rough regions of the surface that could be evaluated. These are mainly regions, which were not completely eroded during the etching process. These small regions in the rough area of the object are characterized by their steep edges. Steep edges pose a challenge for WLI evaluation due to the lower amount of light scattered back into the WLI optics. For surface points at these edges, it is especially important to have a low displacement uncertainty to be able to reconstruct correlograms that are evaluable despite the lower light intensities received from these regions.
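The curvature removal and leveling described above for the Figure 6 topographies could be sketched as follows (NumPy polynomial fits; the fit orders follow the description, all other details are assumptions of the sketch):

```python
import numpy as np

def remove_curvature_and_level(topo):
    """Post-process a topography map (2-D array, columns = scan/x direction):
    subtract a parabola fitted to the x-averaged profile (cylinder curvature),
    then level with linear fits to the averaged profiles in x and y."""
    topo = np.asarray(topo, dtype=float).copy()
    x = np.arange(topo.shape[1], dtype=float)
    y = np.arange(topo.shape[0], dtype=float)
    # parabolic curvature along the scan direction
    topo -= np.polyval(np.polyfit(x, topo.mean(axis=0), 2), x)[np.newaxis, :]
    # linear leveling in x and y
    topo -= np.polyval(np.polyfit(x, topo.mean(axis=0), 1), x)[np.newaxis, :]
    topo -= np.polyval(np.polyfit(y, topo.mean(axis=1), 1), y)[:, np.newaxis]
    return topo
```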
Finally, the rotatory LSWLI topography is quantitatively compared with a reference VSWLI measurement to assess the achieved height measurement uncertainty. As a figure of merit for the likeness between the results, the standard deviation of the height differences between the LSWLI result and the VSWLI reference taken with a 50×/0.55 objective is calculated on the three smooth stripes visible in Figure 6. Additionally, the standard deviations, as a stand-in for the surface roughness parameter Sq, are calculated for eight of the smooth stripes that were captured with the LSWLI with its 10×/0.3 objective and the VSWLI with a 10×/0.3 and a 50×/0.55 objective. At 50×, the topography was stitched with 40% overlap. The results are given in Table 2; the reference values read 57.5 ± 6.9 for the 50×/0.55 objective and 52.5 ± 10.7 for the 10×/0.3 objective. The obtained standard deviation of the height differences of the LSWLI measurement with the VSWLI reference is below the calculated surface roughness values, which implies that the LSWLI could be used for roughness measurements on this kind of surface. Another indication for the validity of the LSWLI system for this kind of smooth surface is that the confidence intervals of the LSWLI and both VSWLI measurements, which were taken at different magnifications, are overlapping. The LSWLI measurement was carried out about 1.2 mm away from the apex of the scan path, which, as derived in Section 3.2.1, should result in a height uncertainty of approx. u(h_12) ≈ 23 nm. For the topography obtained in the experiments, the height uncertainty is estimated with the height differences between the rotatory LSWLI measurements and the VSWLI reference measurement, which was determined to be about s(Δh_12) = 46 nm, with a mean offset of Δh_12 = 0.011 nm achieved by manually aligning the two topographies. There are a number of reasons why the experimental height uncertainty is higher than estimated by the simulation. Firstly, the VSWLI reference was recorded with a 50×/0.55 objective on a 0.5× tube, while the LSWLI measurement was carried out using a 10×/0.3 objective on a 1× tube. Even though the reference topography had to be downsampled to allow comparison with the LSWLI-based topography, its raw data was sampled with a higher lateral resolution and is therefore likely superior to the LSWLI regarding imaging of small surface features (<1 µm diameter) and steep edges, which affects the calculation of roughness. For the comparison itself, the two topographies were aligned manually with an alignment accuracy of ±0.5 pixel, which is based on the resolution of the pixel grid and may be improved with a different sampling of the VSWLI reference topography to the grid. Secondly, the recording perspectives of the LSWLI and the VSWLI differ. The LSWLI records the surface of the rotating object 1.2 mm away from the apex of the scan path, which means the surface is observed at an angle range of approx. 0.9-1.4°.
The resulting topography of the rotatory LSWLI is a development of the specimen's mantle surface, showing no curvature except the smaller scaled waviness and roughness of the surface. The VSWLI on the other hand records at a fixed position directly over the apex. Due to the curvature of the object's surface, the topography is recorded including the cylindrical shape of the object, which has to be numerically removed, introducing deviations into the resulting VSWLI topography. The aforementioned comparability issues of the rotatory LSWLI measurement results with the VSWLI reference topography may be explanations for the fact that the height uncertainty is worse than estimated by the simulation. Still, for an LSWLI setup in a developmental stage, on a real-world object with imperfections, (e.g. waviness, roughness) that were not considered in the simulations, the achieved height uncertainty of u(h 12 ) real < 50 nm is considered a success. Discussion The article aimed to propose an integrated displacement measurement system for LSWLI that allows for topography measurements on continuously rotating objects. The influence of the uncertainty of the displacement measurement and the uncertainty of the scan motion was investigated by means of Monte Carlo simulation. The simulation showed that the displacement measurement uncertainty should not surpass 0.02 pixels to avoid doubling of the height uncertainty compared to a measurement system free of displacement uncertainty. The simulations showed further that the measurement of the position has to be about one order of magnitude more accurate than the lateral motion of the object itself. The proposed integrated DSC system was tested in a standalone test on a piezo linear stage and in a rotatory LSWLI measurement. The standalone test under negligible influence of motion uncertainty revealed that the integrated DSC system is capable of displacement measurements with a sufficiently low uncertainty of u(x measurement ) = 0.02 pixels. At the 10× magnification used in this setup, this amounts to u(x measurement ) = 11.14 nm. As the DSC result is image-based, it can be assumed that at higher resolutions the DSC displacement measurement uncertainty will further decrease into the single nanometer range, provided the quality of the speckle pattern can be retained. In addition to the low measurement uncertainty and scalability to other magnifications, the integrated DSC displacement measurement system has further practical advantages as it works without calibration and requires no unit conversion or synchronization to the WLI recording, in contrast to, e.g., an external rotation encoder. In rotatory LSWLI measurements, a total displacement uncertainty of u(x) = 0.26 pixels was observed. As the measurement circumstances of the DSC system were the same in the standalone test and the full topography measurement experiment, it is assumed that motion uncertainty is the dominant contributor to u(x) in the rotatory LSWLI measurement. The achieved total displacement uncertainty was not sufficient to fulfil the set uncertainty requirement. Compared to VSWLI topography measurement results, the LSWLI topography measurements have a higher height uncertainty, which is tolerated as LSWLI is intended to be used on continuously moving objects that cannot be measured with VSWLI. 
Accounting for the developmental status of the LSWLI setup compared to a commercial VSWLI and the difficult comparability of the flat LSWLI surface topography with the VSWLI topographies, which had to be flattened numerically and were recorded at a higher magnification, the calculated height uncertainty of the rotatory LSWLI system is considered plausible. The experimentally achieved height uncertainty is two times higher than estimated by the Monte Carlo simulation. A reason for this could be the difference between the perfectly smooth simulated surface and the rough real-world object. The influence of surface texture on the height uncertainty in rotatory LSWLI is a topic to be considered for future work. Further studies should be conducted to assess the repeatability of the topography measurement result during the recording of the surface over multiple rotations. As the uncertainty of the integrated DSC displacement measurement is already sufficiently low, the most promising way to improve the height uncertainty is by reducing the influence of motion uncertainty. This issue could be tackled in future work by using superior rotation stages or by investigating strategies to compensate for the negative effect of motion uncertainty during the evaluation process. Another field of study opened up by the integrated DSC system is improving the in-process capabilities and exploring potential applications of the LSWLI technology. Accurate information on the lateral location of the object may be used to tackle the issue of vibration in harsh industrial environments. Data Availability Statement: Data will be made available on request.
11,965.8
2021-04-01T00:00:00.000
[ "Engineering", "Physics" ]
IMAGE ENHANCEMENT ALGORITHM USING ADAPTIVE FRACTIONAL DIFFERENTIAL MASK TECHNIQUE. This paper addresses a novel adaptive fractional order image enhancement method. Firstly, an image segmentation algorithm is proposed, which combines the Otsu algorithm and rough entropy to segment the image accurately into the object and the background. On the basis of image segmentation and the knowledge of fractional order differentials, an image enhancement model is established. The rough characteristics of each average gray value are obtained by the image segmentation method, and through these features we can determine the optimal fractional order for image enhancement. Then the image is enhanced using a fractional order differential mask, whose fractional order is obtained adaptively. Several images are used for experiments, the proposed model is compared with other models, and the results of the comparison exhibit the superiority of our algorithm in terms of image quality measures. 1. Introduction. Image enhancement is a meaningful and important task in digital image processing, which can be widely used in many fields, such as medical image processing, pattern recognition, traffic safety, machine vision, robotics, and remote sensing [16,15,14,37,41]. Basic algorithms of image enhancement are divided into two categories: the spatial domain method and the frequency domain method. The spatial domain method directly processes the image pixels, including point processing, template processing and some other methods. The frequency domain processing technology is based on manipulating the Fourier transform of the image; the common processing methods are high-pass filtering, low-pass filtering, homomorphic filtering and so on. These traditional algorithms of image enhancement use integer-order differentials, which can enhance the image to some extent. Nevertheless, in the process of image enhancement, some details of the image may be lost or noise may be generated by these integer-order algorithms. It is necessary to improve conventional algorithms of image enhancement to reflect image information more clearly and accurately. In recent years, the introduction of fractional calculus into image processing has become a novel direction, and good results have been achieved [17,32,4,1,38,2,25,7]. Fractional calculus is also gradually being used in image enhancement. Hungenahally and Suresh [13] introduce and outline generalized fractional discriminant functions; these functions can extract transitional information under noisy conditions and retain obvious perceptual details, so that the image enhancement effect is good. Liu [20] then proposes an image enhancement algorithm based on a fractional mask, which uses image convolution to realize the fractional differential operation. Based on a two-dimensional digital fractional-order Savitzky-Golay differentiator, Chen and Xue [5] propose an image enhancement algorithm, which uses an unsupervised optimization algorithm to select the fractional parameters. Gao and Zhou [8] also generalize the fractional differential to quaternions and propose a new concept of the fractional directional differential of a quaternion, which is used in image enhancement. In particular, Pu et al. [29,12,30] propose a fractional mask in which the fractional order is constant and is determined manually. In the process of image enhancement, the fractional differential operator not only can enhance image edges, but also preserves weak texture.
The fractional order enhancement algorithms improve the enhancement effect compared with the integer order algorithms, but the enhancement results are still unsatisfactory because they use the same fractional order to process all images. To apply fractional orders better in practical image enhancement, some adaptive fractional order models have been developed. The adaptive fractional order enhancement algorithms vary with regional characteristics, and they obtain more accurate results in actual image processing [18,11,6]. However, some adaptive fractional order functions have limitations in image processing. When dealing with different kinds of images, the enhancement results are sometimes good and sometimes bad. Therefore, it is still a problem to select the appropriate adaptive fractional order function. The rough set is a mathematical tool for describing incompleteness and uncertainty. It can analyze and infer data to discover hidden knowledge and reveal potential laws. The rough set is a promising method, which is widely used in image processing [34,23,21]. Therefore, the authors construct the adaptive fractional order function by using rough set theory. In the interest of enhancing images effectively, a new image segmentation algorithm is proposed based on the Otsu algorithm and the rough set; this segmentation algorithm makes the segmentation results accurate. Then, the rough entropy of each gray gradient calculated from the image segmentation is taken as the fractional order of each gray value, and the image is enhanced by a fractional mask. The method of adaptive fractional order construction proposed in this paper uses rough entropy, which has adaptability to images with different characteristics. More importantly, the enhancement model proposed in this paper can enhance the image to a certain extent, and the display effect of the image information is better than that of the traditional image enhancement models. The rest of this paper is organized as follows. In Section 2, the related theories of the fractional differential and rough set theory are introduced. Section 3 proposes the image segmentation and the selection of the adaptive fractional order for image enhancement based on rough entropy. In Section 4, experiments and comparisons are discussed. Finally, Section 5 draws a conclusion. 2. Formulation of related theories. 2.1. Related theories of fractional differential. The main purpose of this section is to introduce the basic contents of the fractional differential. The theory of the fractional differential in Euclidean space is more developed than that in Hausdorff space, and the definitions of the fractional differential based on the Euclidean measure are widely used in mathematical research. The classical fractional differential definitions are the G-L definition, the R-L definition, and the Caputo definition [28]. The G-L definition can be converted into a convolution form in numerical implementation, so it is very suitable for signal processing. The G-L fractional differential of order v is defined as

D^v s(x) = lim_{h→0} h^(−v) · Σ_{k=0}^{[(x−a)/h]} (−1)^k · (Γ(v + 1)/(Γ(k + 1) · Γ(v − k + 1))) · s(x − kh),

where s(x) is the signal under consideration, [a, x] is the duration of s(x), v is a real number, and Γ(·) is the Gamma function. Based on the basic theory of signal processing and the fractional differential, the Fourier transform of the v-order fractional differential involves the factor (iω)^v [35,36], where i is the imaginary unit and ω is the digital frequency. If s(x) is a causal signal, this relation simplifies to F{D^v s(x)}(ω) = (iω)^v · S(ω), where S(ω) is the Fourier transform of s(x). Based on this relation, the amplitude-frequency curves for different orders shown in Fig. 1 can be obtained.
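As a small numerical illustration of this amplitude-frequency behaviour (a hedged sketch, not taken from the paper; the sampled frequencies and orders are arbitrary):

```python
import numpy as np

# Amplitude response of the fractional differential operator: |(i*omega)**v| = omega**v.
# High frequencies are amplified increasingly strongly as the order v grows, while
# low-frequency components are attenuated far less than by an integer-order derivative.
for v in (0.2, 0.5, 0.8, 1.0):
    low, high = 0.5 ** v, 10.0 ** v
    print(f"order v = {v:.1f}: gain at omega = 0.5 -> {low:.2f}, at omega = 10 -> {high:.2f}")
```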
In the figure, we can see that the magnitude for the different fractional orders increases with increasing frequency, and the rate of increase is strengthened sharply and nonlinearly with increasing frequency and differential order. This reveals that the fractional differential operator has the ability to enhance a signal. In addition, the fractional differential operator has the property of a weak derivative: it enhances the high frequency components of the signal while nonlinearly preserving the very low frequency components. A digital image has different regions such as boundaries, weak texture and smooth areas, and the corresponding signal frequencies of these regions are also different. It can be seen from Fig. 1 that, if an integer order image enhancement operator is used, the texture and smooth regions may be weakened when the image edges are enhanced. The fractional order differential mask overcomes this problem: while enhancing the high frequencies, it also retains the low frequency part. Based on the G-L definition, we introduce the basic knowledge of the fractional mask; in the following, [·] represents the integer part. On the basis of the G-L definition and [31], the v-order fractional differential of f(t) can be approximated by the expansion

d^v f(t)/dt^v ≈ f(t) + (−v) · f(t − 1) + (v(v − 1)/2) · f(t − 2) + ... + (Γ(n − v)/(n! · Γ(−v))) · f(t − n).

The numerical expressions of the fractional partial differential operators defined by G-L along the x- and y-coordinates are obtained analogously. According to these expansions, the first three coefficients are c_0 = 1, c_1 = −v and c_2 = v(v − 1)/2; these expressions are extended to the other six directions of the image, and using the coefficients above, an eight-direction image enhancement mask is obtained in Fig. 2. 2.2. Rough set theory. Rough set theory was first proposed by Pawlak in 1982 [26]. The rough set is an effective mathematical tool for dealing with vaguely described objects; its advantage is that it does not need any additional relevant information or data. The combination of the rough set with other theories can improve the mining ability of data, and such combinations perform well in the practice of image processing [23,27]. For an information system S = (U, C), let R be an equivalence relation on the universe U, let x_i represent an object in U, and let [x_i]_R represent the set of objects that are not distinguishable from x_i. For arbitrary X ⊆ U, the lower and the upper approximations of the set X are, respectively,

R_*(X) = { x_i ∈ U : [x_i]_R ⊆ X },  R^*(X) = { x_i ∈ U : [x_i]_R ∩ X ≠ ∅ }.

The boundary region of rough set theory is BN_R(X) = R^*(X) − R_*(X). If the upper and the lower approximations are equal, the boundary region is empty, and such a set is called an exact set. The roughness of the rough set is used to express the degree of roughness of a set, which is defined as

ρ_R(X) = 1 − |R_*(X)| / |R^*(X)|;

roughness reflects the uncertainty of a set to some extent. The rough entropy, defined from this roughness, is then obtained. Rough entropy is a statistical form of features, which reflects the average amount of information in an image. On the premise of containing the information of the image, the constructed image entropy highlights the comprehensive features of the gray information of the pixel position and the gray distribution in the neighborhood of the pixel. Thus, the introduction of rough entropy into image processing has attracted more and more scholars' attention [10,3,33,9]. 3.1. Image segmentation based on rough entropy. The Otsu algorithm is an efficient algorithm proposed by the Japanese scholar Otsu in 1979 [22]; it determines the optimal segmentation threshold according to the maximum between-class variance criterion.
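Before moving on to segmentation, a minimal sketch of the fractional mask idea described above could look like the following (only the first two coefficients are used, giving a 3×3 eight-direction mask; the actual mask in Fig. 2 uses three coefficients on a larger neighbourhood, so this is an illustrative simplification):

```python
import numpy as np
from scipy.ndimage import convolve

def gl_coefficients(v, n=3):
    """First n G-L coefficients c_k = (-1)^k * binom(v, k):
    c_0 = 1, c_1 = -v, c_2 = v(v-1)/2, ..."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - v) / k
    return c

def fractional_mask(v):
    """3x3 eight-direction fractional differential mask: each of the eight
    neighbours receives c_1 = -v, the centre collects the eight c_0 terms."""
    c = gl_coefficients(v, 2)
    mask = np.full((3, 3), c[1])
    mask[1, 1] = 8.0 * c[0]
    return mask

def enhance(image, v):
    """Enhance an image by convolving it with the order-v fractional mask."""
    return convolve(np.asarray(image, dtype=float), fractional_mask(v), mode="nearest")
```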
In order to improve the effect of image segmentation, a new image segmentation algorithm is proposed by combining the Otsu algorithm and rough entropy [24,39]. Reference [40] applies the Monte-Carlo method and a rough entropy criterion to segment images. It uses rough set theory to divide the image into sub-blocks according to certain rules, then calculates the rough entropy with the sub-blocks as samples, and finally uses the gray value corresponding to the maximum rough entropy to segment the image. The rough set method makes image segmentation more precise, and this segmentation method improves the speed of segmentation when smaller image sub-blocks are used to achieve a better segmentation effect. Inspired by this algorithm, this paper combines rough entropy with the Otsu algorithm to make image segmentation clearer. In order to separate the boundary and texture regions better, the traditional gray value of each pixel f(i, j) is replaced with the average gradient M(i, j), and the matrix M is composed of these average gradients. Then the image is divided into blocks of suitable size, and their optimal segmentation thresholds are obtained by applying the Otsu algorithm to each sub-block. The optimal thresholds are combined with the knowledge of rough entropy to calculate the upper and lower approximations of the object and the background of the sub-blocks, $\overline{O}_{\tau}$, $\underline{O}_{\tau}$, $\overline{B}_{\tau}$, $\underline{B}_{\tau}$. The use of the optimal thresholds reduces the range of the upper and lower approximations, which confines the segmentation results within a more precise scope and thus makes the segmentation result more ideal to a certain extent. The segmentation process is as follows: Step 1. For an image f(i, j) of size [m, n], calculate the average gradient M(i, j) over the eight directions of every pixel using formula (11). Step 2. Divide M into N sub-blocks of appropriate size and mark the sub-blocks as $P_r$. Then calculate the maximum gray value $P_{r\max}$ and the minimum gray value $P_{r\min}$ of each sub-block. In experiments, it is found that for more complex images, the finer the sub-block division, the better the segmentation effect. Therefore, the number of sub-blocks should be chosen to suit the image as far as possible. Step 3. Calculate the best segmentation threshold $T_r$ for each sub-block using the Otsu algorithm, and find the maximum and minimum thresholds $T_{\max}$, $T_{\min}$. Step 4. For each average gradient level M(i, j), combine rough set theory with these thresholds to calculate the upper and lower approximations of the object, $\overline{O}_{\tau}$, $\underline{O}_{\tau}$, and of the background, $\overline{B}_{\tau}$, $\underline{B}_{\tau}$, respectively. Step 5. From the upper and lower approximations of the object and the background, calculate the roughness and the entropy by formulas (9)-(10). Then the average gradient corresponding to the maximum entropy is found; it is the best threshold to segment the image. Step 6. Use the best threshold to segment M. Finally, the segmentation result is obtained. The segmentation algorithm proposed in this paper uses the Otsu algorithm to calculate the maximum and minimum thresholds of the matrix sub-blocks; they are then brought into the upper and lower approximations of the image, so that the image is segmented more accurately into the object and the background. This segmentation model is more accurate than the method of [40] and the traditional Otsu algorithm, but it takes long to compute; its computing time needs to be improved in the future.
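The first three steps of this procedure can be sketched compactly. The snippet below is an illustrative sketch, not the authors' code: it computes an eight-neighbour average-gradient map as a stand-in for the paper's formula (11), splits it into sub-blocks, and finds a per-block Otsu threshold. The block size and the exact gradient definition are assumptions.

```python
import numpy as np

def otsu_threshold_float(sub: np.ndarray, bins: int = 64) -> float:
    """Otsu threshold for a real-valued sub-block via a coarse histogram."""
    hist, edges = np.histogram(sub, bins=bins)
    p = hist / max(hist.sum(), 1)
    omega, mu = np.cumsum(p), np.cumsum(p * edges[:-1])
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    return float(edges[int(np.argmax(np.nan_to_num(sigma_b)))])

def average_gradient_map(f: np.ndarray) -> np.ndarray:
    """Mean absolute difference to the 8 neighbours (assumed form of M(i, j))."""
    fp = np.pad(f.astype(float), 1, mode="edge")
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    diffs = [np.abs(fp[1 + di:fp.shape[0] - 1 + di, 1 + dj:fp.shape[1] - 1 + dj] - f)
             for di, dj in shifts]
    return np.mean(diffs, axis=0)

def per_block_otsu(M: np.ndarray, block: int = 32) -> list[float]:
    """Otsu threshold T_r for every block-by-block sub-matrix of M (Steps 2-3)."""
    thresholds = []
    for i in range(0, M.shape[0], block):
        for j in range(0, M.shape[1], block):
            thresholds.append(otsu_threshold_float(M[i:i + block, j:j + block]))
    return thresholds
```

Steps 4-6 would then combine the extreme block thresholds with the rough-set approximations and select the single gradient level of maximum rough entropy as the global threshold.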
The segmentation results of images, named 'Lena' and 'Fishing boat', are shown in Fig. 3 and Fig. 4, respectively. From the comparison of image segmentation results, the segmentation algorithm in this paper achieves better segmentation effects. determining where gray value or average gradient value is located (the boundary, weak texture or smooth areas), the frequency is obtained to determine the fractional order. However, a part of adaptive fractional order functions are conservative, using them to enhance images will appear inadaptable situations such as noise generation. Introducing knowledge of the rough set will solve this problem well, this paper unites rough entropy to establish adaptive fractional order function. Firstly, the image is segmented by the method of this paper. Then, rough entropy of each average gradient based on the segmentation is got to be as the fractional order of image enhancement, and fractional mask is used to enhance image. The detailed calculation process is as follows: Step 1. Segmentation of images using the above mentioned method. Step 2. The rough entropy of each average gradient is calculated by the above segmentation algorithm. Then the rough entropy is obtained as the fractional order of the corresponding average gradient. Step 3. Image enhancement using adaptive fractional mask. The block diagram for our model is shown in Fig. 5. Experiments and analysis. In this section, five kinds of images are used to evaluate the effectiveness of the algorithm proposed in this paper, their original images are shown in Fig. 6. The image enhancement effects of this method are compared with that of the AFDA method [19] and traditional fractional differential method at the order 0.2 and 0.8 respectively. The image enhancement results are evaluated by visual analysis, entropy of information and average gradient. Although the 0.8-order method obviously enhances the edge of image, it also produces significant noise. Contrasted the 0.2-order method and the AFDA method, the proposed method not only enhances image edge, preserves weak texture and smooth areas, but also the quality of image is not affected by noise. Fig. 8 is a moving head image. The AFDA method acts out inadaptable features in this image, it produces a amount of noise beside human hair, which seriously affects the reading of image information. The method in this paper has better adaptability to the image, and the effect of image enhancement is better than method of the 0.2-order and the 0.8-order methods. Fig. 9 shows a medical image. The AFDA method reflects serious inadaptability between the image and enhancement method, the contours and details of medical image produce a great deal of noise, and the quality of image is obviously worse. Compared with the other three methods, the image enhanced by the algorithm proposed in this paper is clearer, and its details are more vivid. Fig. 10 is an aerial image. It can be evidently seen that the results in this paper is better than the 0.2-order method and the AFDA method. The mountain of the image processed by the enhanced algorithm in this paper is clearer and more vivid, and it does not produce noise as the 0.8-order method do. Fig. 11 is an airplane image. Compared with the other three algorithms(the 0.2-order method, the 0.8-order method and the AFDA method), the algorithm proposed in this paper has better enhancement effect, which makes the details of the image clear. Information entropy and average gradient are used as criteria for image evaluation. 
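The two quantitative criteria named at the end of the previous paragraph can be computed directly from a grayscale image. The following is a minimal sketch (not the authors' code) using the usual definitions of the average gradient and Shannon information entropy; it assumes an 8-bit grayscale input.

```python
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    """Mean of sqrt((dx^2 + dy^2)/2) over the image; larger means sharper."""
    f = img.astype(float)
    dx = f[1:, :-1] - f[:-1, :-1]      # vertical finite difference
    dy = f[:-1, 1:] - f[:-1, :-1]      # horizontal finite difference
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def information_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy of the gray-level histogram; larger means richer content."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Example usage: compare an image before and after enhancement (arrays assumed given).
# print(average_gradient(original), average_gradient(enhanced))
# print(information_entropy(original), information_entropy(enhanced))
```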
The average gradient of an image refers to the rate of change of the image gray levels and can be used to express image clarity: the bigger the average gradient, the clearer the image. It can be computed as

$$ \bar{G} = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1}\sum_{j=1}^{N-1} \sqrt{\frac{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2}}{2}}, $$

where M × N is the size of the image. Information entropy is a key criterion in evaluating the information quality of an image; the bigger the entropy, the richer the information of the image. It is given by

$$ H = -\sum_{k=0}^{L} p(k)\, \log_{2} p(k), $$

where p(k) is the frequency of the image gray value k, and L is the constant 255. The information entropy and average gradient of the five images in the experiments are shown in Table 1 and Table 2. They show that our results are better than those of the other methods, which further illustrates the effectiveness of the algorithm proposed in this paper. 5. Conclusion. This paper builds a new image enhancement algorithm that combines a fractional order differential mask with rough entropy based on the segmentation result. The enhancement results are more accurate than those of traditional algorithms. The advantages of the enhancement algorithm in this paper are as follows: by adopting a fractional order differential operator, the image preserves more texture details and sharp edges; by using rough entropy to establish the adaptive fractional order, the corresponding adaptive fractional order differential mask can enhance various images very well. This overcomes the selectivity of general fractional order functions with respect to images and makes the enhancement algorithm more practical. The experimental results show that the proposed algorithm is effective and advanced. In the future, we will strive to improve the speed of this algorithm, and more work is needed to enhance the image further.
4,176.2
2019-01-01T00:00:00.000
[ "Computer Science" ]
Ab initio Investigation of the Structure and Electronic Properties of Normal Spinel Fe 2 SiO 4 Transition metal spinel oxides have recently been predicted to create efficient transparent conducting oxides for optoelectronic devices. These compounds can be easily tuned by doping or defect to adapt their electronic or magnetic properties. However, their cation distribution is very complex and band structures are still subject to controversy. We propose a complete density functional theory investigation of fayalite (Fe 2 SiO 4 ) spinel, using Generalized Gradient Approximation (GGA) and Local Density Approximation (LDA) in order to explain the electronic and structural properties of this material. A detailed study of their crystal structure and electronic structure is given and compared with experimental data. The lattice parameters calculated are in agreement with the lattice obtained experimentally. The band structure of Fe 2 SiO 4 spinel without Coulomb parameter U shows that the bands close to Fermi energy appear to be a band metal, with four iron d -bands crossing the Fermi level, in spite of the fact that from the experiment it is found to be an insulator. Introduction Transparent conducting oxides (TCOs) are electrically conductive materials with comparably low absorption of electromagnetic waves within the visible region of the spectrum. They belong to an exceptional family of oxides, possessing two antagonistic physical properties, high optical transparency to visible light and high electrical conductivity carrier concentrations [1]. Due to these properties, the TCOs are technologically classified as an important class of materials in the field of optoelectronics [2,3]. TCOs have found a wide variety of technological applications, such as solar cells, low emissivity windows, window defrosters, flat-panel displays, blue or ultraviolet light-emitting diodes (LEDs), liquid crystal displays, dimming rear-view mirrors, semiconductor lasers, energy-conserving touch screens, light-emitting displays, invisible and security circuits [4,5]. Also, some new applications of TCOs have been suggested recently such as holographic recording media, write-once read-many-times memory chips (WORM), electronic ink etc. [1,6]. Most of the TCOs have n-type conductivity and the development of efficient p-type TCOs is one of the key goals of researchers. High conductivity p-type TCOs similar to the high-performance n-type TCOs would be a major breakthrough, facilitating advanced devices and applications [7]. Spinel structures have the common formula of TB2X4, where X can be oxygen (oxides) or a chalcogens element, such as sulfur (thio-spinels) and selenium (selenospinels) while T and B can be divalent, trivalent, or tetravalent cations. Spinel oxides have been recognized as promising p-type TCO semiconductors, which can be a substitute to the n-type indium doped tin-oxide (ITO) [8]. The perspectives of usage of the spinel oxides as p-type transparent conducting oxide semiconductors and for other applications have stimulated widespread experimental and theoretical studies on this exciting class of materials [9][10][11]. Spinel oxides are categorized by their robust properties, such as high strength, good electrical conductivity, high resistance to chemical attack, high melting temperature and large fundamental bandgap [10,11]. Fayalite (Fe2SiO4) is one of the promising transparent semiconducting for various technological applications. 
The spinel oxide Fe2SiO4 was synthesized and its crystalline structure was identified a long time ago [12]. Some of the fundamental physical properties of Fe2SiO4, such as structural and electronic properties, have been already investigated experimentally and theoretically by first-principles calculations [13][14][15][16]. However, the theoretical band gap value of Fe2SiO4 is still under debate. Fortuitously, there are many first-principles techniques that can be used to describe accurately the electronic structure of semiconductors and insulators [17][18][19][20]]. For instance, the recently proposed modified Becke-Johnson (MBJ) potential and thereafter the well-known GW approximation, but the major problem with these techniques is they are too expensive. However, there are other first-principles methods (GGA and LDA) that are cheaper and less time-consuming. Both the GGA and LDA involve the use of pseudopotentials (PPs). Here, we apply the latter methods to study systematically the structure and electronic band parameters of spinel Fe2SiO4 successively. In addition, we also focused not only on predicting the real value of the fundamental band gap in this material but also on other key properties in TCOs, like the second bandgap (between the two lowest conduction bands) [21]]. The results will resolve the discrepancies where exist on this spinel material and also indicate its suitability for optical devices. Computational details In this work, the electrical properties of the Fe2SiO4 were investigated using density functional theory (DFT) with the pseudopotential plane-wave method as implemented in Quantum Espresso (QE) package [22]. The electronic calculation was performed using the DFT-GGA/LDA and GGA+U/LDA+U. All calculations were spin-unpolarized and the exchange-correlation terms were described using the Perdew-Burke-Ernzerhof (PBE) and Perdew-Zunger (PZ) functional. For the DFT-GGA (GGA+U) calculation, the norm-conserving pseudopotentials including the Fe(3s,4s,3p , 3d), Si(3s,3p) and O(2s, 2p) state in its valence shell were used [23]]. The wave functions were expanded in-plane waves up to the kinetic energy cutoff of 350 Ry and convergence criteria for the energy of 10 $% eV were chosen while for the DFT-LDA (LDA+U) the Fe(3s,4s,3p,3d), Si(3s,3p) and O(2s, 2p) state were used. A planewave kinetic energy cut-off of 450 Ry was selected [24]]. These ensure a good convergence of the computed lattice constant (see Table 1). In order to take into account the on-site Coulomb interactions between 3d electrons, the values of the Coulomb integral U = 6.8 eV and Hund's exchange J = 0.89 eV (from Ref. [25]) for Fe 2+ in Fe2SiO4 in the GGA+U and LDA+U calculations were used. We performed our calculations on the 56-atoms unit cell of the Fe2SiO4 spinel structure. An 8 × 8 × 8 Monkhorst-Pack k-point mesh was used for the cubic structure used to obtain a well-converged sampling of the Brillouin zone. Crystal Structural The spinel Fe2SiO4 fayalite belongs to Fd-3m (227) space group symmetry and it crystallizes in an fcc (face-centered cubic) lattice. A unit cell of Fe2SiO4 is shown in Figure 1 At the initial step of our calculation, full structural optimization and convergence test of the considered material was performed to determine the equilibrium structural parameters, including the lattice parameter (a). To do that, total energy was calculated for a series of primitive cell volumes, where the atomic positions were allowed to relax for each volume. 
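The energy-volume points produced by this scan are then fitted to an equation of state, as described in the next paragraph. As an illustration only, here is a minimal sketch of a third-order Birch-Murnaghan fit with NumPy/SciPy; the data points and initial guesses are hypothetical, not values from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan energy-volume equation of state."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Hypothetical E(V) points (e.g. in eV and Angstrom^3) from a volume scan.
V = np.array([520.0, 540.0, 560.0, 580.0, 600.0, 620.0])
E = np.array([-1200.10, -1200.45, -1200.62, -1200.65, -1200.55, -1200.33])

p0 = [E.min(), V[np.argmin(E)], 1.0, 4.0]           # initial guess E0, V0, B0, B0'
popt, _ = curve_fit(birch_murnaghan, V, E, p0=p0)
E0, V0, B0, B0p = popt
a0 = V0 ** (1.0 / 3.0)   # lattice parameter if V is the conventional cubic cell volume
print(f"V0 = {V0:.1f}, B0 = {B0:.3f} (energy/volume units), B0' = {B0p:.2f}, a0 = {a0:.3f}")
```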
The resulting total energy-volume curve was fitted to the Birch-Murnaghan equation of state [26] to determine the equilibrium primitive cell volume, the bulk modulus and its pressure derivative $B_0'$. The optimized structural parameters using the GGA, GGA+U, LDA and LDA+U are displayed in Table 1 together with the existing experimental and theoretical data. One can appreciate from Table 1 that there is an excellent agreement between the GGA+U optimized lattice parameter value and the corresponding experimental one. The relative deviation δ of the calculated lattice parameter $a_{calc}$ from the experimental one $a_{exp}$ is defined as $\delta = \frac{a_{calc} - a_{exp}}{a_{exp}} \times 100\%$. Electronic properties The use of electronic structure methods adept at correctly calculating the band gap for optical transparency is useful in finding new or improved optoelectronic materials. The calculated electronic energy band dispersions for the optimized crystal structure of the considered material along the particular high-symmetry points within the Brillouin zone (BZ) via DFT and DFT+U are illustrated in Figure 2. For both GGA-PBE and LDA-PZ the electronic band gap entirely disappears (see Figure 2). This is due to the well-known band gap underestimation caused by the incorrect treatment of electron exchange in DFT [27,28]. As a result, the Fermi level falls within some long, extended bands, and the material appears to be metallic. Thus, DFT is insufficient for accurately predicting the electronic properties of the ternary oxide under study, and is not appropriate for screening this material. It can be observed from Figure 3 that the studied material, using both GGA+U and LDA+U, is a direct band gap semiconductor, where both the VBM (valence band maximum) and CBM (conduction band minimum) are located at the point W in the BZ. The main features (band dispersions) of the GGA+U and LDA+U band structures are basically identical, except that the GGA+U band gap is much higher than the LDA+U one. The GGA+U band gap is 3.11 eV while the LDA+U one is 2.88 eV. The experimental band gap of the studied material was reported to be 4.2 eV, obtained using a spectroscopic technique. Thus, one can state that GGA+U provides a reasonably simple method that can be a viable alternative to the computationally expensive approaches for the calculation of the band gaps of strongly correlated materials with an acceptable accuracy. It is worth noting that the discrepancy between the experimental data reported by different researchers for the Fe2SiO4 band gap is probably due to experimental errors that usually arise from the measurement technique used and the sample quality. In addition, this inconsistency between experimental data can be explained by the fact that the band gap value depends on the temperature at which the measurement is done. It is also worth noting that the minor differences between our GGA+U and LDA+U band gaps and the earlier reported results obtained with other techniques could be attributed to the fact that these values are derived from calculations performed at slightly different values of the optimized structural parameters; the band gap value is sensitive to the structural parameter values. From the study of the energy band dispersions at the energy band extremes, qualitative data can be obtained about the ability of the studied material to easily conduct electricity.
Figure 2 shows that the valence energy bands around the VBM are more dispersive than the conduction bands around the CBM, this indicate that the effective mass of the electron will be heavier than that of the hole [29]. This result suggests that the n-doped Fe2SiO4 should be more advantageous for optoelectronic devices performance than the p-doped ones, while the electrical conductivity by valence band electrons should be more promising than that by conduction band holes. The projected density of states (PDOS) of the bulk Fe2SiO4 calculated by means of both GGA+U and LDA+U methods are presented in Figure 3. The bands structure in Fig. 3 Conclusions We have successfully used density functional approach within the GGA and LDA approximations and demonstrated a good description of the structural and electronic properties of Fe2SiO4 spinel. The GGA and LDA energy bands structure of Fe2SiO4 fayalite is qualitatively incorrect since this mineral is described as a band metal by GGA and LDA, whereas it is experimentally an insulator with a band gap of 4.22 eV. However, when GGA +U/LDA+U was incorporated due to 3d electrons of Fe atom we found the band gap to be 3.11 eV (GGA+U and 2.88 eV (LDA+U). Also the bottom most CB for Fe2SiO4 spinel is well dispersive, which means that in these materials electrical current can be transported by CB electrons. In general, the results obtained suggest that, the Fe2SiO4 spinel is potential candidates for optoelectronic applications
2,517.2
2021-04-29T00:00:00.000
[ "Materials Science", "Physics" ]
Segmentation of THz holograms for homogenous illumination This paper investigates the feasibility of applying the hologram segmentation method for homogeneous illumination. Research focuses on improving the uniformity of the illumination obtained from diffractive optical elements in the THz range. The structures are designed with a modified Ping-Pong algorithm and a neural network-based solution. This method allows for the improvement of uniform illumination distribution with the desired shape. Additionally, the phase modulations of the structures are divided into segments, each responsible for imaging at different distances. Various segment combination methods are investigated, differing in shapes, image plane distances, and illumination types. The obtained image intensity maps allow for the identification of the performance of each combination method. Each of the presented structures shows significant improvements in the uniformity of imaged targets compared to the reference Ping-Pong structure. The presented structures were designed for a narrow band case—260 GHz frequency, which corresponds to 1.15 mm wavelength. The application of diffractive structures for homogenization of illumination shows promise. The created structures perform designed beamforming task with variability of intensity improved up to 23% (standard deviation) or 45% (interquartile range) compared with reference structure. for uniform illumination was previously reported 16 .Moreover, applying DOEs for top-hat illumination showed an overall improvement in complex optical system performance 17 . In beamforming with diffractive structures, two general subsets of methods for structure design emerge 18 : analytical transformations and numerical methods.Analytical approaches focus on finding a coordinate transformation of the input intensity distribution to the output intensity distribution.After defining the transformation, it can be realized as a phase modulation 18 .The term numerical methods encapsulates here a broad range of iterative optimization algorithms that modify the starting phase modulation in steps, working towards the improved reconstruction of the output intensity distribution. Numerical methods allow for the design of DOEs without knowledge of the analytically calculated phase modulation forming a particular shape at a defined distance from the structure.Their advantage is the capability of obtaining the target phase modulation without prior knowledge of the analytic description of the input wavefront.Throughout the history of diffractive optical design, a large number of iterative methods have been presented.After the introduction of the Gerchberg-Saxton algorithm for phase retrieval 19 , many related methods followed.At their root, numerical methods are optimization algorithms that also lead to other approaches such as simulated annealing 20 or genetic algorithms 21 . 
This paper investigates the feasibility of applying hologram segmentation to realize homogeneous illumination in the THz range of radiation.Segmentation is applied to reduce the effects of significant coherence of a THz source.The idea of the proposed approach revolves around the mitigation of unwanted speckle noise within the uniform distribution pattern.This is a known problem in coherent optical systems, and the proposed methods of reduction of this effect aim to reduce the coherence of an illuminating beam by introducing the angle, polarization, or wavelength diversity 22 .As the latter two are infeasible in the case of the narrow-band, linearly-polarized THz emitters, we focus on the spatial approach to decrease the coherence of the input beam.We proposed the spatial segmentation of the designed DOEs.In this way, the segments of the structure act as separate holograms placed at different distances from the source.Thus, the speckle pattern for each segment is different, allowing for the mitigation of the resultant speckle noise. The design process of the diffractive structures was based on the modified Ping-Pong algorithm.A singleplane DOE, designed with the same method, served as a reference, allowing for the verification of the influence of the segmentation on the obtained intensity pattern.Moreover, two additional structures were prepared with a previously investigated novel neural network-based algorithm 13,23 .Different approaches to segmentation and homogeneity improvement have been tested in numerical simulations.The best-performing structures have been chosen for manufacturing and experimental evaluation.The structures have been fabricated using the 3D printing method.The experimental results show a significant correlation with the simulations and suggest the potential feasibility of the method for achieving uniform illumination patterns. Materials and manufacturing Diffractive structures realize a desired beam shaping by an introduction of a specifically designed distribution of the complex amplitude.It can be accomplished through modification of an amplitude (in the case of opaque elements) or phase of the optical field (for transparent objects).The phase modulation is achieved by manufacturing an optical element with thickness distribution related to the particular phase shifts.The design procedure and the type of phase delay map coding define them.In this research, an iterative optimization method with scalar propagation was used for phase modulation design.At the same stage, simulations of structure performance were obtained.Such designed phase delay maps were then 3D modeled and 3D printed. 
Structure design Structures are designed with an iterative algorithm based on the modified Ping-Pong algorithm.This method operates within the area of scalar wave optics; therefore, it defines states of amplitude and phase at given optical planes.Two planes are defined: the hologram plane and the single image plane-the algorithm switches between those planes through modified Fresnel propagation 24 (a modified convolution approach implemented in Light Sword 6.0 software).A number of planes can be extended 25 .At each plane, the algorithm enforces predefined amplitude distributions (a uniform amplitude for the hologram plane and a desired shape for the image plane), transferring information into phase distribution by propagation between planes.The phase distribution obtained at the end of the algorithm at the hologram plane forms a phase delay map of the diffractive element.The modified ping-pong algorithm has been chosen both for the design of the investigated structures (segmented) and the reference one (a single plane).This is a standard for designing computer-generated holograms (in visible as well as THz spectral ranges).It has been shown to significantly improve the obtained intensity patterns compared to the single backpropagation of the target intensity distribution 26,27 . In this work, the phase delay map area has been divided into segments with different functionalities.For the purpose of this study, four segments were defined for each structure.All segments were imaging the same target-a square with a side of 40 mm-but at slightly different distances: 490 mm, 495 mm, 500 mm, and 505 mm.Segments may be organized in multiple ways on the structure's surface.For example, a circular aperture may be divided into quarters, each representing a separate segment.The way the segments are combined is an additional degree of freedom in this design method.Since, in this method, each segment represents a separate part of the whole phase modulation, one can calculate each segment separately and perform scalar propagation on each of them.For this reason, phase modulation of one segment can be propagated by a small distance, and then a modulation from another can be combined.Using this concept, one can make the whole structure image at one chosen distance-single plane, even though all designed segments have different imaging distances.This method of segment combination with resulting image plane composition is depicted in the top row of Fig. 1.Another option is to simply combine all segments, imaging at different distances, into a structure.As a result, the final structure produces multiple image planes at once.The bottom row of Fig. 1 shows this combination method with the following image plane composition.The illumination type and number of image planes for each structure are also marked in Fig. 2. The design wavelength (DWL) was fixed to 1.15 mm (corresponding to the frequency of 260 GHz), which matched the available equipment and optical properties of the materials (mainly, the absorption coefficient). Eight types of structures are investigated in this paper: five designed with the proposed segmentation method, two designed with neural-network-based (NN-based) approach, and reference structure, all illustrated in Fig. 2. 
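Before the individual structures are detailed, the iterative design loop described above can be illustrated with a short sketch. This is a generic Gerchberg-Saxton-style ping-pong loop (not the Light Sword implementation) using angular-spectrum propagation between the hologram plane and a single image plane; the wavelength, sampling, distance, iteration count and target amplitude below are placeholder assumptions.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def ping_pong(target_amp, wavelength=1.15e-3, dx=0.9e-3, z=0.5, iters=50):
    """Return a phase-only hologram whose image-plane field approximates target_amp."""
    holo = np.exp(2j * np.pi * np.random.rand(*target_amp.shape))   # random start phase
    for _ in range(iters):
        img = angular_spectrum(holo, wavelength, dx, z)
        img = target_amp * np.exp(1j * np.angle(img))      # enforce target amplitude
        holo = angular_spectrum(img, wavelength, dx, -z)    # propagate back
        holo = np.exp(1j * np.angle(holo))                  # enforce uniform amplitude
    return np.angle(holo)                                   # phase delay map

# Example: 40 mm square target sampled on a 256 x 256 grid with 0.9 mm pitch.
N = 256
x = (np.arange(N) - N / 2) * 0.9e-3
X, Y = np.meshgrid(x, x)
target = ((np.abs(X) < 0.02) & (np.abs(Y) < 0.02)).astype(float)
phase_map = ping_pong(target)
```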
Within the set of segmented structures, three DOEs were divided into quarters (four segments of a circle) and combined to reconstruct a square at different distances (Quartered with Multiple planes-QM) or the same single distance (Quartered with Single plane-QS). The third quartered structure was combined to reconstruct at a single plane like QS but had other simulation parameters chosen to be closer to the NN-based structures (Quartered with NN-based parameters-QN). The design of this structure aimed to provide a better comparison with the NN-based structures, which will be described in detail later. The following two structures have been divided into honeycomb cells with round (Honeycomb with Circular apertures-HC) or hexagonal (Honeycomb with Hexagonal apertures-HH) apertures. The central aperture and each pair of opposing cells were treated as separate segments (resulting in a total number of four segments). A reference structure (REF) has been designed with a standard Ping-Pong algorithm. It consists of a single segment in a single plane. It can be described as a typical computer-generated hologram for the uniform illumination of a square. The sampling period of a model of the structure was set to 117 μm based on the requirements of the manufacturing method. Since the propagation method uses the Fourier transform, the resolution of a calculation matrix had to be large enough to avoid unwanted sampling effects. The resolution was set to 4096 × 4096 points. The exceptions were QN and REF, whose sampling period was set to 0.9 mm and resolution was reduced to 1024 × 1024. Subsequently, two structures have been designed with the previously investigated algorithm for the optimization of the DOEs based on the neural network (NN) 13,23. The structures designed with the NN algorithm do not utilize the segmentation method. They are both single-plane DOEs, consisting of a single segment, and work as an additional reference, designed with a unique, novel method. The principle of the NN-based design method is described in our recent publication 23. The crucial information is that the NN algorithm uses the convolution method to simulate radiation propagation and the adaptive moment estimation (ADAM 28) method to optimize the phase delay map to match the target amplitude (40 mm square). The optimization parameters have been set to
α = 0.1, β1 = 0.9, β2 = 0.999 and ε = 10−5. Optical design parameters were the same as for the segmented structures. However, the sampling period was set to 0.9 mm and the matrix resolution to 128 × 128 points, which results from the implementation of the NN algorithm. Two structures have been designed with this method, applying illumination with a uniform amplitude (NNPW) and a Gaussian-shaped amplitude (NNG). Two types of illumination have been investigated in the simulations. The first, denoted as plane wave (PW), is an illumination with a constant amplitude distribution. It makes sense from the perspective of the structures' design, as every part of the element contributes equally to forming the final intensity pattern. This approach, however, is not physical, as in most cases the amplitude distribution of the illuminating field is of Gaussian shape. Therefore, some structures have been designed using the Gaussian amplitude distribution. This approach describes the experiment more accurately, but it pays less attention to the optimization of the outside regions of the structure. It should be emphasized that in both approaches the illuminating wavefront is flat (the beam is collimated, forming a quasi-plane wave). The differences concern only the amplitude distribution of the beam. Parameters and types of structures are additionally summarized in Fig. 2. Manufacturing Physical models can be created from numerical representations of phase modulation distributions with the use of 3D printers. Fused Deposition Modeling (FDM), also called Fused Filament Fabrication, was chosen as the manufacturing method for all diffractive structures described in this paper. Numerical representations of the holograms have been saved in 8-bit loss-less BMP files, which means that the obtained phase values have been sampled into 256 levels. Next, 3D models were obtained from the BMP files, with dimensions accounting for the known refractive index of the material used to manufacture the structure. In the modeling process, two different approaches were performed. The first solution was the representation of each pixel by a node. Nodes were extruded into different height levels according to the phase values varying from 0 to 255 and the refractive index of the material used in manufacturing. The height represented by each pixel was calculated using the physical formula h(x, y) = λφ(x, y) / (2π(n − 1)), where h(x, y) is the extrusion pixel height as a function of the x and y Cartesian coordinates, φ(x, y) is the desired pixel phase retardation distribution, λ is the design wavelength, and n is the refractive index of the material. Subsequently, the nodes were connected into a triangular mesh, creating a 3D model. Due to the interpolation between pixels, such an approach gives satisfying results for small sampling values and continuous phase changes of the structure. This method was performed for structures with a 117 μm sampling period and 4096 × 4096 px resolution using Blender software.
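The conversion from an 8-bit phase map to extrusion heights, used in both modeling approaches described here and below, follows directly from the formula above. A minimal sketch (not the authors' modeling script) is given below; the input file name is a placeholder, and the refractive index and wavelength are the values quoted for SBC at 260 GHz.

```python
import numpy as np
from PIL import Image

WAVELENGTH = 1.15e-3      # design wavelength in metres (260 GHz)
N_REFR = 1.557            # refractive index of SBC at the design wavelength

def phase_map_to_heights(bmp_path: str) -> np.ndarray:
    """Convert an 8-bit BMP phase map (0..255 -> 0..2*pi) to heights in metres."""
    levels = np.asarray(Image.open(bmp_path).convert("L"), dtype=float)
    phi = levels / 255.0 * 2.0 * np.pi                   # phase retardation phi(x, y)
    h = WAVELENGTH * phi / (2.0 * np.pi * (N_REFR - 1))  # h = lambda*phi / (2*pi*(n-1))
    return h

# Example with a hypothetical file name; the maximum modulation depth comes out
# at lambda/(n-1) ~ 2.07 mm, matching the designed structure thickness.
# h = phase_map_to_heights("hologram_QS.bmp")
# print(h.max() * 1e3, "mm")
```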
The second approach for 3D modeling is a novel method presented in our previous study 23 .The idea is to extrude each pixel as a cuboid into the height level described by the structure's height formula.As a result, the extruded pixels more accurately represent gray-scale BMP files.Sampling is precisely determined, allowing the 3D printer to accurately manufacture the area of each cuboid according to the nozzle size used in the manufacturing process.Thus, the gray-scale images are precisely represented by 3D models and manufactured structures.Moreover, this approach allows for a more accurate representation of irregular and chaotic phase distributions that might occur, e.g., in NN-based algorithms.This modeling method is a better solution for structures with smaller matrix sizes and sampling distances larger than 900 μm, and it was applied for structures QN, REF, NNPW, and NNG. The designed thickness of each structure was 2.07 mm, as calculated by the formula for the height of the structure.Since phase modulation could locally have a zero value, we included a 1.5 mm substrate for all structures, resulting in a total maximum thickness of 3.57 mm.Each structure also has been fabricated with 5 mm thick square frame for easier handling in transportation and during measurements.Additional material from the substrate and frames made structures more rigid and reduced the risk of warping and deformation that might have occurred during the manufacturing process and cooling of the structures. Optical properties of the selection of materials had been determined with THz time-domain spectroscopy 29 (THz-TDS TeraPusle Lx system from Teraview).After examination of multiple polymer materials available for FDM 3D printing technology 30 , presented in our study 31 , styrene butadiene copolymer (SBC) was selected for manufacturing the structures.The refractive index n (dashed lines) and the absorption coefficient α (solid lines) in the frequency domain for acrylonitrile styrene acrylate copolymer (ASA), butenediol vinyl alcohol copolymer (BVOH), polyamide 12 (PA 12), polycarbonate (PC), polyactic acid (PLA), and SBC materials are shown in Fig. 3 with photographs of the prepared samples.In Fig. 3, the vertical dashed line corresponds to the DWL frequency equal to 260 GHz for the designed structures. The samples presented in Fig. 3b were manufactured with 450 μm horizontal resolution and 100 μm varietal resolution, which correspond to the line and layer thicknesses applied in the manufacturing process.In the THz radiation region, especially for the DWL of 1.15 mm, the lines and layers have sub-wavelength dimensions.As a result, the samples intended for THz radiation exhibit homogeneity throughout their entire volume.All the samples were manufactured the same way as the structures pretested in this study.Consequently, the measured optical characteristics of the samples are directly indicative of those of the structures.SBC, manufactured by Orbi-Tech, also called BendLay, has suitable optical properties for manufacturing phase diffractive passive optical components.According to the materials' characteristics illustrated in Fig. 3, SBC has a significantly lower absorption coefficient than compared polymer materials in the entire verified radiation range from 100 GHz to 1 THz.The SBC refractive index value equals 1.557 and the absorption coefficient 0.162 cm −1 for DWL (260 GHz). 
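These measured SBC values feed directly into the loss estimate described in the next paragraph. As an illustration, here is a minimal sketch (not from the paper) of that estimate using the Beer-Lambert law and the normal-incidence Fresnel reflection coefficient.

```python
import numpy as np

alpha = 0.162        # absorption coefficient of SBC at 260 GHz, 1/cm
n = 1.557            # refractive index of SBC at 260 GHz
t_max = 0.357        # maximum structure thickness in cm (3.57 mm)

# Beer-Lambert transmission through the thickest part of the structure.
absorption_loss = 1.0 - np.exp(-alpha * t_max)            # about 5.6 %

# Fresnel power reflection at normal incidence for a single air/SBC interface.
fresnel_loss = ((n - 1.0) / (n + 1.0)) ** 2               # about 4.7 %

total_loss = absorption_loss + fresnel_loss               # rough upper estimate, ~10 %
print(f"absorption {absorption_loss:.1%}, Fresnel {fresnel_loss:.1%}, total ~{total_loss:.0%}")
```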
Knowing the absorption coefficient and structure thickness, one can estimate the maximum absorption (absorption at maximum thickness) by applying the Beer-Lamber's law with the absorption coefficient as the attenuation coefficient.Such calculations yield an attenuation factor of 5.6%.Maximum losses from Fresnel reflection can be estimated from a refractive index value and under the assumption of normal incidence as 4.7% in the presented case.The total maximum losses from the material properties are then estimated at ca. 10%. Methods This subsection describes a Schottky diode-based setup with frequency multipliers prepared for the experimental evaluation of manufactured structures.A short description of an experimental protocol is provided.Finally, the analyzed experimental results are summarized. Experimental setup and protocol A VDI multiplier chain based on Schottky diodes with a proper horn antenna was used as a source of radiation at the DWL (260 GHz), and a VDI WR3.4 zero bias detector (Schottky diode-based) with a symmetrical diagonal horn antenna as a detector.The DWL matched the frequency with the maximum power output of the source (0.95 mW).It has to be noted that the source is strongly coherent, which results in relatively effortless and, in many cases, unwanted interferences.Therefore, free space propagation required additional shielding/masking of the radiation reflected from different surfaces in the setup, which is still within the coherence length of the emitter.As the quasi-plane-wave illumination is necessary, an off-axis aluminum parabolic mirror was used.It redirected the collimated beam onto the investigated structure.The detector has been placed in the image plane of the evaluated DOE on three Thorlabs NRT150 motorized stages in a configuration allowing for 3D scans.Its responsivity was equal to 1500 V/W at the DWL.The voltage produced by the detector was measured with a Stanford Research SR830 lock-in amplifier.Voltage readings were directly proportional to the measured intensity for selected DWL and power ranges.The detector was moved point-by-point and line-by-line with a 2.5 mm step in a 60 mm × 60 mm square, which translated to 26 • 26 = 676 data points.The signal for a single point was averaged for 0.3 s.The scanning time of a single scan was equal to 45 min, which is connected mostly with the motion of the translation stages.It has to be noted that the selected sampling distance results in overlapping data points in the experiment due to the fact that the size of the horn is larger than the sampling distance.Figure 4 shows the visualization of the experimental setup. A relatively large dimension of structures in relation to the propagation distance and the utilization of the scalar approach for the design procedure result in inaccuracy in the determination of the exact image plane position.In order to obtain comparable results, the best imaging distance for reference has been identified and applied for all measured structures.All structures imaged the square with significant speckles at positions roughly corresponding to the results from simulations. 
Numerical analysis A scan of each structure provides a set of data points.Each set was normalized (rescaled) locally: the maximum from a given scan was equal to 1, and 0 remained the same (minimum values were not scaled down to 0).The normalization of the maximal registered signal was necessary for a fair comparison between the structures.It is connected with two effects.Firstly, the highest signal results from the coincidental positive interference of the radiation in the form of the brightest speckle.It is unstable and connected with the whole experimental setup; therefore, it should not influence the comparison between the investigated structures.Secondly, some of the proposed DOEs have different shapes and, thus, different active areas.This means that they gather different amounts of radiation, which further influences the values of the registered signal.The normalization of the intensity allows for independence from these factors.Region of interest (ROI), with the shape and size corresponding to the requested image, was defined as a 40 mm square.Special care was taken to keep the same number of data points in each ROI.For normalized data, two measures of central tendency (mean and median) and three measures of variability (interquartile range (IQR), sample standard deviation, and root mean square (RMS)) were calculated from ROIs.The IQR depicts the central 50% of the registered intensity values.The standard deviation denotes differences in registered intensity values in relation to the mean value in ROI.The RMS additionally accounts for the absolute intensity values of the registered data points.RMS was calculated as a square root of the ROI points' mean square: , where x i is the i-th data point from the set of ROI points, and n is the number of data points in the ROI set. For the calculation of the signal-to-noise ratio (SNR), an additional set of points representing the background was selected (marked with red dots in Fig. 5).Similarly as for ROI (marked with green dots in Fig. 5), background points were selected to have the same number of points for all structures.Figure 5 shows an example of ROI and background selection.SNR is calculated as the mean value of the intensity distribution from ROI data points divided by the mean value of intensity from background data points.It has to be emphasized that there are other methods to define the SNR.The prevailing approach in image processing is to use the ratio of mean signal value www.nature.com/scientificreports/ to standard deviation of the noise.This method is valuable for the qualification of image quality.However, this approach does not provide valuable information for beamforming tasks. To summarize, raw data points from each experimental scan contain information on the position of the detector and measured voltage.Voltage is treated as directly proportional to the intensity of the detected radiation.From all measured data points, those corresponding to ROI and background were selected for further numerical analysis. 
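The per-structure statistics described above can be assembled in a few lines. The sketch below is illustrative, not the evaluation script; it assumes the scan is given as a 2-D array of voltages together with boolean masks for the ROI and the background points.

```python
import numpy as np

def roi_statistics(scan: np.ndarray, roi_mask: np.ndarray, bg_mask: np.ndarray) -> dict:
    """Central-tendency, variability and SNR measures used to compare structures."""
    norm = scan / scan.max()                 # rescale so the brightest point equals 1
    roi = norm[roi_mask]
    background = norm[bg_mask]
    q1, q3 = np.percentile(roi, [25, 75])
    return {
        "mean": roi.mean(),
        "median": np.median(roi),
        "iqr": q3 - q1,
        "std": roi.std(ddof=1),              # sample standard deviation
        "rms": np.sqrt(np.mean(roi ** 2)),   # square root of the mean square of ROI points
        "snr": roi.mean() / background.mean()
    }

# Example with a synthetic 26 x 26 scan and hypothetical masks.
rng = np.random.default_rng(1)
scan = rng.random((26, 26))
roi_mask = np.zeros_like(scan, dtype=bool); roi_mask[5:21, 5:21] = True
bg_mask = ~roi_mask
print(roi_statistics(scan, roi_mask, bg_mask))
```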
Results and discussion Figure 2 depicts all measured and simulated data together with the designed phase modulations and photographs of the manufactured structures.The last column of the figure contains a summary of properties that differ between each case.The third and fourth columns present the results of the theoretical simulations and the experimental evaluation.In both columns square areas of intensity higher than the background are observed.The difference of the intensities is significant enough to properly fit the square ROI for each structure. From a general overview of 3D printed materials and printing methods, one can conclude that the resolution of such prints is high enough to represent most of the details from designed phase maps.The only details that may be hard to reconstruct with this printing method are wavy distortions on edges between quarters in QS and QN, which come from the propagation used in the process of constructing phase modulations in those two cases.Prints of neural-network-based structures (NNG and NNPW) have no noticeable inconsistencies due to the manufacturing method.However, some errors are expected due to significant local variations in phase modulations (large phase changes in neighboring positions). Resolutions and sampling distances of simulations and experiments are different for each of the evaluated structures.Parameters of simulations are directly tied to parameters for the calculation of phase delay maps, whereas experimental parameters result from the experimental setup.The direct numerical comparison between simulations and experiments is difficult due to different samplings of calculation and obtained experimental matrices.A denser sampling of the calculation matrix results from the necessity of creating a high-resolution phase delay map to manufacture the structures.On the other hand, sparse sampling of the obtained experimental matrix is related to the detector's aperture and scanning time.From the qualitative examination of simulation data, one can expect that the REF structure will be the brightest.Since NNG and NNPW were simulated with a different method (neural-network-based) than other structures, their result is hard to compare. Calculated numerical measures (Fig. 6) show that reference structure (REF) has the highest mean and median value (the higher, the better). 
This indicates that the REF structure has the highest intensity, with QN and QM having the second- and third-best results in this category. Measures of variability show a slightly different picture. The least variable according to standard deviation and RMS (the lower, the better) are HC, HH, and NNPW, in that order, with HC having a significant lead. IQR (the lower, the better) indicates HC and HH as the least variable (difference at the third decimal place), with QS as the third least variable. At the same time, SNRs (displayed in Fig. 5; the higher the result, the better) show similarly high results for HH, HC, and QM. The division into segments created different possibilities for other designs. Since each quarter has been iterated separately, QS and QM are differentiated from each other by the combination method and consequently by common (single) or separated (multiple) imaging distances. QS has better variability results than QM (3% better standard deviation, 25% IQR, and 8% RMS) but at the cost of a decreased average value (by 13%) and slightly decreased SNR (by 19%). NN-based structures (NNG and NNPW), when compared with segmented structures with the same sampling distance (QN), show an improvement in variability measures (IQR by up to 31%, standard deviation by up to 14%, and RMS by up to 22%). However, NN-based structures suffer a significant reduction in average value (by at least 26%) and SNR (by at least 52%). Taking a broader look at all presented data, one can observe indications of a correlation: structures manufactured with the cuboid method (0.9 mm sampling distance), compared with node method structures (0.117 mm sampling distance), tend to have higher measures of central tendency, higher measures of variability, and lower SNR. This dependence is broken by the NN-based structures, whose measures of tendency and variability lie in between the node-based structures and which have the lowest calculated SNR. Some of the presented segmentation methods use symmetric segments (HC, HH) and some asymmetric ones (e.g., QM). Symmetric approaches further improve data variability at the cost of lowering central tendency measures. At the same time, structures with symmetric segmentation keep the highest SNR value in the context of the presented structures. Different shapes of apertures do not seem to introduce significant changes. Figure 6. On the left: the box plot for each structure based on its region of interest (ROI) data points. Boxes represent the interquartile range (IQR) of measured intensity. Orange, solid lines inside the boxes show the median, and green, dashed lines show the mean values of registered intensity. Additionally, the highest mean and median values for the reference structure have been added in the background of the box plot, forming elongated lines, to enable comparison with the values inside the boxes for each structure. Whiskers protruding from the boxes to the left and right sides denote the lowest/highest data point within 1.5 IQR from the first/third quartile. All data points outside those ranges are displayed as empty black circles. On the right: standard deviation (denoted by black crosses) and root mean square (RMS, marked with filled red circles) from ROI. Higher values of mean and median are preferred as the distribution is less influenced by the high-intensity spots. Narrower boxes (IQR) and lower standard deviation and RMS correspond to better (lower) variability of the distribution.
Conclusions The preceding analysis shows that it is possible to obtain a greater uniformity (reduced variability) in DOEs' output intensity distributions by an application of in-plane phase segmentation.In general, one can observe that all modified structures (non-REF) demonstrate lower variability (up to 17% in standard deviation, 45% in IQR, and 37% in RMS).However, better variability results come at the cost of the average value of the output intensity distributions, represented by measures of central tendency (up to 46% in mean values and 58% in median).Structures with the lowest variability, such as HC and HH, have at the same time the least efficient in terms of the redirected power.Among all structures, it is possible to select those that present improved variability performance with a better average/median of intensity, e.g., QS or QM.SNR plot also shows a better concentration of THz radiation on the specified area for all non-REF structures (increase up to 68%) apart from neural-network-based approaches (reduction of SNR up to 42%). The presented holographic approach with segmentation shows improvement in the variability of intensity distribution behind designed structures.Additionally, structures still perform beamforming functions-produce square distributions at selected distances-while improving SNR.Designed structures require no active elements in the setup.On the other hand, speckle noise is still present in the output distribution, which is to be expected in setups with highly coherent sources.NN-based solutions do not show significant improvements over the presented segmentation approach.At the same time, the presented analysis does not reject the NN-based approach as futile.Further investigation in this area is required.Subsequent studies should also cover larger scanning ranges to verify the beamforming quality and, therefore, better map noise distribution outside the ROI. The elimination of the speckle patterns in the case of highly coherent THz beams is a very challenging task.The proposed method, evaluated on the example of the uniformly illuminated square, has shown some improvements.It should be noted that the range of applications of the discussed methods covers all kinds of THz imaging or tomography systems, where speckle patterns are an issue.On the other hand, there are also applications of the speckle patterns themselves, such as, for example, ghost imaging 32,33 .Therefore, the methods of manipulating such patterns (whether to mitigate or enhance their presence) can serve as an important role in THz imaging systems. Figure 1 . 
Figure 1.Visualization of two different combination methods resulting in single (top row) and multiple image planes (bottom row).The left side of the images shows how phase segments are combined, and the right side displays the influence of each segment on the final image plane composition.In both methods, each quarter is calculated separately, and each is responsible for imaging a square at slightly different distances (490 mm, 495 mm, 500 mm, and 505 mm).The first case introduces each quarter one by one in the order from the largest imaging distance to the smallest.Between the introduction of consecutive quarters, propagation is used to move by the difference in imaging distances to the plane of the next quarter (Roman numerals show the order of operations).The last propagation is backward to arrive at the selected distance of 500 mm from the image plane.As a result of the first method, image planes from each quarter coincide.In the second method, all quarters are placed at the same plane, resulting in separate images of a square. Figure 2 . Figure 2. A summary of examined structures.In columns from left to right: a phase modulation introduced by each structure; in the next column, a photograph of the manufactured structure; numerical simulation results; experimental data; a short description of the properties of each structure.Acronyms on the leftmost side identify structures.Phase modulations show changes in the 0−2π range (black to white) with a color scale at the bottom left of the figure.Intensity distributions of simulations and experiments are separately (individually) normalized with a common color scale at the bottom right of the figure.Images from experimental evaluation contain red squares showing areas selected as a Region of Interest (ROI) for each structure.The detailed quantitative comparison of structures is given later in the "Results and discussion" section.The size scale in millimeters is placed in the bottom right corner of each image. https://doi.org/10.1038/s41598-024-63517-7www.nature.com/scientificreports/ Figure 3 . Figure 3. On the left (a): the absorption coefficients (solid lines) and the refractive indices (dashed lines) in the frequency domain for polymer materials used in the fused deposition modeling (FDM) additive manufacturing technology.The vertical dashed line marks the frequency of the source (260 GHz).The presented data was obtained by THz-TDS examination of acrylonitrile styrene acrylate copolymer (ASA), butenediol vinyl alcohol copolymer (BVOH), polyamide 12 (PA 12), polycarbonate (PC), polyactic acid (PLA), and styrene butadiene copolymer (SBC) materials.On the right (b): photographs of 3D printed pellets prepared for examination with THz-TDS. Figure 4 .Figure 5 . Figure 4. On the left (a): visualization of the experimental setup.The setup uses a Schottky diode-based source and detector in a single-point scanning configuration working at 260 GHz.The source illuminates the parabolic mirror from a distance equal to the mirror's focal length.The mirror with a focal length of 600 mm and diameter of 200 mm collimates the beam and directs it at a structure under test.The beam size is approximately 200 mm in diameter and collimated beam has Gaussian-like intensity distribution.The scanning plane is placed around 500 mm behind the structure.On the right (b): the photograph of the setup with labeled elements and a trace of the radiation propagation.
7,560
2024-06-03T00:00:00.000
[ "Physics", "Engineering" ]
Kernel graph filtering—A new method for dynamic sinogram denoising Low count PET (positron emission tomography) imaging is often desirable in clinical diagnosis and biomedical research, but its images are generally very noisy due to the very weak signals in the sinograms used in image reconstruction. To address this issue, this paper presents a novel kernel graph filtering method for dynamic PET sinogram denoising. The method is derived from treating the dynamic sinograms as signals on a graph and learning the graph adaptively from the kernel principal components of the sinograms to construct a lowpass kernel graph spectrum filter. The kernel graph filter thus obtained is then used to filter the original sinogram time frames to obtain the denoised sinograms for PET image reconstruction. Extensive tests and comparisons on simulated and real-life in-vivo dynamic PET datasets show that the proposed method outperforms the existing methods in sinogram denoising and image enhancement of dynamic PET at all count levels, especially at low count, with great potential in real-life applications of dynamic PET imaging. Major comments: 1) The methods part is confusing. Not enough physical explanation has been provided for the employed theoretical model. 2) Lines 70-71, is the index "i" essentially the scan time? Please explain. 3) Line 81, what does the sigma of matrix W represent? Please explain. It is confusing to me. 4) Line 86, what is the physical meaning of the parameter "x"? Please define and explain. 5) Lines 73-75, "The sinogram denoising method to be proposed in this paper does not require specific knowledge about the corrupting noise in the noisy sinograms. Therefore there is no further assumption on pi's." The meaning is unclear. Please re-write. 6) Regarding Equation (1), the Gaussian kernel function: please comment on the similarities and differences between your Gaussian kernel function and the standard 2-parameter Gaussian function. 7) What does α represent in Eq. (3)? I am again confused. 8) In Equation (5), what does γ actually do here? 9) Lines 141-144, "At smaller i's, the energy is low and energy variations are large, we use smaller knni to include fewer neighbour sinograms, namely, fewer nonzero aij's. At larger i's, the energy is high and energy variations are small, we use larger knni to include more neighbour sinograms, namely, more nonzero aij's". I am again confused here, and I cannot follow the derivations. Please re-write. 10) Please discuss in detail and in a clear manner the difference between your work and the previous work cited in Ref. 11) The PET data you have used are actually simulated, with Poisson noise introduced, and the method seems to work well. However, it is highly recommended to use realistic experimental data. Please use some realistic measurement examples and compare. 12) Lines 267-275, the authors used some parameter values. I am again confused as to how these values were chosen. Some sort of justification would be needed for the use of these values. If it is really not possible to justify them and these values were chosen in a "random" fashion, then you need to perform a sensitivity study by changing the value of each of these parameters and investigating their respective influence on the final results. I have some PET data of a mouse head; there are two file extensions, .img and .hdr. Does your program take .img and .hdr files? Please confirm this.
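For orientation on the questions above about the matrix W, its sigma parameter, and the kNN-based construction of the aij's, the following is a minimal, illustrative sketch of a kernel-graph low-pass filter applied to dynamic sinogram frames. It is not the authors' algorithm (which learns the graph from kernel principal components); the function name and parameters here are generic assumptions.

import numpy as np

def kernel_graph_lowpass(sinograms, sigma=1.0, n_modes=4):
    """sinograms: array of shape (T, B): T time frames, B sinogram bins.
    Builds a frame-similarity graph with a Gaussian kernel, forms the graph
    Laplacian, and keeps only the n_modes smoothest Laplacian eigenvectors
    (a crude low-pass graph-spectral filter). Illustrative only."""
    d2 = np.sum((sinograms[:, None, :] - sinograms[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel adjacency
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian
    _, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :n_modes]                  # low graph-frequency modes
    return U @ (U.T @ sinograms)              # project frames onto smooth modes

In this toy version, sigma plays the role asked about in comment 3 (it sets how quickly similarity decays with the distance between frames), and restricting W to the k nearest neighbours of each frame would correspond to the knni construction questioned in comment 9.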
Minor comments: 1) Lines 14-15, "For example, when other imaging modalities, such as magnetic resonance (MR) or computed tomography (CT), are incorporated, some bone or MR-only lesion [4] originally nonexistent in the PET image may be introduced." The meaning is unclear; please re-write. 2) Line 24, "…correlated and the is very difficult to reduce." Remove "the". 6) Lines 227-228, "To get dynamic PET images, the time activity curve (TAC) in Fig. 1 (b) was filled into corresponding tissues to produce the noiseless ground truth images". Please explain how you have obtained the time-activity curves in your work. 7) Lines 269-270, "For fair comparison, these parameters were tuned differently on the sinograms data". How did you tune these parameters? Please elaborate. 8) Fig. 6, please consider changing the line and symbol colors for the different methods; it is hard to differentiate them. Check for similar issues in the other plots. 9) In the conclusion, Lines 447-448, "Extensive simulation studies and tests on in-vivo dynamic PET data have shown the efficacy and advantages of the proposed method over the existing methods". At the current stage and version of your manuscript, I think the tests that you have performed are not extensive; please refer to my comments in the "Major comments" section of this review report. An optional comment (this comment is optional and not mandatory; the authors can choose to ignore it): Fig. 1. Reconstructed PET images of a mouse head. If your model can work with .img and .hdr file extensions, I am willing to send you my PET sinogram data that, upon reconstruction, will reproduce the images shown in Fig. 1. I guess it would be a good test to see how your model behaves when realistic PET measurements are supplied to it. End of my comments
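Regarding the question in the major comments about Analyze-format .img/.hdr pairs: for reference, and independently of the authors' code, such files can be read in Python with the nibabel package. The filename below is hypothetical.

import nibabel as nib

# Analyze 7.5 data come as a .hdr/.img pair; nibabel resolves the companion file.
img = nib.load("mouse_head_pet.img")    # hypothetical filename
data = img.get_fdata()                  # voxel data as a NumPy array
print(data.shape, img.header.get_zooms())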
1,199.8
2021-12-02T00:00:00.000
[ "Physics" ]
Effect of defects controlled by preparation condition and heat treatment on the ferromagnetic properties of few-layer graphene Magnetism in graphene has stimulated extensive studies to search for novel metal-free magnetic device. In this paper, we use a synthesis method far from equilibrium state named self-propagating high temperature synthesis (SHS) to produce few-layer graphene with different defect contents and then use a heat treatment process (vacuum-annealing and air-cooling) to further control the defects in graphene. We find that the type and content of defects in graphene can be controlled by adjusting the mole ratio of reactants (Mg: CaCO3) for SHS reaction and the temperature of the subsequent heat treatment. The deviation of the ratio of reactants from stoichiometric ratio benefits the production of graphene with higher concentration of defects. It is indicated that the temperature of the heat treatment has remarkable influences on the structure of graphene, Raman-sensitive defects can be recovered partly by heat treatment while IR-sensitive defects are closely related with the oxidation and decomposition of the oxygen-containing groups at elevated temperature. This work indicates that SHS is a promising method to produce graphene with special magnetism, and the heat treatment is an effective way to further adjust the magnetism of graphene. This work sheds light on the study to develop carbon materials with controlled ferromagnetism. Graphene has generated a lot of activity in the area of material science due to its exceptional electronic and mechanical properties 1,2 . Compared with other properties, magnetism in graphene 3-8 has stimulated extensive studies to search for novel metal-free magnetic device. The emergence of magnetism in versatile natured graphene and the ability to control its properties can lead graphene to be an excellent material for spintronics and other memory based device applications which promise information storing, processing and communicating at faster speed with lower energy consumption. Research on the origin of magnetism in graphene oxide 9 , graphene nanoflakes [10][11][12] , hydrogenated graphene 13 and graphene nanoribbons 14 suggested that the magnetic behavior of graphene based materials is to a large part governed by their structures. Although the mechanism of graphene magnetism is complicated, extensive theoretical and experimental studies indicated that defects 15 , disordering 4 , covalent-adsorption 16 and magnetic edge state in graphene nanoribbons 17 and partially hydrogenated epitaxial graphene 13 are the potential carriers for the magnetism in graphene. Ferromagnetism has also been observed in graphene materials prepared by different methods like thermal exfoliation of graphitic oxide, conversion of nano diamonds, arc evaporation of graphite in hydrogen and graphene oxide partially reduced by hydrazine and further completely reduced by thermal annealing, since graphene obtained by different methods has different types and quantity of defects 13 . Recently, we have developed a facile and cost-effective method named as self-propagating high temperature synthesis (SHS) to produce few-layer graphene 18 . The SHS process utilizes the heat generated by the exothermic reaction of Mg and CaCO 3 to sustain itself in the form of a combustion wave after external ignition. The process is of high reaction temperature, 1 fast heating and cooling speed and far from equilibrium state, so the defect in graphene made by this method is special. 
We have found that few-layer graphene samples both non-doped and doped with nitrogen produced by SHS method exhibit ferromagnetic properties and have high Curie temperatures (>600 K), and the saturation magnetization and coercive field increase with the increasing of nitrogen contents in the samples 19 . Taking advantage of the far-from-equilibrium-state SHS process, people are expected to produce graphene with different kinds and contents of defects, which helps further clarify the relationship between defects and the ferromagnetic properties of graphene. However, few works have been done on these issues. In the present study, firstly, we explored the method to produce few-layer graphene with different defect concentrations by changing the ratio of reactants (Mg: CaCO 3 ) in SHS process. Secondly, in order to further improve the magnetic property of SHS graphene, we proposed a heat treatment method (vacuum-annealing and air-cooling), which is heating the sample in vacuum environment at a certain high temperature and then cooling down to room temperature in atmospheric environment. Our works indicated that the deviation of stoichiometric ratio of the reactants under far from equilibrium state is a promising method to produce graphene with special magnetism, and that the designed heat treatment is an effective way to further adjust the ferromagnetism of graphene. Experimental Synthesis of graphene. Here, we used the SHS method to synthesize graphene with different content of defects by changing the ratio of reactants: magnesium, (99.5% purity) and calcium carbonate (CaCO 3 , 99.5% purity); these materials were purchased from Sinopharm Chemical Reagent Co., Ltd. The SHS experiments were conducted in a stainless-steel combustion chamber under an atmosphere of carbon dioxide (99.9%) 19 . In order to investigate the effect of reactant composition on the chemical and ferromagnetic properties of graphene, the molar ratios of Mg and CaCO 3 were chosen as 2:1 and 4:1; the ratio (2:1) is a stoichiometric ratio according to the reaction: 2Mg + CaCO 3 = 2MgO + CaO + C (graphene), while the ratio (4:1) was designed to deviate from the stoichiometric ratio. The products were expressed as M2C1 and M4C1, respectively, according to the ratios of Mg and CaCO 3 . 16 grams of Mg for M2C1 and 32 grams for M4C1 were added to 33.3 grams of calcium carbonate and then milled in a mortar for 20 minutes, respectively. Each sample was ignited by an electric ignition device composed by a direct current (DC) power source and a resistance-based wire heater. The ignition current was 22 A. The coarse product was placed in dilute hydrochloric acid (10 v/v %) containing ethanol (20 wt %) and sonicated for 1 h, then washed with deionized water and absolute ethanol in that order. The obtained sample was dried in a vacuum oven at 120 °C for 24 h. Every graphene sample (M2C1 or M4C1) was divided into 4 parts and three of them were heated at 500 K, 650 K and 800 K, and named as M2C1-500 or M4C1-500, M2C1-650 or M4C1-650 and M2C1-800 or M4C1-800, respectively. As a contrast, the initial M2C1 and M4C1 sample without heat treatment was named as M2C1-G and M4C1-G (G stands for the generated graphene). The heating rate from room temperature to the desired temperature was 5 K·min −1 and kept for 5 min in vacuum (10 −4 Pa), then cooled down to room temperature within 5 minutes in air by opening the valve. Characterization techniques. 
The phase composition of the as-prepared powders was analyzed by powder X-ray diffraction (XRD) analyses (Philips X'Pert diffractometer) with CuKα radiation. Environmental scanning electron microscopy (ESEM, Helios Nanolab 600i) and high-resolution transmission electron microscopy (HRTEM JEM-2100) were used to observe the morphology of the graphene sheets. The TEM specimens were prepared by dropping ethanol/water (38 v/v %) solution containing 1 wt % graphene onto a copper grid and drying at 100 °C. Raman spectra was obtained using a Raman Station (B & WTEK, BWS435-532SY) with a 532 nm wavelength laser corresponding to 2.34 eV. X-ray photoelectron spectroscopy (XPS, Thermo Fisher) was utilized to determine the bonding characteristics of the samples. All XPS peaks were calibrated according to the C 1 s peak (284.6 eV). The magnetic properties were measured using a Quantum Design MPMS magnetometer based on a superconducting quantum interference device (SQUID). Thermogravimetric analysis (TGA) was performed on a Netzsch STA 449 F3 under a heating rate of 10 K·min −1 in air atmosphere form 300 K to 1200 K. The nitrogen adsorption/desorption measurements were carried out on Belsorp mini II (Japan) at 77 K to obtain the specific surface area of M2C1-G and M4C1-G. Before adsorption/desorption tests, the samples were degassed at 150 °C for 4 hours with vacuum pumping. Figure 1 shows the typical SEM and TEM images of the SHS products. Figure 1(a) and (b) are the SEM images of M2C1-G and M4C1-G, respectively. In the images, thin corrugated sheets can be found assembled together, showing a three dimensional porous structure. In addition, the EDX of both samples have been provided in the Supplementary Information Fig. S1 and Table S1. It reveals that both M2C1-G and M4C1-G are mainly composed of C and O, and a small amount of Ca and Mg. The components of their composition have been list in Table S1. In the sample of M4C1-G, the contents of magnesium and calcium are less than those in M2C1-G, which are consistent with the results of XPS. Figure 1 the SSA of monolayer graphene is 2630 m 2 g −1 . According to the above analysis and the Raman analyses in the following section, we conclude that the products synthesized by SHS method are few-layers graphene. Result and Discussion The difference of the morphology between M2C1-G and M4C1-G can be understood by considering the mole ratios of reactants (Mg: CaCO 3 ) for SHS reaction. The stoichiometric mole ratio of the reaction between Mg and CaCO 3 is 2:1, which is just the ratio for M2C1-G, while the ratio for M4C1-G is 4:1, much higher than the stoichiometric ratio. The deviation of the stoichiometric ratio for M4C1 means that Mg is excessive for the reaction, and the excessive Mg may play multiple roles in the SHS reaction. Firstly, the excessive Mg may melt at 648 °C and volatilize at 1107 °C, which can absorb large amount of heat produced by the exothermic SHS reaction (ΔH = −632 kJ/mol) and decrease the maximum temperature of the reaction. Secondly, the gaseous Mg in the enclosed space of reaction container may affect the growth process of graphene since they may decrease the collision probability of the reactive carbon atoms produced during the SHS reaction process. As a result, we can deduce that the reaction temperature for M4C1-G is lower than that of M2C1-G which benefits the production of smaller and thinner sheets for M4C1-G as shown in Fig. 1. 
Of course, this is only the basic discussion on the phenomenon, to further understand the roles of the excessive Mg, more work should be done to clarify the mechanism of the SHS reaction. Figure 2 shows the FTIR spectra of M2C1-G, −800 and M4C1-G, −800. The absorption peak around 1575 cm −1 is ascribed to the skeletal vibration of aromatic ring (C=C stretching vibration); the peaks at 1141 cm −1 , 1717 cm −1 , 2850-2920 cm −1 and 3200-3600 cm −1 are attributed to the C-O-C, C=O, C-H and O-H vibration, respectively. On the one hand, from Fig. 2(a) it can be found that the peaks corresponding to H 2 O (1624 cm −1 ) and O-H vibration (3200-3600 cm −1 ) with the increase of heat treatment temperature, suggesting the remove of hydroxyl and water on graphene; the peaks corresponding to oxygen-containing groups are not clear for M2C1-G, suggesting that M2C1-G has good chemical stability. On the other hand, it is interesting to see that the FTIR spectra of M4C1-G is quite different from that of M2C1-G. The peaks corresponding to epoxy, hydroxyl, carbonyl and carboxyl groups can be both found for M4C1-G and M4C1-800; however, the relative intensities of peaks corresponding to epoxy, carbonyl and carboxyl groups for M4C1-800 increase obviously, while the peaks corresponding to O-H vibration almost disappear, compared with those for M4C1-G. Consequently, it can be concluded that M2C1-G has less oxygen-containing groups and is more stable for the heat treatment than M4C1-G and that the oxidization of graphene happens for M4C1-G heat-treated at high temperature. To better study this behavior, we performed the Thermogravimetric Analysis (TGA) and the differential scanning calorimetry (DSC) of M2C1-G and M4C1-G in air environment at the heating rate of 10 K·min −1 and the result has been added in Fig. S2 (Supporting Information). From the curves of DSC, two exothermic peaks can be seen, corresponding to the two weight loss stages from the curves of TG. The first exothermic peak is located at 776 K for M2C1-G and 740 K for M4C1-G, while the second exothermic peak is at 906 K and 884 K for M2C1-G and M4C1-G, respectively. The first exothermic peak is small compared with the second one for M2C1-G, while they are almost equal for M4C1-G. Accordingly, there are two weight loss steps for the SHS graphene. The first step of weight loss occurs at the temperature range of 300-830 K for M2C1-G and 300-670 K for M4C1-G, corresponding to the removal of adsorbed water and the labile oxygen-containing groups. The second mass loss range is from 830 K to 950 K for M2C1-G and 670 K to 950 K for M4C1-G, which is assigned to the combustion of the carbon skeleton of graphene, releasing CO and CO 2 . From the results of DSC and TG analysis, we conclude that the SHS graphene had two types of structure. One is easily oxidized at low temperature, corresponding to the oxygen-containing groups and carbon defects; the other is more thermally stable, oxidized at higher temperature, assigning to the defect-free parts in SHS graphene 20 . But the ratio of the two exothermic peaks in the curves of DSC for the two samples is different. The relatively intensity of the first peak to the second peak in M4C1-G is much higher than that in M2C1-G, indicating that M4C1-G contained more oxygen-containing groups and its thermal stability is lower than that of M2C1-G, which are consistent with the results from FTIR and XPS. XPS characterizations are further performed to analyze the elemental composition and C/O configuration in the samples. 
The XPS survey spectra of the samples in Fig. 3(a) show the presence of carbon, oxygen, magnesium and calcium elements, which is in agreement with the result of XRD. The high resolution C 1 s spectra of M2C1 and M4C1 heat-treated at different temperatures are shown in Fig. S3 and S4, respectively. The spectra are analyzed by XPSpeak41 software and corrected for the background signals using the Shirley algorithm prior to curve resolution 21 and π-π* satellite peak (290.5 ± 0.1 eV) [22][23][24] . In order to obtain more detailed information, the contents of components in C 1 s of M2C1 and M4C1 treated at different temperatures are analyzed according to the fitting and the results are shown in the Fig. 3(b,c,d and e). Figure 3(b) and (c) demonstrate the effect of heat treatment temperature on the XPS areas for C=C and C-C bonds. For M2C1 sample, the content of XPS area for C=C has a small fluctuation in the treatment temperature range 300 to 650 K and then decreases for 800 K. Interestingly, it is clear to find that the content of XPS area for C-C has an opposite trend. Since C=C and C-C bonds are related with sp 2 and sp 3 C in graphene, the well opposite trend suggests that the oxidized sp 2 carbons are mostly changed to sp 3 carbons and vice versa. Similar trend can also be found in M4C1 sample. XPS results in Fig. 3(d and e) give us information about the effect of heat treatment temperature on the contents of oxygen functional groups. Firstly, it can be found that the contents of carboxyl group in both M2C1 and M4C1 have an increasing trend with the increase of the heat-treatment temperature. The content increase (2.4%) of carboxyl group from M4C1-G to M4C1-800 is higher than that (1.45%) of M2C1 samples. Secondly, for the content of C-O group, it changes relatively small for M2C1 with the increase of heat treatment temperature but fluctuates largely for M4C1 heat-treated at 500 K, suggesting that M4C1 is easier to be oxidized at 500 K to produce C-O group (hydroxyl or epoxy group) and then the group decomposed at higher temperature. Thirdly, the contents of C=O and their fluctuation for M2C1 and M4C1 are relatively small. As a result, the content of groups in M2C1 is relative stable compared with those in M4C1, the results also give us valuable information for the explanation of the ferromagnetic properties of SHS graphene. Raman spectroscopy is considered to be an effective tool for characterization of mono-, few-, or multil-layer graphene [25][26][27][28] . The Raman spectra of the M2C1 and M4C1 samples treated at different temperatures are shown in Fig. 4(a and b). The Raman spectra of M2C1 and M4C1 show three peaks. The G band at 1570 cm −1 represents the in-plane bond-stretching motion of the pairs of sp 2 hybridized C atoms (the E 2g phonons); the D band at Scientific RepoRts | 7: 5877 | DOI:10.1038/s41598-017-06224-w 1341 cm −1 corresponds to breathing mode of rings or K-point phonons of A 1g symmetry; and the second-order D (2D) band at 2678 cm −1 originates from a two phonon double resonance process 28,29 . The 2D peaks of the M2C1 and M4C1 samples around 2678 cm −1 ,which shift greatly to lower wavenumber compared with that of graphite (2714 cm −1 ), can identify the samples as few-layer graphene 30,31 . The relative intensity of the D peak (I D ) to the G peak (I G ) in graphene is directly proportional to the level of defects in the sample 32,33 . 
Here, the intensity ratios (I D /I G ) is used to demonstrate the relatively change of I D to I G with the heat treatment temperatures as shown in Fig. 4(c). The defect density (cm −2 ) of graphene can be investigated and defined as n D = 5.9 * 10 14 * E L −4 * (I D /I G ) −1 29 , where the laser energy E L = 2.34 eV (λ = 532 nm) and the calculated defect density is shown in Fig. 4(c), which is in the same order as that of the annealed graphene prepared by CVD reported by Park 34 and two orders higher than highly ordered pyrolytic graphite irradiated by 140 eV Ar + ions reported by Ugeda 35 . From Fig. 4(c), it can also be found that the intensity ratio of I D /I G of the M4C1 sample is much higher than that of the M2C1 sample treated at the same temperature. This result indicates that the mole ratio of the reactants (Mg: CaCO 3 ) plays an important role on the defect density in the SHS products, the deviation of the ratio of reactants (Mg: CaCO 3 ) from stoichiometric one in the case of M4C1 benefits the production of graphene with higher defect density. The ratios of I D /I G of both M4C1 and M2C1 decrease with the increase of heat treatment temperature, while the decline for M4C1 is more obvious than that of M2C1. But the defect concentration in M4C1 is still higher than that in M2C1 overall even after heat-treated at 800 K. The reduction of defect density with the increase of the heat treatment temperature suggests the repair of the Raman-sensitive defects is the main process during the heat-treatment process. Finally, we can conclude that the ratio of reactants is the main factor for the formation of defects in SHS graphene and that the heat treatment can repair part of the defects characterized by Raman spectra, especially for M4C1. The ratio of I 2D /I G has been used for the identification of the number of graphene layers. The value of I 2D /I G obtained from Fig. 4(a and b) for the samples treated by different temperature are shown in Fig. S5(a). It can be found that the I 2D /I G value is about 0.6 for M2C1 and about 0.4 for M4C1, which are larger than that of graphite (about 0.3) 36 . These results also indicate that M2C1 and M4C1 are few layer graphene. In addition, comparing the ratios of the samples treated by different temperatures, we can find that the ratios of I 2D /I G for M2C1 and M4C1 at any treated temperatures do not change significantly, which indicates that the heating treatment has no obvious effect on the number of graphene layers. At last, we summarize the full width at half-maximum (FWHM) of the 2D band in the spectra for all samples in Fig. S5(b). We can find that there is no obvious variation on the FWHM of the 2D band for M4C1 and M2C1 at any heating treatment temperature. As mentioned in the research of J. T. L. Thong 37 , the FWHM of the 2D band in graphene could be a quantitative guide to distinguish the layer number (single-to five-layers) of few-layer graphene. However, it is based on graphene produced by mechanical exfoliation which has less defects, and the research about the relationship between defective graphene and FWHM of 2D peak has not yet been reported. In addition, the size of laser light spot for Raman spectra we used is about 100 μm and laser light may penetrate many graphene sheets in its light path, so the 2D peak we got reflects the information of many graphene sheets, which is composed of many overlaid 2D peaks. So the FWHM of 2D peak may not provide us with exact information about the number of SHS graphene layer. 
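As a small worked example of the defect-density estimate described above, the snippet below evaluates the expression quoted in the text (attributed to ref. 29) for a 532 nm (2.34 eV) laser. The I_D/I_G values are illustrative, not the measured ratios of the M2C1/M4C1 samples, and other calibration formulas exist in the literature.

# Defect density from the Raman intensity ratio, using the expression as
# quoted in the text: n_D [cm^-2] = 5.9e14 * E_L**-4 * (I_D/I_G)**-1,
# with E_L the laser excitation energy in eV (2.34 eV for 532 nm).
E_L = 2.34
for id_ig in (0.8, 1.0, 1.2):            # hypothetical I_D/I_G ratios
    n_D = 5.9e14 * E_L ** -4 * id_ig ** -1
    print(f"I_D/I_G = {id_ig:.1f}  ->  n_D ~ {n_D:.2e} cm^-2")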
Powder X-ray diffraction is used to analyze the phases in M2C1 and M4C1 samples as shown in Fig. 4(d). It can be found that the most intense peaks in the two XRD spectra are the peaks near 26.0° corresponding to the (002) plane of graphite. The XRD spectra are similar with the FLG in refs 30, 36. The peaks belonging to CaO (JCPDS No. 48-1467) and MgO (JCPDS No. 45-0946) can also be found for M2C1, which are the by-products of SHS reaction. To investigate the magnetic properties of the SHS samples, the magnetization behaviors versus magnetic field curves for M2C1-G and M4C1-G and the heat-treated samples by 500 K, 650 K and 800 K are measured at room temperature (300 K) in the magnetic field range from −5000 Oe to 5000 Oe, as shown in Fig. 5(a and b). Ferromagnetism is shown clearly for M2C1 and M4C1 samples according to the magnetic hysteresis loops. The relationship between saturation magnetizations (M s ) and heat treatment temperatures obtained from Fig. 5(a and b) has also been shown in Fig. 5(c). As mentioned in our former work 19 , the total ferromagnetic impurities (such as Fe, Co and Ni) in SHS graphene are less than 15 ppm, indicating the ferromagnetic contribution of impurities could be neglectable. Therefore, the results represent that the ferromagnetism of SHS graphene is due to the structure defects in it. For comparison, we further summarized the M s values of several carbon materials at room temperature reported in the references as shown in Fig. 5(d). Among these materials, the saturation magnetization of M4C1 produced by SHS method in this paper is the highest at room temperature. Surprisingly, it can be also found that the M s changing tendencies of M2C1 and M4C1 heat-treated at different temperatures are quite different as shown in Fig. 5(c). Since it generally believed that the ferromagnetism is associated with defects in graphene, it is reasonable to deduce that the changing tendencies of the M s for M2C1 and M4C1 are affected by the changes of the defects in them. The changing tendency of defect concentration for M2C1 in Fig. 4(c) is consistent with that of the M s for M2C1, however, it is quite different for M4C1. It may indicate that Raman-sensitive defects must not be the only representation for the ferromagnetism in graphene. It is well known that Raman measurement is sensitive to symmetric structures while FTIR spectra is sensitive to asymmetric structures. We divide the defects of the SHS graphene into Raman-sensitive and FTIR-sensitive defects. Edges (zigzag and armchair), vacancies (including single vacancy, hydrogen partially saturated vacancy and vacancy cluster) and disordering are the defects originating from the broken of the C-C bonds which make graphene sheets distorted. These defects can be measured by Raman spectrum and mentioned as Raman-sensitive. The defects corresponding to the oxygen-containing groups include carboxyl group, carbonyl group and hydroxyl group, etc., which are connected to the graphene layers by covalent bonds and also introduce various edges and defect sites. They are FTIR sensitive and mentioned as FTIR-sensitive. We could explain the difference of the M s tendency between M2C1 and M4C1 by considering the changes of both Raman-and FTIR-sensitive defects. On the one hand, the Raman-sensitive defects in graphene have been repaired in a certain extent as shown in Fig. 4(c), which could reduce the M s of the SHS graphene. 
On the other hand, the content of the carboxyl group increases with increasing heat treatment temperature, which could increase the M s . M2C1 is more stable at elevated temperature and has a lower content of carboxyl and other oxygen-containing groups, so the Raman-sensitive defects play a more important role in the ferromagnetism than the FTIR-sensitive ones; consequently, the M s of M2C1 follows a trend similar to that of the Raman-sensitive defects. Since M4C1 is more thermally sensitive and more easily oxidized at higher temperature, as mentioned above, the effect of the FTIR-sensitive defects on the ferromagnetism may overcome that of the Raman-sensitive defects; as a result, the M s of M4C1 follows a trend similar to that of the FTIR-sensitive defects. In addition, according to the XPS results in Fig. 3(b-d), only the XPS area of the carboxyl group has the same changing trend as the M s for M4C1 or M2C1, so the carboxyl group must be the origin of the ferromagnetism of SHS graphene. Based on the analyses mentioned above, we obtain a more comprehensive picture of the magnetic properties of the SHS graphene and the effect of heat treatment on them. Both Raman-sensitive and FTIR-sensitive defects could contribute to the ferromagnetic properties of graphene. The heat treatment plays an important role in elucidating the effect of the different factors on the ferromagnetism of graphene. Conclusions In this study, we obtain a more comprehensive picture of the ferromagnetic properties of the SHS graphene and the methods to tune them. Firstly, the deviation of the mole ratio of the SHS reactants (Mg: CaCO 3 ) from the stoichiometric ratio benefits the production of few-layer graphene with smaller and more plicated sheets, and also benefits the production of both Raman-sensitive and FTIR-sensitive defects. Secondly, the heat treatment method can adjust the contents and types of defects through the competitive relationship between the repair and oxidation processes. Thirdly, there are two origins of the ferromagnetism of the SHS graphene, associated with the Raman-sensitive and IR-sensitive defects, respectively. The M s trends can be explained by considering the changes of both Raman-sensitive and FTIR-sensitive defects. As a result, this work sheds light on the study to develop carbon materials with controlled ferromagnetism.
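As a side note to the magnetization analysis summarised above, a minimal sketch of how a saturation magnetization and a coercive field can be extracted from a measured M(H) loop is given below. The definitions used (M_s taken at the field extremes, H_c from the zero crossings of M) are simple illustrative conventions and are not necessarily those applied to the loops in Fig. 5.

import numpy as np

def loop_parameters(H, M):
    """Extract simple descriptors from a measured hysteresis loop.
    H: applied field (Oe), M: magnetization (emu/g), sampled around a
    full loop. Illustrative definitions only."""
    Ms = 0.5 * (abs(M[np.argmax(H)]) + abs(M[np.argmin(H)]))  # value at field extremes
    # coercive field: linear interpolation of the zero crossings of M(H)
    idx = np.where(np.diff(np.sign(M)) != 0)[0]
    Hc_vals = [abs(H[i] - M[i] * (H[i + 1] - H[i]) / (M[i + 1] - M[i])) for i in idx]
    Hc = float(np.mean(Hc_vals)) if Hc_vals else 0.0
    return Ms, Hc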
6,238.4
2017-07-19T00:00:00.000
[ "Materials Science" ]
Spin dynamics investigations of multifunctional ambient scalable Fe3O4 surface decorated ZnO magnetic nanocomposite using FMR Microwave spin resonance behavior of the Fe3O4 surface decorated ZnO nanocomposites (FZNC) has been investigated by ferromagnetic resonance (FMR). Modified hydrothermal method has been adopted to fabricate FZNC samples with Fe3O4 nanoparticles chains were used as seeds in the uniform magnetic field to decorate them on the surface of the ZnO nanoparticles in a unique configuration. Spin dynamics investigation confirms the transition of ZnO from diamagnetic to ferromagnetic as the sharp FMR spectra converts to the broad spectra with Fe3O4 nanoparticles incorporation. A single broad FMR spectra confirms that no isolated Fe3+ or Zn2+ ions exist which is also in agreement with XRD confirming suitable composite formation. Further, the increase in Fe3O4 concentration leads to decrease in g-value which is resulting from the internal field enhancement due to magnetic ordering. Also, various spin resonance parameters were calculated for the FZNC which provides a detail information about the magnetic ordering, exchange coupling and anisotropy. Elemental analysis confirms the presence of Fe and Zn simultaneously and transmission electron microscopy (TEM) image show the presence of Fe3O4 on the grain boundaries of ZnO which has been confirmed by taking high-resolution TEM and electron diffraction patterns on both sides of the interface. These unique structural configuration of the FZNC has tremendous potential in various magneto-optoelectronic, spintronics and electro-chemical applications. Microwave spin resonance behavior of the Fe 3 O 4 surface decorated ZnO nanocomposites (FZNC) has been investigated by ferromagnetic resonance (FMR). Modified hydrothermal method has been adopted to fabricate FZNC samples with Fe 3 O 4 nanoparticles chains were used as seeds in the uniform magnetic field to decorate them on the surface of the ZnO nanoparticles in a unique configuration. Spin dynamics investigation confirms the transition of ZnO from diamagnetic to ferromagnetic as the sharp FMR spectra converts to the broad spectra with Fe 3 O 4 nanoparticles incorporation. A single broad FMR spectra confirms that no isolated Fe 3+ or Zn ions exist which is also in agreement with XRD confirming suitable composite formation. Further, the increase in Fe 3 O 4 concentration leads to decrease in g-value which is resulting from the internal field enhancement due to magnetic ordering. Also, various spin resonance parameters were calculated for the FZNC which provides a detail information about the magnetic ordering, exchange coupling and anisotropy. Elemental analysis confirms the presence of Fe and Zn simultaneously and transmission electron microscopy (TEM) image show the presence of Fe 3 O 4 on the grain boundaries of ZnO which has been confirmed by taking highresolution TEM and electron diffraction patterns on both sides of the interface. These unique structural configuration of the FZNC has tremendous potential in various magneto-optoelectronic, spintronics and electro-chemical applications. Magnetic manipulation in materials achieved through hybrid materials are of great interest owing to their unique physical, chemical and optical properties which can be tuned by external magnetic field 1 . Magnetic control on the materials properties has been an exciting area of research as they exhibit immense potentials for the device development with enhanced performance 2 . 
One of the most vital part of both basic research and applications of these composites lies in rational understanding the spin dynamics of these materials to achieve the desired properties 3 . Metal oxide materials are widely used in engineering as well as biomedical applications. In particular, zinc oxide (ZnO) of wurtzite hexagonal structure (space group-C 6v 4 -P6 3 mc, direct band gap 3.37 eV) is one of the most prominent materials with wide interdisciplinary applications in optoelectronic, photo-catalysis, sensing, and drug delivery 4,5 . However, nanotechnology has opened a whole new horizon of several biomedical application such as bio-imaging/sensing, anti-microbial/cancer and drug/gene delivery etc. as nanostructures behaves in contradicting manner from their bulk counterparts due to enhanced surface dependent properties and large defect sites 6 . Magnetic nanocomposites (MNC) or magneto-hybrid materials are multiphase materials with one or more magnetic phases and one of the phases present in the nanoscale range 7 www.nature.com/scientificreports/ properties that are inherently different from the present phases or encompasses a combination of the property of all the present phases in the system. The MNCs are solid solutions of two or more components in which the matrix of one material is mixed with the other material to obtain the desired properties. Superparamagnetic (SP) Fe 3 O 4 particles of inverse spinel AB 2 O 4 structure with A (octahedral) site occupied by Fe 2+ ions and B (tetrahedral) sites occupied by Fe 3+ ions has high magnetic saturation (M s = 40 emu/g with average particle diameter = 10 nm) which makes them effective as a hybrid material 2,8 . SP Fe 3 O 4 nanoparticles are ideal for their use in combination with ZnO as it has a narrow bandgap, biocompatible, non-toxic, high chemical stability, mechanical hardness, and high saturation magnetization (M s ) material 9 . The embodiment of the magnetic nanoparticles in ZnO provides a unique blend of properties as ZnO provides excellent optoelectronic properties and incorporation of magnetic nanoparticles provides an opportunity to tune these properties through an external magnetic field. The unique magneto-optoelectronic properties of these materials have engrossed the research focus on the synthesis of Fe 3 O 4 -ZnO nanomaterials (FZNM) in various morphologies and structures. Also, FZNM fabrication has been reported in wide domain and interdisciplinary field such as biomedical, optoelectronics, electrochemical, and photocatalytic due to its enhanced magnetic field induced magneto-optoelectronic, spintronic, and electrochemical performance. Altogether, heterostructure FZNM has tremendous potential to be efficiently used in spintronic devices. The ferromagnetic alignment in ZnO by incorporating magnetically active material provides new landscapes in the field of magneto-optoelectronic, spintronics, and electrochemical devices 10 . The existence of the strong correlation between the structure of the composite to its physical and chemical properties requires exploration of different nanocomposite structures and its optimization. Major attention in the fabrication of the FZNM is required to attain isotropic properties by uniformly distributing the nanoparticles in the matrix. Matrix dispersed structures are useful when uniformity of the properties is required throughout the material. The molten salt method is one pot novel technique for the synthesis of matrix dispersed magnetite-zinc oxide nanocomposites. Reddy et al. 
5 prepared Fe 3 O 4 -ZnO hybrid nanocomposite for application in anode material in Li-Ion batteries. They have prepared the composite by the molten salt method in the average size range above 100 nm. ZnO acts as a good matrix element for Fe 3 O 4 imparting it exceptional Li-ion recycling properties. The two-step approach adopted by S. Singh et al 11 for the fabrication of Fe 3 O 4 embedded ZnO magnetic semiconductor nanocomposites has shown great potential and has achieved good isotropic properties. In the first step, glycine functionalized magnetic nanoparticles were prepared which are mixed with the Zn precursors and refluxed to in-situ fabricate ZnO. The prepared composite displays remarkable detoxification properties which were used for the removal of bacterial pathogens. Further, core-shell structures are highly efficient as they provide excellent stability, dispersibility, and functionality to the composite materials. Constituents of the core-shell and their ratio can be manipulated to achieve the desired properties required by the application. Seed mediated growth process and sequential nanoemulsion techniques have been successfully employed for the synthesis of core-shell nanostructures 4,12,13 . Jian Wang et al 14 prepared Fe 3 O 4 -ZnO core-shell nanocomposites by a simple two-step chemical method of size range 60 nm for their use in wastewater treatment. They have not observed any degradation of photocatalytic properties after several cycles of water treatment suggesting a high performance of the material. Magnetic properties imparted by Fe 3 O 4 core ensured the re-usability of the composite. These core/shell nanocomposites were employed in the fabrication of inverted solar cells for their enhanced performance. The incorporation of magnetite nanoparticles leads to an increase in short circuit current density due to the presence of local magnetic field and ZnO nanoparticles suppressed parasitic absorption of radiation by magnetic nanoparticle 15 . Further, multicore-shell structures are sculptured when a magnetic layer is needed to be sandwiched between two non-magnetic layers. Interfacial interactions at the two surfaces of the core nanoparticles impart unique properties to the material. Multifunctionality of the structure facilitates their use as a probe for target-specific imaging and drug delivery agents 16 . Uniform spatial distribution of properties necessitates the fabrication of Janus structures Fe 3 O 4 -TiO 2 nanocomposites prepared by Zeng et al 17 shows good imaging properties and can be employed in photodynamic therapies. Probing the spin dynamics of the nanocomposite can provide us with new insight enabling the improvements in the functionality of the composite. Understanding the dipolar and superexchange interactions existing among the particles is essential to comprehend the dynamics of the composite 18 . Spin resonance studies of pure ZnO and Fe 3 O 4 nanoparticles using the FMR technique have been carried out extensively in the past 19,20 . ZnO nanoparticles are diamagnetic but reveal their paramagnetic comportment in the FMR spectrum. The peak observed in the FMR spectra for ZnO stipulates the existence of paramagnetism 21 . Further, the presence of oxygen vacancies and interstitial zinc ions are responsible for the manifestation of paramagnetic behavior in the resonance spectra. The random orientation of electron spins in ZnO realigns itself altogether into the direction of the applied magnetic field absorbing energy in a narrow band of frequencies. 
Due to the simultaneous realignment of all the spins, a narrow linewidth is observed in the ZnO FMR spectra. On the other hand, Fe 3 O 4 nanoparticles are SP in nature and exhibit high spin polarization at room temperature 22 . Anisotropy energy enforces the magnetization direction of individual spins to freeze in a specified direction. Exchange interactions existing among the spins preserves the parallel alignment of the magnetic moments. The cumulative effect of the two interactions results in a parallel alignment of spins in different directions. Due to different directions of spins in neighboring crystallites, absorption of radiation occurs over a wider range of frequencies leading to linewidth broadening. FZNM provide opportunities of resonance enabling widespread applications and components concentration or amount of the constituents in the composite can be optimized to get the resonance properties in accordance with the application implementation 18 . Numerous reports are available in the literature investigating the structural, magnetic, and electronic properties of FZNC. However, no significant attempts have been made to explore the spin dynamics of FZNC to the best of our knowledge. All the previous FZNC reported in the literature has Fe 3 O 4 nanoparticles in the core and ZnO in the shell, whereas in the present work we have capitalized the chain structure formation of the Fe 3 O 4 nanoparticles to occupy the shell of the ZnO core. This configuration has not been reported yet and has several advantages. An intensive literature search has been made and we have not observed any investigation targeting the spin dynamics of FZNC in detail. The understanding of the spin dynamics of the magneto-nanocomposite can open new avenues in the field of magnetic field tunable semiconductor oxides 23 . Herein we have performed a detailed investigation of the dynamic magnetic properties of multifunctional FZNC. We have adopted a simple facile chemical synthesis technique for the fabrication of the nanocomposite in which Fe 3 O 4 nanoparticles prepared by co-precipitation technique were used as seeds followed by in-situ hydrothermal growth of the ZnO. The ZnO precursors were mixed with the Fe 3 O 4 seeds in the presence of the constant magnetic applied radially to assist in chain formation over the in-situ seeding ZnO nanoparticles. This allows Fe 3 O 4 nanoparticles to coat the ZnO nanoparticles rather than taking the core position. Further, the structural behavior of the prepared nanocomposite has been obtained by the X-ray diffraction (XRD) technique. The obtained XRD pattern for each sample was fitted by Williamson-Hall (W-H) method to calculate the crystallite size and strain. The morphology and size distribution was confirmed by scanning electron microscope (SEM) and transmission electron microscope (TEM). Further, an energy dispersive X-ray spectrum (EDS) was used to obtain the elemental mapping to confirm the structure of the FZNC. The static magnetic properties of the pure ZnO, Fe 3 O 4, and FZNC were obtained by vibrating sample magnetometer (VSM) to observe the effect of magnetic nanoparticles incorporation. The spin dynamics of the FZNC was investigated by the microwave spin resonance technique by fitting the FMR spectra of the samples. The FMR spectra obtained for the samples were fitted with various theoretical models presented to establish a relationship among different spin resonance properties. 
Various spin resonance properties were calculated for each sample and optimized concentrations were obtained, which is the key for efficient material performance. Synthesis of MNC samples A detailed experimental investigation has been performed to investigate the spin dynamics of FZNC to understand their microwave absorption behavior. The schematic of the synthesis of the Fe 3 O 4 surface decorated ZnO magnetic nanocomposite is depicted in Fig. 1. Multifunctional MNC was prepared in a two-steps process; in the first step, seeds of Fe 3 O 4 nanocrystals have been prepared by the standard co-precipitation method. Synthesis and surface functionalization of Fe 3 O 4 nanocrystals were performed by adopting a procedure described before 24 . Further, in the second step, these prepared Fe 3 O 4 nanocrystals were utilized as seeds for the synthesis of MNC using the hydrothermal method. In the typical procedure, 2 g polyethylene glycol was added in 10 ml of deionized (DI) water at a uniform stirring rate of 400 rpm for 15 min. Dried Fe 3 O 4 nanocrystals were then added in three different concentrations (25,50, and 100 mg) under controlled sonication during the mixing process. These three samples were named FZA, FZB, and FZC respectively with increasing Fe 3 O 4 concentration in the composite system. Afterward, 0.5 M 10 ml zinc acetate solution was added to the above solution dropwise under a constant stirring rate (400 rpm) for 30 min. 10 mL 1 M urea and 0.5 g polyvinylpyrrolidone (PVP), were mixed separately and added to the resultant solution dropwise while maintaining a constant temperature of the solution. The resultant solution was kept in a uniform radial magnetic field with mechanical stirring for 15 min to allow chain formation of the Fe 3 O 4 nanoparticles providing the shell to grow the ZnO nanoparticles in the core as shown in Fig. 1. The complete solution was transferred into the 100 mL hydrothermal autoclave reactor and kept inside a hot air oven at temperature 200 °C for 14 h. The resultant product was then washed several times with ethanol and DI water and dried in a vacuum oven overnight. The obtained powder was calcined at 600 °C for 2 h in a muffle furnace (CARBOLITE GERO-temperature stability + 2.3 °C) 9,25 . Further uncapped ZnO was prepared by adopting the same procedure and experimental conditions except for the addition of Fe 3 O 4 nanocrystal. Also, a small amount of Fe 3 O 4 nanocrystal was taken out for different characterizations in the first step and it has also been probed for all the characterizations to provide an intensive comparison of bare ZnO, Fe 3 O 4 nanocrystals, and MNC. The details of the samples and nomenclature are shown in Table 1. Results and discussions The XRD plots of FZNC, ZN, and FN samples are shown in Fig. 2(a-e). The XRD pattern of FN is shown in Fig. 2 27 . From the XRD patterns, we have observed that the optimized concentration of FN in nanocomposite does not affect the structural properties significantly and appears as a separate phase. It is not mixed with the ZnO matrix which signifies that the composite will show properties of both of the constituents' particles. Figure 3 shows the SEM micrographs of the FZNCs at low and high magnification to observe the morphology of the samples. Figure 3(a1-c1) depicts the SEM images at low magnification and (a2-c2) at high magnification for the MNC samples. From the SEM micrographs, we can clearly distinguish two differentiable sizes of the particles in all the FZNCs samples. 
The XRD results confirm that the crystallite size of the FZNCs lies in the range 40-50 nm and Fe 3 O 4 nanoparticles taken as seeds are in the range around 10 nm. So, we can conclude from the micrographs that the large particle size depicted in the SEM micrographs are ZnO nanoparticles and smaller Fe 3 O 4 nanoparticles are decorated on the surface of the ZnO. Figure 3(a1) and (a2) shows the SEM micrographs of the FZA samples at low and high magnification from which we can clearly observe that the small Fe 3 O 4 particles are randomly dispersed in the samples. The ZnO and Fe 3 O 4 particles are oriented erratically and thus the system will demonstrate isotropic behavior. This is well in agreement with the XRD results in which no separate peak is observed for the Fe 3 O 4 nanoparticles which implies that the ZnO peaks suppress the Fe 3 O 4 peak due to their much lower concentration. Whereas, for sample FZB, Fe 3 O 4 nanoparticles were very well decorated at the surface of the ZnO particles as shown in Fig. 3(b1) and (b2). This makes the MNC sample highly uniform and isotropic as particles are uniformly distributed in the sample. The XRD results also confirm the same as the slight bump at the major peak of Fe 3 O 4 is observed compared to the pure ZnO nanoparticles. Finally, Fig. 3(c1) and (c2) show the SEM micrograph for the sample FZC which contains the largest concentration of FN nanoparticles. Figure 3(c1 and c2) clearly illustrates that FN nanoparticles are agglomerated at the surface of the ZnO www.nature.com/scientificreports/ nanoparticles and they are appearing as a separate phase which can be corroborated from the XRD results where the Fe 3 O 4 major peak is appearing abruptly. Figure 4(a) depicts the bright field image of FZB sample that clearly shows the presence of lower contrast small particles surrounding the dark core. The high-resolution transmission electron microscope (HRTEM) images and selected area electron diffraction (SAED) micrograph were taken at the interface to observe the present structures. The HRTEM image and SAED patterns shown in Fig. 4(b) confirm that the lower contrast small particles are Fe 3 O 4 and dark big particles are ZnO. The existence of lattice fringes in both regions confirms the crystalline nature of the structures. SAED pattern present in the inset of Fig. 4(b) shows the polycrystalline nature of the nanocomposites. The uniform and well-organized lattice fringes of 0.281 nm corresponding to the (100) plane confirms the presence of hexagonal ZnO in the core. However, the lattice fringes of 0.253 and 0.296 nm corresponds to (311) and (220) planes, respectively, that affirms the presence of cubic crystalline phase Fe 3 O 4 2,28,29 . The particle size of Fe 3 O 4 is calculated to be range 2-10 nm. EDS analysis in Fig. 4(c) demonstrates various elements; Zn, Cu, Fe, and O present in the structure. Figure 4(e-i) shows the elemental mapping of C, Fe, O, Cu, and Zn elements, respectively obtained using TEMCON software. Figure 4 confirms the Fe 3 O 4 surface decorated ZnO structure as shown schematically in Fig. 1. Further, Fig. 5(a) reveals an HRTEM image of sample FZA which shows the presence of ZnO as well as Fe 3 O 4 nanoparticles. Some nanocrystallites of hexagonal ZnO are distributed randomly with different interplanar spacings of 0.246 and 0.261 nm corresponding to (101) and (002) planes. Fe 3 O 4 nanoparticles with an interplanar spacing of 0.253 nm correspond to (311) planes of cubic crystal structure were also noticed at some places 30,31 . 
Only a few numbers of Fe 3 O 4 nanoparticles are observed due to less concentration taken while synthesizing these samples. Also, the dislocation in (002) planes of the ZnO is evident from Fig. 5(a) which is caused by the www.nature.com/scientificreports/ incorporation of the Fe 3 O 4 (311) planes. The particle size of the Fe 3 O 4 nanoparticles was found to be 11.2 nm ( Fig. 5(b)). It is clear from Fig. 5 Fig. 5(c-d). Fe 3 O 4 and ZnO nanoparticles were observed to be scattered, no uniform structure is present as depicted in Fig. 5(c). The excess amount of Fe 3 O 4 is present as agglomerates around the ZnO nanoparticles which suggests that the formation of the composite is not suitable. An interface between these nanoparticles can be noticed in Fig. 5(d). Thus, the results are found in agreement with the proposed geometry that The dependence on externally applied magnetic field of the static (dc) magnetization, m(H) for all the samples as recorded at room temperature in applied magnetic field up to 2 T is shown in Fig. 6(a-e) Table 1 shows that FZA and FZB samples have low M s = 0.2816 and 0.5565 emu/g, respectively, which can be well understood from the fact that these nanocomposite samples are of low FN concentration. Whereas, sample FZC shows higher Ms = 5.6976 emu/g as it has a higher concentration of FN loading. All the Furthermore, experimental and fitted FMR spectra of the samples have been shown in Fig. 7. All the FMR spectra were fitted using suitable distribution functions and spin resonance parameters were obtained from the best-fit curves. Figure 7 shows a single broad-spectrum for FZNC samples, implying that isolated Fe 3+ and Zn 2+ ions do not exist. The resonance line observed is the contribution from both Fe 3+ and Zn 2+ in a single phase 33 . FMR spectra of all the samples are symmetrical, but their resonance magnetic field ( H R ) and peak-to-peak linewidth ( H PP ) exhibit systematic variations. The resonance parameters such as H R , H PP , g-factor, spin concentration (Ns) and relaxation time (T S ) are calculated for all the samples and listed in Table 1 34 . From Table 1 we can conclude that microwave resonance parameters strongly depend upon Fe 3 O 4 concentration in the composite. FMR spectra of both bare ZnO and Fe 3 O 4 are in accordance with the earlier reported results in the literature 34,35 and resonance in ZN and FN is observed at 3526 G and 3338 G, respectively 36 . Adding different amounts of Fe 3 O 4 nanoparticles in the ZnO host matrix shifts the resonance spectra of ZnO in a systematic manner. Resonance is achieved at a lower value of the applied magnetic field with an increase in FN content in the composite. This results due to the increase in ferromagnetic interactions among Fe 3 O 4 nanoparticles as the distance between them decreases on increasing of their proportion in the nanocomposite 37 . Another important parameter of the FMR spectrum is the g-value which provides the orbital contribution to the magnetic moment and is calculated using Eq. 1. Here, h is Planck's constant, ν is microwave frequency, μ B is Bohr's magneton and H R is the resonance magnetic field 37 . ZN and FN show signals at g = 1.969 and g = 2.0809, respectively, which is in accordance with the literature. The g value for Fe 3 O 4 nanoparticles is greater than of bare ZnO nanoparticles, which is in line with the spin-orbit coupling. It can be elucidated from Eq. 2. Here, λ is the spin-orbit coupling constant and ∆ is the energy level separation. 
λ takes negative values when the 3d shell is more than half full and is positive for half-filled 3d shells 38 . The g-value increased monotonically from 2.0005 to 2.17008 as the amount of Fe 3 O 4 was stepped up in FZNC because of an increase in the composite, the increase in magnetic anisotropy possibly results in linewidth in the spin-orbital interactions. The large spin-orbit coupling can make FZNC a key material in future spintronic computing 23 . Furthermore, FMR spectra Due to different directions of internal anisotropy fields of differently oriented facets of crystallites, the resonance curve is broadened since individual crystallites achieve resonance at their own magnetizing field. Another factor attributing to the broadening is the interference of crystalline fields of different crystallites in the region at the interface. As a result, both the zero-energy level splitting and the g-value of ferromagnetic ion located at or near the interface will get changed. There will be a random variation in the magnetic field required for resonance and the lines will get broadened 39 . Deepty et al. 40 reported that enhancement in magneto-dipolar interactions partially leads to linewidth broadening. At higher concentrations, the phenomenon of exchange narrowing curtails the effectiveness of magneto dipolar interactions existing among the fine Fe 3 O 4 clusters which result in the narrowing of linewidth 41,42 . This can be elucidated using the Eq. (3) 35,36 . where H A and H ex are anisotropy and exchange fields, respectively 43 . The small value of linewidth at higher concentrations indicates that incorporation of Fe 3 O 4 in ZnO reduces the energy loss and makes it suitable for highfrequency applications 44 . From the literature, it has been found that two-time effects or relaxation phenomena are involved in the resonance process i.e. spin-spin and spin-lattice relaxation 45 . The applied field H changes the direction of the field that each dipole experiences. As a result, the dipoles undergo a slight re-orientation and hereby relaxation occurs by small changes in the energy levels of the dipoles. Spin-spin relaxation time is a measure of the time required to establish thermal equilibrium in the spin system. It occurs when the applied field H is smaller than fluctuating internal fields H i produced by the dipoles. The spin-spin relaxation time has been calculated from �H 1/2 and g-value, listed in Table 1 using Eq. (4) 40 : Sharp decrease in T S value is observed with increasing Fe 3 O 4 content after which it gets saturated showing no or little dependence on ferromagnetic particle concentration. This can be explained using the Heisenberg uncertainty principle 45 : www.nature.com/scientificreports/ At low Fe 3 O 4 concentrations, due to spreading in the energy levels of dipole because of large dipole-dipole interactions, the dipoles own a wide range of frequencies. Consequently, a dipole will require a shorter time to reorient itself about a different precession axis 39 . At higher concentrations, competitive dipole-dipole and exchange interactions lead to the saturation in relaxation time values. Further, Ns has been calculated using Eq. (6): Here ΔH 1/2 is the full width at half maximum of the absorption peak. It is seen from Table 1 that initially the spin number increases with increasing ferromagnetic particle concentration. After reaching an optimum concentration, it shows a very small incongruous decrease. 
The aberrant decrease in the value of N_S can be ascribed to a decrease in the crystallization degree of the composite as the amount of Fe3O4 nanoparticles exceeds a critical concentration, resulting in a decrease in the spin number of Fe3+ 46. Another effect of increasing the ferromagnetic particle concentration in the composite is that the line shape changes from Lorentzian to pseudo-Voigt, which is intermediate between Gaussian and Lorentzian, and then to a Gaussian line shape. One parameter indicative of the line shape is the ratio of the full width at half-height of the integrated spectrum, ΔH_1/2, to the peak-to-peak linewidth, ΔH_PP; the ratio ΔH_1/2/ΔH_PP is 1.657 for a Lorentzian, 1.135 for a pseudo-Voigt and 1.191 for a Gaussian line shape. The broader wings of the Lorentzian peak shape compared to the Gaussian peak shape point to a larger number of magnetic spins in sample FZA than in sample FZC, which is in conformity with our results for N_S shown in Table 1. The pseudo-Voigt line shape of sample FZC indicates an optimum balance between the magneto-dipolar and exchange forces existing between the magnetic moments. The spin resonance parameters suggest that the relationship between the Fe3O4 concentration and the magnetic and spin dynamic properties is very complex 35,36. The saturation magnetization is quite straightforward, increasing sharply with Fe3O4 concentration, whereas the spin resonance parameters depend on the interactions among the constituents. The FZB sample has a lower saturation magnetization but a higher spin concentration, resulting from a uniform distribution of the Fe3O4 nanoparticles that does not affect the distribution of the magnetic ions, while sample FZC, which has a high saturation magnetization, has a low spin concentration due to the spin canting effect. The present investigation allows FZNC to be used more effectively in various spintronic and optoelectronic devices. Conclusion In conclusion, the room temperature spin dynamics in FZNC have been investigated in detail experimentally, and the effect of the Fe3O4 concentration on the properties of FZNC has been established. SP Fe3O4 nanoparticles were used as the seed to synthesize FZNC and are incorporated in the ZnO matrix, occupying the grain boundaries. Fe3O4 and ZnO are present in the composite as separate phases, which indicates that the composite will show the properties of both FN and ZnO. The grain boundary incorporation of the magnetic nanoparticles is achieved by producing a uniform radial magnetic field, which allows the formation of a chain structure during the in-situ growth of the ZnO nanoparticles. The incorporation of the FN particles at the grain boundaries has been confirmed through TEM and SEM. Further, HRTEM and SAED have been taken on both sides of the interface in the FZNC to confirm the constituents of the composite. The static magnetization curve of FZNC shows SP behavior, which further complements the electron microscopy results that both FN and ZN are present as separate phases and FN is not incorporated in the matrix of ZnO. Further, the single broad FMR spectra of the composite show that no isolated Fe and Zn ions are present in the samples, and large spin-orbit coupling is observed at higher FN loading. T_S of the samples also increases with an increase in FN loading, and the FMR spectral shape changes from Lorentzian to pseudo-Voigt.
This is corroborated by the N_S results, which show larger values of N_S for sample FZB than for sample FZC due to the balance between the magneto-dipolar and exchange forces between the magnetic moments. Large spin-orbit coupling can be achieved in the FZB samples, as is evident from the spin resonance parameters obtained for the samples. The present investigation depicts some unique and novel results, as magnetic incorporation provides a unique opportunity for the development of high-performance materials with external control, which opens the door to highly efficient devices. Experimental To investigate the crystalline phase purity, X-ray diffraction (XRD) measurements of the as-prepared FZNC, Fe3O4 nanocrystals (FN), and ZnO particles (ZN) were performed using a Rigaku Ultima-IV diffractometer with Cu-Kα radiation (P = 3 kW, λ = 1.5406 Å). All the samples were scanned in the range 5° to 90° at 40 kV and 40 mA, with a step size of 0.01° and a slow scan rate. The crystallite size and strain in the samples were calculated using the Williamson-Hall (W-H) method 47. The morphology of the MNC samples was observed using SEM (Zeiss EVO MA-10) operating at 10 kV. Further, to analyze the nanocomposite formation and microstructural features, TEM images at low and high resolution were obtained with a JEOL JEM-2100F operating at 200 kV. The EDS spectra and elemental mapping of the as-prepared nanocomposite samples were also obtained using the standard attachment of the JEOL JEM-2100F. The room temperature static dc magnetic measurements of the samples were performed using a vibrating sample magnetometer (VSM, Lake Shore 7304). The microwave spin resonance investigation was performed using a Bruker EMX-10 spectrometer at room temperature with 100 kHz field modulation. The measurements were performed using X-band (9.85 GHz) microwave frequency and a TM011 mode cavity. Distortion of the FMR spectra was avoided by keeping the modulation amplitude less than or equal to one-third of the peak-to-peak linewidth. The FMR spectra were recorded at room temperature, and various spin resonance parameters such as the resonance field, Landé g-factor, spin-spin relaxation time, and spin concentration were calculated. Further, these line spectra were fitted to the line-shape functions described above (Eq. (6) for the spin concentration: N_S = (9/4π²) ΔH_1/2 g μ_B …).
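The Williamson-Hall analysis mentioned above separates size and strain contributions to the diffraction peak broadening. The relation normally used for this (standard form, not quoted from the paper) is

β_hkl cosθ = Kλ / D + 4 ε sinθ

where β_hkl is the peak width (in radians) of the reflection at Bragg angle θ, K ≈ 0.9 is the shape factor, λ is the X-ray wavelength (1.5406 Å here), D is the crystallite size and ε is the microstrain; plotting β cosθ against 4 sinθ then gives D from the intercept and ε from the slope.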
7,592.2
2021-02-15T00:00:00.000
[ "Materials Science" ]
Thermal Stability of Fluorescent Chitosan Modified with Heterocyclic Aromatic Dyes Fluorescent biopolymer derivatives are increasingly used in biology and medicine, but their resistance to heat and UV radiation, which are sterilizing agents, is relatively unknown. In this work, chitosan (CS) modified by three different heterocyclic aromatic dyes based on benzimidazole, benzothiazole, and benzoxazole (assigned as BIm, BTh, and BOx) has been studied. The thermal properties of these CS derivatives have been determined using Thermogravimetric Analysis coupled with Fourier Transform Infrared spectroscopy of the volatile degradation products. The influence of UV radiation on the thermal resistance of the modified, fluorescent chitosan samples was also investigated. Based on the onset temperature as well as the decomposition temperatures at the maximal rate, BIm was found to be more thermally stable than BOx and BTh. However, this dye gave off the most volatile products (mainly water, ammonia, carbon oxides, and carbonyl/ether compounds). The substitution of dyes onto chitosan changes its thermal stability only slightly. Characteristic decomposition temperatures in modified CS vary by a few degrees (<10 °C) from the virgin sample. Considering the temperatures of the main decomposition stage, CS-BOx turned out to be the most stable. The UV irradiation of chitosan derivatives leads to minor changes in the thermal parameters and a decrease in the number of volatile degradation products. It was concluded that the obtained CS derivatives are characterized by good resistance to heat and UV irradiation, which extends the possibilities of using these innovative materials. Introduction Fluorescent compounds are now widely used in various industrial branches and fields of modern medicine, such as imaging materials, photosensitizers, light-emitting diodes, and biological markers, allowing for the detection of proteins, nucleic acids, living cells, or damaged tissues [1][2][3][4][5]. Particularly noteworthy is the use of fluorescent dyes in medical diagnostics, e.g., the early detection of cancer (e.g., neurologic tumors, spinal glioma, thyroid, and parathyroid tumors; breast, colon, and skin cancers) or HIV [6][7][8]. Among those compounds are methylene blue, fluorescein, and indocyanine green. Another group of fluorescent moieties used in anticancer Photodynamic Therapy (PDT) comprises the BODIPY dyes, based on boron difluoride and dipyrromethene groups with various substituents. As reported by the latest works, fluorescence labeling can also be successfully applied in biological studies and in the detection of the COVID-19 coronavirus [9,10]. Such wide applications are the reason for the intensive research on fluorescent compounds in recent years, contributing to significant progress in modern technologies utilizing light emission phenomena. For practical applications, specific properties of such compounds are required, namely, a high absorption coefficient, a high fluorescence quantum yield, sensitivity, stability, and a resistance to chemical and environmental factors (including atmospheric ones). In earlier work, the effect of high-energy UV-C radiation with a wavelength of 254 nm was studied for both solid samples and solutions using FTIR and UV-Vis spectroscopy. It has been proven that photochemical reactions (mainly photobleaching) occur with greater efficiency in solutions than in the thin solid films of modified chitosan [35].
The use of chitosan as a matrix for fluorescent compounds has many advantages, the most important of which are the biodegradability of the system and an increase in the photochemical stability of the substituted dyes. Furthermore, these modified chitosan specimens are characterized by biocompatibility, biological inertness, and biocidal properties. The main goal of this work was to present the results of the thermogravimetric analysis coupled with the FTIR analysis of evolved gas for three fluorescent chitosan (CS) derivatives containing substituted trans-2-[2-(4-formylphenyl)ethenyl]benzimidazole, BIm, p-[trans-2-(benzoxazol-2-yl)ethenyl]benzaldehyde, BOx, and p-[trans-2-(benzthiazol-2-yl)ethenyl]benzaldehyde, BTh. The thermal stability of the dyes (BIm, BOx, BTh) used to modify chitosan was also investigated. TGA-FTIR was applied to illustrate the chemical changes in the structure of the modified fluorescent chitosan under the influence of UV irradiation. This work is a continuation of the characterization of the same N-substituted chitosan derivatives presented in the previous work [35]. Information about the thermal properties of these new polysaccharide materials is essential from a practical point of view: it allows us to estimate the possibility of their use at elevated temperatures. At the same time, TGA tests enable the detection of structural changes in the case of the prior exposure of samples to UV radiation, often used as a sterilizing agent. Materials Chitosan (CS), a copolymer of β-(1→4)-linked D-glucosamine with N-acetyl-D-glucosamine, and substrates for the synthesis of the dyes (2-methylbenzimidazole, 2-methylbenzoxazole, 2-methylbenzothiazole, and terephthalaldehyde) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Other reagents and solvents (acetic anhydride, acetic acid, acetone, and methanol) were supplied by Avantor™ Performance Materials, Gliwice, Poland. The average molecular weight and deacetylation degree of the initial chitosan were 50,000 g/mol and 87%, respectively. The synthesis of the dyes (BIm, BOx, BTh) was performed according to the procedure described in work [36]. These compounds were substituted onto the biopolymer chains using the reaction of the aldehyde groups of the dyes with the amino groups in CS. The chemical structures and abbreviations of the studied compounds are shown in Figure 1. The modified chitosans contained a relatively low amount of dye substituents, i.e., 2.9%, 2.7%, and 2.4% in CS-BIm, CS-BOx, and CS-BTh, respectively. The thin films (Figure S1) were made from the obtained chitosan derivatives by pouring dilute acetic acid solutions onto leveled 8 cm Petri dishes and evaporating the solvent at room temperature. A detailed description of this modification and the methodology of the sample preparation for testing was included in the earlier publication [35]. Half of the films of the chitosan derivatives were subjected to UV irradiation under room conditions for 8 h using a low-pressure mercury-vapor lamp (TUV-30W, Philips, Holland). The lamp emitted radiation with a wavelength of 254 nm, and the dose received by the samples was 28.8 J/cm². Samples of the same thickness (approx. 10 µm) were selected for irradiation.
Thermal Analysis The thermogravimetric analysis (TGA) of all samples (non-irradiated and UV-irradiated) was performed on the Jupiter STA 449 F5 thermoanalyzer by Netzsch coupled with the FT-IR Vertex 70V spectrometer by Bruker Optik in the following conditions: a nitrogen atmosphere, a temperature range of 20-600 °C, and a heating rate of 10 °C/min. The weight of the samples was 6-10 mg. The simultaneous measurement of TG, DTG, and DSC allowed for the determination of the following parameters: decomposition onset temperature (T0), temperature (Tmax) at the maximum process rate (Vmax), weight loss (∆m) for individual stages, and heat effects (∆H) accompanying the decomposition. The mixture of volatile products, released at temperatures ranging from room temperature to 600 °C, was analyzed based on the infrared spectra recorded in the range of 400-4000 cm−1. The evolved gases were transported to the gas cell (with ZnSe spectrophotometric windows) in an FTIR spectrophotometer. The connections between the thermoanalyzer and spectrophotometer were heated to 200 °C to prevent condensation. The carrier gas shows no infrared absorption.
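The decomposition parameters defined above (T0, Tmax, Vmax and the stage weight losses) are read off the TG and DTG curves. A minimal Python sketch of how such values could be pulled from raw thermogram data is given below; the array names and the synthetic example curve are illustrative assumptions (the extrapolated onset temperature T0 is not computed here), not the procedure used by the thermoanalyzer software.

import numpy as np

def dtg_parameters(temp_C, mass_pct):
    # DTG: rate of mass loss with temperature (%/deg C), sign-flipped so peaks are positive.
    # Multiply by the heating rate (here 10 deg C/min) to express Vmax in %/min.
    temp_C = np.asarray(temp_C, dtype=float)
    mass_pct = np.asarray(mass_pct, dtype=float)
    dtg = -np.gradient(mass_pct, temp_C)
    i_max = int(np.argmax(dtg))
    t_max = temp_C[i_max]                       # temperature at the maximum process rate (Tmax)
    v_max = dtg[i_max]                          # maximum rate (Vmax), in %/deg C
    total_loss = mass_pct[0] - mass_pct[-1]     # overall weight loss over the run
    return t_max, v_max, total_loss

# Synthetic single-step decomposition centred near 280 deg C with ~55% mass loss
temps = np.linspace(20, 600, 581)
masses = 100 - 55 / (1 + np.exp(-(temps - 280) / 10))
print(dtg_parameters(temps, masses))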
Results and Discussion The thermogravimetric analysis of specially synthesized heterocyclic compounds and chitosan modified with those dyes, as well as the influence of UV radiation on the thermal stability of the samples, were tested under dynamic conditions. The investigations included the in situ detection of gaseous products released during heating (using FTIR spectroscopy). Combining these two techniques allows for an understanding of the thermal decomposition mechanism of new fluorescent chitosan derivatives, which have not been the subject of research so far. Thermal Stability of Chitosan Modifiers: Low-Molecular-Weight Heterocyclic Compounds-BIm, BOx, and BTh The TGA shows that the course of the thermal decomposition of the three tested heterocyclic aromatic dyes differs significantly, which of course results from the difference in the chemical structure of these compounds (Figure 2a-c). In no case is evaporation of moisture observed (no weight loss around 100 °C). Two of these compounds (BIm and BTh) are destroyed in one step. Only BOx shows two stages visible on the DTG curve: the main one with a maximum at 279 °C (with dominant mass loss) and an overlapping minor step at about 300 °C (Figure 2b). Taking into account the temperatures characterizing the decay onset, the BOx sample has the lowest thermal resistance (To is 244 °C). In contrast, the most durable is BIm, which begins to decompose only around 320 °C (Table 1). Additionally, the maximum rate of the BOx decomposition is the lowest compared to the other two. The accurate identification of gaseous products is difficult due to their variety and small quantities. The simultaneous FTIR analysis shows that the greatest number of volatile products was separated from the BIm sample. In addition to the moisture (3000-3600 cm−1), other low-intensity bands were observed at 1695, 1596, 1445, 1207, 1168, 966, 809, and 740 cm−1, which indicates the evolution of fragmentation products containing carbonyl, ether, and vinyl groups (Figure 3a) [37]. Nitrogen-containing compounds cannot be excluded, because the absorption of amine/amide (1655 amide I, 1523 amide II, and 1580 N-H) overlaps with the carbonyl region. The weak band at 966 cm−1 can be assigned to ammonia. In the remaining two compounds-BOx and BTh-the number of volatile products registered in infrared is negligible and at the noise limit (Figure 3b,c), suggesting that the released products condense. It can therefore be concluded that the primary reactions of the thermal decomposition of BIm, with the breaking of covalent bonds and the evolution of gaseous products, lead to the formation of free radicals, which then recombine into more stable products (not decomposing even at 600 °C, where the carbon residue is above 35%). The weight loss of BIm at 600 °C is approx. 64%, while BOx and BTh decompose almost completely (∆m > 96%). These variations can be explained by the different stabilities of the heterocyclic ring, in which the weakest point is a single C-heteroatom linkage. It should be emphasized that the studied molecules are stabilized by resonance due to the presence of π electrons from aromatic rings, central vinyl bonds, as well as non-bonding electrons at the O, N, and S atoms. The most significant thermal stability in the case of BIm can be explained by the presence of hydrogens at the N atoms in the rings, which can form hydrogen bonds with the N or O atoms of neighboring molecules.
Therefore, it can be assumed that these hydrogen bonds between the heterocyclic rings contribute to the increased thermal resistance of BIm. There is no such possibility in BOx and BTh, although, of course, in all of the studied heterocycles there are aldehyde groups that participate in the formation of hydrogen bonds between the end groups. The relatively high residue of the BIm sample at 600 °C, compared to the other two, indicates the formation of stable carbonaceous cross-linked structures containing aromatic rings. In the least stable sample, BOx, due to the presence of an oxygen heteroatom, primary degradation products with oxidizing properties can be formed, which contributes to the greater degradation of the residue. As can be seen from the results, sulfur has a similar effect. To establish an order of stability of the tested compounds, the values of the onset decomposition temperature and the temperature at the maximum decomposition rate from TGA (Table 1) were used. It is as follows: BIm > BTh > BOx. It can be concluded that the thermostability of these three heterocyclic compounds follows a similar trend as their photostability, tested in diluted solutions using an identical source of UV radiation [30]. The shape of the DSC curves also differs depending on the type of sample (Figure 2c). The studied heterocycles show endothermic peaks on the DSC corresponding to the melting point. Only BTh exhibited two separated melting temperatures (157 and 174 °C), indicating the coexistence of two different crystalline forms. The endotherms at higher temperatures correspond to the main decomposition of the samples. The Gram-Schmidt (G-S) curve of BIm exhibits two extrema, at 393 °C and 488 °C (very broad), pointing to the most intense release of volatile decomposition products at these temperatures (Figure 2c). Similarly, in the BOx sample, two main extrema appear at 342 and 549 °C. Only the BTh sample shows a continuous, almost monotonic increase in the G-S curve with temperature, without any peaks. It should be added that the extremum points on the G-S curves do not correspond exactly to the Tmax read from DTG. This is understandable because the samples do not just decompose into volatile products.
Thermal Stability of Chitosan Derivatives-CS-BIm, CS-BOx, CS-BTh The shape of the TG, DTG, and DSC curves of the samples heated at a constant rate of 10 °C/min to 600 °C changes slightly (Figure 4), indicating a minor influence of chitosan modification on the course of its thermal degradation. Chitosan is a stable biopolymer up to about 250 °C. However, in the initial heating stage (in the range of 80-150 °C), a weight loss of approximately 8%, caused by the release of adsorbed water, is observed (Table 2). A similar thermal stability of chitosan has been reported in the literature [38,39]. The data indicate that the water is somewhat more strongly bound in the two N-substituted derivatives, CS-BIm and CS-BTh, than in the original chitosan. The main decomposition of CS starts at 254 °C and reaches its maximum rate at 277 °C. It is accompanied by a weight loss of about 55%. Thermal decomposition is an exothermic transformation, as shown by the DSC curves. This exothermic effect can be connected with the simultaneous crosslinking of chitosan, as evidenced by a large residual mass not decomposed at 600 °C. As was reported before by Moussou et al., the apparent activation energy of CS thermal decomposition, determined by the Ozawa-Flynn-Wall method, is about 146 kJ/mol [40]. The shape of the TG, DTG, and DSC curves of CS-BIm, CS-BOx, and CS-BTh is similar to the unmodified chitosan curves, but shifts in the main determined thermal parameters are observed. An increase in To, Tmax, and Texo on the DSC curve in the main decomposition stage (step II) indicates an improvement in the thermal resistance of CS-BOx and CS-BTh compared to CS. In the light of these data, CS-BIm appears to be the least thermally resistant material, although the differences in the determined parameters are low. This can be explained by its lower degree of thermal crosslinking, which the smallest amount of carbonaceous residue (55.6%) indicates. This sample exhibits the highest value of decomposition heat (172 J/g). Considering the total weight loss in the modified chitosan samples, several percent lower than in the pristine CS, it can be concluded that the aromatic heterocyclic substituents slightly hamper the formation of thermally stable carbonaceous residue. Since, in the initial stages of the thermal destruction of polysaccharides, the side substituents are detached from the macrochains with the simultaneous release of gaseous products [41,42], CS loses its functional properties.
Obviously, in the case of N-substituted chitosan derivatives, the abstraction or destruction of fluorophores (dye substituents) leads to a loss in emission properties. It can be assumed that the carbon residue consists mainly of crosslinked and condensed aromatic structures resulting from formed covalent bonds between the macrochains and the graphitization process, similarly as in unmodified CS [41,42]. This was confirmed by spectroscopic studies carried out systematically while heating the samples. The FTIR spectra showed the disappearance of functional groups (OH, NH) and a decrease in the degree of deacetylation, followed by chain breakage and the opening of glucoside rings. Finally, only the bands characteristic of C-H bonds in the aliphatic and aromatic compounds (corresponding to both stretching and deformation vibrations) were detected in the spectra of thermally degraded CS. The slight variations in the determined thermal parameters of dye-modified chitosans are due to the low degree of N-substitution by heterocycles: 2.4-2.9% [35]. The lack of endothermic melting peaks of BIm, BOx, and BTh on the DSC curves of CS-BIm, CS-BOx, and CS-BTh additionally confirms that these modifying dyes are covalently linked to CS. Summarizing this part of the research, the following sequence of thermostability can be proposed for the unirradiated samples: CS-BOx > CS-BTh > CS > CS-BIm. This order was established from the temperature values at a maximum rate (Tmax) for the main (II) decomposition stage, listed in Table 2. The explanation of the observed effects is not trivial. The reverse trend can be seen: the least stable dye (BOx), when chemically incorporated into CS macromolecules, improves its stability (CS-BOx). Considering the free radical mechanism of thermal decomposition, it can be assumed that benzoxazole radicals generated from the least stable substituent (BOx) quickly recombine with CS macroradicals (thus finishing the degradation chain reaction), delaying the overall decomposition process. On the contrary, BIm has the opposite effect, i.e., being the most thermostable dye itself, it generates radicals initiating chitosan degradation faster. Probably, the benzimidazole radicals are responsible for this effect due to the easier reaction with CS. Effect of UV-Irradiation on the Thermal Stability of Chitosan Derivatives The thermal analysis also allows for the observation of changes in the modified chitosan samples under UV radiation (Table 3, Figure 5). As can be seen from the presented data, the thermal stability of the modified chitosan previously exposed to UV changes slightly. The water content in the CS-derivatives is lower by about 3% than in CS. To compare the thermal stability of the exposed CS-BIm, CS-BOx, and CS-BTh, we consider the main (II) stage of decomposition. Among the UV-irradiated samples, the CS-BOx shows the most significant stability (as evidenced by the value of Tmax and Texo), which may indicate a particular share of oxygen-containing decomposition products in the creation of thermostable structures in this biopolymer (e.g., crosslinking). However, the maximum decomposition rate of this sample is the highest (Vmax = 8.2%/min). Similarly, the same sample not exposed to UV radiation was characterized by the highest decomposition rate (Table 2). It should be recalled that the starting BOx showed the lowest thermostability among the three tested heterocyclic compounds. This suggests that its degradation products may contribute to the thermal stabilization of CS.
The abstraction of dye side groups during pyrolysis, followed by their decay to free radicals, may lead to subsequent recombination with the CS macroradicals formed in the earlier stages of thermal degradation. The exposure to UV radiation only resulted in a 2-6.5 °C increase in Tmax in the CS alone, CS-BIm, and CS-BOx samples, while in the CS-BTh sample, the Tmax decreases by two degrees. Additionally, the loss of mass during the thermal decomposition of the irradiated samples differs slightly from that in the non-irradiated samples. It can therefore be concluded that the samples of modified CS are resistant to heat and, like chitosan itself, decompose only above 250 °C. However, some differences are observed in the composition of volatile products released from the irradiated samples (Figure 6). Generally speaking, the most significant number of volatile products at temperatures above 250 °C was released from the starting (unexposed) chitosan (Figure 6a). The main bands observed for unirradiated CS alone are at (in cm−1) 3520-3630 (OH), 2260-2400 (CO2), 1680-1840 (C=O, amide), 1390, 1280, 1174 (C-O), and 972 (N-H). Thus, the products are a mixture of water, carbon dioxide, ammonia, acetamide, and low-molecular carbonyl (ester)/ether derivatives. This is in line with the results published by Corazzari et al. [43], who confirmed the release of H2O, CO, CO2, NH3, and CH3COOH during the pyrolysis of chitosan using the TGA-FTIR and GC-MS methods. Only carbon monoxide and acetic acid were not detected in our experiment. In another study, apart from the above-mentioned compounds, acetamide was found, the amount of which depends on the degree of CS deacetylation [44]. In an earlier work by Zeng et al., heterocyclic aromatic compounds with nitrogen, mainly pyrazines, were also detected during an isothermal test at 553 K [45]. Such products were not found in this work. In UV-irradiated CS, similar bands were observed, but they were much less intensive (Figure 6b). This allows us to conclude that the identified products partially evolved during the photolysis preceding the thermal decomposition. The FTIR spectra of the gaseous excretions from CS-BIm (Figure 6c), besides the substances typical for CS, also contain a significant number of unsaturated compounds (absorbing at 953 cm−1), but in this irradiated sample, their release is minimized (Figure 6d).
The unexposed CS-BOx and CS-BTh samples (Figure 6e,g) show less CO2, but the detached carbonyl product is dominant. Only in CS-BOx is a more volatile product observed after the exposure to UV (Figure 6f), which may indicate a catalytic effect of the photodegradation products trapped in the polymer matrix on the thermal destruction of this specimen. According to Bussiere's reports [46,47], the released volatile products can accumulate on the surface of the samples and contribute to an increase in roughness and adhesion work. The authors emphasize that the observed changes are the effect of surface crosslinking. The order of the thermal stability of the irradiated chitosan derivatives (estimated from the temperatures at the maximum decomposition rate of the IInd step, Tmax, Table 3) is: CS-BOx > CS-BTh ≅ CS > CS-BIm. It practically does not differ from that for the non-irradiated samples.
Conclusions Thermogravimetry coupled with an FTIR analysis of volatile decomposition products indicated that BIm is the most thermally stable sample, and BOx is the least thermally resistant sample among the three tested dyes used for CS modification. At the same time, the BIm sample released the most volatile products, containing heteroatoms (mainly oxygen and nitrogen) and unsaturated bonds. Heterocyclic aromatic substituents only negligibly contribute to the heat resistance of the CS. The TGA analysis showed the resistance of chitosan derivatives up to ca. 250 °C. Based on the temperatures at the beginning and the maximum decomposition rate, it can be stated that the lowest thermal stability is exhibited by CS-BIm, which is related to its lower susceptibility to thermal crosslinking. The other two derivatives (CS-BOx and CS-BTh) also showed a lower mass at 600 °C (compared to CS alone), which is a carbonized crosslinked biopolymer residue. The effect of UV exposure on thermal stability was also studied. CS-BOx shows the best resistance among all UV-exposed and unexposed CS derivatives. The main volatile products of the thermal decomposition of modified CS are water, carbon dioxide, ammonia, and carbonyl/ether compounds. The number of these evaporated products decreases in the UV-irradiated samples. The good thermal and photochemical stability of the dye-substituted chitosan is crucial from a practical point of view. Thanks to this, such materials intended, for example, for biomedical applications, can be sterilized with heat or UV radiation.
7,047.6
2022-05-01T00:00:00.000
[ "Chemistry" ]
Uvaria chamae (Annonaceae) Plant Extract Neutralizes Some Biological Effects of Naja nigricollis Snake Venom in Rats Uvaria chamae is a well known medicinal plant in Nigerian traditional medicine for the management of many diseases, but investigations concerning its pharmacological characteristics are rare. In this study, we evaluate its venom neutralizing properties against Naja nigricollis venom in rats. Freshly collected Uvaria chamae leaves were air dried, powdered and extracted in methanol. To study the antivenom properties, albino rats were orally administered a dose of 400 mg/kg body weight and 1 h later, the venom was administered intraperitoneally at a dose of 0.08 mg/kg body weight of rats. Albino rats (male) weighing between 180-200 g were randomly divided into five (5) groups of three (3). Groups 1-5 received water; normal saline; venom; Uvaria chamae and venom; and Uvaria chamae, respectively. Blood clotting time, bleeding time, antipyretic activity, haemoglobin, RBC, WBC, creatine kinase, AST, ALP and ALT activities, total protein, antioxidant activity and some blood electrolytes, plasma urea and uric acid were measured. Our results showed that Uvaria chamae methanol extract neutralized some biological effects of Naja nigricollis venom. The venom increased the rectal temperature, enzyme activities, bleeding time and other blood parameters. The plant extract was able to reduce these parameters in the extract treated groups. Details of the results are discussed. From this study, it is clear that U. chamae leaf extract had antivenom activity in animal models. The above results indicate that the plant extract possesses potent snake venom neutralizing capacity and could potentially be used for therapeutic purposes in cases of snake bite envenomation. INTRODUCTION Snake venom is a complex mixture of many substances, such as toxins, enzymes, growth factors, activators and inhibitors, with a wide spectrum of biological activities (Theakston, 1983; Rahmy and Hemmaid, 2000). They are also known to cause different metabolic disorders by altering the cellular inclusions and enzymatic activities of different organs.
Snake bite is an important cause of mortality and morbidity and it is one of the major health problems in Nigeria. Snake bite often results in puncture wounds inflicted by the animals. Although the majority of snake species are non-venomous rather than venomous, snakebite remains an important medical problem in both developing and developed countries (Kasturiratine et al., 2010). Snake bites pose a major health risk in many countries, with global snake bites exceeding 5,000,000 per year (Kasturiratine et al., 2010). Snake bite envenomations are frequently treated with parenteral administration of horse- or sheep-derived antivenoms aimed at the neutralization of toxins. But despite the success of serum therapy, it is important to search for different venom inhibitors, either synthetic or natural, which would complement the action of antivenoms, particularly in relation to the neutralization of local tissue damage (Cardoso et al., 2003). Plant extracts constitute an extremely rich source of pharmacologically active compounds and a number of extracts have been shown to act against snake venom (Martz, 1992). The medicinal value associated with a plant can be confirmed by the successful use of its extract on snake bite wounds (Mors et al., 2000; Otero et al., 2000a; Soarea et al., 2004). Application of medicinal plants with anti-snake venom activities might be useful as first aid treatment for victims of snake bites, which is particularly important in local areas where antivenoms are not readily available (Otero et al., 2000b, c; Nunez et al., 2004; Sanchez and Rodriguez-Acosta, 2008). More so, antivenoms have some disadvantages, thus limiting their efficient use (Chippaux and Goyfton, 1998; Heard et al., 1999; Da silva et al., 2007). For example, they can induce adverse reactions ranging from mild symptoms to serious ones (anaphylaxis) and, in addition, they do not neutralize the local tissue damage (Gutierrez et al., 2009). Thus, complementary therapeutics need to be investigated, with plants being considered as a major source (Soares et al., 2005). In many countries, plant extracts have been used traditionally in the treatment of snake bite envenomations. Thus, vegetal extracts have been found to constitute an excellent alternative with a range of anti-snake venom properties. However, in most cases, scientific evidence of their antiophidian activity is still needed. The exact mechanisms of action of the plant extracts remain largely elusive; however, a number of previous reports indicate that plant-derived compounds, such as rosmarinic acid (Ticli et al., 2005; Aang et al., 2010), quercetin (Nishijima et al., 2009) and glycyrrhizin (Assafim et al., 2006), can inhibit biological activities of some snake venoms in vivo and in vitro. Uvaria chamae is a Nigerian medicinal plant that belongs to the family Annonaceae. It is commonly called Ayiloko by the Igala people of Kogi State, Kaskaifi by the Hausas, and Oko oja by the Yorubas in Nigeria, as well as Akotompo by the Fula-fainte of Ghana. It is a medicinal plant used in the treatment of fever and injuries (Kumar and Sadique, 1987). There are also oral claims that the plant can cure abdominal pain and that it is used as a treatment for piles, wounds, sore throat, diarrhea, etc. The aim of the present study was to evaluate the ability of Uvaria chamae extract to neutralize some biological effects of Naja nigricollis venom in rats.
MATERIALS AND METHODS Chemicals, solutions and equipment: All chemicals used in the present study were of analytical grade and purchased from a reputable company (BDH, UK). Kits for triglycerides, total cholesterol, creatine kinase, AST, ALT and ALP were from Randox Laboratories (UK). A UV/visible spectrophotometer (Shimadzu), centrifuge (Heraeus Christ GMBH Estrode), analytical balance, measuring cylinder, micropipette, mortar, pestle, digital thermometer and deep freezer were used. Plant material collection and extract preparation: Fresh leaves of Uvaria chamae were collected from a farm located in Odogomo in Ankpa Local Government Area of Kogi State, Nigeria. The plant was identified taxonomically and authenticated by Mr. Patrick Ekwuno, a botanist in the Department of Biological Sciences, Kogi State University, Anyigba, Nigeria. The fresh leaves were air-dried for four weeks, powdered using a mortar and pestle and stored in an airtight container. Uvaria chamae leaf powder (200 g) was extracted in 500 mL of methanol using cold maceration for 48 h. After that, the sample solution was filtered through a 0.45 mm filter to remove the insoluble materials. The filtrate was concentrated by removing the solvent completely using a water bath. For oral administration, the extract was dissolved in 10 mL Phosphate Buffered Saline (PBS). To make the extract soluble in PBS, 1% Tween 80 was used. Animal model: Wistar albino rats (male) weighing between 180 and 200 g were obtained from Mr. Emmanuel Titus Friday, Department of Biochemistry, Kogi State University, Anyigba, Nigeria. This study was approved by the Department of Biochemistry according to the institutional ethics. These animals were used as approved in the study of snake venom toxicity. Rats were allowed to acclimatize for two weeks with access to clean water and animal feed (supplied by Top Feeds, Anyigba, Nigeria) at the experimental site. They were maintained under standard conditions at room temperature, 60±5% relative humidity and a 12 h light/dark cycle. Experimental design: Wistar albino rats were randomly divided into five groups of three rats: Group 1: Control group that received only water (2 mL) Group 2: Control group that received normal saline (2 mL) Group 3: Envenomed rats that did not receive any drug treatment Group 4: Envenomed rats treated with U. chamae extract Group 5: Control group that received U. chamae The extract was administered orally at a dose of 400 mg/kg body weight of rats and 1 h later, the venom was administered intraperitoneally at a dose of 0.08 mg/kg body weight of rats. Before and after envenomation, the rectal temperature was measured. After envenomation, different parameters such as bleeding time, clotting time, enzyme activities (creatine kinase, AST, ALP and ALT), electrolytes, plasma cholesterol and triglycerides were measured. Collected blood samples (2 mL) were centrifuged at 400 r.p.m. for 10 min to separate the plasma. Determination of the activity of U. chamae on the blood coagulation system (clotting and bleeding time) in rats. Bleeding time: For the determination of the bleeding time, the modified procedure of Mohamed et al. (1969) was used. Four hours after the treatment of the animals, the tail of each rat was gently pierced with a lancet. A piece of white filter paper was used to blot the blood gently from the punctured surface of the body. The readings were taken every 15 sec. The end point was reached when the paper was no longer stained with blood.
Clotting time: For the determination of the clotting time, the modified method of Igboechi and Anuforo (1986) was used; clotting time is the time required for a firm clot to be formed in fresh blood on a glass slide. The blood sample was collected from the rats via tail bleeding and a drop was placed on a clean plain slide; every 15 sec, the tip of an office pin was passed through the blood until a thread-like structure was observed between the drop of blood and the tip of the pin. The thread-like structure was an indication of a fibrin clot. The time was recorded. Determination of antipyretic activity: The method of Bisignano et al. (1994) was used to evaluate the antipyretic activity of the extract. The rats were fasted overnight and the rectal temperature was recorded using a digital thermometer with a rectal probe. The rectal temperature was recorded before and after envenomation. Blood sample collection and measurement of some haematological parameters: At the end of the experimental period, the animals were made inactive by chloroform anaesthetization. Blood samples were collected via cardiac puncture into EDTA bottles to prevent coagulation. The blood samples were centrifuged for five min, and results were read on the hematocrit reader for Packed Cell Volume (PCV), White Blood Cell (WBC) count, Red Blood Cell (RBC) count, hemoglobin level and platelets, as described by Baker and Silverton (1985). ENZYME ACTIVITY ASSAYS Creatine kinase activity assay: The activity of creatine kinase was determined according to the method described by Szasz et al. (1976). The Randox CK110 kit was used for the quantitative in vitro determination of the enzyme activity. The creatine kinase activity was calculated using the formula: U/L = 8095 × ΔA at 340 nm/min, where ΔA = change in absorbance. Alkaline Phosphatase (ALP) activity assay: The activity of this enzyme was measured as described by Schmidt and Schmidt (1963). A portion (0.5 mL) of ALP substrate was dispensed into labeled test tubes and equilibrated to 37°C for three min. At intervals of 2 min, 0.05 mL of each of the standard, control and sample were added to the respective test tubes and gently mixed. Deionized water was used as the reagent blank. The tubes and their contents were then incubated for 10 min. Following the same sequence as given above, alkaline phosphatase color developer was dispensed into the tubes and thoroughly mixed. The absorbance of each sample was read at 590 nm and recorded using a spectrophotometer. The activity of the enzyme was calculated thus: Enzyme activity (U/L) = (Absorbance of sample / Absorbance of standard) × value of standard. Alanine aminotransferase (ALT) activity assay: The measurement of ALT activity was as described by Schmidt and Schmidt (1963). Aspartate aminotransferase (AST) activity assay: AST activity determination was as described by Reitman and Frankel (1957). A portion (0.5 mL) of buffer was dispensed into all test tubes and 0.1 mL of distilled water, standard, control and sample were dispensed into the respective tubes, mixed and incubated for 30 min at 37°C. After incubation, 0.5 mL of 2,4-dinitrophenylhydrazine was dispensed into the respective test tubes, mixed and allowed to stand for 20 min at 25°C. A portion (5.0 mL) of 1.0 M sodium hydroxide was then dispensed into the tubes, mixed thoroughly and the absorbance read at 540 nm after 5 min.
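The creatine kinase (kinetic) and ALP (end-point) calculations quoted above are simple arithmetic. The sketch below, in Python, just restates those two formulas; the factor 8095 and the sample/standard ratio come from the text, while the function names and the example numbers are illustrative assumptions, not values from the study.

def creatine_kinase_activity(delta_A_per_min):
    # Kinetic assay: U/L = 8095 x (change in absorbance at 340 nm per minute)
    return 8095 * delta_A_per_min

def endpoint_activity(abs_sample, abs_standard, standard_value):
    # End-point assays (ALP, and the cholesterol/triglyceride kits below):
    # activity or concentration = (A_sample / A_standard) x value of standard
    return (abs_sample / abs_standard) * standard_value

# Hypothetical example readings
print(creatine_kinase_activity(0.012))       # about 97 U/L
print(endpoint_activity(0.35, 0.50, 100.0))  # 70 U/L against a 100 U/L standard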
Determination of plasma triglycerides: The plasma triglyceride level was measured according to the method described by Tietz (1990). The Randox TR 210 assay kit was used for the quantitative in vitro determination of triglycerides in plasma. The triglyceride concentration was calculated using the formula: (Absorbance of sample / Absorbance of standard) × concentration of standard (mmol/L). Determination of plasma cholesterol: The plasma cholesterol was measured according to the method described by Richmond (1973). Randox CH 200 kits were used for the quantitative in vitro determination of cholesterol in plasma using a standard. The concentration of cholesterol in the sample was calculated by the formula: (Absorbance of sample / Absorbance of standard) × concentration of standard (mmol/L). Estimation of plasma total protein and albumin: The blood plasma obtained from centrifuging was used for the estimation of total protein and albumin following the methods described by Gornal et al. (1949) and McPherson and Everad (1972), respectively. Estimation of plasma electrolytes: Plasma sodium ion was determined as described by Maruna (1958). Potassium ion was estimated following the method of Terri and Sessin (1958). The method of Skeggs and Hochstrasser (1964) was followed in the estimation of chloride ion. Determination of plasma urea and uric acid: These were determined following standard methods. Plasma urea was determined according to the procedure outlined by Carl et al. (2006). Similarly, the method of Trinder (1969) was adopted in the estimation of plasma uric acid. Plasma creatinine level estimation: Blood plasma creatinine was determined as described by Jaffe (1957). Measurement of the DPPH free radical scavenging activity of U. chamae: The free radical scavenging activity of the plant extract was measured employing the modified method of Blois (1985). To a portion (1 mL) each of the different concentrations (1.0, 0.5, 0.25 and 0.625 mg/mL) of extract or standard (quercetin) in a test tube was added 1 mL of 0.3 mM DPPH in methanol. The mixture was vortexed and then incubated in a dark chamber for 30 min, after which the absorbance was measured at 517 nm against a DPPH control containing 1 mL of methanol in place of the plant extract. The percentage scavenging activity was calculated using the expression: Percentage scavenging activity = [(Absorbance of control - Absorbance of sample) / Absorbance of control] × 100. Statistical analysis: The mean value ± S.E.M. was calculated for each parameter. Results were statistically analyzed by one-way Analysis of Variance (ANOVA) followed by Bonferroni's multiple comparison test; p<0.05 was considered significant. RESULTS Clotting time: The result of the effect of U. chamae against Naja nigricollis venom on blood clotting time is presented in Table 1. The clotting time of the control (group 3) is lower when compared to the extract-treated groups. Bleeding time: Bleeding time is the time taken for bleeding to stop. As presented in Table 2, the bleeding time of group 3 (control), which was not treated with any drug, was higher (140.00±2.774), indicating a deleterious effect of the snake venom. The plant extract treated group (group 4) had a reduced bleeding time when compared with group 3. Antipyretic activity: As shown in Table 3, the rectal temperature of group 3 (control), which was not treated but envenomed, is higher than that of group 4, treated with the plant drug. This is an indication of the antipyretic activity of the plant.
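The DPPH calculation in the methods above is a single ratio per concentration; in practice such concentration-inhibition pairs are often reduced to an IC50, which appears to be the kind of summary value (0.355 and 0.296 mg/mL) quoted later for the extract and quercetin. A minimal Python sketch of both steps follows; the absorbance readings and the linear-interpolation IC50 are illustrative assumptions, not data or methods taken from the study.

import numpy as np

def percent_scavenging(abs_control, abs_sample):
    # (A_control - A_sample) / A_control * 100, as in the expression above
    return (abs_control - abs_sample) / abs_control * 100.0

def ic50(conc_mg_ml, scavenging_pct):
    # Concentration giving 50% scavenging, by linear interpolation (assumed summary statistic)
    order = np.argsort(scavenging_pct)
    return float(np.interp(50.0, np.asarray(scavenging_pct)[order],
                           np.asarray(conc_mg_ml)[order]))

# Hypothetical readings at 517 nm for a four-point dilution series
conc = [0.125, 0.25, 0.5, 1.0]          # mg/mL
a_control = 0.80
a_samples = [0.62, 0.48, 0.33, 0.20]
sc = [percent_scavenging(a_control, a) for a in a_samples]
print(sc)              # scavenging rises with concentration
print(ic50(conc, sc))  # interpolated IC50 for this hypothetical series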
Hematological parameters: The hematological parameters were significantly (p<0.05) reduced in group 3 (envenomed rats) when compared with the extract-treated group 4 (Table 4). The WBC was the most reduced when compared with the other hematological parameters. This therefore means that the extract neutralized the biological effect induced by the venom in the extract-treated group, which had increased HGB, WBC, RBC and PCV. Lipid profile: The triglyceride and cholesterol levels were reduced by the snake venom in rats (group 3), as shown in Table 5. This reduction for cholesterol was statistically significant (p<0.05) when compared with the extract-treated group 4. The extract (U. chamae) had some measure of protection against lipolysis induced by the snake venom. Enzyme activity assay: The result of the effect of U. chamae extract on the activities of the enzymes assayed is presented in Table 6. The snake venom induced increased activity (group 3). The extract-treated groups, i.e., 4 and 5, had reduced activity for all the enzymes assayed. These reductions were statistically significant (p<0.05) when compared with the control (group 3). Changes in protein and some blood constituents: As presented in Tables 7 and 8, total protein, albumin, creatinine and urea were reduced by the snake venom in rats. The uric acid concentration was increased, and this was reduced by the plant extract in the extract-treated group 4. The electrolytes also increased in the envenomated group 3, except for potassium. Antioxidant activity of the extract: The result of the antioxidant activity of the plant extract is presented in Table 9. The free radical scavenging activity of the plant extract (0.355 mg/mL) is comparable with that of the standard quercetin used (0.296 mg/mL). DISCUSSION Snake bite is an important cause of morbidity and mortality and is one of the major health problems in Nigeria. The most effective and acceptable therapy for snake bite victims is the immediate administration of antivenom following envenomation (Mahanta and Mukkerjee, 2001). The orthodox medical treatment of snake venom poisoning is limited to the use of anti-venom, which is prepared from animal sera. Although the use of plants against the effects of snake bites has been recognized, more scientific attention has been given to it only over the last 20 years (Alam and Gomes, 2003). Like plants, snake venom can also be considered a sophisticated laboratory of biotechnology. The search for bioactive molecules in plants used in folk medicine has been growing in the past few years. In this study we have reported that U. chamae neutralized some biological effects induced by Naja nigricollis venom, as reflected in the various parameters measured, such as blood clotting time, bleeding time, some hematological parameters, the lipid profile and enzyme activities. The measurement of these parameters in plasma is of importance in the assessment of the pathophysiological state of snake bite victims. The results suggest that Naja nigricollis venom can disturb rat metabolism. The study showed that the extract of U.
was effective in neutralizing the lethality and the effects of Naja nigricollis venom in animals. Several workers have studied the ability of plants as well as their purified fractions to inhibit biological activities of snake venoms (Melo et al., 1994; Maiorano et al., 2005; Oliveira et al., 2005; Cavalcante et al., 2007; Lomonte et al., 2009; De Paula et al., 2010). However, only a few have investigated the neutralizing mechanism of their action. In some cases a direct interaction with catalytic sites of enzymes, or with metal ions which are essential for enzyme activities, may be involved (Borges et al., 2005; Nunez et al., 2005). Regardless of the precise mechanism, U. chamae appears to be a promising chemical agent for use as first-aid treatment, or in combination with antiserum. Many snake venoms are known to cause pathological effects associated with haematological disturbances leading to incoagulability of blood. Some local tissue necrosis always accompanies envenomation from this snake species. Spontaneous bleeding and coagulation disturbances are some of the haematological effects of Naja nigricollis in patients (Warrell et al., 1976). The fundamental difference between blood clotting and bleeding determinations is that bleeding is associated with the integrity of blood vessels, while clotting is a function of clotting factor deficiency.

The decrease in clotting time observed in Table 1 establishes the fact that treatment of animals with the extract/venom mixture abolished the blood incoagulability. The capacity of plasma to form thrombin is also relevant in the blood coagulation system. All these blood characteristics are affected by the toxic components of Naja nigricollis venom (Denson et al., 1992). In the envenomated animals (group 3) that were not treated with extract there was a significant (p < 0.05) reduction in clotting time due to the presence of venom. In groups 4 and 5, treated with U. chamae extract, the extract neutralized this effect of the venom and the clotting time was maintained at the normal level when compared with the control groups 1 and 2. Bleeding time is associated with the integrity of blood vessels, and the venom is known to cause pathological disturbances leading to incoagulability of blood. In the present study, the bleeding time increased significantly (p < 0.05) in the envenomated animals in group 3 that were not treated with extract. The increase in bleeding time in this group established the blood incoagulability (Denson et al., 1992).

Pro-coagulants commonly found in cobra venom cause blood coagulation through their thrombin-like effect and can also cause the activation of factor X to Xa. The anticoagulants prevent blood from clotting essentially through venom-induced fibrinolysis or fibrinogenolysis, or through the action of phospholipase on platelets or plasma phospholipids. Both types of component may be found in the same venom. The conflicting results obtained in the clotting and bleeding times (Tables 1 and 2) could be a result of the presence of pro-coagulant and anticoagulant in the same venom. Table 3 presents the results obtained from the measurement of the antipyretic activity of U. chamae extract.
Victims of Naja nigricollis envenomation also present fever as one of the symptoms of envenomation (Warrell et al., 1976). Rectal temperature increased significantly (p < 0.05) in group 3 rats that received Naja nigricollis venom compared with the value obtained before envenomation. This effect was neutralized in groups 4 and 5. The result revealed the antipyretic activity of the plant.

As presented in Table 4, the Packed Cell Volume (PCV) of the envenomed rats was reduced significantly (p < 0.05) when compared with non-envenomed ones. This is in consonance with the report of Mwangi et al. (1995). White blood cells are effectors of the immune system; in group 3 there was a significant reduction in the WBC compared to group 4, which received venom and extract. This suggests that the plant extract must have combated the venom directly, without cells of the immune system producing effector cells. Pathological properties of Naja nigricollis are mainly associated with haematological disturbances leading to hemorrhage. The platelet inhibition was not due to either serine proteases or metalloproteinases which may be present in the venom. In this study it was demonstrated that the venom effectively inhibits clot formation and platelet aggregation. The reduction in the number of platelets in blood also leads to spontaneous bruising and prolonged bleeding, as observed in Table 2. Haemoglobin is the principal molecule responsible for the transport of both oxygen and carbon dioxide in blood; in group 3 the haemoglobin level decreased due to the effect of the venom compared to groups 4 and 5.

The results of the effects of U. chamae extract on the plasma lipid profiles in rats after Naja nigricollis envenomation are presented in Table 5. There are few reports on the effects of snake venom on plasma lipids; reduced cholesterol and triglyceride levels were observed in group 3 rats. This result suggests that the snake venom might have mobilized lipids from adipose and other tissues. Lipolytic enzymes, which are present in many snake venoms, could have split tissue lipids, with the liberation of free fatty acids. It has also been reported that the increase in total plasma lipid levels caused by administration of snake venom, and the disturbance of lipid metabolism, could be attributed to liver changes and destruction of the cell membranes of animal tissues (Abdul-Nabi et al., 1997). However, plasma cholesterol and triglycerides have been shown to decrease following injection of some other venoms in rats (Meier and Stocker, 1991). In this study, the plant extract offered some protection against the lipolytic activity of the venom; cholesterol was higher in the extract-treated group than in the control (group 3).
As presented in Table 6, there was a significant (p < 0.05) increase in the activity of the enzymes assayed in group 3 rats when compared with groups 4 and 5, which received an oral dose of the plant extract; in these groups the activity of the enzymes was reduced, suggesting a protective effect of the plant. The increase in enzyme activities in group 3 might be due to muscle necrosis causing the enzymes to leak out of the muscle into the plasma. The present study revealed (Table 7) that the injection of crude venom of Naja nigricollis caused a reduction in total protein, albumin, urea and creatinine, and an increase in uric acid concentration, in envenomated rats (group 3), but these blood constituents were increased in the extract-treated groups. It might be assumed that the reduced levels of these constituents could be due to disturbances in renal function as well as haemorrhages in some internal organs. In addition, the increase in vascular permeability and haemorrhages in vital organs due to the toxic action of various snake venoms were described by Meier and Stocker (1991) and Marsh et al. (1997). The increased values of these blood constituents in the extract-treated groups 4 and 5 are an indication of the protective effect of U. chamae.

There are few investigations regarding the effect of snake venoms on serum electrolytes. Mohammed et al. (1964) reported an initial decrease in blood sodium and an initial increase in blood potassium following W. aegyptia envenomation. Similar observations were seen with venoms of both W. aegyptia and E. coloratus in rats (Al-jammaz, 1995). In the present study the snake venom produced increased sodium and chloride levels and a reduction in potassium in the envenomated rats (Table 8). The disturbance in electrolyte levels might be due to acute renal failure and glomerular tubular damage (group 3). The extract-treated group 4 showed reductions in electrolyte levels, implying neutralization of the venom toxicity.

The DPPH (2,2-diphenyl-1-picrylhydrazyl) radical is considered to be a model lipophilic radical. The radical scavenging activity of U. chamae was determined from the reduction in absorbance at 517 nm due to scavenging of stable DPPH free radicals. The scavenging effect of the leaf extract on the DPPH radical is shown in Table 9. This positive DPPH test suggests that the sample is a free radical scavenger. The neutralizing effect of the plant on the snake venom toxicity could as well be linked to the free radical scavenging properties of U. chamae extract. The free radical scavenging activity of the plant is concentration dependent, and this is a good attribute of pharmacological agents.

In conclusion, the present experimental results indicate that U. chamae extract was effective in neutralizing the toxic effects of Naja nigricollis venom and could serve as an alternative or complementary treatment strategy for envenomation by Naja nigricollis. Further experiments could address the fractionation of the U. chamae extract in order to identify the bioactive compounds responsible for these observations, their efficacy, safety and the antiophidian mechanism of action, which could possibly lead to the development of pharmaceutical formulations for treating snake bite victims.

Table 1: Effect of U. chamae extract on clotting time after Naja nigricollis envenomation.
Table 2: Effect of U. chamae extract on bleeding time after Naja nigricollis envenomation.
Table 3: Antipyretic activity of U. chamae extract in Naja nigricollis envenomation (rectal temperature, °C). Values in the same column with the same superscript are considered not significant (p > 0.05); values in the same column with a different superscript are considered significant (p < 0.05) when compared with the venom control (group 3).
Table 4: Effect of U. chamae extract on some hematological parameters in rats.
Table 5: Effect of U. chamae extract on two plasma lipid profiles in rats after Naja nigricollis envenomation. Values in the same column with the same superscript are considered not statistically significant (p > 0.05); values in the same column with a different superscript are statistically significant (p < 0.05) when compared with the control (group 3).
Table 6: Effect of U. chamae extract on enzyme activities after Naja nigricollis envenomation.
Table 7: Changes in plasma constituents of rats following envenomation and treatment with U. chamae extract. Values in the same column with the same superscript are considered not statistically significant (p > 0.05); values in the same column with a different superscript are statistically significant (p < 0.05) when compared with the control (group 3); TP: total protein; Alb: albumin.
Table 8: Effect of U. chamae extract on plasma electrolytes after snake envenomation. Values in the same column with the same superscript are not statistically significant (p > 0.05); values in the same column with a different superscript are statistically significant (p < 0.05) when compared with the control (group 3).
6,307
2013-04-25T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
On the Geometric Unification of Gravity and Dark Energy In the framework of Finslerian geometry, we propose a geometric unification between traditional gauge treatments of gravity, represented by a metric field, and dark energy, which arises as a corresponding gauge potential from the single SU(2) group. Furthermore, we study the perturbation of gravitational waves caused by dark energy. This proposition may have far reaching applications in astrophysics and cosmology. In this paper we use the framework of Finslerian geometry [24-29, 37, 38] to propose a geometric unification between traditional gauge treatment of gravity, represented by the metric field g μν , and dark energy, which appears as a corresponding gauge potential B μ , arising naturally from the gauge treatment of the single SU (2) group [30][31][32]. We demonstrate that the dark energy would result naturally as a geometric effect of Randers space, rather than being an additional suggestion. Randers space, as a special kind of Finsler space, was first proposed by G. Randers [37]. A generalized Friedmann-Robertson-Walker (FRW) cosmology of Randers-Finsler geometry has been also suggested [39][40][41]. Furthermore, we study the dark energy perturbation of gravitational wave, and discuss some wider potential applications of this in astrophysics and cosmology. On the Finslerian Geometric Unification of Gravity and Dark Energy In the framework of a Riemannian approach, where two nearby particles are subject to the traditional gravitational field g μν (free-falling), the Equation of Deviations of Geodesics (EDG) takes the form: D 2 n μ ds 2 + R μ νκλ n ν n κ n λ = 0 ( 1 ) where n ν , n κ represent the deviation vector; D ds is the covariant derivative; and R μ νκλ is the Riemann tensor. The equation of motion of two nearby particles subjected to the action of the massless vector B-field of dark energy is obtained by introducing the Lorentz term into the geodesics equation (1) and replacing the four-acceleration a μ = du μ ds by Du a ds . where B μν represents the dark energy field strength: B μν = B ν,μ − B μ,ν ; and B μ is the dark energy gauge potential arising from the single SU(2) group [30]. Equation (2) thus modifies (1) in the general form: where and β is a constant β = 1 16πG . The term Φ μ in relation (3) describes the external interaction between two nearby mass particles due to dark energy. For Φ μ = 0, relation (3) is reduced to (1), where only the gravitational field is present. Φ μ also governs the relative acceleration between two nearby particles in the flat space with R a βγ δ = 0. From the above we conclude that Riemannian geometry does not provide a sufficient framework for the geometric unification between gravity and the dark energy. The disadvantage of Riemannian geometry is that the equation of motion of a particle subject to the action of a gravitational and dark energy field doesn't occur physically from the geometry of space-time and it is necessary to be imposed as an independent axiom. Riemannian geometry can be extended through the introduction of the Finsler space [25]. The metric function of the Finsler space is given by: where g μν is the Riemannian metric tensor and B μ is the dark energy vector potential. The metric f μν of Finslerian space [29,37], is given by where g μν is the Riemannian metric tensor and h μν a metric tensor, which is given by where A space endowed with the metric tensor (4) is called a Randers space [29,37]. 
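In Finsler geometry the fundamental metric tensor is recovered from the metric function F by f_ij(x, y) = (1/2) ∂²F²/∂y^i∂y^j. As a purely illustrative check of this construction (not the paper's specific B_μ dark-energy field), the sympy sketch below evaluates it for a toy two-dimensional Randers function F = sqrt(a_ij y^i y^j) + b_i y^i with a flat Riemannian part and a constant one-form; both choices are assumptions made only for the example.

```python
import sympy as sp

# Fibre (velocity) coordinates of a 2D toy example
y1, y2 = sp.symbols('y1 y2', real=True, positive=True)
y = [y1, y2]

# Illustrative ingredients (hypothetical, not the paper's fields):
# flat Riemannian part a_ij = delta_ij, constant one-form b_i = (eps, 0) with |eps| < 1
eps = sp.Rational(1, 10)
alpha = sp.sqrt(y1**2 + y2**2)   # sqrt(a_ij y^i y^j)
beta = eps * y1                  # b_i y^i
F = alpha + beta                 # Randers metric function

# Fundamental Finsler metric tensor: f_ij = 1/2 * d^2(F^2) / dy^i dy^j
f = sp.Matrix(2, 2, lambda i, j: sp.simplify(sp.diff(F**2, y[i], y[j]) / 2))
sp.pprint(f)
```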
The presence of the dark energy component h μν in space-time causes the isotropy of space to break down. The geodesic equation for this space is: Then, from Λ μ can be derived the connection coefficient Λ κλ μ of the space, similar to the Berwald connection coefficients discussed in [28]. By analogy to the Berwald curvature tensor [28], we may associate with connection coefficient Λ κλ μ a curvature tensor. where R i hj k is the Riemannian curvature tensor that came from the metric g μν , and B i hj k is given by (13) where The equation of geodesic deviation is given by hj k x h x k , z j is the deviation vector and x is the tangent vector. AsH i hj k has a part independent of velocity x m , we have the relation By (16), (12) becomes By (17) we derive the action of the system The variation of action's integral leads to the field equations where R mn = R i mni , R = g mn R mn and T mn is the energy-momentum tensor. The (20) are the field equations of the "gravito-dark energy" of the Randers space with respect to the condition (16). From a physical point of view the curvature H ihj k can be considered as a "gravito-dark energy" curvature of the space. For (12) and (15) we observe that the deviation equation z j has two terms: a pure gravitational deviation, represented by R i hj k in the curvature tensor equation (12), which we would observe if there was no dark energy field, and the admixture of gravitational and dark energy deviations, represented by B i hj k in the curvature tensor equation (12). We examine the following cases: For B i hj k = 0 we have where the deviation equation (15) becomes a Riemannian one. For B i hj k = 0, R i hj k = 0 the associated curvature tensor of Randers spaceH i hj k is derived from the connection coefficients The last relation shows that dark energy field is incorporated in the geometry of space. The second part B i hj k of the full curvatureH i hj k (12) describes the dark force that two freely falling particles of masses m 1 and m 2 would exercise on each other. In such a case, the dark force would result naturally as a geometrical effect and it would not be necessary for us to impose it in addition. Finally, for R i hj k = 0 the equations of the geodesic deviations are governed by the dark energy and relation (15) is reduced to: In this case the first term of the Randers tensor corresponds to the Lorenz metric. The metric function F (x,ẋ) can, then, be expressed in the form: This metric function is interesting for a possible linear theory caused by the dark energy field. The Friedman Equation for a Linearized Dark Energy Vector Field In some cases it is useful from a physical point of view to consider a vector field in the form y i /i, i = 1, 2, 3, 4 and the induced Finslerian metric tensor gives rise to the osculating Riemannian metric tensor g μν (x) = f μν (x, y(x)) [42]. The Osculating Riemannian approach (for details see [42]), can be specialized for the tangent vector field y(x) of the cosmological fluid flow lines. We are interested in producing the Einstein field equations as in [40]. After calculating the connection and the curvature for the Riemannian osculating metric g μν (x) we are lead to [43] where all the quantities in (25) are functions of (x, y), y = y(x). The energy-momentum tensor for the signature (+, −, −, −) is defined to be T μν = −P g μν + (μ + P )y μ y ν (26) where P is the pressure and μ is the energy density of an ideal cosmic fluid. 
In order to investigate the FRW cosmology, we set the Riemannian metric g μν in (24) to be the Robertson-Walker one where t is the cosmic proper time r, θ, φ, the comoving spherical coordinates, k = 0, ±1 and a R(t) the scale factor of the expanding volume. The new metric function σ (x, y) = a μν (x)y μ y ν (29) If we fix the direction y =ẋ then σ (x,ẋ) = 1. The vector field B i stands for a weak dark energy vector field |B i | 1, incorporated to the geometry of space-time as an intrinsic characteristic. This field would most naturally be expected to point in the same direction with the tangent vectors of the fluid flow lines [44]. As a result it will have only a time like component which can be expressed as a function of the proper time B i = (B 0 , 0, 0, 0). We can approximate the 0-component of the dark energy vector field B i at first order of Taylor type approximation [39,40]: Since all of the other components of the dark energy vector field vanish, only the diagonal elements of the metric and the Ricci tensor survive. Under the assumption of a weak Lorentz violation (LV) as in [39,41] we can restrictḂ(t 0 ) to be small enough (Ḃ(t 0 ) → 0) considering an almost constant value of the field. In virtue of the metric g μν (x) = f μν (x, y(x)) [42], we are able to calculate the Christoffel symbols and the curvature (for details see [39,40]). The Ricci tensors L μν can be approximated for (Ḃ(t 0 ) → 0) and this implies the following components: The substitution of (26) to the field equations (25) implies the following equations at the weak field limitR after subtracting (32) from (33) we obtain the Friedman-like equation The previous equation is similar to the one derived from the Robertson-Walker metric in the Riemannian framework, apart from the extra termṘ RḂ 0 . We associate this extra term to the present anisotropic Universe's dark energy. The B i vector field reflects a preferred direction in every tangent space and mimic's possible LV [39,41]. The Dark Energy Perturbation of Gravity Wave As an extension of the theory of gravitational waves described by General Relativity, we introduce a Finslerian metric, representing the Finslerian perturbation of Riemannian metric [33][34][35] f μν (x, y) = g μν (x) + εθ μν (x, y), |ε| 1 (35) where g μν is the Riemannian metric tensor and θ μν (x, y) is the Finslerian perturbation to the Riemannian metric tensor. Metric tensor (35) can be called a post Riemannian metric tensor [36]. Here, the Finslerian perturbation of Riemannian metric represents the dark energy perturbation of the gravity wave. This observation invites us to consider a Finslerian manifold, whose metric function contains two massless dark energy fields with 4-potential vectors, B (1) i and B (2) i , in the following form: where V i = dx i /ds is a 4-velocity of a particle, β is a constant, φ = λB i(1) B (2) i is the interaction term of two dark energy fields, λ is a constant, and Λ(x, V ) is an homogeneous function of 1st degree, assumed to be scalar in the Finslerian manifold [25,27]. The last term of (36) corresponds to the gravitational field induced by the interaction between the dark energy fields. It contains the information of the gravitational field caused by the interaction of the dark energy field. This gravitational field affects the motion of every physical object in space-time. There are regions in space-time similar to those discussed in [27]. There is a region U on the manifold "bent" by the gravitational fluctuations between the dark energy vector fields. 
This additional curvature causes other gravitational effects like: test particle-region U, dark energy fields-region U and dark energy fields-test particle-region U effects. The gauge transformation that relates a pair of local dark energy vector fields, whose regions of definition in space-time are typically different, is given as follows B (2) i ∂ i denotes the partial differentiation with respect to V i . Applying (5) to (36) we obtain the following metric tensor for this field: where g μν is the Riemannian metric tensor corresponding to the gravitational wave perturbation (|g μν | 1) to Minkowski metric n μν = diag(−1, −1, −1, +1) [33][34][35], and h μν describes the interactions between the gravity wave and dark energy, and dark energy to itself. Conclusion The equation of deviations of geodesics in Finsler space allows incorporation of dark energy in the geometry of space. We demonstrate that the dark energy would result naturally as a geometric effect and it would not be necessary for us to impose it in addition. In a way the geometric unification between gravity and dark energy massless vector field is achieved. We also find that, whenever the dark energy component h μν is present in space-time, the isotropy of space breaks down. In the framework of Finsler space, we can also predict a dark energy perturbation of gravitational waves.
2,975
2012-04-19T00:00:00.000
[ "Physics" ]
Combination of Talazoparib and Calcitriol Enhanced Anticancer Effect in Triple−Negative Breast Cancer Cell Lines Monotherapy for triple−negative breast cancer (TNBC) is often ineffective. This study aimed to investigate the effect of calcitriol and talazoparib combination on cell proliferation, migration, apoptosis and cell cycle in TNBC cell lines. Monotherapies and their combination were studied for (i.) antiproliferative effect (using real−time cell analyzer assay), (ii.) cell migration (CIM−Plate assay), and (iii.) apoptosis and cell cycle analysis (flow cytometry) in MDA−MB−468 and BT−20 cell lines. The optimal antiproliferative concentration of talazoparib and calcitriol in BT−20 was 91.6 and 10 µM, respectively, and in MDA−MB−468, it was 1 mM and 10 µM. Combined treatment significantly increased inhibition of cell migration in both cell lines. The combined treatment in BT−20 significantly increased late apoptosis (89.05 vs. control 0.63%) and S and G2/M populations (31.95 and 24.29% vs. control (18.62 and 12.09%)). Combined treatment in MDA−MB−468 significantly increased the S population (45.72%) and decreased G0/G1 (45.86%) vs. the control (26.79 and 59.78%, respectively). In MDA−MB−468, combined treatment significantly increased necrosis, early and late apoptosis (7.13, 33.53 and 47.1% vs. control (1.5, 3.1 and 2.83%, respectively)). Talazoparib and calcitriol combination significantly affected cell proliferation and migration, induction of apoptosis and necrosis in TNBC cell lines. This combination could be useful as a formulation to treat TNBC. Introduction Breast cancer (BC) is the most common type of cancer among women [1]. Molecular profiling of BC have shown subgroups of breast cancer that have different genetic makeups as well as clinical outcomes, calling for the development of new drugs [2,3]. Triple−negative breast cancer (TNBC) is a subtype of advanced and one of the most aggressive types of BC. It is described by the absence of progesterone receptors (PR), estrogen receptors (ER), and human epidermal growth factor receptor 2 (HER2) in the breast tumor [4,5]. TNBC patients have a higher chance of recurrence within three years after diagnosis, and the mortality rates appear to be higher throughout the next five years. TNBC accounted for 10 to 20% of all invasive BC. It has also been associated with a more advanced disease stage, high mitotic indices, higher grade, BC history in the family, and BRCA1 mutations [5]. Currently, standard treatments for BC involve targeted therapy toward receptors such as ER, PR and HER2, making it a less effective treatment option for TNBC patients [6]. Thus, to date, chemotherapy is the most effective systemic therapy for TNBC patients. However, increased metastasis, early recurrence, and poorer outcomes are still prevalent among these patients after chemotherapy [7,8]. Antiproliferative Effect of Talazoparib, Calcitriol and Their Combination in TNBC Cells The cytotoxicity of talazoparib, calcitriol and their combination in BT−20 and MDA −MB−468 was monitored for the cell index (CI) values for 96 h (Figures 1 and 2). After treatments, the CI values decreased in a time−dependent manner in both cell lines. In BT−20, the CI values dropped to half of the total cells after being treated for 61 and 28 h with the concentration of 91.6 µM talazoparib and 10 µM calcitriol, respectively ( Figure 1 and Table 1). 
In MDA−MB−468, the CI values dropped to half of the total cells after being treated for 69 and 50 h with the concentration of 1 mM talazoparib and 10 µM calcitriol, respectively (Figure 2 and Table 2). All the treatments in BT−20 and MDA−MB−468 were also tested in MRC−5. Results showed that none of the treatments had an antiproliferative effect in MRC−5 up to 96 h (Figures 3 and 4), indicating that these treatments targeted cancer cells and not normal cell lines. Data are represented as the mean ± SD (n = 3); the drugs were added after 24 h, and the pink line is the positive control (5% DMSO). Raw data can be found in Supplementary File S4.

Cell Migration Profile of TNBC Cells Treated by Talazoparib, Calcitriol and Their Combination
The migration profile in the BT−20 cell line was studied (Figure 5). Serum−free media served as the negative control (no cell migration was observed). Calcitriol showed a migration rate 8.3% lower than the untreated control (Figure 5). The combination of 91.6 µM talazoparib and 10 µM calcitriol significantly inhibited migration (39%) compared with the untreated control (p < 0.001). Additionally, the combined treatment significantly reduced the migration when compared with talazoparib (p < 0.001) and calcitriol monotherapy (p < 0.001) after 24 h. For the statistical analysis, data were compared in terms of the migration rate between the untreated control and the treatment groups by a one−way ANOVA post−hoc test (Tukey) using SPSS (for details of the statistical analysis, refer to Supplementary Files S5 and S6).

Talazoparib and Calcitriol Induced Apoptosis in BT−20 Cells
An apoptosis assay was performed to determine the effect of talazoparib, calcitriol and their combination on the BT−20 death rate after 24 h of treatment. For all the treatments, the early apoptosis rates were not significantly different when compared to the control (untreated cells): the rates of early apoptosis for talazoparib, calcitriol and their combination were 10.73 ± 6.2%, 10.25 ± 1.9% and 4.25 ± 1.2%, respectively, compared with 10.1 ± 1.8% in the untreated control. In the calcitriol−treated group, late apoptosis (p < 0.001) and necrosis (p < 0.001) were significantly higher when compared to the untreated control. In the combined treatment, only late apoptosis (p < 0.001) was significantly higher compared to the untreated control (the rate of late apoptosis in the calcitriol and combined treatments increased significantly to 21.47 ± 1.2% and 89.05 ± 2.6%, respectively) (Figures 7 and 8). Cells were treated with talazoparib (Panel B), calcitriol (Panel C) or the combined treatment (Panel D) for 24 h vs. the untreated control (Panel A), dual−stained with Annexin V−FITC and propidium iodide, and dot plots of BT−20 under the different treatments were generated. Each data set is a representative plot of three independent experiments (green dots represent viable cells, red dots early apoptosis, blue dots late apoptosis and brown dots necrosis), while percentages are the mean value of three independent experiments. Raw data can be found in Supplementary File S7.

Talazoparib and Calcitriol Induced Apoptosis in MDA−MB−468 Cells
An apoptosis assay was performed to determine the effect of talazoparib, calcitriol and their combination on the MDA−MB−468 death rate after 72 h of treatment. The results of Annexin V−FITC/PI dual staining demonstrated that the early and late apoptosis rates in MDA−MB−468 significantly increased in the talazoparib and combined treatment groups when compared with the control. In the talazoparib−treated group, early apoptosis (p < 0.001), late apoptosis (p = 0.003) and necrosis (p = 0.015) were significantly higher when compared to the untreated control.
In the combined treatment, early apoptosis (p < 0.001), late apoptosis (p < 0.001) and necrosis (p = 0.027) were significantly higher when compared with the untreated control group, whilst calcitriol did not statistically affect the necrosis (talazoparib and the combined treatment induced early apoptosis at rates of 40.87 ± 6.3% and 47.1 ± 2.9%, respectively, vs. 3.1 ± 1.7% in the untreated control group; the rate of late apoptosis in the control was 2.83 ± 0.7%, while talazoparib and the combined treatment induced 23.67 ± 8.4% and 33.53 ± 5.5%, respectively; early and late apoptosis in the calcitriol treatment group were 6.93 ± 4.4% and 6.97 ± 1.3%, respectively; the rate of necrosis was 1.5 ± 1.1% in the untreated control compared with 7.73 ± 3.3% and 7.13 ± 1.2% in the talazoparib and combined treatments, respectively) (Figures 9 and 10). In comparison, the combined treatment showed significant differences compared with calcitriol in early apoptosis (p < 0.001), late apoptosis (p = 0.001) and necrosis (p = 0.041).

Cell Cycle Arrest in BT−20 Cells
Cell cycle analysis was performed to investigate the effect of the talazoparib and calcitriol treatments and their combination on the cell cycle phase distribution of BT−20. Following 24 h of treatment with talazoparib, the S (p < 0.001) and G2/M (p < 0.001) populations of the BT−20 were significantly higher compared to the untreated control. In the calcitriol−treated group, the G2/M (p < 0.001) population of the BT−20 significantly increased compared to the untreated control (the talazoparib−induced S and G2/M populations of the BT−20 were 28.16 ± 2.37% and 21.43 ± 0.58%, respectively, vs. the untreated control (18.62 ± 0.36% and 12.1 ± 0.91%); for the calcitriol treatment, the G2/M population of the BT−20 was 30.29 ± 0.99%; for the combined treatment, the S and G2/M populations of the BT−20 were 31.95 ± 0.7% and 24.29 ± 0.42%, respectively) (Figure 11). In comparison, the combined treatment showed significant differences when compared with talazoparib (p = 0.004) and calcitriol in the G2/M phase (p < 0.001).

Cell−Cycle Arrest in MDA−MB−468 Cells
Cell−cycle analysis was performed to investigate the effect of talazoparib, calcitriol and their combination on the cell cycle phase distribution in MDA−MB−468. Following 24 h of treatment, talazoparib (p < 0.001) and the combined treatment (p < 0.001) significantly increased the S phase population of the MDA−MB−468 compared to the untreated control.
In addition, G 0 /G 1 population of the MDA−MB−468 significantly decreased after talazoparib (p < 0.001), calcitriol (p = 0.004) and combined treatment (p < 0.001) compared with untreated control (Talazoparib and combined treatment induced S phase population in MDA−MB−468, (48.74% ± 1.87% and 45.72% ± 0.31%, respectively) compared to the untreated control (26.79% ± 1.61%). In addition, G 0 /G 1 population of the MDA−MB−468 were 41.78% ± 0.67% in talazoparib, 52.51% ± 1.26% in calcitriol and 45.86% ± 1.39% in combined treatment compared with untreated control (59.78% ± 1.46) (Figure 12). In comparison, the combined treatment showed significant differences compared with calcitriol in the G 2 /M phase (p = 0.005). control. Data were compared between the untreated control and the treatment groups by a one−way ANOVA post−hoc test (Tukey) using SPSS. Raw data and details of statistical analysis can be found in Supplementary File S10 and Table S6, respectively. Discussion Combined therapy is promising in treating cancers, especially when lower dosages of drugs are used, which in turn minimizes the side effects and cytotoxicity of long exposure of healthy tissues while still achieving the desired therapeutic results. Among PARP inhibitors, talazoparib has been reported to have the highest efficacy, which is 100−fold more potent than Olaparib [15]. A recent clinical study reported that 39% of breast cancer patients treated with talazoparib had developed anemia [24]. Another study also mentioned that talazoparib had a higher rate of alopecia and anemia [25]. This study hypothesized that the combination of talazoparib and calcitriol could improve the antiproliferation effect when compared to monotherapies. The rationale for the combination of talazoparib with calcitriol is based on the fact that calcitriol has been shown to be a potential PARP1 inhibitor, which could counterbalance the side effects of a high dose of talazoparib alone [18][19][20][21]. Additionally, one of the side effects of talazoparib is anemia, whilst evidence shows that calcitriol improves anemia and lessens the requirement for erythropoietin therapy [22,23]. Furthermore, targeting DNA repair mechanisms as one of the major contributors to cancer using PARP inhibitors seems promising for TNBC patients regardless of their BRCA mutations [16]. The combination of talazoparib with other drugs has been tested to treat different cancers. Children and adolescents have tolerated talazoparib in combination with temozolomide with refractory/recurrent solid tumors, including Ewing sarcoma [26]. Also, a combination of Palbociclib and talazoparib was proposed as a potential treatment for bladder cancer [27]. In seven TNBC cell lines, the combination of carboplatin and talazoparib showed synergistic effects [28]. However, to the best of our knowledge, this is the first combined therapy of talazoparib with calcitriol in treating TNBC cell lines. The synergistic effects of calcitriol with other drugs in their lower concentrations have been reported in previous studies. A lower dose of doxorubicin and genistein were needed to see growth inhibition in breast adenocarcinomas (MCF−7) and prostate carcinomas (LNCaP) when they were combined with calcitriol at a synergistic concentration [29]. Moreover, another study on the human pancreatic cancer model system Capan−1 showed a synergistic effect of calcitriol and gemcitabine when treated over a wide range of concentrations, in turn enhancing the inhibition of cell proliferation [30]. 
In the present study, both calcitriol and talazoparib monotherapy inhibited cell proliferation in MDA−MB−468 and BT−20. Calcitriol has been previously used both as monotherapy and combined therapy to treat TNBC cell lines [31,32]. In monotherapy, it inhibited TNBC proliferation through a mechanism involving the proinflammatory cytokines IL−1 β and TNF−α. The combination of calcitriol and celecoxib in two breast cancer cell lines showed a cooperative growth−inhibiting effect [33]. Additionally, calcitriol significantly inhibited the proliferation of SUM−229PE (a TNBC cell line) and MCF7 (ER−positive breast cancer cell) [31,32]. Additionally, the combined therapy of calcitriol and TNF−α had a greater cell growth inhibitory effect when compared to monotherapies in breast cancer cells [32]. The combination of calcitriol and menadione (a glutathione−depleting compound) also reduced tumor growth by improving the antiproliferative effect [34]. Furthermore, co−administration of calcitriol with curcumin or resveratrol significantly reduced the cell proliferation of the MBCDF−T cells (a TNBC), which were xenografted in nude mice [12]. However, in normal endothelial cells (EA.hy926 cells), the cell proliferation increased after calcitriol treatment [12]. In MDA−MB−231 cells (a TNBC cell), combined therapy of calcitriol with tyrosine kinase inhibitors notably inhibited cell growth [27]. Overall, calcitriol is a natural vitamin D receptor (VDR) agonist; hence, it can reduce cell viability in breast cancer cell lines that are VDR−positive [35]. In this study, analyzing the cell viability in two TNBC cell lines showed that IC 50 values of talazoparib in BT−20 cells were 11−fold lower than the IC 50 in MDA−MB−468. Evaluating the reduction in cell viability and the respective IC 50 after treatment with talazoparib monotherapy in various cancer cell lines, including breast cancer, has shown different efficiency, likely due to the genetic and epigenetic diversity among these cell lines [27,36]. Cell migration is a crucial step in cancer cell metastasis [37]. The present study observed that the combined treatment has a better anti−migration effect than monotherapies. Treatment with talazoparib for 24 h did not reduce the cell migration in BT−20. Calcitriol reduced cell migration in BT−20 when compared to untreated cells. The combination of talazoparib and calcitriol demonstrated a greater migration inhibitory effect (39%) than calcitriol monotherapy (8%) in this cell line. Similar results were observed from a study on human prostate cancer cell lines, PC−3 and DU145, where cell migration decreased and was inhibited after treatment with calcitriol [38]. Moreover, colon cancer cells (DLD−1 and HCT116) treated with calcitriol for 24 and 48 h showed a reduction in cellular migration by 62 and 80%, respectively [39]. A combination treatment of talazoparib and bazidoxifene on human ovarian cancer cells, SKOV3, has shown a greater inhibitory effect on cell migration than monotherapies [37]. In the present study, for MDA−MB−468, cell migration decreased in both mono− and combined therapy. The synergistic antitumorigenic activity of calcitriol with curcumin in MBCDF−T cell (a breast cancer cell line) also reported that all treatments resulted in a slower migration than vehicle−treated cells. However, when calcitriol and curcumin were combined, the reduction was seen to a greater extent than the monotherapies [12]. 
In the present study, calcitriol did not significantly increase the apoptosis in MDA−MB −468 cells, which is in agreement with some studies that also found no effect of calcitriol on apoptosis in human lung cancer, malignant pleural mesothelioma, and adrenocortical carcinoma cell lines [40][41][42]. These findings suggest that the antitumor effects of calcitriol in some TNBC cells involve cell cycle arrest and the inhibition of cell cycle progression [40]. Furthermore, the apoptosis analysis in the present study showed that the combined therapy of calcitriol and talazoparib on BT−20 and MDA−MB−468 increased the apoptotic cells. Similar results were reported when the percentage of apoptotic cells increased in talazoparib−loaded nanoemulsion−treated Adriamycin−resistant ovarian cancer cells (NCI/ADR−RES) [43]. However, in another study, the combination of talazoparib and Palbociclib did not increase the apoptosis in bladder cancer cell lines [27]. In the calcitriol−treated BT−20, cells were arrested at G2/M phase, whereas cells treated with the combination of talazoparib and calcitriol were mostly arrested in the S phase. Another study has reported that calcitriol arrested MBCDF−T cells in the G1−phase, whilst calcitriol combined with curcumin arrested cells in S−phase [12]. In the present study, talazoparib and calcitriol treatment increased G2/M arrest in BT−20. Similarly, a study also reported G2/M arrest in HCC1937 (a BRCA1 mutant) and MDA−MB−231 (a BRCA1 wild−type) TNBC cell lines after treatment with talazoparib [44]. Furthermore, co−administration of olaparib (a PARP inhibitor) with suberoylanilide hydroxamic acid in several TNBC cell lines resulted in a higher percentage of cell cycle arrest at the G2/M phase [45]. Additionally, a significant rise in the G2/M population was observed upon treatment with talazoparib in melanoma cells and Schlafen 11−deleted cancer cells [46,47]. In MDA−MB−468, the talazoparib and the combined treatment significantly increased the S phase, whereas calcitriol slightly increased in the S and G2/M phase with no significant difference. The higher concentration of talazoparib, which was needed for IC 50 in MDA−MD−468, may cause a higher portion of cells to be arrested in the S phase. A previous study has reported that among PARP inhibitors, talazoparib treatment created a higher percentage of cells in the S−phase [48]. Talazoparib has shown a diverse level of antiproliferative effects in different cell lines, indicating the impact of varying genetic backgrounds [49]. Similarly, in the present study, MBA−MD−468 was less sensitive to talazoparib when compared to BT−20 as indicated by significant differences in the IC 50 of 1 mM vs. 91.6 µM. Additionally, a recent study, which has tested the sensitivity of a panel of breast cancer cell lines to metformin, has reported that the cell lines' sensitivity varied greatly, as seen by variances in IC 50 that ranged from 0.83 to 10.13 mM [50]. There is no previous study on the combination of talazoparib and calcitriol with antagonist effects, but a study has reported that the mild antagonistic effects of piperaquine, pyronaridine, and naphthoquine may not cause any significant short−term clinical effect in treating malaria [51]. Then, it is essential to investigate the clinical benefit of our findings in pre−clinical studies. Reagents and Materials The talazoparib (BMN 673) was purchased from Selleckchem (Houston, TX, USA) (catalogue number S7048−10). 
Calcitriol was purchased from Tokyo Chemical Inc (Japan) (catalogue number C3078). The stock solution was prepared in dimethyl sulfoxide (DMSO; Nacalai Tesque Inc, Kyoto, Japan) at a 400 and 40 mM concentration for talazoparib and calcitriol, respectively, and stored at −20 • C. MRC−5, a normal fibroblast cell line developed from the lung tissue, was chosen as a control cell to monitor the effect of treatments on normal cells. MRC−5 was cultured in Eagle's Minimum Essential Medium (EMEM) supplemented with 10% fetal bovine serum (FBS) and 100 U penicillin / 0.1 mg/mL streptomycin. The justification for using this cell line was because it has been reported that about 60% of people diagnosed with metastatic breast cancer have lesions in either the lungs or the bones. Triple−negative disease is more likely than other types of breast cancer to metastasize to the lungs. Cell Lines and Cell Culture The culture media were changed every two days. Cells were passaged routinely. MDA−MB−468 and MRC−5 were detached using 0.25% trypsin-EDTA (Nacalai Tesque Inc, Tokyo), and BT−20 was detached using TrypLE Select Enzyme 10x solution (Gibco, ThermoFisher Scientific: Waltham, MA, USA) and counted using a hemocytometer. Measuring Antiproliferative Assay Using Real−Time Cell Analyzer (RTCA) The cell index (CI) was acquired by the RTCA iCELLigence™ system (ACEA Biosciences, Inc., San Diego, CA, USA). All monitoring was performed at 37 • C in a humidified atmosphere with regulated 5% CO 2 . E−plates (culture plates for the iCELLigence system) containing 100 µL culture medium per well were equilibrated to 30 • C, and the CI was set to zero under these conditions. For each cell type, 1 × 10 4 cells per well were seeded into 100 µL of media in a 16−well E−plate. The cells were allowed to settle down into the E−plate at room temperature for half an hour. The cells were monitored every 30 min using the xCELLigene system for 24 h. The media was then replaced with a new 100 µL of media containing 1% of the respective drug concentrations in every well of the E−plates. The vehicle control was included, containing 1% DMSO, as well as a positive control containing 5% DMSO with media. After treatment, the E−plates were incubated and monitored every 15 min for 72 h using the xCELLigene system. Data for cell adherence were normalized at 24 h [52,53]. To determine the IC 50 Tables 1 and 2. Please refer to Supplementary Files S1 and S2 to find out an example of this calculation. RTCA Data Analysis Software version 2.0 was used to calculate the IC 50 values (ACEA Biosciences, Inc.)). IC 50 of talazoparib and calcitriol was determined by an antiproliferative assay; it was used for the following experiments, including cell migration, apoptosis, and cell cycle analysis. Cell Migration Analysis The rate of cell migration was monitored using the real−time xCELLigence system, with fetal bovine serum (FBS) as a chemoattractant. A total of 160 µL of 10% FBS complete media was loaded with reverse pipetting skill into the lower chambers (LC) of the CIM−plate 16, and the last wells were loaded with serum−free media as a negative control. The upper chambers (UC) were assembled with the LC with a click sound according to the manufacturer's recommendation. According to manufacturer guidelines, a total of 50 µL of serum−free media was then loaded into the UC and placed in the CO 2 incubator for an hour for temperature equilibration to 37 • C according to manufacturer guidelines. 
The CIM−plate was placed in the xCELLigence system with only media, as a blank with no cells. The UC media was then replaced with 3 × 10 4 cells per well with new 100 µL of media containing 1% of the respective drug concentrations in serum−free media. The CIM−plate was equilibrated at room temperature to let the cells settle down for half an hour. Then, the CIM−plate was placed into the xCELLigence system to monitor cell migration every 15 min for 24 h in the CO 2 incubator. The doubling of cells is the main factor that determines the length of a migration assay. In this view, 24 h impedance measurements reflected the cell lines' migration from the upper chamber to the lower chamber. After 24 h, the RTCA software was stopped. Data were collected, and cell index curves were analyzed to determine the cell migration rate [52]. Apoptosis and Cell Cycle Analysis Using Flow Cytometry According to the manufacturer's instructions, apoptosis was measured using the Annexin V−FITC / PI Apoptosis Detection kit (Elabscience, Houston, TX, USA). BT−20 and MDA−MB−468 cells (1 × 10 6 cells/well) were treated for 24 and 72 h, respectively, with various concentrations of talazoparib, calcitriol and their combination in a 6−well plate (Based on the IC 50 obtained by the RTCA software). The cells were harvested and washed with chilled PBS in a polystyrene round−bottom tube prior to suspension in 100 µL Annexin−binding buffer (ABB). Subsequently, the cells were stained with 2.5 µL of Annexin V and 2.5 µL of propidium iodine (PI) staining solution for 15 min. Staining was performed in the dark at room temperature. A total of 400 µL of ABB was then added to the stained cells prior to analysis with the FACSCanto II flow cytometer (BD Bioscience, Franklin Lakes, NJ, USA). For each measurement, at least 10,000 cells were counted. For cell cycle analysis, BT−20 and MDA−MB−468 with a density of 1 × 10 6 cells/mL were treated with various concentrations of talazoparib, calcitriol and their combinations for 24 h in a 6−well plate. The treated cells were harvested and washed with chilled PBS, centrifuged (1500 rcf, 7 min), fixed with 70% cold ethanol overnight at 4 • C, and then centrifuged again. Subsequently, cells were washed with PBS to remove excess ethanol and stained with 500 µL of 20 µg/mL PI solution (BD Bioscience) for 30 min. Staining was performed in the dark at room temperature. The cellular DNA contents were identified for detection of the cell cycle distribution using the FACSCanto II flow cytometer (BD Bioscience) installed with ModFit LT (Verity Software House). At least 10,000 events were counted for each sample. Statistical Analysis All data were statistically analyzed using SPSS version 22. Data are shown as the mean ± standard deviation (SD) of three independent experiments. Multiple comparisons of the cell apoptosis and cell cycle assay were evaluated for statistical significance by the one−way ANOVA post−hoc test (Tukey), and data significance levels are shown as p < 0.05. Conclusions In this study talazoparib and calcitriol combination showed a proliferation inhibitory effect on two TNBC cell lines with BRCA wild−type and BRCA1 allelic loss. The combined therapy also has affected the cell migration, apoptosis, and necrosis rates in these cell lines. BT−20 was more sensitive to talazoparib. The combination of talazoparib and calcitriol could be useful as a new formulation to treat TNBC. An animal study should be carefully planned to confirm the results of this in vitro study. 
Future studies should also focus on improving and optimizing combined treatments for TNBC patients by determining the best duration, frequency, and concentration, as well as identifying and verifying biomarkers for patient selection and stratification. These data strongly suggest future clinical investigation of a combination of PARP inhibitors and calcitriol, which has the potential to dramatically improve the efficacy of innovative targeted therapy for TNBC patients with varying BRCA1/2 status.
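The statistical workflow described in the Methods (one-way ANOVA followed by a Tukey post-hoc test, significance at p < 0.05) was run in SPSS; a rough Python analogue is sketched below. The replicate values are made-up stand-ins for the three independent experiments and are not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical late-apoptosis percentages (n = 3 per group); not the study's data
groups = {
    "control":     [2.1, 3.0, 3.4],
    "talazoparib": [22.5, 25.1, 23.4],
    "calcitriol":  [20.3, 21.9, 22.2],
    "combined":    [86.9, 89.5, 90.7],
}

# One-way ANOVA across the four groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey HSD post-hoc pairwise comparisons at alpha = 0.05
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```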
6,928.6
2022-08-29T00:00:00.000
[ "Biology" ]
Cosmology in $f(R,L_m)$ gravity In this letter, we investigate the cosmic expansion scenario of the universe in the framework of $f(R,L_m)$ gravity theory. We consider a non-linear $f(R,L_m)$ model, specifically, $f(R,L_m)=\frac{R}{2}+L_m^n + \beta$, where $n$ and $\beta$ are free model parameters. Then we derive the motion equations for flat FLRW universe and obtain the exact solution of corresponding field equations. Then we estimate the best fit ranges of model parameters by using updated $H(z)$ datasets consisting of 57 points and the Pantheon datasets consisting of 1048 points. Further we investigate the physical behavior of density and the deceleration parameter. The evolution of deceleration parameter depicts a transition from deceleration to acceleration phases of the universe. Moreover, we analyze the stability of the solution of our cosmological model under the observational constraint by considering a linear perturbation. Lastlty, we investigate the behavior of Om diagnostic parameter and we observe that our model shows quintessence type behavior. We conclude that our $f(R,L_m)$ cosmological model agrees with the recent observational studies and can efficiently describe the late time cosmic acceleration. I. INTRODUCTION Recent observations of type Ia supernovae [1,2] together with observational studies of the Sloan Digital Sky Survey [3], Wilkinson Microwave Anisotropy Probe [4], Baryonic Acoustic Oscillations [5,6], Large scale Structure [7,8], and the Cosmic Microwave Background Radiation [9,10] indicates accelerating behavior of expansion phase of the universe.The standard cosmology strongly supported the dark energy models as resolution of this fundamental question.The most prominent description of dark energy is the cosmological constant Λ that can be associated to the vacuum quantum energy [11].Even though cosmological constant Λ fits well with observational data, it is suffering with two major issues namely coincidence problem and cosmological constant problem [12].Its value obtained from Particle Physics has discrepancy of nearly 120 orders of magnitude with its value required to fit the cosmological observations.Another promising way to describe the recent observations on cosmic expansion scenario of the universe is to consider that the Einstein's general relativity models break downs at large cosmic scales and a more generic action characterizes the gravitational field.There are several ways to generalize the Einstein-Hilbert action of general relativity.The theoretical models in which the standard action is replaced by the generic function f (R), where R is Ricci scalar, introduced in [13][14][15].The description of late time expansion scenario can be achieved by f (R) gravity [16] and the constraints of viable cosmological models have been explored in [17,18].The viable f (R) gravity models in the context of solar system tests do exist [19][20][21][22].Observational signatures of f (R) dark energy models along with the solar system and equivalence principle constraints on f (R) gravity have been presented in [23][24][25][26][27]. Another f (R) models that unifies the early inflation with dark energy and passes through local tests have been discussed in [28][29][30].Moreover, one can check the references [31][32][33] for various cosmological implications of f (R) gravity models. 
An extension of the f(R) gravity theory that includes an explicit coupling of the matter Lagrangian density $L_m$ with the generic function f(R) was proposed in [34]. As a consequence of this matter-geometry coupling, an extra force orthogonal to the four-velocity vector appears and the motion of massive particles becomes non-geodesic. This model was extended to the case of arbitrary couplings in both matter and geometry [35]. The cosmological and astrophysical implications of non-minimal matter-geometry couplings have been extensively investigated in [36-40]. Recently, Harko and Lobo [41] proposed a further generalization of matter-curvature coupling theories called f(R, L_m) gravity theory, where f(R, L_m) represents an arbitrary function of the matter Lagrangian density $L_m$ and the Ricci scalar R. The f(R, L_m) gravity theory can be considered as the maximal extension of all the gravitational theories constructed in Riemann space. The motion of test particles in f(R, L_m) gravity theory is non-geodesic, and an extra force orthogonal to the four-velocity vector arises. The f(R, L_m) gravity models admit an explicit violation of the equivalence principle, which is highly constrained by solar system tests [42,43]. Recently, Wang and Liao have studied the energy conditions in f(R, L_m) gravity [44]. Gonçalves and Moraes analyzed cosmology with a non-minimal matter-geometry coupling by taking into account f(R, L_m) gravity [45].

The present letter is organized as follows. In Sec. II, we present the fundamental formulation of f(R, L_m) gravity. In Sec. III, we derive the motion equations for the flat FLRW universe. In Sec. IV, we consider a cosmological f(R, L_m) model and then derive the expressions for the Hubble parameter and the deceleration parameter. In Sec. V, we find the best fit ranges of the model parameters by using the H(z), Pantheon, and combined H(z)+Pantheon data sets. Further, we analyze the behavior of the cosmological parameters for the values of the model parameters constrained by the observational data sets. Moreover, in Sec. VI, we investigate the stability of the obtained solution under the observational constraint by assuming a linear perturbation of the Hubble parameter. Further, in Sec. VII, we employ the Om diagnostic test to differentiate our cosmological model from other models of dark energy. Finally, in Sec. VIII, we discuss and conclude our results.

II. f(R, L_m) GRAVITY THEORY
The gravitational interactions in f(R, L_m) gravity are governed by the action
$S = \int f(R, L_m)\, \sqrt{-g}\; d^4x$, (1)
where f(R, L_m) represents an arbitrary function of the Ricci scalar R and the matter Lagrangian term $L_m$.
The Ricci scalar R can be obtained by contracting the Ricci tensor $R_{\mu\nu}$ as
$R = g^{\mu\nu} R_{\mu\nu}$, (2)
where the Ricci tensor is defined by
$R_{\mu\nu} = \partial_\alpha \Gamma^\alpha_{\mu\nu} - \partial_\nu \Gamma^\alpha_{\mu\alpha} + \Gamma^\alpha_{\alpha\lambda}\Gamma^\lambda_{\mu\nu} - \Gamma^\alpha_{\nu\lambda}\Gamma^\lambda_{\mu\alpha}$. (3)
Here $\Gamma^\alpha_{\beta\gamma}$ represents the components of the well-known Levi-Civita connection, defined by
$\Gamma^\alpha_{\beta\gamma} = \frac{1}{2} g^{\alpha\lambda}\left( \frac{\partial g_{\gamma\lambda}}{\partial x^\beta} + \frac{\partial g_{\lambda\beta}}{\partial x^\gamma} - \frac{\partial g_{\beta\gamma}}{\partial x^\lambda} \right)$. (4)
Now one can acquire the field equation (5) by varying the action (1) with respect to the metric tensor $g_{\mu\nu}$; here $f_R \equiv \partial f/\partial R$, $f_{L_m} \equiv \partial f/\partial L_m$, and $T_{\mu\nu}$ represents the energy-momentum tensor of the perfect-type fluid, defined by
$T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\, \frac{\delta\left(\sqrt{-g}\, L_m\right)}{\delta g^{\mu\nu}}$. (6)
The relation (7) between the trace of the energy-momentum tensor T, the Ricci scalar R, and the Lagrangian density of matter $L_m$ is obtained by contracting the field equation (5). Moreover, one can acquire the covariant-divergence relation (8) for $T_{\mu\nu}$ by taking the covariant derivative of equation (5).

Taking into account the spatial isotropy and homogeneity of our universe, we assume the following flat FLRW metric [46] for our analysis,
$ds^2 = -dt^2 + a^2(t)\left( dx^2 + dy^2 + dz^2 \right)$. (9)
Here, a(t) is the scale factor that measures the cosmic expansion at a time t. For the line element (9), the non-vanishing components of the Christoffel symbols are
$\Gamma^0_{ij} = a\dot{a}\,\delta_{ij}, \qquad \Gamma^i_{0j} = \Gamma^i_{j0} = \frac{\dot{a}}{a}\,\delta^i_j$, (10)
where i, j, k = 1, 2, 3. Using equation (3), we get the non-zero components of the Ricci tensor as
$R_{00} = -3\frac{\ddot{a}}{a}, \qquad R_{ij} = \left( a\ddot{a} + 2\dot{a}^2 \right)\delta_{ij}$. (11)
Hence the Ricci scalar corresponding to the line element (9) is
$R = 6\frac{\ddot{a}}{a} + 6\left(\frac{\dot{a}}{a}\right)^2 = 6\left( \dot{H} + 2H^2 \right)$, (12)
where $H = \dot{a}/a$ is the Hubble parameter. The energy-momentum tensor characterizing the universe filled with a perfect-fluid type matter content for the line element (9) is given by
$T_{\mu\nu} = (\rho + p)\, u_\mu u_\nu + p\, g_{\mu\nu}$. (13)
Here ρ is the matter-energy density, p is the spatially isotropic pressure, and $u^\mu = (1, 0, 0, 0)$ are the components of the four-velocity of the cosmic perfect fluid.

The Friedmann equations (14) and (15) describe the dynamics of the universe in f(R, L_m) gravity. We consider the following functional form [47] for our analysis,
$f(R, L_m) = \frac{R}{2} + L_m^n + \beta$, (16)
where β and n are free model parameters. Then, for this particular f(R, L_m) model with $L_m = \rho$ [48], the Friedmann equations (14) and (15) for the matter-dominated universe become equations (17) and (18). Further, one can acquire the matter conservation equation (19) by taking the trace of the field equations. In particular, for n = 1 and β = 0 one retrieves the usual Friedmann equations of GR.

From equations (17) and (18) we obtain equation (20). Then, using $\dot{H} = -(1+z)\,H\,\frac{dH}{dz}$, we arrive at a first-order differential equation (21) for H(z). Now, by integrating this equation, one obtains the expression (22) for the Hubble parameter in terms of redshift, where $H_0$ is the present value of the Hubble parameter.

The deceleration parameter plays a vital role in describing the dynamics of the expansion phase of the universe, and it is defined as
$q = -\frac{a\ddot{a}}{\dot{a}^2} = -1 - \frac{\dot{H}}{H^2}$. (23)
By using (22) in (23), we obtain the redshift dependence of the deceleration parameter, equation (24).

V. OBSERVATIONAL CONSTRAINTS
In this section, we analyze the observational aspects of our cosmological model. We use the H(z) data sets and the Pantheon data sets to find the best fit ranges of the model parameters n and β. To constrain the model parameters, we employ the standard Bayesian technique and likelihood function along with the Markov Chain Monte Carlo (MCMC) method in the emcee python library [49]. We use the following probability function to maximize the best fit ranges of the parameters,
$\mathcal{L} \propto \exp\!\left(-\frac{\chi^2}{2}\right)$. (25)
Here $\chi^2$ represents the pseudo chi-squared function [50]. The $\chi^2$ functions used for the different data sets are given below.
H(z) datasets
In this work, we have taken an updated set of 57 data points of H(z) measurements in the redshift range 0.07 ≤ z ≤ 2.41 [51]. In general, there are two well-established techniques to measure the value of H(z) at a given redshift, namely the line-of-sight BAO [52-56] and the differential age technique [57-60]. For the complete list of data points, see the reference [61]. Moreover, we have taken $H_0 = 69$ km/s/Mpc for our analysis [62]. To estimate the mean values of the model parameters n and β, we define the chi-square function as
$\chi^2_{H} = \sum_{k=1}^{57} \frac{\left[ H_{th}(z_k) - H_{obs}(z_k) \right]^2}{\sigma^2_{H(z_k)}}$,
where $H_{th}$ denotes the theoretical value of the Hubble parameter obtained from our model, $H_{obs}$ represents its observed value, and $\sigma_{H(z_k)}$ is the corresponding standard deviation. The 1σ and 2σ likelihood contours for the model parameters n and β using the H(z) data sets are presented below. The obtained best fit ranges of the model parameters are $n = 1.078^{+0.012}_{-0.013}$ and $\beta = -8862.13 \pm 0.99$.

Pantheon datasets
Recently, the Pantheon supernovae type Ia data sample consisting of 1048 data points has been released. The Pan-STARRS1 Medium Deep Survey, SDSS, SNLS, numerous low-redshift surveys and HST surveys contribute to it. Scolnic et al. [63] put together the Pantheon supernovae type Ia sample of 1048 points in the redshift range z ∈ [0.01, 2.3]. For a spatially flat universe [62], the luminosity distance reads as
$D_L(z) = (1+z)\, c \int_0^z \frac{dz'}{H(z')}$,
where c is the speed of light. For the statistical analysis, the χ² function for the supernovae samples is obtained by correlating the theoretical distance modulus $\mu_{th}(z) = 5\log_{10}\!\left[ D_L(z)/\mathrm{Mpc} \right] + 25$ with its observed value $\mu_{obs}$, such that
$\chi^2_{SN} = \Delta\mu^{T}\, C_{SN}^{-1}\, \Delta\mu, \qquad \Delta\mu_k = \mu_{th}(z_k, p_j) - \mu_{obs}(z_k)$,
where $p_j$ denotes the free model parameters and $C_{SN}$ represents the covariance matrix [63]. We have obtained the best fit ranges for the parameters n and β of our model by minimizing the chi-square function for the supernovae samples. The 1σ and 2σ likelihood contours for the model parameters n and β using the Pantheon data sample are presented below.

H(z)+Pantheon datasets
The χ² function for the combined H(z)+Pantheon data sets is given as
$\chi^2_{total} = \chi^2_{H} + \chi^2_{SN}$.
The 1σ and 2σ likelihood contours for the model parameters n and β using the combined H(z)+Pantheon data set are presented below. Fig. 4 indicates that the energy density of the cosmic fluid shows positive behavior and vanishes in the far future. Further, the evolution profile of the deceleration parameter in Fig. 5 reveals that our universe has experienced a transition from a decelerated phase to an accelerated phase in the recent past. The transition redshifts corresponding to the values of the model parameters constrained by the H(z), Pantheon, and combined H(z)+Pantheon data sets are $z_t = 0.708^{+0.029}_{-0.031}$, $z_t = 0.887^{+0.0075}_{-0.0009}$, and $z_t = 0.688^{+0.262}_{-0.224}$, respectively. Moreover, the present values of the deceleration parameter are $q_0 = -0.497^{+0.005}_{-0.004}$ for the H(z) data sets, $q_0 = -0.5223^{+0.00003}_{-0.0008}$ for the Pantheon data sets, and $q_0 = -0.494^{+0.05}_{-0.035}$ for the H(z)+Pantheon data sets.
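The parameter estimation and the derived quantities quoted above (best-fit n and β, q0, and the transition redshift z_t) can be reproduced schematically with a short script. The sketch below is illustrative only: `H_model` is a hypothetical closed form standing in for equation (22), built from a Friedmann-like relation 3H² = (2n−1)ρⁿ − β with ρ ∝ (1+z)^{3/(2n−1)}, which may differ from the authors' exact expression; the data arrays are synthetic stand-ins for the 57-point H(z) compilation [51], and the priors are arbitrary.

```python
import numpy as np
import emcee

H0 = 69.0  # km/s/Mpc, as adopted in the text [62]

def H_model(z, n, beta):
    """Hypothetical placeholder for the model H(z) of equation (22)."""
    m = 3.0 * n / (2.0 * n - 1.0)
    return np.sqrt(((3.0 * H0**2 + beta) * (1.0 + z)**m - beta) / 3.0)

def chi2_H(theta, z_obs, H_obs, sigma_H):
    n, beta = theta
    return np.sum(((H_model(z_obs, n, beta) - H_obs) / sigma_H) ** 2)

def log_probability(theta, z_obs, H_obs, sigma_H):
    n, beta = theta
    if not (0.6 < n < 2.0 and -12000.0 < beta < 0.0):    # flat priors, illustrative bounds
        return -np.inf
    return -0.5 * chi2_H(theta, z_obs, H_obs, sigma_H)   # likelihood ∝ exp(-χ²/2)

# Synthetic stand-in for the 57-point H(z) compilation
z_obs = np.linspace(0.07, 2.41, 57)
H_obs = H_model(z_obs, 1.08, -8862.0) + np.random.normal(0.0, 5.0, 57)
sigma_H = np.full(57, 5.0)

nwalkers, ndim = 32, 2
p0 = np.array([1.1, -8800.0]) + np.random.randn(nwalkers, ndim) * [1e-2, 10.0]
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability,
                                args=(z_obs, H_obs, sigma_H))
sampler.run_mcmc(p0, 3000)
n_fit, beta_fit = sampler.get_chain(discard=1000, flat=True).mean(axis=0)

# Deceleration parameter q(z) = -1 + (1+z) H'(z)/H(z), and the transition redshift q(z_t) = 0
z = np.linspace(0.0, 2.5, 2501)
H = H_model(z, n_fit, beta_fit)
q = -1.0 + (1.0 + z) * np.gradient(H, z) / H
print("q0 =", q[0], "  z_t =", z[np.argmin(np.abs(q))])
```

The relation q(z) = −1 + (1+z)H′(z)/H(z) used in the last lines follows directly from the definition (23) and dz/dt = −(1+z)H; it is how the quoted q0 and z_t values are obtained from any fitted H(z).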
VI. PERTURBATION ANALYSIS OF HUBBLE PARAMETER
In this section, we investigate the stability of the obtained solution of our proposed f(R, L_m) model under the observational constraint. We consider a linear perturbation of the Hubble parameter H(z), as given in equation (32), where H*(z) denotes the perturbed Hubble parameter and δ(z) the perturbation term. Using equations (22) and (32) in the matter conservation equation (19), we obtain the expression (33) for δ(z). We solve equation (33) numerically, since it is highly non-linear, and present the behavior of the perturbation term δ(z) corresponding to the values of the model parameters constrained by the observational data sets. From Fig. 6 it is clear that, for the constrained values of the model parameters, the perturbation term δ(z) decays rapidly at late times. Therefore, the solution of our cosmological f(R, L_m) model shows stable behavior.

VII. OM DIAGNOSTICS
The Om diagnostic is an effective tool to classify the different cosmological models of dark energy [64]. It is the simplest such diagnostic, since it uses only the first-order derivative of the cosmic scale factor. For a spatially flat universe it is given as
$Om(z) = \frac{H^2(z)/H_0^2 - 1}{(1+z)^3 - 1}$,
where $H_0$ is the present value of the Hubble parameter. A negative slope of Om(z) corresponds to quintessence-type behavior, while a positive slope corresponds to phantom behavior. A constant Om(z) represents the ΛCDM model.

VIII. CONCLUSION
In this work, we investigated the late-time cosmic expansion of the universe in the framework of f(R, L_m) gravity theory. We considered the non-linear model $f(R,L_m) = \frac{R}{2} + L_m^n + \beta$, where n and β are free model parameters. We then derived the motion equations for the flat FLRW universe and found the analytical solution, presented in equation (22), for our cosmological f(R, L_m) model. Further, we obtained the best fit values of the model parameters by using the H(z) data sets and the recently published Pantheon data sets, along with the combined H(z)+Pantheon data sets. The obtained best fit values are $n = 1.078^{+0.012}_{-0.013}$ and $\beta = -8862.13 \pm 0.99$ for the H(z) datasets, $n = 1.1472^{+0.0028}_{-0.00042}$ and $\beta = -8862.2 \pm 1.0$ for the Pantheon datasets, and $n = 1.07 \pm 0.10$ and $\beta = -8862.103 \pm 0.096$ for the H(z)+Pantheon datasets. In addition, we investigated the behavior of the energy density and the deceleration parameter for the constrained values of the model parameters. The evolution profile of the deceleration parameter in Fig. 5 indicates a recent transition of the universe from a decelerated to an accelerated phase, and the energy density in Fig. 4 shows positive behavior, which is expected. The transition redshifts corresponding to the values of the model parameters constrained by the H(z), Pantheon, and combined H(z)+Pantheon data sets are $z_t = 0.708^{+0.029}_{-0.031}$, $z_t = 0.887^{+0.0075}_{-0.0009}$, and $z_t = 0.688^{+0.262}_{-0.224}$, respectively. Moreover, the present values of the deceleration parameter are $q_0 = -0.497^{+0.005}_{-0.004}$ for the H(z) data sets, $q_0 = -0.5223^{+0.00003}_{-0.0008}$ for the Pantheon data sets, and $q_0 = -0.494^{+0.05}_{-0.035}$ for the H(z)+Pantheon data sets. Furthermore, we investigated the stability of the obtained solution of our model under the observational constraint by considering a linear perturbation of the Hubble parameter. From Fig. 6, we conclude that for the set of constrained values of the model parameters, the obtained solution of our cosmological f(R, L_m) model shows stable behavior. Finally, the evolution profile of the Om diagnostic parameter presented in Fig. 7 indicates that our cosmological f(R, L_m) model follows a quintessence scenario. We also found that our cosmological f(R, L_m) model agrees with the constraint $f_{L_m}(R,L_m)/f_R(R,L_m) > 0$ derived in [44], which is nothing but n > 0 for our considered model.

FIG. 6. Profile of the perturbation term δ(z) corresponding to the values of the model parameters constrained by the H(z), Pantheon, and combined H(z)+Pantheon data sets.
FIG. 7. Profile of the Om diagnostic parameter corresponding to the values of the model parameters constrained by the H(z), Pantheon, and combined H(z)+Pantheon data sets.
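As a quick numerical cross-check of the quintessence-like behaviour reported above, the Om diagnostic can be evaluated directly from any H(z). The snippet below uses the standard definition Om(z) = [H²(z)/H₀² − 1]/[(1+z)³ − 1] (the display equation is not reproduced in the text, so this form is assumed) together with a placeholder ΛCDM-like H(z); a constant output corresponds to ΛCDM, while a decreasing (increasing) curve would indicate quintessence-like (phantom-like) behaviour, as stated in Sec. VII.

```python
import numpy as np

def om_diagnostic(z, H_func, H0):
    """Om(z) = (E(z)^2 - 1) / ((1+z)^3 - 1), with E = H/H0.
    Constant -> LCDM; decreasing -> quintessence-like; increasing -> phantom-like."""
    E2 = (H_func(z) / H0) ** 2
    return (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

H0 = 69.0
H_model = lambda z: H0 * np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)   # placeholder H(z), not eq. (22)
z = np.linspace(0.1, 2.4, 5)        # avoid z = 0, where the expression is 0/0
print(np.round(om_diagnostic(z, H_model, H0), 3))   # constant 0.3 for this LCDM placeholder
```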
3,689
2022-05-01T00:00:00.000
[ "Physics" ]
EFFECT OF STARVATION ON THERMO-ELASTOHYDRODYNAMIC LUBRICATION OF ROLLING / SLIDING CONTACT A complete numerical solution of thermal compressible elastohydrodynamic lubrication of rolling/sliding contact was obtained to determine the effect of the inlet boundary condition on the film shape, film pressure, and film temperature in an elastohydrodynamic line contact problem. The direct iterative technique is used to solve the simultaneous system of Reynolds, elasticity, and energy equations for different locations of the inlet oil feed. The effects of various load, speed, and slip conditions have been investigated. The results indicate that the effects of starvation are an increase in oil film temperature and a decrease in oil film thickness, so the temperature effects are significant and cannot be neglected.

INTRODUCTION
Rolling/sliding machine elements such as bearings, gears, cams and their followers are frequently subjected to high load, speed, and slip conditions. The problem of thermo-elastohydrodynamic lubrication has been treated by many workers, e.g. Cheng et al. (1965), Daow et al. (1987) and Sadeghi et al. (1987, 1990). Most of the published works have not considered the effect of starvation on the pressure distribution, film thickness and oil film temperature, so the location of the inlet meniscus is considered as known. However, the reduction in film thickness due to starvation has been studied by Chiu (1972). He concluded that the starvation effect in most rolling element bearings is considerably greater than the inlet heating effect. Full solutions for estimating the film thickness of point contacts were made by Hamrock and Dowson (1976, 1977) for flooded as well as starved contacts. They found dimensionless central and minimum film thicknesses for the flooded contacts, indicating that the film thickness decreases for the starved condition as compared to the flooded condition. Johns-Rahngat and Gohar (1994) proved that the results obtained from the work of Hamrock and Dowson are quite realistic when employing starvation conditions at the contact. The present work is an attempt to study the thermal effect on the performance of contacting elements in line contact with a starved inlet boundary condition. A direct iterative method was adopted to solve the governing equations of the problem.

GOVERNING EQUATIONS
The governing equations describing the steady-state thermo-elastohydrodynamic lubrication of the line contact using a Newtonian lubricant can be described as follows.
Reynolds' Equation: Reynolds' equation, which governs the pressure distribution inside the oil film between two non-conformal surfaces in line contact, can be written as equation (1). The pressure distribution in the contact zone is subjected to a positivity constraint. The boundary conditions for Reynolds' equation are given by P(x_i) = 0, P(x_o) = dP/dx|_{x_o} = 0.
Elasticity Equation: The oil film thickness distribution can be evaluated in this case by using equation (2), as used by Wolff et al. (1992).
Equations of State: The lubricant viscosity is modeled by Barus' pressure-viscosity formula. The equation was modified to include the thermal effect.
It can be written as presented in the previous reference, equation (3). The dependence of density on pressure and temperature can be expressed by equation (4).
Energy Equation: The temperature distribution across the oil film is obtained by solving the energy equation (5) reported by Wolff (1992) and Gohar (1988). The boundary conditions used with this equation are: at y = 0, T = T_o; at y = h, a surface condition on the solid disc side is applied, in which ρ is the density of the solid disk (kg/m³), taken as 7800 kg/m³ for a steel disk, C_2 is the heat capacity of the solid disk (kJ/kg·K), taken as 52 kJ/kg·K for a steel disc, and U is the speed of the solid disk (m/s). By making suitable substitutions, the mean oil film temperature can be evaluated from the resulting expression.

NUMERICAL TECHNIQUE
To obtain a complete numerical solution to the problem of compressible thermo-elastohydrodynamic lubrication of rolling/sliding contacts, the direct iterative procedure was followed to solve the simultaneous system of the compressible Reynolds' Eq. (1), elasticity Eq. (2), and energy Eq. (5), together with the equations of state, Eqs. (3, 4). The flow chart shown in Fig. (1) illustrates the computational procedure used for calculating the pressure, film thickness, and temperature distribution within the lubricant film. The isothermal pressure and film shape are obtained first, and these values are then used to arrive at the initial temperature field within the lubricant film. The influence of temperature is introduced on viscosity and density, and the new pressure and film thickness are calculated. The iterative procedure is continued until the resulting temperature and pressure satisfy the convergence criteria.

RESULTS AND DISCUSSION
The analysis was carried out for three different nondimensional load parameters, namely (1.36E-7), (2.28E-7), and (3.42E-7). Three different slide-to-roll ratios were also considered, namely (0.5, 1.07, 1.32). Five inlet positions were investigated: one for the flooded condition, Xi = -4a, while the other four account for starved conditions, in which Xi = -3.5a, -3a, -2.5a and -2a. The results in Figs. (2, 5, 8) show that the pressure spike decreases and diminishes for all the above cases. The pressure spike tends to move outward toward the outlet with the increase of the applied load. The maximum pressure increases as the degree of starvation increases, in order to maintain a constant applied load. The oil film thickness decreases with increasing degree of starvation and applied load, as shown in Figs. (3, 6, 9). Also, the nip that occurs in the nondimensional film shape for the fully flooded case diminishes as the amount of starvation increases, and the film shape changes to a flattened type. This can be explained with reference to Figs. (4, 7, 10), since the oil film temperature increases with increasing degree of starvation and applied load, which leads to low oil viscosity. The results presented in Fig. (12) also show that the oil film thickness decreases as the slide-to-roll ratio increases. This can be explained with the aid of Fig. (13), which shows that the oil film temperature increases with increasing slide-to-roll ratio. The maximum pressure increases as the slide-to-roll ratio increases, in order to maintain a constant applied load.

CONCLUSIONS
The concluding remarks which can be drawn from the present results are:
1- There is a significant increase in oil film temperature as the degree of starvation and the slide/roll ratio increase.
2- The pressure spike decreases (and nearly vanishes) as the amount of starvation and the slide/roll ratio increase. The pressure spike tends to move toward the outlet as the applied load increases.
3- The oil film thickness decreases as the amount of starvation increases, which is a dangerous phenomenon leading to the occurrence of film rupture due to metal-to-metal contact at the tips of the mating surfaces.
4- The results indicate that the temperature effect and the position of oil feed have significant effects and must be taken into consideration for proper design.
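The computational procedure described in the NUMERICAL TECHNIQUE section above amounts to a fixed-point loop over pressure, film thickness and temperature. The sketch below is schematic: the four solver callables are hypothetical placeholders for discretized forms of Eqs. (1)-(5), and the relative tolerances are illustrative, since the paper's exact convergence criteria are not reproduced in the text.

```python
import numpy as np

def tehl_direct_iteration(x, p0, h0, T0,
                          solve_reynolds, film_thickness, solve_energy, update_properties,
                          max_iter=200, tol_p=1e-4, tol_T=1e-4):
    """Direct iterative solution of the thermo-EHL line contact (schematic).

    x                 : mesh from the prescribed inlet position Xi to the outlet
    p0, h0, T0        : isothermal pressure, film shape, and initial temperature field
    solve_reynolds    : discretized Eq. (1) with P >= 0 and the starved inlet b.c.
    film_thickness    : discretized Eq. (2) (elastic deformation + geometry)
    solve_energy      : discretized Eq. (5) across the film
    update_properties : Eqs. (3)-(4), viscosity and density as functions of p and T
    """
    p, h, T = p0.copy(), h0.copy(), T0.copy()
    for _ in range(max_iter):
        eta, rho = update_properties(p, T)
        p_new = solve_reynolds(x, h, eta, rho)
        h_new = film_thickness(x, p_new)
        T_new = solve_energy(x, p_new, h_new, eta, rho)

        # Relative changes in pressure and temperature between successive iterations
        err_p = np.max(np.abs(p_new - p)) / max(np.max(np.abs(p_new)), 1e-30)
        err_T = np.max(np.abs(T_new - T)) / max(np.max(np.abs(T_new)), 1e-30)
        p, h, T = p_new, h_new, T_new
        if err_p < tol_p and err_T < tol_T:   # convergence on pressure and temperature
            break
    return p, h, T
```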
1,776.6
2006-06-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Smart metacomposite-based systems for transient elastic wave energy harvesting In this paper, novel harvesting systems are proposed and studied to obtain enhanced energy from transient waves. Each of these systems contains a piezo-lens to focus waves and a harvester to yield energy from the induced focused waves. The piezo-lens comprises a host plate and piezoelectric patches bonded on the plate surfaces. The piezoelectric patches are shunted with negative capacitance (NC) circuits in order to control the spatial variation of the effective refractive index inside the piezo-lens domain. The harvester is placed at the designed focal point of the piezo-lens, two different synchronized switch harvesting on inductor (SSHI) based harvesters are analyzed in the studies. Corrected reduced models are developed to predict the transient responses of the harvesting systems. The performances of the systems incorporating SSHI-based harvesters in transient wave energy harvesting are studied and compared with the system using a standard DC harvester. The focusing effect of the piezo-lens on transient waves and its capability to improve the harvested energy are verified. Since the NC circuits are active elements, an energy balance analysis is performed. Applicability of the harvesting systems is also discussed. Introduction Design more reliable, durable and comfortable products is a long-term objective in modern industries, such as aerospace, transport and civil engineering. Nowadays, with the rapid development in electronics, computer science and material science, smart structures are proposed in an attempt to achieve this goal [1]. Smart structures are hybrid systems composed of load-bearing materials and electronic units like but not limited to control and health monitoring units. An electronic unit typically contains sensors, actuators, computing devices and communication module, all these devices require power supply. Traditional batteries are not suitable choices due to their limited life spans, which will lead to maintenance problems. A more appropriate way is to integrate energy harvesting devices into the smart structures to realize self-powered systems. Due to the ubiquitous presence of vibration in structures, extensive efforts have been made to harvest vibration energy [2]. Among these studies, a lot of them were dedicated to obtain higher harvesting efficiency and/or to extend the operating frequency band of the harvesting system. Examples include (i) tuning the harvesting system through passive or active methods to match the operating frequency with the environment [3,4]; (ii) exploiting nonlinear mechanical mechanisms to widen the operating frequency band [5]; (iii) using phononic crystals [6], metamaterials [7] or acoustic black holes [8] to increase energy densities near the harvesters. Apart from the mechanical approaches, nonlinear energy extraction circuits have been proposed to boost the harvested energy when piezoelectric transducers are used. The introduced nonlinear part is termed 'synchronized switch harvesting on inductor' (SSHI) interface, which is composed of a switch and an inductor in series. The switch is turned off when the voltage of the piezoelectric transducer reaches a minimum or a maximum. Under this situation, the inductance and the capacitance constitute an oscillating circuit with a period much smaller than the mechanical one. After a very short time equals to a half of the period of the oscillating circuit, the switch is turned on again. 
Consequently, the voltage is inverted. The SSHI interface only requires a small amount of power to control the switch, it could be self-powered [9]. The SSHI interface is firstly proposed and studied by Guyomar et al [10], and has drawn considerable attention these years. It is connected with the piezoelectric element in parallel to form a parallel SSHI harvesting system [10][11][12][13], or in series to obtain a series SSHI harvesting system [11,12]. Researches show that the SSHI interface can improve the converted energy by the piezoelectric transducer from mechanical to electrical form and reduce the backward conversion [14]. In a weekly coupled electromechanical system, the harvested power is boosted by 4-9 times when compared to a standard DC technique proposed by Ottman et al [15]. In a strongly coupled case, these techniques will produce almost equivalent power but the required piezoelectric materials for the SSHI-based techniques are much less [10]. Based on the SSHI interface, several improved techniques have been proposed, each of them addresses a particular concern. Such as the 'double synchronized switch harvesting' [16] and the 'enhanced synchronized switch harvesting' [17] are proposed to obtain load independent techniques. The 'adaptive synchronized switch harvesting' [18] technique is developed to harvest energy in multimodal vibration situations. The characteristics, advantages and drawbacks of these SSHI-based techniques are discussed in details by Guyomar et al [9]. Harvesting vibration energy in structures is well studied, however limited effort has been devoted to harvest energy from traveling waves. Traveling waves are common in builtup structures, since the power in these structures transmits from one component to another especially at higher frequency bands [19,20]. In addition, waves will propagate away from the source when structures are under non stationary excitations, they are attenuated by damping and/or radiation thus are not reflected back to the source to form standing waves. It is important to develop harvesting systems to obtain energy from waves in those cases. To harvest energy from traveling waves, one of the main challenges is that the amount of harvested energy could be very low. In recent years, to increase harvested energy from traveling waves, several innovative harvesting systems have been developed [21][22][23][24]. The fundamental idea is to steer waves to increase the energy densities at particular positions and harvest there. For examples, an elliptical acoustic mirror or a parabolic acoustic mirror is used to focus waves in the systems proposed by [21][22][23]; in [23], an artificial periodic array with a defect is designed to localize energy at specific frequencies, an acoustic funnel formed by arrays of acoustic scatters is developed to guide waves into a narrow channel. However, the harvesting circuits in these studies are simply represented by resistive ones in order to focus analysis in the mechanical part of the system. This is acceptable from an academic point of view but not adequate in practice since most of the real-life low power electronic devices require DC, the actual harvesting circuit always contains a ACto-DC converter and maybe some other nonlinear parts, which can not be simplified as a linear resistive load. Different from the aforementioned methods to steer waves, recently a piezo-lens is proposed to focus waves [25]. The piezo-lens is composed of a host plate and several surfacebonded piezoelectric patches. 
These patches are shunted with negative capacitance (NC) circuits. The spatial variation of refractive index in the piezo-lens zone is designed to fulfill a hyperbolic secant profile. Results show that the piezo-lens can focus flexural waves near a designed point in a broad frequency band. Thus the piezo-lens has large potential to be exploited in developing advanced harvesting systems for waves. In this paper, the piezo-lens is combined with SSHI-based harvesters to improve the harvested energy from transient waves. An analytical relationship which connects the effective refractive index of piezoelectric system to the shunting NC value is used to design the piezo-lens; corrected reduced models are developed to predict the transient responses of the piezoelectric systems. With these tools, the performances of the harvesting systems are studied and discussed. 2. Configuration of the harvesting system 2.1. The first part: a piezo-lens The harvesting system proposed in this paper consists in the combination of a piezo-lens and a harvester. The concept and designing process of the piezo-lens are introduced in this section. The piezo-lens is obtained by periodically bonding piezoelectric patches on the surfaces of a host aluminum plate in a collocated fashion, as depicted in figure 1(a). The host plate is lying in the x−y plane and occupying the spatial . The piezo-lens zone could be divided into a 14-by-6 array of piezoelectric cell, the patches in each of these cells are shunted with a NC circuit and their bonding surfaces are grounded, as shown in figure 2. To focus flexural waves, the spatial variation of the refractive index of flexural wave inside the piezo-lens zone is designed to fulfill a hyperbolic secant function: in which, n 0 represents the refractive index of the background plate, α is the gradient coefficient and β represents the y coordinate of the symmetry axis of the refractive index profile, as illustrated in figure 1(b). Due to this design, waves incident into the lens from the Ox direction will be focused at a focal point at the b = y line, with a focal length p a = f 2 measuring from the left boundary of the lens [26]. By tuning the NC values, one can change the local dynamic properties where piezoelectric patches are bonded [7,25]. Thus, the variation of the refractive index inside the piezo-lens can be approximately realized in a piecewise form by designing the NC values of cells at different locations. According to equation (1), the refractive index only varies in the y direction. Thus, in a piezo-lens, the shunting NC values are equal in a same row (the x direction) but will differ in a same column (the y direction). To determine the required shunting NC value in each row, an analytical relationship between the effective refractive index of the cell and the NC value is developed. Here, E p sh and m p sh stand for the Young's modulus and the Poisson's ratio of the short-circuit piezoelectric material, respectively; k 31 is the coupling factor; C p T is the free intrinsic capacitance (namely the capacitance when the patch is under constant stress); C neg is the applied NC value. In the second step, the effective parameters of the shunted piezoelectric sandwich structure highlighted by the dashed lines in figure 2 are determined according to the classical laminated plate theory [27]. 
The effective area density and effective flexural rigidity of the piezoelectric sandwich structure are expressed as: ) is the flexural rigidity of the host plate and E b , m b denote the Young's modulus and the Poisson's ratio of the host plate respectively; r b and r p are the densities of the host plate and the piezoelectric patches respectively. In the third step, the effective area density and effective flexural rigidity of the entire shunted piezoelectric cell are derived, they are expressed as [28]: ) is the ratio of the surface covered by the piezoelectric patch to the surface of the unit cell. In the last step, the effective refractive index of flexural wave incident from the background plate into the shunted piezoelectric cell is obtained: With the relationship in equation (5), a piezo-lens is designed in three steps. Firstly, the parameters α and β in the refractive index function in equation (1) are chosen to design the location of the focal point. Then, the required refractive index for each row of piezoelectric cell in the lens zone is obtained by substituting the central y coordinate of each row into equation (1). Lastly, the required refractive index for each row is fulfilled by choosing the NC value according to equation (5). The second part: a harvester The harvester (namely the harvesting device) is located at the designed focal point to yield energy from the focused waves. The harvesting device is composed of a piezoelectric patch bonded on the upper surface of the plate and a connected energy extraction circuit. A standard DC device and two SSHIbased devices are analyzed in this paper, their topologies and The DC device involves a rectifier to convert the voltage of the transducer to a DC form, and a capacitance C s to store the harvested energy, as depicted in figure 3(a). In that figure, the piezoelectric patch is equivalently represented by a current source I eq (t) and a capacitance C h with the value equal to the blocked intrinsic capacitance (i.e. the capacitance when the patch is under constant strain condition) [14]. The rectifier is assumed to be perfect. Thus, when the absolute value of V h (t) is larger than or equal to the output voltage V t C s ( ), the rectifier conducts, the storage capacitance is charged. Under this circumstance, the harvesting circuit is governed by the equations below: On the other hand, when the absolute value of V h (t) is smaller than the output voltage V t C s ( ), the rectifier is blocked, no charge will flow to the storage capacitance, the piezoelectric element is under open-circuit condition: in which, the exact expression of I eq is given in section 3.2. The SSHI-based harvesting devices are obtained by integrating a SSHI interface into the DC one shown in figure 3(a). In the first SSHI-based device, the SSHI interface, consists of a switch S and an inductor L, is in parallel to the piezoelectric patch as illustrated in figure 3(b). It is called parallel SSHI-DC (P-SSHI-DC) device in this paper. The control law of the switch is similar to the one used in steady state cases. At most of the time, the switch is opened, the device works just like a standard DC one. The switch is triggered at the time t i when the current I eq is null (namely, the voltage of the piezoelectric patch is a local maximum or minimum). It is kept closed for a very short period of time, corresponding to a half of the period of the oscillating circuit: p D = t LC h . 
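The switching law described above lends itself to a compact event-driven model: between switching instants the patch is open-circuit, and each time I_eq crosses zero the switch closes for half an L-C_h oscillation period (Δt = π√(L C_h)), which inverts the voltage. The sketch below is a simplified lumped model, assuming the usual lossy inversion V → −V·exp(−π/(2Q_I)), where Q_I is the inversion quality factor discussed in the text that follows; component values are illustrative, not those of the paper, and the rectifier/storage stage is omitted.

```python
import numpy as np

def sshi_open_circuit_voltage(t, i_eq, C_h, Q_I):
    """Piezo voltage under a P-SSHI-like switching law for a given equivalent current (schematic).
    Between switch events the patch is open-circuit: C_h dV/dt = i_eq.
    At each zero crossing of i_eq (voltage extremum) the voltage is inverted
    with the lossy ratio gamma = exp(-pi / (2 Q_I)), the inversion being treated
    as instantaneous compared with the mechanical time scale."""
    gamma = np.exp(-np.pi / (2.0 * Q_I))
    V = np.zeros_like(t)
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        V[k] = V[k - 1] + i_eq[k - 1] * dt / C_h          # open-circuit charging
        if i_eq[k - 1] * i_eq[k] < 0.0:                   # i_eq crosses zero -> trigger switch
            V[k] = -gamma * V[k]                          # near-instantaneous lossy inversion
    return V

# Illustrative use: a decaying tone standing in for a focused transient wave packet
t = np.linspace(0.0, 5e-3, 20001)
i_eq = 1e-3 * np.sin(2 * np.pi * 2000.0 * t) * np.exp(-t / 2e-3)
V = sshi_open_circuit_voltage(t, i_eq, C_h=50e-9, Q_I=3.0)
```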
An inversion quality factor Q I [10] is used to describe the energy loss mainly caused by the inductor in the SSHI interface (note that in some papers [17,18], the voltage The second SSHI-based device contains a SSHI interface in series with the piezoelectric patch, as shown in figure 3(c). This device is called series SSHI-DC (S-SSHI-DC) device here. At most of the time, the piezoelectric element is under open-circuit condition. Different from the steady state cases [11,12], the switch is closed when I eq is null and . An inversion quality factor Q I is also used to take into account the loss of the switch interface in this case, accordingly a resistive is added to the inductor. The inversion process is governed by the equations below: in which, V l (t) is the voltage difference between the two ends of the inductor. Full finite element (FE) models To study performances of the harvesting system, FE models of piezoelectric systems are developed. In the FE models, the structures are discretized by 3D quadratic Lagrange elements. Each of the nodes corresponding to the piezoelectric patches has three mechanical degrees of freedom (DOFs) and a voltage DOF. The equilibrium equations for the discretized fully coupled piezoelectric system are: Here, d and V represent the structural and voltage DOFs, respectively; F and Q are the mechanical forces and charges respectively. The equations in (10) are rewritten under following considerations: (i) the voltage DOFs in the piezoelectric patches can be partitioned into DOFs inside the patches, DOFs on the free electrodes of the patches and DOFs on the bonding surfaces; (ii) the voltage DOFs on the bonding surfaces are grounded, thus the corresponding equations and columns are directly removed; (iii) there is no charge source inside the piezoelectric patches, thus the internal voltage DOFs can be eliminated by exact static condensation; (iv) in the system, the piezoelectric patches are connected with different circuits-the patches constituting the piezo-lens are connected with NC circuits and the patch for harvesting is connected with one of the circuit illustrated in figure 3, therefore, the DOFs on the free electrodes are further separated into DOFs corresponding to the patches in the piezo-lens and DOFs of the patch for harvesting; (v) as the DOFs on one electrode have identical voltages, the voltage DOFs on the free electrode of each piezoelectric patch are reduced such that only one master voltage DOF remains on the free electrode per patch. Consequently, the governing equations can be rewritten as below: in which, the matrices C L and C h are diagonal, each of their diagonal elements represents the blocked intrinsic capacitance of a piezoelectric patch; V L , V h are the master DOFs on the free electrodes of the patches in the lens and the patch for harvesting, respectively; Q L , Q h are the charges flowing to the patches in the lens and patch for harvesting respectively. More details about above process could be found in the appendix or [29]. Model reduction Instead of solving the full FE model above, which has a large number of DOFs, a reduced model is used. For the sake of simplicity, let: Using these notations and equation (11), the governing equations in time domain are written as: Equation (13b) and the following equations (17b) and (20b) represent the Kirchhoff's current law which must be satisfied at the joints where circuits are connected with the piezoelectric patches. 
The reduced model is obtained through a transformation between the displacement d and a set of modal coordinates h: f i is the ith natural mode of the piezoelectric system under short-circuit condition with specific homogeneous Dirichlet boundaries, it is obtained by solving the following eigenvalue problem: here, w i is the corresponding natural frequency. The modes are mass-normalized, resulting in: Using equations (14) and (16), the governing equation (13) are represented in modal coordinates as: Only the first m modes in modal matrix F will be retained, and the number is much smaller than that of the system's DOFs. Thus, the number of equations in (17) is largely reduced. However, the reduced model in equation (17) can not accurately describe the piezoelectric behaviors of the system [30] since the truncation of the higher order modes will lead to a static reduction error [29]. This static error will cause a non-negligible error of the electrostatic voltage relative to the full model's as indicated below: Here, T f e and T r e are the electrostatic transfer matrices between V and Q of the full and reduced models, respectively. To obtain more accurate voltage responses, the reduced model is corrected by modifying the blocked intrinsic capacitance matrix C Lh . The voltage responses are corrected by guaranteeing that the voltage outputs V i ( ) of the ith piezoelectric patch in the corrected reduced model and the full model are consistent when a same static electric input Q i ( ) is applied to this patch. According to equations (13) and (17), such requirement is fulfilled by modifying the capacitance matrix * C Lh in the corrected model as: The structural damping is taken into account by introducing a constant viscous damping coefficient xw 2 i into each remained mode. The NC circuits are connected to the piezoelectric patches through the following relation: The DC and SSHI-based circuits are implemented in the corrected reduced models through equations (6)- (9). Note that, in the equivalent model of the piezoelectric patch for harvesting, the current source is eq ( ) ( ) and the blocked intrinsic capacitance is the modified one * C h . Numerical results In the simulations, the dimensions of the piezo-lens and host plate are illustrated in figure 1(a). The geometry parameters of the piezoelectric cells and piezoelectric patch for harvesting are given in table 1 Comparison between the standard DC and SSHI-based devices The standard DC and SSHI-based harvesting devices have been well studied in steady state, however lack of knowledge of their applications in transient wave cases could be found. Thus first of all, the performances of the systems with these devices in transient wave energy harvesting are studied. The piezo-lens is applied in these studies. To assess the harvesting performances, two metrics are used. The first one is the harvested energy, which indicates how much energy can be yielded when a transient wave package passes through the harvester, it is the maximum energy that stored in the storage capacitance of the harvesting device: The second one is the harvesting efficiency, it is defined as [31]: in which, E input is the total input energy introduced by the excitation. The efficiency indicates how much a harvesting system can convert the input mechanical energy (E input ) into the output electric one (E har ). This metric is a useful criterion to compare the harvesting performances of systems with different configurations and/or under different operating conditions. 
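The two metrics can be computed directly from the simulated storage-capacitor voltage and the injected power. In the sketch below the harvested energy is taken as the maximum over time of ½·C_s·V_Cs(t)² (the explicit expression is not reproduced in the text above, so this standard capacitor-energy form is assumed), and the input energy is obtained by integrating the instantaneous input power over the excitation; the signals are made-up placeholders.

```python
import numpy as np

def harvested_energy(C_s, V_Cs):
    """Maximum energy stored in the storage capacitance: max over time of 0.5*C_s*V^2."""
    return 0.5 * C_s * np.max(np.asarray(V_Cs) ** 2)

def harvesting_efficiency(C_s, V_Cs, t, P_input):
    """Efficiency = harvested energy / total input energy (trapezoidal integration of P_input)."""
    E_input = np.trapz(P_input, t)
    return harvested_energy(C_s, V_Cs) / E_input

# Illustrative values only
t = np.linspace(0.0, 5e-3, 2001)
P_input = 1e-3 * np.exp(-((t - 1e-3) / 5e-4) ** 2)    # transient input power [W]
V_Cs = 2.0 * (1.0 - np.exp(-t / 1e-3))                # charging storage-capacitor voltage [V]
print(harvesting_efficiency(30e-9, V_Cs, t, P_input))
```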
The performances of systems with different harvesting devices are compared in figure 5. In the simulations in figure 5 and those hereinafter, the inversion quality factor Q I of the SSHI-based device is equal to 3. Q I indicates the energy loss caused by the harvesting circuit, typically a larger value of it means less energy loss [10]. Q I mainly depends on the involved piezoelectric material, the switch and the inversion inductance, its real value can only be obtained experimentally, herein the value of it is chosen according to the results in [10]. From figure 5 it can be seen that the harvested energy and efficiency strongly depend on the storage capacitance, and the optimal capacitances corresponding to the maximum harvested energy and the maximum efficiency are nearly the same for each device. In addition, when the devices are all working at optimal conditions, the two SSHI-based devices have almost equal performances, and they both harvest 2.6 times more energy than the DC device and also have 2.6 times better efficiency. To gain more insights into the harvesting performances of the systems with DC or SSHI-based devices, the typical waveforms and the converted power are compared in figure 6. The meanings of V h , I eq and V Cs are given in figure 3; the converted power is the product of V h and I eq , positive it represents the amount of power converted from mechanical to electrical, and negative it means the contrary. In the simulations in figure 6, the storage capacitances are all set as * = C C 30 s h to guarantee an acceptable performance for all the devices according to the results in figure 5. To facilitate the comparison, the maximum V Cs in the DC case is normalized to unit, and all the other voltages are normalized according to this value; a similar normalization process is used to deal with the converted power. When the transient wave package reaches the harvester, the charges begin to accumulate in the storage capacitance; the accumulation stops after the main part of the package passes through the harvester. The charges accumulated in the storage capacitance are converted from strain energy by the piezoelectric transducer, it can be seen that the SSHI-based devices can promote the conversion of power from mechanical to electrical part but suppress the contrary, thus they can harvest more energy than the DC device. It can also be observed that when the SSHI-based devices are used, the waveforms of I eq are distorted. I eq depends directly on the strains of the piezoelectric patches, which are caused by the waves in the media, the distortions indicate that the harvesting processes in those cases could have non-negligible influences on the wave propagation at the location where the harvester is mounted. Since the switch is trigged at the time when = I 0 eq , the distortions result in multiple inversions, which cause the fluctuation of the harvesting performance as revealed in figure 5. Effects of piezo-lens on transient waves and harvesting It is demonstrated in [25] that the piezo-lens can focus harmonic waves near the designed focal point in a large frequency band, thus it is expected that the piezo-lens could also -´-- According to the results in [25], the piezo-lens used in this paper is effective from about 100-8000 Hz. Thus the major frequency components of the waves generated by the excitation in figure 4 are totally within the effective frequency band. 
Firstly to study the effect of the piezo-lens on transient waves, the harvester is removed from the system since its presence at the focal point will make the energy concentration effect less obvious. The instantaneous transverse displacement w f at the designed focal point and input power are shown in figure 7(a). The results corresponding to the case without lens are also given as references, the maximum amplitudes of the transverse displacement and input power in this case are both normalized to unity. The piezo-lens will reflects a part of the incident waves and these reflected waves will interact with the excitation forces, thus the input power is a little bit reduced when the piezo-lens is active. Even though with this reduction of input power, the maximum amplitude of w f is improved. This result indicates that the transient waves are focused by the lens. To further verify the focusing effect, the transverse displacement of the host plate at the time corresponding to the maximum amplitude of w f is depicted in figure 7(b) for the case without piezo-lens and in figure 7(c) for the case with piezo-lens. From the comparison, it can be clearly observed that the transient waves are focused near the designed focal point. Figure 8 compares the harvesting performances between the cases with and without piezo-lens. When the piezo-lens is used and due to the focusing effect demonstrated above, the maximum harvested energy is enhanced about 2.5 times no matter which kind of harvester is used. The input power is reduced when the piezo-lens is applied, consequently the maximum harvesting efficiency is improved about 3 times for each device as illustrated in figure 8(b). In the above studies, it is verified that the piezo-lens can enhance the harvested energy from transient waves when the harvester is located at the designed focal point. However in the studies in figure 7 the harvester is removed from the system, it is not clear that whether the placement of the harvester near the piezo-lens will has significant influence on the location where the waves are concentrated. This is important since if the influence is non-negligible, the designed focal point may not be the optimal place for harvesting. The energy concentration location only depends on the incident waves when the parameters of the piezo-lens are specified [25], thus if the harvester has negligible effect on the excitation source, it will not affect the incident waves and the consequent energy concentration location. To study the influence of the harvester on the transient source, the instantaneous input power is used as a criterion. Figure 9 shows the input power induced by the transient excitation when different harvesters are used in the harvesting system. In these studies, the storage capacitance for each harvester is chosen to make the output energy maximum. The reference input power in the figure refers to the case without any harvesting device, and the maximum of it is normalized to unity. It can be observed that the harvester do not affect the input power, accordingly they will not influence the generated transient waves and the energy concentration location. Energy balance Since the NC circuits in the piezo-lens are active elements and they need to be powered, it is also important to consider the energy balance of the harvesting system. In the piezo-lens, the NC circuits need to stiffen the structure. To realize such effect, the circuits should be fully reactive [32], namely, they does not dissipate any energy. 
Thus, the active energy consumed by the NC circuit in a time interval from t 1 to t 2 is expressed as: For the transient wave cases here, the consumed energy is estimated in the interval between the time when the transient wave package first arrives at the piezo-lens and the time when the package leaves the lens. Figure 10 compares the consumed and harvested energy in the cases with different devices. The storage capacitance for each device is optimal (namely the output energy is maximum); the harvested energy of the system with piezo-lens and DC device is normalized to unity. When the piezo-lens is applied in harvesting systems, it is observed that the amount of consumed energy is really small compared with the harvested one, 11% with the DC device and less than 4% with the SSHI-based devices. Thus even though we take into account the energy consumed by the piezo-lens, the harvesting systems incorporating the piezolens still can yield considerably improved energy compared with the cases without piezo-lens. Practical application considerations In previous sections, performance of the harvesting systems are verified in ideal situations with very small structural damping (x = 0.001) and neglected forward voltages of diodes. In this section, influences of these factors on harvesting performance are considered, and applicability of the harvesting systems is discussed. In new examples, the structural damping is given as x = 0.05, the forward voltage of each diode is 0.6 V, the inversion quality factor Q I for SSHI interface is chosen as 5.6 according to the experimental results in [10], and the storage capacitance C s for each Figure 11 shows the waveforms of displacements and voltages in harvesting systems incorporating a piezo-lens. The measure location of the displacement is the left-bottom corner on the upper surface of the piezoelectric patch for harvesting. It can be observed that even though the maximum amplitude of the mechanical response of the patch is only micrometerscale, the harvesting systems are adequate to yield energy from the transient waves. It is noted here that the fast switch actions illustrated in figures 11(b) and (c) perhaps are difficult to obtain in piezoelectric mechanical systems. However, the piezo-lens can work at lower frequencies as long as the wavelengths are smaller than the characteristic length of the lens [25]. In those cases, the switch actions could be more realistic. The improvement of harvesting performance by using piezo-lens in these new cases are illustrated in figure 12, in which, the mean power P 1 for the case without lens and the mean net power P 2 for the case with piezo-lens are defined as: . Here, E har1 and E har2 are the harvested energy obtained by equation (22), E C neg is the consumed energy by the piezo-lens evaluated by equation (24), t har1 and t har2 are the charging duration of the storage capacitances (see figure 11). From Figure 9. Comparison of the input power when different devices are used to harvest the transient waves, the storage capacitance for each device is optimal, the reference input power refers to the case without any harvesting device. Figure 10. Comparison of the consumed and harvested energy in the harvesting system, the storage capacitance for each device is optimal, Q I =3. figure 12, it can be seen that the piezo-lens is also very effective to improve the harvesting performance in these more realistic cases. Indeed, to realize the harvesting systems, there are still challenges. 
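Before turning to those challenges, the energy bookkeeping used above can be made concrete. Since the display expressions for the consumed energy (equation (24)) and for the mean powers P1 and P2 are not reproduced in the text, the sketch below assumes the standard definitions: the active energy is the time integral of V(t)·I(t) absorbed by each NC shunt between the arrival (t1) and departure (t2) of the wave package, and the mean net power divides the harvested energy, minus the energy spent by the piezo-lens, by the charging duration. All signals and numbers are illustrative.

```python
import numpy as np

def nc_consumed_energy(t, V_nc, I_nc, t1, t2):
    """Active energy absorbed by one NC shunt between t1 and t2 (assumed: E = integral of V*I dt)."""
    m = (t >= t1) & (t <= t2)
    return np.trapz(V_nc[m] * I_nc[m], t[m])

def mean_net_power(E_har, E_consumed, t_har):
    """Assumed definition: (harvested energy - energy consumed by the piezo-lens) / charging duration."""
    return (E_har - E_consumed) / t_har

# Illustrative check: a purely reactive shunt (V and I in quadrature) absorbs ~no net active energy
t = np.linspace(0.0, 5e-3, 5001)
V_nc = 0.5 * np.sin(2 * np.pi * 2000.0 * t)
I_nc = 1e-4 * np.cos(2 * np.pi * 2000.0 * t)
print(nc_consumed_energy(t, V_nc, I_nc, 1e-3, 4e-3))            # ~0 J
print(mean_net_power(E_har=1e-6, E_consumed=4e-8, t_har=3e-3))  # W, made-up numbers
```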
The NC circuits required in the piezo-lens need to be totally reactive to stiffen the structure, but the existing circuits that could realize negative capacitance all contain resistive parts which more or less will dissipate some energy [33,34], thus they are potentially not suitable to realize a piezo-lens. However, it is hopeful that approximate fully reactive NC circuits could be achieved to reach the required stiffening effect in the coming future by optimizing the existing circuits (see [33]) or using synthetic circuits as proposed in [35]. Since the NC circuits for the piezo-lens are not determined at present, giving a rigorous energy balance analysis which takes into account the dissipated power is impossible, but it is logical to predict that the dissipated power by the piezo-lens could be very low since the NC circuits need to be (approximately) fully reactive. Besides, in some applications, the energy balance is not a critical issue. For example, a potential application of the harvesting systems is in structural health monitoring in large-scale structures. Sensors are sometimes embedded in structures at different locations in those cases, the maintenance is very difficult or even impossible when traditional batteries are used to power the sensors. Thus, the harvesting systems can be used to power these sensors. In these applications, the piezo-lens can be placed on the surfaces of structures thus it can be powered by traditional batteries rather than harvesters. Conclusions The combination of a piezo-lens with a harvester to obtain enhanced energy from transient traveling waves is studied. The standard DC, parallel SSHI-based and series SSHI-based harvesters are used. The SSHI interface can promote the conversion of the power from mechanical to electric part but suppress the contrary, accordingly the SSHI-based harvesters are more efficient than the standard DC one for harvesting energy from transient waves. The piezo-lens can focus transient waves near a designed focal point, thus placing the harvesters at that point can enhance the harvested energy about 2.5-3 times and improve the harvesting efficiency about 3 times as compared with the cases without lens. This improvement of harvesting performance is obtainable when the realistic damping effect of the host structure and the forward voltages of diodes are taken into account.
7,658
2017-02-13T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Cilia are required for asymmetric nodal induction in the sea urchin embryo Left-right (LR) organ asymmetries are a common feature of metazoan animals. In many cases, laterality is established by a conserved asymmetric Nodal signaling cascade during embryogenesis. In most vertebrates, asymmetric nodal induction results from a cilia-driven leftward fluid flow at the left-right organizer (LRO), a ciliated epithelium present during gastrula/neurula stages. Conservation of LRO and flow beyond the vertebrates has not been reported yet. Here we study sea urchin embryos, which use nodal to establish larval LR asymmetry as well. Cilia were found in the archenteron of embryos undergoing gastrulation. Expression of foxj1 and dnah9 suggested that archenteron cilia were motile. Cilia were polarized to the posterior pole of cells, a prerequisite of directed flow. High-speed videography revealed rotating cilia in the archenteron slightly before asymmetric nodal induction. Removal of cilia through brief high salt treatments resulted in aberrant patterns of nodal expression. Our data demonstrate that cilia - like in vertebrates - are required for asymmetric nodal induction in sea urchin embryos. Based on these results we argue that the anterior archenteron represents a bona fide LRO and propose that cilia-based symmetry breakage is a synapomorphy of the deuterostomes. Background Vertebrates possess pronounced visceral asymmetries along their left-right (LR) body axis, although they belong to the large phylogenetic group of the Bilateria, which refers to their bilaterally symmetric outer appearance [1]. Most organs are positioned in a characteristic way in the thoracic and abdominal cavities. In all vertebrate species examined so far, these asymmetries are under the control of the Nodal signaling cascade, which is only activated in the left lateral plate mesoderm before the first appearance of anatomical asymmetries [2]. The secreted transforming growth factor beta (TGFβ) Nodal binds to its receptor which results in the activation of its own transcription and that of its negative feedback inhibitor lefty (left-right determination factor), another secreted TGFβ superfamily member. Lefty antagonizes Nodal signaling, providing its temporal and spatial control. Additionally, the paired-like homeodomain transcription factor pitx2 is induced downstream of Nodal and mediates, through less well-known target gene activation, the setup of asymmetric organ morphogenesis. The event activating this highly conserved developmental program is referred to as symmetry breakage. Even though variations of the common theme may exist, an ancestral mode of vertebrate symmetry breaking has emerged over the past few years: at the heart of this mechanism acts an extracellular leftward fluid flow, generated by a transient ciliated epithelium, the so-called left-right organizer (LRO) [3][4][5]. The vertebrate LRO (known as Kupffer's vesicle in fish, gastrocoel roof plate (GRP) in amphibians, and posterior notochord or ventral node in mammals) represents a field of mono-ciliated cells at the posterior end of the forming notochord, flanked by endodermal archenteron cells [6][7][8]. This unique tissue, which develops during early neurula stages, consists of superficially located mesendodermal cells which are transiently placed in the primitive gut or archenteron, where they function as the LRO. The cells of the LRO subsequently leave the epithelium to ingress into deeper mesodermal structures [7,8]. 
The polarized attachment of cilia at the posterior cell surface, together with their clockwise rotational movement, create an asymmetric stimulus by the leftward acceleration of extracellular fluids, i.e. a leftward flow [9]. This setup has been functionally described in mammals (mouse, rabbit), amphibians (Xenopus) and teleost fish (medaka, zebrafish), and homologous tissues have been identified in salamanders (axolotl) and basal bony fish (white sturgeon). Interference with ciliary length, motility, polarization or flow function in general resulted in defects of the LR axis [10][11][12][13][14][15]. As a result of leftward flow, the Nodal inhibitor dand5 becomes down-regulated on the left margin of the LRO, where it is co-expressed on both sides with nodal itself. Nodal thus becomes liberated on the left side to induce the asymmetric signaling cascade in the left lateral plate mesoderm [16][17][18]. dand5 repression is induced through a flow-dependent intracellular calcium signal, which is mediated through the calcium channel pkd2 (polycystic kidney disease 2). Although ubiquitously expressed during these stages, pkd2 inhibition causes LR defects and prevents a unilateral calcium signal in mouse and fish LROs [5,[19][20][21][22]. Outside of the vertebrate lineage, LR asymmetries are common as well. For many deuterostome lineages, unilateral nodal expression has been described, which seems to be the common mediator of LR asymmetries in metazoan animals [3,[23][24][25]. Tunicates, the sister group of the vertebrates, as well as the cephalochordates, the most basal group of chordates, both express nodal asymmetrically on the left side [26,27]. The latter seem to show an ancient state of this set-up, with many LRO targets activated similarly to vertebrates, such as nodal, lefty, pitx2 and dand5 [26,[28][29][30][31]. Accordingly, interfering with Nodal activity in the cephalochordate amphioxus also resulted in LR defects [32]. The existence of an LRO in amphioxus has been predicted, but not yet analyzed or functionally tested [3,11]. Within the deuterostomes, the Ambulacraria, a chordate sister group which comprises the echinoderms, show unilateral nodal activation during late gastrula stages as well [33]. Although adult echinoderms took the path of reestablishing a radially pentameric body plan during evolution, their early embryo displays the typical bilaterally symmetrical embryonic development, which results in a pronounced LR asymmetry before metamorphosis [34]. As in other deuterostomes, nodal is the first asymmetrically activated gene, and is first found on one side in the developing archenteron tip [33,35]. The functional requirement and a complex downstream gene regulatory network have been elucidated in detail in sea urchin embryos [36]. This asymmetric archenteron domain instructs a second symmetrical ectodermal nodal domain to switch towards this side as well [37]. This expression is thought to be on the right, as deduced from a ventral mouth opening [25]. We have previously argued that this difference can be resolved to a common evolutionary origin, if the ventral side of echinoderms is homologized with the dorsal side of chordates [11]. This reasoning is based on the expression of dorsal organizer genes in the oral ectoderm of the sea urchin larva, suggesting that the mouth -counterintuitively -opens on the dorsal side. Such an evolutionary mouth repositioning can be easily envisaged as a transitional process from echinoderm ancestors to vertebrates [38]. 
Indeed, oral organizer identity has been recently shown to be functionally conserved in the sea urchin P. lividus [39]. Thus, in this scenario, left-asymmetric nodal expression is a synapomorphy of the deuterostomes [11]. One major question, however, has remained unanswered: how does symmetry breakage upstream of asymmetric nodal induction occur in echinoderms? Do sea urchin embryos possess a LRO or an evolutionary functional precursor that induces asymmetric nodal expression? Do archenteron cells possess cilia and, if so, are these required to induce asymmetric nodal expression? Using descriptive and functional approaches we show that (1) archenteron cells in the sea urchin larva harbor monocilia; (2) archenteron cilia are polarized and motile; and (3) cilia are required for asymmetric nodal induction. Sea urchin archenteron cells harbor polarized monocilia The defining feature of vertebrate LROs is the presence of polarized monocilia. As an entry point into studying sea urchin symmetry breakage, we investigated the presence of cilia on staged gastrula embryos by performing immunofluorescence (IF) with a well-characterized anti-acetylated α-tubulin antibody. In addition to the previously described long ectodermal cilia, optical sections of gastrula stage Paracentrotus lividus embryos revealed a population of shorter cilia within the developing archenteron (Fig. 1a-b"). Higher magnification showed that most cilia ranged from 4-6 μm in length, with the most anterior ones being longer, measuring up to 10 μm (Fig. 1c). The same basic characteristics of mesendodermal archenteron cilia were also found in a second sea urchin species, Strongylocentrotus pallidus, demonstrating the conserved presence of cilia in the primitive gut of sea urchin larvae. One slight variation was observed, namely that very early gastrula embryos of Strongylocentrotus apparently lacked archenteron cilia (Fig. 1d), which were, however, present at mid to late gastrula stages (Fig. 1e). [Figure 1 legend: Polarized cilia at the sea urchin archenteron. a-e P. lividus (a-c) and S. pallidus embryos (d-e') were analyzed by IF for the presence of cilia at the archenteron. Optical sections showed ectodermal (a'; a") and archenteron cilia (b'-c) at mid gastrula stages. Late (e; e') but not early S. pallidus gastrula stage embryos revealed cilia in the archenteron and at the archenteron tip (d, d'). Cilia were stained with an antibody against acetylated-α-tubulin (red, a'-c) or anti-α-tubulin (green, d-e'), nuclei were stained with DAPI (d-e'), and cell boundaries were visualized by phalloidin-green (c) or phalloidin-red (d-e'). f, g SEM analysis of P. lividus ectodermal and archenteron cilia. Fractured embryos allowed the visualization of monocilia on archenteron cells (g). Cilia are highlighted by an arrowhead (f-h'). Posterior polarization of cilia on cells which invaginated into the archenteron. Cilia are colored in yellow and individual cells alternating in green and purple. The ciliary base is marked by a red semicircle, the center of the cell is indicated by a yellow dot. Schematic drawings adapted from Blum et al. 2014 [3].] In P. lividus, cilia were present already at earlier gastrula stages (Fig. 1b, and data not shown). Scanning electron microscopy (SEM) of P. lividus was employed to further characterize archenteron cilia and to test a potential polarization of cilia along the animal-vegetal, and thus along the AP axis of the embryo (Fig. 1f-h).
Embryos, which were broken perpendicularly to the animal-vegetal axis and thus allowed a high-power magnification view inside the archenteron, revealed monocilia of about 4 μm in length (Fig. 1f, g). Archenteron cilia were clearly present already in mid gastrula stage embryos (Fig. 1f). Importantly, using the cilium-insertion point and the cell center as reference points, a clear posterior polarization of cilia was obvious already when cells were orienting towards the inside of the archenteron in animal/anterior direction (Fig. 1h). In summary, our descriptive analysis of the archenteron showed that cells harbored monocilia at a time point just prior to the asymmetric induction of nodal, suggesting a functional role homologous to that of vertebrate LROs. The sea urchin gastrula embryo expresses marker genes for motile cilia Next we analyzed the expression of marker genes indicative of motile cilia. To that end, sea urchin homologs of two marker genes for motile cilia, dynein axonemal heavy chain 9 (dnah9) and forkhead box protein J1 (foxj1), were cloned by RT-PCR and expression patterns during embryonic development were assessed by whole-mount in situ hybridization. In the ectoderm, dnah9 was broadly expressed with intense signals in the apical tuft region. Localized mRNA expression was also found in the vegetal part of the gastrula mesendodermal tissue, followed by expression in the archenteron (Fig. 2a, b). Sense control probes were negative at all stages examined and for all genes analyzed in this study (Fig. 2c and data not shown). Analyses of foxj1 mRNA expression revealed a similar pattern in the ectoderm and strong staining in the area of the developing apical tuft (Fig. 2e-g). At early gastrula stages, localized expression was found in the vegetal plate region (Fig. 2e), while in late gastrula stages, a mesodermal expression domain started to appear at the anterior tip of the archenteron (Fig. 2g). These expression patterns were indicative of a population of motile mesendodermal cilia in the archenteron and reminiscent of vertebrate LROs. To investigate additional potentially conserved LRO genes, we cloned P. lividus Bicaudal C homolog 1 (bicc1) [40], which is required for cilia polarization at the vertebrate LRO, and pkd2, a calcium channel required for sensing of the flow [22]. Expression of bicc1 has been recently described in the sea urchin Hemicentrotus pulcherrimus, i.e. an unrelated genus. [Figure 2 legend: Motile cilia marker genes dnah9 and foxj1 are expressed throughout sea urchin gastrulation. Whole mount in situ hybridization of early (EG), mid (MG) and late (LG) gastrula stage P. lividus embryos, as well as prism stages (k') for mRNA expression of dnah9 (a-d), foxj1 (e-h) and pkd2 (i-l). Schematic representation of staining in mid-gastrula embryos is highlighted in drawings in (d, h and l). White arrowheads highlight vegetal blastopore and archenteron tip expression areas. Schematic drawings adapted from Blum et al. 2014 [3].] Our analysis in P. lividus confirmed the reported localization, i.e. strong expression of bicc1 mRNA at the vegetal pole of early to late gastrula stage embryos, directly at the site of invagination (Additional file 1: Figure S1 and [41]). P. lividus pkd2 mRNA (also termed suPC2) was expressed in a pattern reminiscent of bicc1 in early to late gastrula stages, namely in vegetal cells of the early gastrula embryo (Fig. 2i-l).
pkd2 mRNA transcription was further activated in the apical tuft cells and at the archenteron tip of late gastrula to early prism stage embryos ( Fig. 2i-k). Cilia in the archenteron are motile In order to directly assess the potential motility of archenteron cilia in live embryos, we analyzed early to midgastrula stage embryos by high-speed videography. To highlight moving objects, movies were processed using a temporal difference imaging method (cf. Material and Methods). Additional file 2: Movie 1 demonstrates that the posteriorly polarized cilia of invaginating archenteron cells (cf. Fig. 1g) were indeed highly motile. Next we analyzed mid to late gastrula stage embryos, focusing on the lumen of the central part of the elongated archenteron. Again, the monocilia detected by SEM analysis (Fig. 1h) were motile, displaying a rotating pattern (Additional file 3: Movie 2). Attempts to visualize a possible effect of cilia motility, i.e. whether or not this resulted in directed movement of extracellular fluids, failed due to technical reasons, as we were not able to introduce fluorescent micro beads into the archenteron (not shown). Despite this shortcoming, our results strongly suggest that the sea urchin embryo harbors a vertebrate-type LRO, i.e. an archenteron epithelium with posteriorly polarized rotating monocilia, which expresses a set of characteristic marker genes, including foxj1, dnah9, bicc1 and pkd2. Archenteron cilia are required for asymmetric nodal induction As an alternative to visualizing cilia-driven fluid flow, we chose to directly test the function of cilia in asymmetric nodal induction. In a first set of experiments we used the pharmacological inhibitor Ciliobrevin D, which inhibits the ATPase activity of axonemal and cytoplasmic dynein motor proteins [42]. Treatment of early gastrula stage embryos with 50 μM Ciliobrevin D efficiently inhibited ciliary motility, as judged by direct microscopic observation of embryonic swimming behavior, which completely ceased within minutes (data not shown). As this treatment efficiently prevented gastrulation movements as well, we were not able to analyze later nodal expression (data not shown). Short of a specific inhibitor of axonemal dynein function, we decided to assess the role of cilia for symmetry breakage by removing cilia from the embryo. To that end, a brief osmotic shock with high-salt (HS) seawater containing twice the normal molarity of NaCl (sodium chloride) was applied, a procedure previously reported to completely deciliate the ectodermal surface of the embryo [43]. Embryos between early gastrula and prism stage were treated with HS seawater for 60-90 s and returned to regular seawater until untreated control embryos reached late gastrula to pluteus stages. Embryos were fixed, assessed for developmental defects and/or processed for IF, SEM or in situ hybridization. First we analyzed whether this procedure was suited to remove archenteron cilia. Swimming behavior was instantly impaired, as described for Ciliobrevin above (not shown). Embryos were fixed 20 min after treatment and subjected to IF analysis of cilia or SEM analysis. While normal ciliation was seen in SEM photographs of untreated embryos, HS-treated specimens were devoid of cilia, both on the outer surface and at the proximal (posterior) end of the archenteron cavity ( Fig. 3a, b). To investigate whether cilia were removed along the entire archenteron, we analyzed optical sections of treated embryos using IF. 
Control embryos displayed normal ciliation along the extended archenteron. In contrast, about half of the HS-treated embryos lacked cilia altogether, with the remainder of specimens displaying a small number of very short cilia remnants (Fig. 3c, d; Additional file 1: Figure S1i). These experiments demonstrated that HS treatment was an efficient tool to remove archenteron cilia from the embryo. Because of the proven function of cilia in signal transduction in different species [44,45], we asked whether deciliation affected normal embryonic development. The protocol applied here has been previously used to remove ectodermal cilia at different stages of development, without any reports of developmental defects [46][47][48][49]. Stephens et al. (1977) even applied multiple rounds of deciliation (up to ten times) which resulted in phenotypically normal pluteus larvae [50]. In order to confirm that the HS-protocol did not interfere with normal embryogenesis in our experiments as well, >1,500 embryos were deciliated in 11 independent experiments at different time points during development, between late blastula and late gastrula stages. Treated specimens were fixed and assessed for developmental delays or phenotypic alterations at time points when untreated control embryos had reached a) late gastrula with fully elongated archenteron; or b) early pluteus stages. In both sets of experiments, no phenotypical difference was noted between control and HS-treated samples (Additional file 1: Figure S1e-h). Quantitative evaluation of experiments revealed no temporal delay in either of these groups, besides minor and random fluctuations that were sometimes observed (Additional file 1: Figure S1j). Together with the previously published deciliation approaches, these control experiments demonstrate that the protocol applied here efficiently removed archenteron cilia without impairment of normal development. In a final series of experiments, we tested whether deciliation impacted on nodal asymmetry in the archenteron and ectoderm, which we hypothesized based on our descriptive analysis of archenteron cilia. HS-treated embryos indeed revealed grossly altered patterns of nodal expression at late gastrula stages, when control embryos displayed unilateral patterns in both the archenteron and the ectoderm (Fig. 3e, f). Surprisingly, HS treatment at early to mid-gastrula stages, i.e. just before asymmetric induction of the nodal cascade, resulted in an expanded, bilateral ectodermal expression of nodal in the vast majority of treated specimens. The same result was obtained with pitx2 (Fig. 3g, h), a direct target of Nodal [51]. [Figure 3 legend: SEM (a, b) and IF (c, d) analyses of cilia in control untreated embryos (a, c) and specimens exposed to a 60-90 s osmotic shock (b, d). Note that cilia were almost completely absent following high salt treatment. Cilia in (c) and (d) were stained with an antibody against acetylated-α-tubulin (purple), nuclei with DAPI, and cell boundaries visualized by phalloidin-green. e, g Unilateral induction of asymmetric nodal cascade genes in control embryos. f, h Bilateral ectodermal nodal and pitx2 expression following deciliation of early to mid-gastrula stage embryos. Black arrowheads highlight expression in the archenteron and ectoderm; white arrowheads indicate lack of expression. i Quantification of expression patterns from (e-h).
Unilateral pitx2 expression in coelomic pouches of pluteus stage control specimen (j) and bilateral expression in deciliated embryos (k). Bilateral expression of pitx2 after deciliation is independent of MAPK/p38 inhibition through SB203580 (l-m). o Quantification of expression patterns. Posterior/vegetal is to the top in (a, b) and to the bottom in (c-g).] Bilateral activation of the Nodal cascade was especially evident when analyzed in the developing coelomic pouches of early pluteus stage embryos, using pitx2 as a late LR marker gene (Fig. 3j, k). The archenteron domain of nodal in late gastrula stage embryos, however, was mostly undetectable (Fig. 3e, f). Next we tested whether high-salt treatments resulted in a stress-induced, MAPK/p38-mediated ectopic activation of nodal [52]. If this were the case, inhibition of MAPK/p38 should rescue the deciliation-induced aberrant activation of the Nodal cascade. To test the efficiency of the drug, blastula-stage embryos were treated with the MAPK/p38 pathway-specific inhibitor SB203580 [52], which resulted in dorso-ventral axis defects (Additional file 1: Figure S1k-n). When embryos were treated after deciliation at mid to late gastrula stages, development was not impaired but specimens still exhibited bilateral pitx2 expression in the coelomic pouches at early pluteus stage (Fig. 3l-o). These experiments thus demonstrated that the expanded expression pattern of LR marker genes after deciliation was not due to MAPK/p38-mediated over-activation of Nodal signaling during gastrulation. In order to determine whether there was a sensitive time period, i.e. whether cilia were required only during certain developmental stages, we performed a time course of deciliation and treated specimens with high salt at defined points of development: early gastrula, mid gastrula, late gastrula or prism stage. Based on cilia-driven symmetry breakage in the vertebrates, the time window was expected to be rather narrow. Figure 3i demonstrates that deciliation at very early to mid-gastrula stages, before full extension of the archenteron and before asymmetric nodal expression, caused aberrant LR development. Embryos treated after this point, during late gastrula to early prism stages, showed normal unilateral expression of nodal, i.e. the sensitive time window closes during late gastrulation. In summary, our work demonstrates that the sea urchin archenteron harbors polarized and motile monocilia, which are required for LR axis determination during gastrulation. Discussion Theoretical considerations and deductive logic have previously led us to propose that sea urchin embryos possess an ancestral LRO, homologous to that of the vertebrates [11]. Here we demonstrate in two species that sea urchin gastrula embryos indeed display a mesendodermal monociliated archenteron, which is reminiscent of the vertebrate LRO (Fig. 4a). Importantly, cilia were required for asymmetric nodal induction, arguing for a conserved cilia-based mechanism for LR symmetry breakage in the deuterostome lineage. Common features between the sea urchin archenteron and vertebrate LROs include polarized and motile monocilia as well as expression of dnah9 and foxj1, two genes which are mandatory for generation of leftward flow. Although we were not able to visualize a fluid flow within the archenteron directly, we propose that cilia-dependent symmetry breakage is a conserved feature between sea urchins and the vertebrates.
In order to qualify as the nodal inducing event, cilia-driven symmetry breakage should occur at the right time, before asymmetric nodal induction, and in the right place, i.e. at the archenteron. Besides the presence of motile cilia itself, the expression of vertebrate LRO components dnah9, foxj1, and pkd2 in this very region are in perfect agreement with this proposal (Fig. 2). Not every LRO feature seems to be conserved between sea urchins and the vertebrates, though. While flowperceiving (and nodal expressing) cells in vertebrates are located at the posterior pole of the embryo, close to the blastopore or proximal end of the gut, these cells reside at the anterior tip of the archenteron in the sea urchin late gastrula embryo, at its distal end, as deduced from the nodal expression domain (Fig. 3e and [37,53]). Interestingly, an intermediate scenario is encountered in amphioxus. Here, like in amphibians, a bilateral nodal expression domain was found, but in the anterior part of the archenteron like in sea urchins. The expression on the right side of amphioxus later disappears, possibly due to a leftward cilia-driven flow [3,26] (Fig. 4b). This reasoning is further supported by the recent description of a dand5 homolog, which was found co-expressed with nodal but down-regulated on the left side, resembling the situation at the vertebrate LRO [28]. Interestingly, a dand5 orthologue was not reported for P. lividus, nor was it found in S. purpuratus [53]. Considering just two features, localization of the supposed LRO and presence or absence of dand5, sea urchins, cephalochordates and vertebrates could represent three distinct states, indicative of evolutionary transitions: while sea urchins and amphioxus share an anterior (distal) position of the LRO within the archenteron, amphioxus and vertebrates both possess an orthologue of the Nodal antagonist dand5, but only the vertebrates localize their LRO close to the blastopore. What kind of functional adaptations underlie these variations remains to be investigated. There are additional LR components worth being considered in the evolutionary context, namely pkd2 and bicc1, both of which were expressed in early sea urchin gastrula mesendodermal tissues during invagination ( Fig. 2i-k, Additional file 1: Figure S1 and [41]). bicc1 in mouse and frog functions to polarize LR cilia [40]. In that sense the expression in sea urchins correlated with that in frog and mouse: it is precisely the bicc1 expressing cells which harbor cilia polarized to the posterior pole in mid gastrula stage embryos in the sea urchin, and which we here report to be motile (cf. Fig. 1h and Additional file 1: Figure S1 and Additional file 2: Movie 1). Expression differed in another aspect, however, as bicc1 mRNA was not colocalized with nodal expression at the archenteron tip, different from the vertebrates, where bicc1 is co-expressed with nodal in the LRO-flanking cells -and might be involved in sensing of flow [40]. It would be interesting to know where bicc1 is expressed in amphioxus, around the blastopore or in the anterior mesoderm. pkd2 in the sea urchin embryo was expressed in the vegetal mesoderm and endoderm at early to mid-gastrula stages and at the tip of the archenteron. This expression is remarkable, as pkd2 has not been reported to be transcriptionally up-regulated in or at any vertebrate LRO. The encoded protein, the calcium channel Polycystin-2, however, is present in mouse and fish LROs, where it is involved in flow sensing [19,20,22]. 
Functional conservation of symmetry breakage is further supported by the expression of pkd2 in Xenopus gastrula embryos, where it is highly expressed in the mesendodermal ring around the blastopore, reminiscent of the vegetal mesoderm expression in sea urchins (our unpublished observations). Again, expression in amphioxus has not been reported as yet but should be highly informative. Taken together, the presence of cilia and the expression of conserved LR genes strongly argue for a conserved role of motile archenteron cilia in LR symmetry breakage. The most convincing result, however, is presented by our analysis of cilia function, where deciliation at a time point just prior to asymmetric induction of nodal in the archenteron resulted in altered expression of the Nodal cascade. Molecular asymmetries were lost, with a differential response of nodal and pitx2. While archenteron nodal was mostly absent or below the detection level of the experiment, the ectodermal domain and later on pitx2 in the coelomic pouches became expanded and bilateral (Fig. 3). These observations cannot be attributed to the absence of archenteron Nodal. When Nodal was [3] experimentally manipulated in the archenteron to be absent or expressed in a bilateral manner, the ectodermal domain remained asymmetrical, although in a randomized manner, being activated either on the left or on the right side [33,37]. The ectoderm is still competent to express nodal on both sides during gastrulation, as shown by Activin treatment of early gastrula embryos [33]. The maintenance of unilateral ectodermal nodal expression might thus be explained by an additional ectodermal cilia function, which was ablated in high salt-treated embryos as well, but we can only speculate on this issue. Our results would be explained if external cilia had a function in restricting Nodal cascade activation unilaterally in the ectoderm, and the archenteron cilia provided a biasing cue, as they do during vertebrate LR axis determination [2]. While this manuscript was under review, two papers were published that dealt with symmetry breaking in sea urchin embryos. In agreement with our conclusions, Takemoto et al. deduced that cilia were required for symmetry breakage in sea urchins. Application of an inhibitor of motile cilia to early blastula stage embryos resulted in a loss of nodal asymmetry, archenteron cilia, however, were not analyzed in this study [54]. Warner et al. (2016) injected 1-cell stage embryos with a kinesin-2 antibody, which removed all cilia from the time point of injection onwards. Nodal was strongly reduced in the four embryos analyzed, while asymmetric SoxE expression in the coelomic pouches was retained in the six embryos included in this one experiment. The authors concluded that cilia were not required for symmetry breakage but rather for hedgehog signaling-mediated nodal maintenance [55]. 
We strongly disagree with this conclusion for the following reasons: (1) Lepage and colleagues have shown that asymmetric SoxE expression occurs independent of archenteron Nodal; (2) the removal of all cilia from the earliest developmental stages onwards potentially impacts on many more signaling pathways, such as for example hedgehog, as shown by Warner et al (2016) [55], and thus most probably impact on stages before archenteron cilia emerge; (3) attenuated nodal expression by permanent cilia ablation, as described by Warner et al., might be the result of a) loss of archenteron cilia function, which causes bilateral expanded nodal expression as shown in our present work, followed in time by b) loss of hedgehog-mediated nodal maintenance, as described by Warner et al. (2016). We like to extend our evolutionary considerations to the precursor tissue of the LRO in the sea urchin. Keller and colleagues previously revitalized the concept of the superficial mesoderm (SM) in the chordate lineage. Best characterized in amphibians, the SM of the pre-gastrula embryo gives rise to the archenteron LRO during gastrula/neurula stages to end up in the axial and paraxial mesoderm during later embryogenesis [8]. Besides the birds, which lack an LRO, the SM has been identified in most chordate lineages [8,11,56] (Fig. 4a-d). In the sea urchin mesenchyme blastula, the nonskeletogenic mesoderm, together with the endoderm, locates superficially at the vegetal pole as well, reminiscent of the situation in amphibians [8,57]. Of the known SM marker genes in the frog, foxj1 [58][59][60], nodal3 [58] and wnt11b [61], only foxj1 has been analyzed in sea urchins (Fig. 2e-g). Its mRNA localization in the early gastrula vegetal cells indeed argues for a conserved blastula stage SM (Fig. 2h). Furthermore, like in vertebrate SM tissues, these cells show accumulation of nuclear ß-catenin at the very time point when the SM is specified [62]. This is of relevance, as foxj1 expression has been shown to be directly induced by canonical Wnt signaling. Conclusions We conclude that the early sea urchin embryo represents an ancestral deuterostome state with a vegetal SM at blastula stages, which transforms into an archenteron LRO, where motile cilia are necessary to break the bilateral symmetry. Cilia-driven symmetry breakage thus should represent a synapomorphy of the deuterostome lineage. Animals and embryo manipulation Adult P. lividus were collected in Pula/Croatia in early summer during the natural breeding season and reared short-term in the laboratory. Adult S. pallidus were collected at the MSU White Sea Biological Station, Kandalaksha Bay/Russia during the summer. Embryos were obtained by artificial fertilization after injecting adult specimen with 1 ml of 0.5 molar KCl into the oral field to obtain sperm and oocytes and raised in artificial sea water (ASW) containing antibiotics at 18-20°C or 4°C for P. lividus and S. pallidus, respectively. High-salt and Ciliobrevin D treatment Early or mid-gastrula stage embryos were transferred to 3 % ASW, which contained high concentrations of NaCl (1 M) but regular seawater molarities of all other ions, and kept for 60-90 s before washing and transferring to regular ASW. Efficiency of deciliation was controlled in the microscope by absence of embryonic swimming behavior. Subsequently, embryos were incubated until the desired stages for treatment or analysis were reached. 
For dynein motor inhibition, embryos were transferred to ASW containing 50 μM Ciliobrevin D at early gastrula stage and incubated until control untreated specimens reached late gastrula stage. For SB203580 treatment (ENZO Life Sciences), embryos were transferred in sterile sea water containing either 20 μM or 5 μM of SB203580 or DMSO as a control in the same dilution and grown until pluteus stage. Cloning of constructs Partial coding sequences of plbicc1, plfoxj1, pldnah9, plpkd2 and plnodal were cloned in the pGEM-T Easy vector system and verified by sequencing. RNA in situ hybridization Embryos were fixed in paraformaldehyde (PFA) 4 % PFA for 30 min, stored in 100 % Ethanol at -20°C and processed following standard protocols. Digoxigenin-labeled RNA (Roche) probes were prepared with SP6 or T7 RNA polymerase on linearized pGEM-T Easy templates (Promega). In situ hybridization was modified from [63]. Initial rehydration steps were performed with 1 % BSA in PBS to avoid agglutination of embryos. All steps were performed either in glass wells or custom made plastic tubes with meshwork. Scanning electron microscopy Wild-type or HS-treated embryos were fixed in 4 % PFA containing 0.2 % Glutaraldehyde in PBS. Preparation of embryos for SEM followed standard protocols. Prior to sputter-coating with gold, some embryos were manually broken using a pipette tip, in order to visualize the archenteron surface. High-speed videography of cilia motility in gastrula stage embryos To observe ciliary motility directly, gastrula stage embryos were either transferred into a solution containing 0.5-1.0 % methylcellulose (Sigma) in ASW in order to slow down beating of the cilia in the forming archenteron cavity (for early to mid-gastrula stages), or positioned within a nitex screen (SEFAR, Germany) to focus on the central archenteron cavity (mid-late gastrula stages). Imaging was performed using a Hamamatsu ORCA-Flash 4.0 Digital CMOS camera mounted on a Zeiss Imager.M1 microscope equipped with a Plan-Apochromat 100x/1.4 oil objective. Acquisition of frames was performed using the Zeiss ZEN software. Fiji [64] was used for temporal difference imaging, i.e. for visualization of movements against a 'static' background. Each frame of a time-series was subtracted from its consecutive frame (t n -t n+1 ), resulting in different pixel grey values with black indicating no change in between two frames. Photo-documentation and picture analysis IF pictures were taken on a Zeiss Observer. Z1/LSM 700 equipped with a 63x objective (C-Apochromat 63x/1.2 W Corr). Photographs of embryos after in situ hybridization were taken on an Axioskop 2 mot plus (Zeiss, Germany) and processed in Adobe Photoshop. Figures were assembled using Adobe Illustrator. Additional files Additional file 1: Figure S1. mRNA expression of bicc1 (a-d) and overall normal development of Paracentrotus lividus embryos upon high salt induced deciliation (e-j) or upon early MAPK/p38 inhibition (k-n). (PDF 877 kb) Additional file 2: Movie 1. Archenteron cilia are motile. Real-time movie of early to mid gastrula stage embryo in lateral-vegetal view reveals fast movement of polarized cilia on laterally invaginating cells at the posterior aspect of the archenteron (cf. Fig. 1h). Bright-field movie (a) and high-magnification (white box) of the inner archenteron wall (a'). (b) Temporal difference imaging rendered version of the same movie, which highlights pixel grey value differences between two consecutive frames. Movie was acquired at 55fps (cf. 
methods for details). (AVI 8410 kb) Additional file 3: Movie 2. Monocilia inside the archenteron of gastrula stage embryos rotate Real-time movie of mid to late gastrula stage embryo in vegetal view, focusing on the central archenteron region, reveals fast rotating monocilia inside the archenteron (cf. Fig. 1g). (a) bright-field movie. (b) Temporal difference imaging rendered version of the same movie, which highlights pixel grey value differences between two consecutive frames. (a' , b') High-magnification area as outlined in (a, b). Movie was acquired at 35fps. Please note the rotating cilia on the inner epithelium of this embryo. (AVI 7326 kb)
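The temporal difference imaging step described in the Methods (subtracting each frame from its consecutive frame so that stationary background stays dark and moving cilia appear bright) can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors' Fiji workflow; the file names and the use of the tifffile package are assumptions introduced here.

```python
# Minimal sketch of temporal difference imaging (frame-by-frame subtraction),
# analogous to the Fiji procedure described in the Methods. File names and the
# tifffile dependency are illustrative assumptions, not from the original study.
import numpy as np
import tifffile

# Load a high-speed movie as a (frames, height, width) stack.
stack = tifffile.imread("archenteron_movie.tif").astype(np.int32)

# Subtract each frame from its consecutive frame (t_n - t_{n+1}); the absolute
# value makes the static background black and beating cilia bright.
diff = np.abs(stack[:-1] - stack[1:]).astype(np.uint16)

# Save the rendered difference movie for inspection.
tifffile.imwrite("archenteron_movie_tdiff.tif", diff)
```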
8,093.4
2016-08-23T00:00:00.000
[ "Biology" ]
Feasibility and Outcome of PSMA-PET-Based Dose-Escalated Salvage Radiotherapy Versus Conventional Salvage Radiotherapy for Patients With Recurrent Prostate Cancer Introduction Prostate-specific membrane antigen-positron emission tomography-(PSMA-PET) imaging facilitates dose-escalated salvage radiotherapy (DE-SRT) with simultaneous-integrated boost (SIB) for PET-positive lesions in patients with prostate cancer (PC). Therefore, we aimed to compare toxicity rates of DE-SRT with SIB to conventional SRT (C-SRT) without SIB and to report outcome. Materials and Methods We evaluated 199 patients who were treated with SRT between June 2014 and June 2020. 101 patients received DE-SRT with SIB for PET-positive local recurrence and/or PET-positive lymph nodes. 98 patients were treated with C-SRT to the prostate bed +/− elective pelvic lymphatic pathways without SIB. All patients received PSMA-PET imaging prior to DE-SRT ([68Ga]PSMA-11: 45.5%; [18F]-labeled PSMA: 54.5%). Toxicity rates for early (<6 months) and late (>6 months) gastrointestinal (GI) toxicities rectal bleeding, proctitis, stool incontinence, and genitourinary (GU) toxicities hematuria, cystitis, urine incontinence, urinary obstruction, and erectile dysfunction were assessed. Further, we analyzed the outcome with disease-free survival (DFS) and prostate-specific antigen (PSA) response. Results The overall toxicity rates for early GI (C-SRT: 2.1%, DE-SRT: 1.0%) and late GI (C-SRT: 1.4%, DE-SRT: 5.3%) toxicities ≥ grade 2 were similar. Early GU (C-SRT: 2.1%, DE-SRT: 3.0%) and late GU (C-SRT: 11.0%, DE-SRT: 14.7%) toxicities ≥ grade 2 were comparable, as well. Early and late toxicity rates did not differ significantly between DE-SRT versus C-SRT in all subcategories (p>0.05). PSA response (PSA ≤0.2 ng/ml) in the overall group of patients with DE-SRT was 75.0% and 86.4% at first and last follow-up, respectively. Conclusion DE-SRT showed no significantly increased toxicity rates compared with C-SRT and thus is feasible. The outcome of DE-SRT showed good results. Therefore, DE-SRT with a PSMA-PET-based SIB can be considered for the personalized treatment in patients with recurrent PC. INTRODUCTION Salvage radiotherapy (SRT) is an integral part of prostate cancer (PC) treatment. Approximately one third to one half of the patients undergoing radical prostatectomy (RP) will develop a biochemical relapse (1). Recently, three randomized controlled trials evaluated observation with SRT versus adjuvant RT (2)(3)(4). The data suggest that observation with SRT can be considered as the standard treatment option for most patients after RP. However, especially for patients with high-risk features adjuvant RT should be discussed as well. With the introduction of the prostate-specific membrane antigen-positron emission tomography (PSMA-PET) imaging, it quickly became a valid diagnostic tool for patients with PC relapse. PSMA tracers allow for detection rates of 58% at prostate-specific antigen (PSA) levels as low as 0.2 to 1.0 ng/ml for [68Ga]-labeled PSMA, increasing with higher PSA values (5). Whereas in the past, the radiation oncologist had to treat the prostate bed (PB) and/or the elective pelvic lymph nodes (ePLNs) in cases of SRT mostly without an imaging correlate and based on statistical probabilities, today, RT of the tumor volume visualized by PSMA-PET is possible. The precise imaging allows for treatment of the macroscopic disease [local recurrence or pelvic lymph nodes (LNs)] with higher doses than the elective PB or ePLNs. 
With modern intensity-modulated RT (IMRT) a simultaneous-integrated boost (SIB) is possible, without prolonging the total treatment time. However, it remains unknown, if side effects of PSMA-PETbased dose-escalated SRT (DE-SRT) with SIB are increased compared with conventional SRT (C-SRT) without SIB. Therefore, this study aims to compare toxicity of DE-SRT versus C-SRT. Further, we report the outcome of patients receiving PSMA-PET-based DE-SRT. Patients We screened 256 patients who were treated between June 2014 and June 2020 at the University Hospital of the Technical University of Munich (TUM). We included patients with relapse after RP who received either DE-SRT with SIB for PET-positive local recurrence or LNs as well as C-SRT without SIB. Patients had a post-RP PSA nadir of <0.1 ng/ml. We excluded patients due to distant metastases or 3-dimensional RT, as well as the use of Choline-PET instead of PSMA-PET or sequential boost techniques. Further, we excluded patients if they showed PET-positive lesions, but no dose escalation was performed. In line with the recent guidelines (6,7) and to ensure comparability, we excluded patients with doses of EQD2 (1.5 Gy) < 66 Gy to the PB. Patients without follow-up were excluded as well. Analysis was conducted retrospectively and was part of the SIMBA (Simultaneous-Integrated Boost in Salvage Radiotherapy for Patients With Recurrent Prostate Cancer) study. The institutional review board of the Technical University of Munich (TUM) approved the study (No. 564/19-S). PSMA-PET Imaging Before DE-SRT, each patient received PET imaging with [68Ga] PSMA-11 (8) (11)). PET acquisition was performed according to the joint EANM and SNMMI guidelines (12). Imaging was acquired in conjunction with either a diagnostic computed tomography (CT) or magnetic resonance imaging (MRI). Intravenous and oral contrast agents were used if the patient had no contraindications both for PET/CT and PET/MRI. When possible, furosemide 20 mg was given to reduce tracer collection in the urinary tract system. One specialist in nuclear medicine and one radiologist or a dual boarded nuclear medicine physician/radiologist interpreted the scans. Focal tracer uptake higher than the surrounding background and not associated with physiologic uptake was considered as suspect. Radiotherapy RT was performed with intensity-modulated RT (IMRT) as volumetric arc therapy (VMAT) or helical IMRT. Planning CT and RT were performed with a reproducible comfortably filled bladder and empty rectum. We performed image-guided RT (IGRT) with daily online imaging. Target delineation was conducted using the RTOG (13) or EORTC (14) guideline. Planning target volume (PTV) of the SIBs were generated with an additional margin of 5 to 10 mm to the gross tumor volume (GTV). Indication for additive androgen deprivation therapy was discussed in a multidisciplinary tumor board and recommended thereafter to the patient. When organ at risk constraints allowed, we used the following dose concept: Overall, the PB was irradiated with a total of 68 Gy in 2 Gy single doses (34 fractions). The ePLNs were treated with 50.4 Gy in 1.8 Gy single doses (28 fractions). When patients received RT to the PB and ePLNs we treated the PB for 28 fractions up to 56 Gy and the ePLNs up to 50.4 Gy continuing with the PB only up to the total dose of 68 Gy. In the DE-SRT group, we treated the patients with an additional SIB to the PET-positive areas (local recurrence and/or LNs). 
Then the PB was irradiated with 68 Gy in 2 Gy single doses (34 fractions) and a SIB to the local recurrence with 76.5 Gy in 2.25 Gy doses (34 fractions). ePLNs were treated with 50.4 Gy in 1.8 Gy doses (28 fractions) and a SIB to PET positive areas with 58.8 Gy in 2.1 Gy doses (28 fractions) or 61.6 Gy in 2.2 Gy doses (28 fractions). When patients received RT to the PB and ePLNs with SIB we treated the PB and the ePLNs for 28 fractions continuing with the PB only for a total of 34 fractions. However, changes to the total doses of PB, ePLNs, and SIBs were possible and at the discretion of the treating radiation oncologist. Toxicity Toxicity of SRT was assessed using the Common Terminology Criteria for Adverse Events (CTCAE) version 5 (15). Follow-up was conducted according to our institutional protocol. First follow-up was performed 4 to 6 weeks after termination of RT, thereafter time intervals increased to 3 and 6 months, before continuing with yearly visits. Outpatient urologic aftercare including PSA tests were recommended every 3 months for the first 2 years, every 6 months for the following 2 years continuing with annual appointments. Side effects before 6 months were classified as early/acute toxicity, whereas late/chronic toxicity was defined as side effects after 6 months. Only newly occurred or worsened side effects were defined as related to RT. Outcome We defined PSA response after SRT as a PSA value below or equal 0.2 ng/ml. Disease-free survival (DFS) was defined as either PSA progression (PSA nadir + 0.2 ng/ml and one confirmation value), local relapse, occurrence of metastasis or change/ initiation of ADT. Statistics To compare baseline characteristics and toxicity in both groups we used a Pearson's chi-square test or an independent-samples median test. Patients without follow-up data were excluded from the evaluation of the respective toxicity endpoint. Toxicity rates were compared by Pearson's chi-square test. For the analysis of DFS, we used Cox regression analysis adjusted for the use of additive ADT. The median PSA before RT was significantly different. To ensure comparability, we only included patients in the outcome analysis whose PSA levels met the common definition of a relapse of >0.2 ng/ml (16) (n=148). Median time between ADT and last follow-up was 7 months (range: 0-51 months). Since ADT influences the PSA response, we excluded patients with admission of ADT in follow-up after the termination of additive ADT from evaluation of the PSA response. To compare doses with different fractionation schemes, we used the equivalent dose in 2 Gy fractions with an alpha/beta ratio of 1.5 Gy (EQD2, 1.5 Gy). Wherever possible, we report the EQD2 (1.5 Gy). All statistical analyses were performed with SPSS version 21 (IBM, Armonk, USA). A p-value <0.05 was considered as statistically significant. RESULTS After screening, we evaluated 199 patients with a median age of 71.0 years (range, 49.0-82.0 years). Median follow-up was 13.6 months (range, 0.4-70.0 months). Complete patient characteristics are shown in Table 1. Patients were treated between 06/2014 und 06/2020 with the median doses shown in Table 2. Toxicity Baseline toxicity rates are shown in Table 3. No significant differences were seen in the pre-RT baseline toxicity. The overall rate of early gastrointestinal toxicity ≥ grade 2 was 2.1% and 1.0% for the C-SRT and DE-SRT group, respectively. Late gastrointestinal side effects ≥ grade 2 were 1.4% and 5.3% for C-SRT and DE-SRT group. 
Early genitourinary toxicity ≥ grade 2 occurred in 2.1% and 3.0% of the cases for the C-SRT and DE-SRT group, respectively. Late genitourinary side effects ≥ grade 2 were seen in 11.0% and 14.7% for patients with C-SRT and DE-SRT, respectively. Table 4 shows newly occurred or worsened early (<6 months) and late (>6 months) side effects for all patients. No early gastrointestinal or genitourinary fistula was documented. One late genitourinary fistula grade 2 was reported in the DE-SRT group, whereas overall, no late gastrointestinal fistulas were seen. Table 5 shows the newly diagnosed side effects for the subgroup of patients with C-SRT to the PB only versus DE-SRT of the PB with SIB. Toxicity of the remaining patients (PB + ePLNs, PB/SIB + ePLNs, PB + ePLNs/SIB, PB/SIB + ePLNs/SIB, and ePLNs/SIB) is shown in the supplementary files (see Supplementary Table 1). Outcome We further evaluated the outcome of patients who received DE-SRT and C-SRT. Mean DFS for C-SRT was 41.02 months (95% CI: 30.61-51.43 months) and for DE-SRT 48.12 months (41.86-54.40 months). Figure 1 shows Cox regression of DFS of the overall group (see Figure 1A) and in the subgroup of DE-SRT for the elective PB and local recurrence versus C-SRT for PB alone (see Figure 1B). Figure 2 shows a comparison of DFS for patients with versus without additive ADT in the DE-SRT group (see Figure 2A). Further, we compared DFS of the DE-SRT group with respect to the PET results (local recurrence only versus pelvic LNs and/or local recurrence, see Figure 2B). Moreover, we analyzed the DFS in the DE-SRT group for patients with PSA at recurrence <0.5 ng/ml versus ≥0.5 ng/ml. There was no significant difference (p=0.39). We analyzed PSA response for patients who received DE-SRT and C-SRT (see Table 6). Overall median PSA at first follow-up was 0.07 ng/ml (range, 0.00-1.09 ng/ml) with a PSA response (≤0.2 ng/ml) of 75.0% for DE-SRT. For C-SRT the overall median PSA at first follow-up was 0.14 ng/ml (range, 0.01-51.72 ng/ml) with a PSA response of 57.5%. Overall median PSA at last follow-up was 0.07 ng/ml (range, 0.00-1.60 ng/ml), resulting in a biochemical response of 86.4% for DE-SRT. For the C-SRT group, the corresponding values at last follow-up are reported in Table 6. DISCUSSION The aim of this retrospective study was to compare DE-SRT and C-SRT in terms of toxicity rates. Further, we sought to report outcome data of DE-SRT. To our knowledge, this is the first study which attempted to compare DE-SRT and C-SRT. In all toxicity items (rectal bleeding, proctitis, stool incontinence, hematuria, cystitis, urine incontinence, urinary obstruction, and erectile dysfunction), no significant difference was present for either early or late side effects. One late genitourinary fistula (grade 2) was reported in the DE-SRT group. Over the last years, the PSMA-PET has become an important diagnostic tool for patients with PC, especially in a recurrence setting. We previously reported the high clinical impact on disease staging and RT management (17). Both the impact and the higher diagnostic efficacy compared with other imaging techniques triggered the recommendation of PSMA-PET for patients with biochemical recurrence after prior definitive treatment in the European (18) and German (7) guidelines. With the higher sensitivity of PSMA-PET, dose escalation to specific areas became possible. The rationale behind the dose escalation derives from the PC dose-response data. The alpha/beta ratio for PC is described to be low (19). A low alpha/beta ratio implies that the target is more resistant to low doses.
Therefore, higher total doses and hypofractionated schemes for PC have been increasingly used (20,21). In the case of SRT, the elective PB and pelvic LNs are commonly treated for microscopic disease spread with doses of 66 to 72 Gy (6, 7) and 45 to 50.4 Gy (22)(23)(24), respectively. However, keeping the low alpha/beta ratio in mind: Why should we not treat macroscopic PC in the salvage situation with the same doses as PC in the definitive situation? The European and German guideline recommend an EQD2 of 74 to approximately 80 Gy for definitive treatment of the prostate (6,7). In our study, we used a median dose of 76.5 Gy in fractions of 2.25 Gy for a local recurrence which translates into an EQD2 (1.5 Gy) of 81.96 Gy and therefore is an appropriate dose for macroscopic PC. The guideline of the Australian and New Zealand Faculty of Radiation Oncology Genito-Urinary group (FROGG) recommends a dose escalation for local recurrence with an EQD2 of 70 to 74 Gy. Dose escalation of pelvic LNs is also recommend; however, the dose remains to be unspecified (25). (30). They showed a 3year rate of ≥ grade 2 rectal and ≥ grade 2 genitourinary toxicity of 6.6% and 26.3%, respectively (30). The recent SAKK 09/10 evaluated the impact of dose intensified SRT for the whole PB with 64 Gy versus 70 Gy on toxicity and outcome. The trial showed similar acute side effects, except for a significantly greater worsening in patient-reported urinary symptoms after 70 Gy (31). However, no SIB was used in the SAKK 09/10 trial. A previous study by Cozzarini et al. evaluated the urinary toxicity for hypofractionated RT to the whole PB after RP (32). Patients with hypofractionated RT showed significantly more late urinary toxicities Grad 3/4 (18.1%) than patients with conventional fractionation (6.9%). These data predate PSMA-PET imaging and therefore a focal treatment to PET-positive areas might accomplish a survival benefit with acceptable toxicity. PSA response and DFS showed good results for patients with PSMA-PET guided DE-SRT in our cohort of patients. This might be related to the potential of PSMA-PET localizing the site of recurrence, whereas in patients without pre-RT imaging, empiric dose planning was performed. Nevertheless, in 43.9% of the patients in the C-SRT group pre-RT PSMA-PET imaging was negative potentially including a bias. However, even with the high rate of negative PSMA-PETs in the C-SRT group the DFS is reduced which speaks in favor of dose escalation. Additionally, the patients in the DE-SRT group might benefit from a dose escalation for SRT > 70 Gy as described above and was postulated by King et al. (26). When we stratified for additive ADT in patients with DE-SRT, patients with simultaneous hormonal deprivation showed no significant better DFS (p=0.32). However, the hazard ratio of 2.86 suggests a trend in favor of an additive ADT. This is in line with the data by Shipley et al. (33) and Carrie et al. (34) which suggest additive ADT for patients with SRT. Nevertheless, both trials did not use PSMA-PET imaging for staging before RT, but the underlying principle remains the same: ADT treats the microscopic tumor spread. However, PSMA-PET might help to identify the patients who will benefit from ADT. This should be further investigated. When comparing sites of relapse (local recurrence only versus pelvic LNs and/or local recurrence), the data showed that patients with LNs exhibit a decreased DFS in comparison to patients with local relapse only. 
Affection of the LNs might indicate wider spread than within patients with confined disease to the PB. Such oligorecurrent patients might benefit from additional ADT (35) and therefore this topic should be further investigated. Overall, since our data are retrospective and not powered to show superiority the results on outcome must be interpreted cautiously. The small sample size likely leads to large hazard ratios and 95% confidence intervals for the Cox regression analysis. However, the results may be understood as a hint for a better outcome for patients with PSMA-PET guided DE-SRT. Previous studies have also shown favorable outcome for patients with PSMA-PET guided DE-SRT. (36). The authors reported the outcome of patients with negative as well as positive PSMA-PET. For patients with local recurrence treatment response was 81% and for patients with LN involvement +/− local recurrence the treatment response was 38.5%. The treatment response was defined as PSA ≤ 0.1 ng/ml and a greater than 50% reduction from pre-RT PSA level. Our data confirm the reduced outcome for patients with LN involvement. Recently, Emmett et al. (37) published data of a prospective trial on [68Ga]PSMA-11-PETbased SRT in 260 patients. External beam RT as well as stereotactic body radiotherapy were allowed. Freedom from progression was defined as PSA not more than 0.2 ng/ml above the post-RT nadir. The overall 3-year freedom from progression was 64.5%, with 79% in patient with local recurrence, and 55% in patients with pelvic LNs (37). Patients with negative PSMA-PET showed the highest rates of freedom from progression with 82.5%. Recently, the EMPIRE-1 trial (38) PSMA-11 and showed that the overall detection rate for PC recurrence is similar with an advantage for Fluciclovine-PET in terms of local recurrence (39). Our study has certain limitations. The median follow-up is relatively short, and a future analysis with longer follow-up is planned. Although the groups are well balanced for most factors (see Table 1), the retrospective cohort design of our study is a limitation. To supplement the retrospective data, only a prospective randomized controlled trial comparing patients with and without dose escalation would be helpful and therefore should be performed in the future. However, it will remain difficult to justify not performing dose-escalation in PET positive lesions. There was a significant difference in the use of PET imaging in both groups (see Table 1). Patients with PET are more likely to be diagnosed with the cause of PSA rise. Therefore, patients with PET are more likely to be in the DE-SRT group. There was an imbalance for coverage of the ePLNs (PB only in 86.7% in C-SRT versus 53.9% in DE-SRT group). However, we accounted for that by evaluating the data for the respective subgroups. In our study patients underwent PET with both [68Ga]PSMA-11 and [18F]-labeled PSMA-ligands. This might include a bias; however, this study focused on PSMA-PET-based DE-SRT and current literature indicates relative similar detection efficacy for these different PSMA-ligands (40,41). Further, there was a significant difference concerning the admission of additive ADT in both groups (see Table 1). Additive ADT to SRT is based on two recent publications (33,34). Patients in the C-SRT group received their treatment earlier than the patients in the DE-SRT group and therefore less patients with additive ADT are in C-SRT group. 
This might be a bias for the outcome analysis; however, we accounted for this fact by evaluating the outcome for patients with additive ADT as well as without ADT and used adjusted Cox regression analysis. Moreover, patients in the C-SRT group had a significantly shorter time from RP to RT as well as a lower PSA before SRT (see Table 1). To account for that, we only included patients with PSA >0.2 ng/ml at relapse for the outcome analysis. Currently, data of a phase III trial on [68Ga]PSMA-11-PET/ CT-based SRT after RP are on the way (NCT03582774). The trial compares standard SRT to PSMA-PET-based SRT. A focal dose escalation to the PSMA-positive lesions may be performed on the discretion of the treating radiation oncologist if feasible (42). DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the institutional review board of the Technical University of Munich (TUM). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by MV. The first draft of the manuscript was written by MV and all authors commented on previous versions of the manuscript. All authors contributed to the article and approved the submitted version.
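As a supplementary note to the dose comparisons in the Statistics and Discussion sections above, the EQD2 conversion used throughout the study (equivalent dose in 2 Gy fractions with alpha/beta = 1.5 Gy) follows the standard linear-quadratic relation EQD2 = D * (d + alpha/beta) / (2 Gy + alpha/beta), where D is the total dose and d the dose per fraction. The sketch below is an illustrative cross-check only, not part of the study's analysis code; the fractionation schedules are taken from the dose concept in the Materials and Methods, and the helper name is hypothetical.

```python
# Illustrative EQD2 calculator (linear-quadratic model); not the study's code.
def eqd2(dose_per_fraction: float, n_fractions: int, alpha_beta: float = 1.5) -> float:
    """Equivalent dose in 2 Gy fractions: D * (d + a/b) / (2 + a/b)."""
    total_dose = dose_per_fraction * n_fractions
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Schedules from the reported dose concept (alpha/beta = 1.5 Gy for prostate cancer):
schedules = {
    "Prostate bed, 2.00 Gy x 34 (68 Gy)":        (2.00, 34),
    "Local recurrence SIB, 2.25 Gy x 34 (76.5 Gy)": (2.25, 34),
    "Elective pelvic LNs, 1.80 Gy x 28 (50.4 Gy)":  (1.80, 28),
    "LN SIB, 2.10 Gy x 28 (58.8 Gy)":            (2.10, 28),
    "LN SIB, 2.20 Gy x 28 (61.6 Gy)":            (2.20, 28),
}
for name, (d, n) in schedules.items():
    print(f"{name}: EQD2(1.5 Gy) = {eqd2(d, n):.2f} Gy")
# The SIB to a local recurrence (2.25 Gy x 34 = 76.5 Gy) yields about 81.96 Gy,
# matching the EQD2 value quoted in the Discussion.
```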
4,951.8
2021-07-30T00:00:00.000
[ "Medicine", "Engineering" ]
A Thermodynamic Analysis on the Roasting of Pyrite A series of thermodynamic calculations are performed for the roasting of pyrite in changing temperatures and atmospheres. The relationship between ∆rG and temperature in the range of T = 300–1200 K shows that, depending on the atmosphere it is in, reactions of pyrolysis, oxidation or reduction can occur. Both the pyrolysis of pyrite in an inert atmosphere and its oxidation by oxygen can form pyrrhotite (mainly Fe0.875S and FeS), but the temperature required for oxidation is much lower than that for pyrolysis. In an oxygen-containing atmosphere, the isothermal predominance areas for the Fe–S–O system indicate that a change in temperature and oxygen partial pressure can lead the pyrite to undergo desulphurization to pyrrhotite (FeS2→ Fe0.875S/FeS) or iron oxides (FeS2→ Fe3O4/Fe2O3), or sulphation to iron sulphates (FeS2 → FeSO4/Fe2(SO4)3). The presence of carbon is beneficial to the desulphurization of pyrite under an oxidizing atmosphere since iron sulphates can be converted to iron oxides at very low levels of PCO/PCO2. Results presented in this paper offer theoretical guidance for the optimization of roasting of pyrite for different purposes. Introduction Pyrite (FeS 2 ) is one of the most common and widely distributed sulphide minerals [1].In industry, pyrite is a chief raw material to produce sulphur, sulphur dioxide and sulphuric acid.Also, pyrite is usually found in association with valuable metallic elements such as Au, Ag and Cu that can be recovered in a comprehensive utilization of resources [2].In terms of the auriferous pyrite (i.e., sulfidic gold ore), gold often occurs as submicroscopic particles that are easily enclosed in crystal lattices of pyrite [3].Consequently, gold is difficult to be exposed unless undergoing ultrafine grinding, resulting in high energy consumption [4].Gold is also sometimes associated with pyrite and preg-robbing carbonaceous matters (mainly carbon) that readily adsorb gold complexes from the leach solution.Under such a circumstance, carbonaceous sulphide gold ores that are regarded as the most refractory ores [5] emerge and render their gold extraction challenging.Pretreatments are thus necessary for improving the gold extraction from this type of refractory gold ores. 
Oxidative roasting of carbonaceous sulphide gold ores has currently been one of the most widespread and effective pretreating methods [6][7][8][9].The effect of oxidative roasting is mainly twofold.On the one hand, as a result of the oxidation of sulphur in pyrite to form sulphur dioxide (SO 2 ), porous iron oxides (i.e., Fe 2 O 3 and Fe 3 O 4 ) are formed that expose the gold particles locked in pyrite.On the other hand, the oxidation of carbonaceous matter eliminates its preg-robbing effect on leached gold.It is clear that the formation of SO 2 , hence the subsequent production of sulphuric acid, and the comprehensive recovery of associated valuable metals from pyrite are closely related to the roasting behaviour of pyrite.If carbonaceous matters are also present, the possible impacts of carbon (C) or its oxides (CO and CO 2 ) on the roasting of pyrite should also be taken into consideration.Experimentally, the roasting behaviours of pyrite have been studied by a number of researchers.Dunn and De [10,11] investigated the effect of temperature and atmosphere on the oxidation of pyrite in different particle size ranges by differential thermal analysis (DTA) and thermogravimetric analysis (TGA).It was observed that pyrite less than 0.045 mm in size could be directly and completely oxidized to hematite at 776 K in an air atmosphere.In the size range of 0.09-0.125mm and under an air atmosphere, hematite formed at temperatures lower than 788 K whilst pyrrhotite formed at temperatures higher than 788 K.With increasing partial oxygen pressure in the atmosphere, the oxidation of pyrite was enhanced significantly even at relatively low temperatures and iron sulphates (Fe 2 (SO 4 ) 3 and FeSO 4 ) were easily produced.Similarly, by means of DTA and TGA, Jorgensen and Moyle [12] studied the phase transformation of pyrite for its oxidation in air in the particle size 0.053-0.074mm.It was found that the pyrite surface was transformed into hematite at 702 K and pyrrhotite at 850 K, but with the temperature increasing to 881 K and 942 K, the resultant species were ferrous sulphate and ferric sulphate, respectively.The X-ray diffraction (XRD) analysis conducted by Schorr and Everhart [13] proved that the oxidation of pyrite in air with a low heating rate up to a temperature of 753 K in a furnace took place in a direct oxidation way to form iron oxides.The technique of Mössbauer spectroscopy used in the work of Prasad et al. [14] showed that pyrrhotite was detected after roasting pyrite in air at a temperature of 883 K.The roasting of pyrite was further investigated in a gas mixture of CO 2 and O 2 by Hong and Fegley [15], revealing that both pyrrhotite and hematite were formed at a temperature range 665-733 K while only pyrrhotite was found when the temperature was controlled in the range 757-811 K. The roasting of pyrite is intimately associated with the process variables such as temperature, atmosphere and mineral particle size.In different research, various roasting reactions and phase transformations of pyrite occur under different reaction conditions.Distinctly, no comprehensive and definite information on the possible reactions with the corresponding conditions has been offered for the roasting of pyrite. 
Thermodynamic analysis can provide significant information on the possibility of chemical reactions that may occur, pyrometallurgical conditions relevant to the predominance area of mineral phase, and phase transformations of mineral during the roasting process.Few endeavours have recently focused on studying the roasting behaviour of pyrite from systematic thermodynamic calculations.The thermodynamic modelling of Fe-S system was studied by Waldner and Pelton [16], but little information was involved for the roasting of pyrite.The effects of CO, mixture of CO and CO 2 , and solid C on the thermodynamic behaviour of arsenopyrite (FeAsS) were researched by Chakraborti and Lynch [17].The roasting of pyrite in the presence of C or CO, however, has seldom been researched by thermodynamic analysis. This paper uses thermodynamic calculations to analyse the roasting behaviour of pyrite.The possible involved chemical reactions are discussed.The effect of roasting temperature and atmosphere on the pyrolysis and oxidation of pyrite as well as that of carbon on the roasting of pyrite are also studied.It can provide a theoretical basis to better understand and guide the optimization of the roasting of pyrite for a specific purpose. A Preliminary Analysis of Possible Chemical Reactions during the Roasting of Pyrite Under different roasting conditions, pyrite can be transformed to a variety of solid phases such as pyrrhotite (Fe 1−x S, mainly Fe 0.875 S or FeS which is the commonest form [16,18]), magnetite (Fe 3 O 4 ), hematite (Fe 2 O 3 ), ferrous sulphate (FeSO 4 ) and ferric sulphate (Fe 2 (SO 4 ) 3 ), and gas phases such as sulphur vapour (S 2 ) and sulphur dioxide (SO 2 ).In the presence of C or other phases such as CaO, MgO and Al 2 O 3 , various reduction reactions by C/CO or sulphur-fixation reactions can also take place.According to the relevant species of reactants and resultants, 45 possible chemical reactions can be deduced as listed in Table 1.They can be divided into mainly three categories: (i) pyrolysis in an inert atmosphere (Equations ( 1),(2)), (ii) oxidation by O 2 (Equations ( 3)- (30)) and (iii) reduction by C or CO (Equations ( 31)-( 45)).In addition, based on the standard Gibbs free energies of formation for species (∆ f G θ , kJ•mol −1 ) at different temperatures (T = 300-1200 K), the corresponding ∆ r G θ for each reaction can be obtained as a function of ∆ r G θ and T (listed in Table 1).The variation of ∆ r G θ with T for the possible reactions is also clearly depicted in Figure 1.Thermodynamically, ∆ r G θ > 0 means that a chemical reaction cannot occur; on the contrary (∆ r G θ < 0), the reaction will spontaneously occur, and the more negative the ∆ r G θ value is, the more easily the reaction takes place. of which ν i is the stoichiometric ratio of reactants (−ν i ) and resultants (+ν i ), and ∆ f G i θ of all species were based on the thermochemical data of pure substances from Barin [19].As shown in Table 1 and Figure 1, the pyrolysis of pyrite to sulphur vapour (S 2 ) and pyrrhotite Fe 0.875 S (Equation (1)) or FeS (Equation ( 2)) can proceed spontaneously only when the temperature exceeds around 900-1000 K (∆ r G θ < 0).However, their ∆ r G θ values are slightly negative even at temperatures >1000 K, indicating that the pyrite pyrolysis is thermodynamically difficult to occur.The kinetic observations from Lambert et al. [20] and Boyabat et al. 
[21] suggested that the rate-controlling step of pyrite pyrolysis in an inert atmosphere was the desorption of S 2 from the pyrite surface.However, under an oxygen-containing atmosphere, the oxidation of S 2 by O 2 to volatile SO 2 (Equation ( 3)) is apt to take place due to its rather negative ∆ r G θ as presented in Figure 1.Not surprisingly, with the formation of SO 2 , pyrite is readily oxidized by O 2 to FeS (Equation ( 4)) or Fe 0.875 S (Equation ( 5)).Research also found that the oxidation rate of pyrite core to pyrrhotite (FeS) was relatively fast at moderate oxygen concentration levels (e.g., 5 vol % of O 2 ) [22].It can thus be considered that, in the presence of O 2 , pyrite firstly undergoes partial desulphurization to produce pyrrhotite and S 2 (Equations ( 1) and ( 2)), and then the easy oxidation of S 2 (Equation ( 3)) occurs with S 2 acting as an intermediate in Equations ( 4) and ( 5).In addition, Fe 0.875 S can be further oxidized by O 2 to FeS and SO 2 (Equation ( 6)), although the corresponding ∆ r G θ is much less negative than that from the oxidation of FeS 2 to Fe 0.875 S/FeS.Table 1 and Figure 1 also show that pyrite and its pyrolysis product (FeS or Fe 0.875 S) are readily oxidized by O 2 to iron oxides (mainly Fe 3 O 4 and Fe 2 O 3 ) (Equations ( 7)-( 12)) with rather negative ∆ r G θ values in the order of FeS 2 << FeS < Fe 0.875 S << 0. The formed Fe 3 O 4 can be further oxidized to Fe 2 O 3 (Equation ( 13)).In addition, the iron sulphates of Fe 2 (SO 4 ) 3 and FeSO 4 can be generated directly from FeS 2 (Equations ( 14) and ( 15)) or indirectly from the intermediates such as FeS/Fe 0.875 S (Equations ( 16)-( 19)) and Fe 3 O 4 /Fe 2 O 3 (Equations ( 20)-( 23)).Most of ∆ r G θ for these sulphation reactions are rather negative.Thermodynamically, the formation of iron sulphates from the iron sulphides (Equations ( 14)-( 19)) tends to be easier than from the iron oxides (Equations ( 20)-( 23)).When the temperature is overhigh (>1000-1100 K), the sulphation of Fe 3 O 4 /Fe 2 O 3 cannot occur spontaneously due to ∆ r G θ > 0.Moreover, the formation of Fe 2 (SO 4 ) 3 is thermodynamically easier than that of FeSO 4 from the sulphating roasting of pyrite, and thus FeSO 4 can be further oxidized to Fe 2 (SO 4 ) 3 as shown in Equation (24).In the presence of some common gangue phases such as CaO, MgO and Al 2 O 3 , they are also shown to readily react with SO 2 to form sulphates (Equations ( 25)-( 27)), capturing SO 2 during the pyrite roasting and thus preventing its release into the atmosphere. 
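The spontaneity test described above reduces to a sign check on ∆ r G θ (T) = Σ i ν i ∆ f G i θ (T). As a rough illustration of how the entries of Table 1 can be evaluated, the following Python sketch takes a user-supplied table of formation energies; the numerical values shown are placeholders for illustration only, not the thermochemical data of Barin [19].

# Minimal sketch: compute the standard Gibbs energy change of a roasting
# reaction from tabulated formation energies and test for spontaneity.
# The numbers below are illustrative placeholders, NOT data from Barin [19].

# delta_f_G[species][T] in kJ/mol at temperature T (K); elements in their
# reference state are 0 by definition.
delta_f_G = {
    "FeS2": {700: -160.0},   # placeholder value
    "FeS":  {700: -110.0},   # placeholder value
    "SO2":  {700: -300.0},   # placeholder value
    "O2":   {700: 0.0},
}

# A reaction is written as {species: stoichiometric coefficient}, negative
# for reactants, positive for resultants, e.g. FeS2 + O2 -> FeS + SO2.
reaction = {"FeS2": -1, "O2": -1, "FeS": 1, "SO2": 1}

def delta_r_G(reaction, table, T):
    """Sum nu_i * delta_f_G_i(T) over all species taking part in the reaction."""
    return sum(nu * table[sp][T] for sp, nu in reaction.items())

T = 700  # K
dG = delta_r_G(reaction, delta_f_G, T)
print(f"delta_r_G at {T} K = {dG:.1f} kJ/mol ->",
      "spontaneous" if dG < 0 else "not spontaneous")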
When there are carbonaceous matters, the existence of C further complicates the conditions of pyrite roasting. C can be easily oxidized by O 2 to CO and/or CO 2 (Equations (28)-(30)), and at temperatures >1000 K, C can also react with CO 2 to form CO (Equation (31)). Thus, various reduction reactions involved with C or CO may occur during the roasting of pyrite. As shown by Equations (32) and (33), pyrite can be reduced by CO to pyrrhotite and oxysulphide (COS) at relatively high temperatures (>650-850 K) with mildly negative ∆ r G θ values, and an increasing temperature is shown to favour the occurrence of these reduction reactions. S 2 , apart from being oxidized by O 2 (Equation (3)), can also be readily reduced by CO to COS (Equation (34)). Similar with the oxidation of pyrite (Equations (4) and (5)), S 2 is also likely an intermediate during the reduction of pyrite by CO. However, COS has been shown to be unstable in the presence of O 2 , and easily oxidized by O 2 to CO 2 and SO 2 [23,24]. It is therefore not difficult to consider that the formation of COS and its effects are possibly negligible during the pyrite roasting in an O 2 -containing atmosphere. In addition, as the temperature increases, the presence of C or CO is conducive to the reduction of Fe 2 O 3 to Fe 3 O 4 (Equations (35) and (36)), iron sulphates to iron oxides (Equations (37)-(44)), and Fe 2 (SO 4 ) 3 to FeSO 4 (Equation (45)). Therefore, thermodynamically, various reactions may occur during the roasting of pyrite under different temperatures and atmospheres. The pyrolysis of pyrite is retarded unless at high temperatures (>900-1000 K). In contrast, most reactions of oxidation by O 2 and reduction by C/CO can proceed spontaneously. With respect to the roasting of an auriferous pyrite to expose gold, the S is normally expected to be oxidized as SO 2 with the formation of porous and insoluble iron oxides instead of soluble iron sulphates. The presence of carbonaceous matters may be advantageous to the formation of iron oxides due to the reduction of iron sulphates by C or CO, which will be discussed later. Thermodynamic Behaviours for the Roasting of Pyrite Based on the preliminary analysis of chemical reactions that may occur during the pyrite roasting, a better understanding is allowed by a further thermodynamic analysis for the processes of pyrolysis, oxidation by O 2 and reduction by C/CO. Pyrolysis of Pyrite A number of studies on the pyrolysis of pyrite [21,25] have demonstrated that the resultants are pyrrhotite (Fe 1−x S) and sulphur vapour (S 2 ) as shown in Equation (46): There are various allotropes of elemental sulphur that can be represented by S m with m varying from 1 to 8 or higher. Hu et al.
[26] have pointed out that the sulphur vapour from the thermal decomposition of pyrite mainly occurs as S 2 . Similarly, pyrrhotite Fe 1−x S can be FeS, Fe 11 S 12 , Fe 10 S 11 , Fe 9 S 10 or Fe 7.016 S 8 (i.e., Fe 0.875 S), but a wide range of studies [16,18] have suggested that the most common forms of Fe 1−x S are Fe 0.875 S and FeS. So Fe 0.875 S/FeS and S 2 were considered as the main resultants for the pyrite pyrolysis, which had also been adopted as discussed in Section 2. Based on the above, the relevant mechanism for the pyrolysis of pyrite was analysed in detail. According to the pyrolysis reactions (Equations (1) and (2)), the equilibrium constant (lnK θ ) could be obtained, that is, lnK θ = ln{[P S2 /P θ ] ν } (ν is the stoichiometric ratio of gaseous S 2 ). After taking lnK θ into the Van't Hoff equation of ∆ r G θ = −RTlnK θ , the relationship between P S2 and temperature was obtained (the corresponding formulas are listed in Table 2). The variation of P S2 with T for the pyrite pyrolysis is clearly shown in Figure 2. With the formation of S 2 and pyrrhotite, the thermal decomposition of pyrite occurs only at relatively high temperatures. Thermodynamically, the formation of Fe 0.875 S (>~800 K) is easier than that of FeS (>~900 K). As the temperature is higher than 895 K and 1020 K, with a pronounced increase of P S2 (≥100 kPa) the pyrite decomposes markedly to Fe 0.875 S and FeS, respectively. This is consistent with the analysis in Section 2 and previous experimental observations [11,12,27]. In addition, the formed FeS and Fe 0.875 S may further decompose to Fe and S 2 as shown by Equations (47) and (48) (Table 2). The relationship formulas between P S2 and T are also listed in Table 2. The further pyrolysis of pyrrhotite is, however, very difficult since the calculated P S2 for the pyrolysis of Fe 0.875 S and FeS is separately as low as 5.554 × 10 −7 kPa and 4.9383 × 10 −4 kPa even at a high temperature of 1200 K.
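The P S2 -T curves of Figure 2 follow from inverting ∆ r G θ = −RTlnK θ with lnK θ = ν ln(P S2 /P θ ). A minimal Python sketch of this inversion is given below; the linear ∆ r G θ (T) fit and its coefficients are hypothetical placeholders standing in for the expressions listed in Table 2.

import math

R = 8.314        # J/(mol*K)
P_STD = 100.0    # kPa, standard pressure P_theta

def delta_r_G(T, a, b):
    """Placeholder linear fit delta_r_G(T) = a + b*T in J/mol (coefficients are hypothetical)."""
    return a + b * T

def p_S2(T, a, b, nu):
    """Equilibrium S2 pressure from ln K = nu*ln(P_S2/P_STD) = -delta_r_G/(R*T)."""
    lnK = -delta_r_G(T, a, b) / (R * T)
    return P_STD * math.exp(lnK / nu)

# Example: FeS2 -> FeS + 1/2 S2 (an Equation (2)-type pyrolysis), so nu = 0.5 mol S2 per mol FeS2.
for T in (800, 900, 1000, 1100):
    print(T, "K :", round(p_S2(T, a=170_000.0, b=-160.0, nu=0.5), 3), "kPa")  # placeholder a, b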
Pyrrhotite, as a typical pyrolysis product from pyrite, is also often found from the oxidative roasting of pyrite. Its formation is largely affected by the heterogeneous atmosphere, the heating effect of reactions and the particle size of pyrite. This can be illuminated from the aspects of thermodynamics and kinetics as follows: (i) A partial inert atmosphere may be formed due to the restricted mass transfer of O 2 , so pyrrhotite can be generated from the pyrolysis of pyrite (Equations (1) and (2)). In an oxidizing atmosphere where O 2 is freely accessible, pyrite can also be oxidized to pyrrhotite as shown by Equations (4) and (5). As mentioned in Section 2, with S 2 being an intermediate, pyrrhotite can be easily formed from the oxidation of pyrite by O 2 at much lower temperatures compared to the pyrolysis of pyrite. (ii) The thermal decomposition of pyrite is endothermic whilst the oxidation of pyrite by O 2 is exothermic. In particular, the oxidation of intermediate S 2 by O 2 (Equation (3)) is typically accompanied with the release of a large amount of heat. The exothermic effect may cause partial overhigh temperatures that favour the pyrite pyrolysis under a partial inert atmosphere. (iii) During the roasting of pyrite particles, the S 2 desorption from the pyrite surface has been suggested to be the rate-controlling step for pyrite pyrolysis [20]. In an oxidative roasting process, the formation of pyrrhotite likely conforms to a shrinking-core reaction model with pyrite as the core and pyrrhotite as the shell [22]. In addition, the rate of pyrrhotite formation from the pyrite oxidation by O 2 is two orders of magnitude larger than that from the pyrite pyrolysis [22]. This is possibly due to the fact that in an O 2 -containing atmosphere, once the intermediate S 2 makes contact with O 2 , it is easily oxidized as volatile SO 2 , which will rapidly decrease the S 2 concentration in the reaction interface of pyrite and thus improve the formation of pyrrhotite. At moderate O 2 concentrations, the produced pyrrhotite was found to be porous, which is beneficial to the diffusion of O 2 and SO 2 [21]. Oxygen can expedite the formation of pyrrhotite, but under relatively high O 2 concentrations, the nonoxidized pyrrhotite continues to oxidize or the pyrite is oxidized by O 2 without forming pyrrhotite as an intermediate. As described in Section 2, the oxidation products may be iron oxides or iron sulphates and SO 2 . The oxidation of pyrite by O 2 was further discussed in detail as will be shown in the following section.
Phase Transformation of Pyrite Roasting During the pyrite oxidation by O 2 , FeS 2 may be converted to various iron phases that include sulphides (Fe 0.875 S/FeS), oxides (Fe 3 O 4 /Fe 2 O 3 ) and sulphates (FeSO 4 /Fe 2 (SO 4 ) 3 ) as mentioned in Section 2 (Equations (4)-(24)). In addition, the produced SO 2 changes the roasting atmosphere and hence has a great impact on the phase transformations for pyrite roasting. The equilibrium constant (lnK θ ) from the relevant oxidation reactions could be attained, that is, lnK θ = ln{[P SO2 /P θ ] (±ν1) /[P O2 /P θ ] ν2 }, where ±ν 1 (−ν 1 for the reactant and +ν 1 for the resultant) and ν 2 are the stoichiometric ratio of SO 2 and O 2 , respectively. Based on ∆ r G θ = −RTlnK θ , the relationship between P SO2 /P θ and P O2 /P θ was rearranged as lg[P SO2 /P θ ] = −∆ r G θ /[(±ν 1 )(ln10)RT] + [ν 2 /(±ν 1 )]lg[P O2 /P θ ]. At a constant temperature, the isothermal predominance areas for the Fe-S-O system (Figure 3) were determined as a function of lg[P SO2 /P θ ] and lg[P O2 /P θ ].
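Each boundary line in the Figure 3 diagrams is obtained from this rearrangement at a fixed temperature. A small Python sketch follows; the stoichiometric coefficients and the ∆ r G θ value are entered as placeholders rather than the actual Table 1 data.

import math

R = 8.314     # J/(mol*K)
LN10 = math.log(10)

def lg_P_SO2(lg_P_O2, delta_r_G, T, nu1, nu2):
    """Predominance boundary at fixed T for a reaction producing SO2 and consuming O2, from
    ln K = nu1*ln(P_SO2/P_theta) - nu2*ln(P_O2/P_theta) = -delta_r_G/(R*T), i.e.
    lg(P_SO2/P_theta) = -delta_r_G/(nu1*R*T*LN10) + (nu2/nu1)*lg(P_O2/P_theta)."""
    return -delta_r_G / (nu1 * R * T * LN10) + (nu2 / nu1) * lg_P_O2

# Placeholder inputs (not taken from Table 1): one desulphurization-type boundary at 800 K.
T, dG, nu1, nu2 = 800.0, -250_000.0, 1.0, 1.5
for lg_O2 in (-10.0, -5.0, 0.0):
    print(lg_O2, "->", round(lg_P_SO2(lg_O2, dG, T, nu1, nu2), 2))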
Figure 3 shows that, in a wide range of lg[P SO2 /P θ ] (= −20-8) and lg[P O2 /P θ ] (= −32-8), an increasing temperature from 600 K to 1000 K observably enlarges the stability regions of FeS 2 , Fe 0.875 S/FeS and Fe 3 O 4 but shrinks those of Fe 2 O 3 , FeSO 4 and Fe 2 (SO 4 ) 3 . The stability area of FeS appears only as the temperature increases to 1000 K (Figure 3c). At a constant temperature, low P O2 is shown to benefit the stability of iron sulphides while iron oxides and sulphates tend to be stable under relatively high P O2 . Under low P O2 , pyrite is stable at relatively high P SO2 ; the decrease of P SO2 favours the existence of pyrrhotite. When P O2 is relatively high, a high P SO2 obviously benefits the occurrence of iron sulphates. On the contrary, low P SO2 is evidently advantageous to stabilise the iron oxides. Depending upon the reaction conditions, thermodynamically, pyrite may experience three routes (1-3 marked in Figure 3c) of phase transformation during its roasting process. (i) Under insufficient SO 2 , pyrite can be directly oxidized with enough O 2 to iron oxides (Equations (7) and (8)) via Route 1. This is consistent with the research results [10,12,13,27] showing that only hematite is observed during the roasting of pyrite in an air atmosphere. (ii) When SO 2 and O 2 are both inadequate, pyrite is oxidized to pyrrhotite (Equations (4) and (5)) by Route 2 as discussed in Section 3.1. (iii) In the presence of sufficient SO 2 and O 2 , pyrite can be directly transformed to iron sulphates (Equations (14) and (15)) through Route 3, which is also supported by previous experimental studies [10][11][12]. The practical roasting process of pyrite is complex due mainly to the influence of mineral particle size and heterogeneous atmosphere. Taking the most common roasting of pyrite in excess of air/oxygen for an example, O 2 is easily accessible to the surface of the pyrite particle, so iron oxides can be produced via Route 1. The diffusion of O 2 into the interior of pyrite, however, is not easy due to the resistance from the outer layer of the particle. Consequently, an inert or weak O 2 -containing atmosphere is formed, and hence the particle nucleus tends to decompose as pyrrhotite via Route 2. When the generated pyrrhotite contacts sufficient O 2 , it can be further oxidized to iron oxides. Thus, a complex route of FeS 2 → Fe 0.875 S/FeS (intermediates) → Fe 3 O 4 /Fe 2 O 3 occurs during the pyrite roasting, which is consistent with a number of studies [10,11,14]. Similarly, during the sulphating roasting of pyrite, the pyrrhotite and/or iron oxides can also be produced as intermediates.
Desulphurization of Pyrite to Iron Oxides Refractory auriferous pyrites have been extensively roasted to porous calcines (iron oxides) in order to expose the enclosed gold [9]. This roasting process is often accompanied by sintering and some side-reactions of the sulphation of iron oxides (Equations (20)-(23)). It has been suggested from Section 3.2.1 and many other studies [28][29][30][31][32][33][34][35] that the desulphurization of pyrite and sulphation of iron oxides are largely determined by the roasting temperature and atmosphere. As shown in Figure 3a-c, under a certain range of lg[P SO2 /P θ ] and lg[P O2 /P θ ], the increase of temperature (600-1000 K) destabilises the iron sulphates by significantly reducing their stability areas, but high temperatures also easily cause sintering and hence the secondary encapsulation of gold. Assuming that P SO2 /P θ was constant, the effects of temperature and oxygen on the roasting of pyrite were further investigated.
We could also attain a relationship formula of lg[P O2 /P θ ] = ∆ r G θ /[(ν 2 ln10)RT] + [(±ν 1 )/ν 2 ]lg[P SO2 /P θ ] for the desulphurization reactions of iron sulphides (Equations (7)-(12)) and sulphation reactions of iron oxides (Equations (20)-(23)) according to the equilibrium constant lnK θ = ln{[P SO2 /P θ ] (±ν1) /[P O2 /P θ ] ν2 } and ∆ r G θ = −RTlnK θ . At a constant of P SO2 /P θ (0.05 or 0.5), the effects of T and O 2 on the pyrite roasting as a function of lg[P O2 /P θ ] and T are shown in Figure 4. As seen in Figure 4, iron oxides are produced from the oxidation of pyrite and its pyrolysis product, that is, pyrrhotite in the areas above Lines 7-12 (i.e., Equations (7)-(12)) and also from the decomposition of iron sulphates in the areas below Lines 20-23 (i.e., Equations (20)-(23)). As a result, an intersected area (i.e., shaded Area A) was obtained that represents the stability area of iron oxides. Similarly, iron sulphides and sulphates are thermodynamically stable in Area B and Area C, respectively. Comparing Figure 4a with Figure 4b, the decrease of P SO2 /P θ enlarges Area A and hence improves the thermodynamical stability of iron oxides, which is consistent with the results in Figure 3. As seen from Area A, an increasing T and O 2 partial pressure appears to favour the formation of iron oxides. Thermodynamically, the reaction conditions of O 2 partial pressure (or concentration) and T should be controlled within Area A to ensure the roasting of pyrite to iron oxides. In practice, besides minimizing the pressure or concentration of SO 2 , the temperature should be not too high in order to avoid the occurrence of sintering during roasting. Effect of Carbon on Pyrite Roasting As analysed in Section 2, carbon can impact the roasting of pyrite by the reduction from C/CO (Equations (35)-(44)). The reduction reactions may proceed by the direct reduction of C or the indirect reduction of CO produced from the gasification of C (Equation (31)). It is assumed that the direct reduction by C during the roasting process was negligible due mainly to the limited solid-solid reaction interfaces. Therefore, C influences the pyrite roasting mainly in a two-step way of firstly the gasification of C to CO and then the reducing action of CO.
Using the same calculation method as mentioned before, based on lnK θ = ln{[P SO2 /P θ ] ν1 [P CO2 /P CO ] ν2 } (ν 1 and ν 2 are the stoichiometric ratios of SO 2 and CO 2 /CO, respectively) and ∆ r G θ = −RTlnK θ , the relationship formula of lg[P CO /P CO2 ] = ∆ r G θ /[(ν 2 ln10)RT] + (ν 1 /ν 2 )lg[P SO2 /P θ ] was obtained for the relevant reduction reactions. Under a constant P SO2 /P θ (= 0.05), the effect of C on the pyrite roasting as a function of lg[P CO /P CO2 ] and T is shown in Figure 5. It is clearly shown in Figure 5 that the iron sulphates are readily transformed to the iron oxides due to the reduction of CO at very low levels of P CO /P CO2 . When T is lower than ~800 K, FeSO 4 easily changes to Fe 3 O 4 or Fe 2 O 3 with an increasing T. The required P CO /P CO2 for this transformation is reduced from 10 −6 at 500 K to 10 −10.6 at 800 K. In addition, CO is liable to reduce Fe 2 (SO 4 ) 3 to FeSO 4 once P CO /P CO2 is higher than 10 −18 -10 −10.6 and then further reduce from FeSO 4 to Fe 3 O 4 /Fe 2 O 3 . As T exceeds ~800 K, Fe 2 (SO 4 ) 3 tends to be more thermodynamically stable than FeSO 4 , but it is apt to be directly oxidized to Fe 2 O 3 at P CO /P CO2 > ~10 −10 . Therefore, during the desulphurizing roasting of pyrite to iron oxides, the presence of a certain amount of carbon is likely conducive to prevent the formation of the by-products of iron sulphates, which is preliminarily verified by a recent research on the roasting of a refractory carbonaceous sulphide gold concentrate [36].
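The P CO /P CO2 thresholds quoted above come from the same kind of rearrangement. A short Python sketch is given below, with the ∆ r G θ (T) of one sulphate-reduction reaction supplied as a hypothetical fit (the real expressions are those listed in Table 1) and P SO2 /P θ fixed at 0.05 as in Figure 5.

import math

R = 8.314
LN10 = math.log(10)

def lg_PCO_over_PCO2(T, delta_r_G, nu1, nu2, P_SO2_rel=0.05):
    """Threshold ratio for reducing an iron sulphate with CO at temperature T, using
    lg[P_CO/P_CO2] = delta_r_G/(nu2*R*T*LN10) + (nu1/nu2)*lg[P_SO2/P_theta]."""
    return delta_r_G / (nu2 * R * T * LN10) + (nu1 / nu2) * math.log10(P_SO2_rel)

def dG_placeholder(T):
    """Hypothetical linear delta_r_G(T) in J/mol for a FeSO4 -> iron-oxide reduction."""
    return -120_000.0 - 150.0 * T

for T in (500, 650, 800):
    print(T, "K :", round(lg_PCO_over_PCO2(T, dG_placeholder(T), nu1=1.0, nu2=1.0), 1))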
Conclusions The roasting behaviour of pyrite under different temperatures and atmospheres is analysed by a series of thermodynamic calculations. The ∆ r G θ -T (300-1200 K) relationship suggests that the pyrite roasting can include pyrolysis, oxidation, sulphation and reduction reactions under different atmospheres. In an inert atmosphere, the pyrolysis of pyrite to pyrrhotite spontaneously proceeds only at a relatively high T (>900-1000 K). Pyrrhotite can also be formed in an O 2 -containing atmosphere. However, comparing with the pyrite pyrolysis, the formation of pyrrhotite from oxidation can occur at much lower temperatures due mainly to the easy oxidation of S 2 by O 2 to SO 2 . The isothermal predominance areas for the Fe-S-O system indicate that pyrite may experience three routes of the phase transformation during its roasting. Firstly, pyrite is directly oxidized with sufficient O 2 to iron oxides under low levels of P SO2 /P θ (i.e., Route 1: FeS 2 → Fe 3 O 4 /Fe 2 O 3 ). Secondly, pyrite is oxidized to pyrrhotite under low levels of P O2 /P θ and P SO2 /P θ (i.e., Route 2: FeS 2 → Fe 0.875 S/FeS). Thirdly, in the presence of sufficient SO 2 and O 2 , pyrite undergoes sulphation to iron sulphates (i.e., Route 3: FeS 2 → FeSO 4 /Fe 2 (SO 4 ) 3 ). In addition, the presence of carbon is beneficial to the desulphurization of pyrite under an oxidizing atmosphere, since iron sulphates can be converted to iron oxides at very low levels of P CO /P CO2 . Figure 2. Relationship of P S2 and T during the pyrolysis of pyrite. Figure 3. Isothermal predominance area of the Fe-S-O system as a function of lg[P SO2 /P θ ] and lg[P O2 /P θ ] at a temperature of (a) 600 K, (b) 800 K and (c) 1000 K. Figure 4. Effects of O 2 and T on the roasting of pyrite as a function of lg[P O2 /P θ ] and T under (a) P SO2 /P θ = 0.05 and (b) P SO2 /P θ = 0.5. Figure 5. Effect of C on the roasting of pyrite as a function of lg[P CO /P CO2 ] and T under P SO2 /P θ = 0.05. Table 1. Possible chemical reactions and corresponding ∆ r G θ at temperatures of 300-1200 K. Table 2. Pyrolysis of FeS and Fe 0.875 S and the corresponding relationship between P S2 and T.
9,352
2019-04-08T00:00:00.000
[ "Materials Science" ]
EXCEPTIONAL DIRECTIONS FOR THE TEICHMÜLLER GEODESIC FLOW AND HAUSDORFF DIMENSION. We prove that for every flat surface ω, the Hausdorff dimension of the set of directions in which Teichmüller geodesics starting from ω exhibit a definite amount of deviation from the correct limit in Birkhoff’s and Oseledets’ Theorems is strictly less than 1. This theorem extends a result by Chaika and Eskin where they proved that such sets have measure 0. We also prove that the Hausdorff dimension of the directions in which Teichmüller geodesics diverge on average in a stratum is bounded above by 1/2, strengthening a classical result due to Masur. Moreover, we show that the Hausdorff codimension of the set of non-weakly mixing IETs with permutation (d, d − 1, . . . , 1), where d is an odd number, is exactly 1/2, strengthening a result by Avila and Leguil. The problem of determining the size of the set of points with non-dense orbits under a partially hyperbolic transformation has a long history. These include orbits which escape to infinity, remain confined inside a proper compact set or simply miss a given open set. In the most studied setting, the transformation preserves a natural ergodic measure and hence these non-dense orbits have measure zero. Thus, it is natural to ask whether different types of non-dense orbits are more abundant than others with respect to other notions of size, among which Hausdorff dimension is the most common. Many instances of this problem have been studied for algebraic partially hyperbolic flows on homogeneous spaces. For such flows, Margulis conjectured in his 1990 ICM address that orbits with closure a compact subset of a (non-compact) homogeneous space that misses a countable set of points have full Hausdorff dimension [Mar91, Conjectures A,B]. A full resolution of these conjectures was provided in subsequent papers of Kleinbock and Margulis [KM96] and Kleinbock and Weiss [KW13]. This phenomenon of abundance of non-dense orbits also takes place in the setting of hyperbolic dynamical systems. For example, Urbański showed in [Urb91] that non-dense orbits of Anosov flows on compact manifolds have full Hausdorff dimension. Then, in [Dol97], Dolgopyat studied the Hausdorff dimension of orbits of Anosov flows and diffeomorphisms which do not accumulate on certain low entropy subsets. It was shown that these trajectories have full Hausdorff dimension in many cases. On the other hand, non-dense orbits of divergence type tend to be less abundant. In the homogeneous setting, it was shown in [KKLM17] that the divergent on average trajectories for certain flows on SL(n, R)/SL(n, Z) do not have full Hausdorff dimension. In fact, an explicit upper bound on the Hausdorff dimension is given, generalizing earlier papers by Cheung [Che11] and Cheung and Chevallier [CC16]. In the setting of strata of quadratic differentials, Masur showed that the Hausdorff dimension of the set of non-uniquely ergodic directions for the associated translation flow is bounded above by 1/2. These correspond to divergent orbits for the Teichmüller geodesic flow.
In this article, we quantify the abundance of non-dense orbits in the setting of Teichmüller dynamics. Theorem 1.7 is the analogue of the result of [KKLM17] on the dimension of directions in which orbits of the Teichmüller geodesic flow are divergent on average. It provides a strengthening of Masur's result mentioned above. As for non-dense orbits, we study the more general problem concerning the set of directions at a fixed basepoint in which trajectories exhibit a definite amount of deviation from the correct limit in Birkhoff's and Oseledets' Theorems. Theorems 1.1 and 1.4 show that the Hausdorff dimension of these sets of directions is bounded away from 1 uniformly as the basepoint varies in the complement of certain proper submanifolds of the stratum. In particular, this implies that the intersection of the set of non-dense orbits with any Teichmüller disk in the complement of these finitely many proper submanifolds has positive Hausdorff codimension (see Corollary 1.3 and Section 10). These results generalize prior work of Chaika and Eskin [CE15] in which the aforementioned exceptional sets were shown to have measure 0. The work of Chaika and Eskin was used in [DHL14] to study the diffusion rate of billiard orbits in periodic wind-tree models. It was shown that for any choice of side lengths of the periodic rectangular obstacles, diffusion of orbits has a constant polynomial rate in almost every direction. Theorems 1.1 and 1.4 imply that the directions exhibiting different diffusion rates do not have full Hausdorff dimension. Prior to the work of Chaika and Eskin, Athreya and Forni [AF08] established a polynomial bound on the deviation of Birkhoff averages of sufficiently regular functions along orbits of translation flows on flat surfaces in almost every direction. This full measure set of directions was chosen so that the average of a certain continuous function along the Teichmüller flow orbits is close to its expected value. Theorem 1.1 can be used to show that the directions which do not satisfy this bound are of dimension strictly smaller than 1. It is well known that Teichmüller dynamics is closely tied to interval exchange transformations. In particular, Theorem 1.7 allows us to derive a lower bound on the Hausdorff codimension of the set of non-weakly mixing IETs with permutation (d, d − 1, . . ., 1), where d is an odd number. In combination with the result of [CM18] establishing the upper bound, this allows us to compute the precise Hausdorff codimension. Formulation of Results. Let g ≥ 1 and let α = (α 1 , . . ., α n ) be an integral partition of 2g − 2. An abelian differential is a pair (M, ω), where M is a Riemann surface of genus g and ω is a holomorphic 1-form on M whose zeroes have multiplicities α 1 , . . ., α n . Throughout this paper, H 1 (α) will denote a stratum of Abelian differentials with area 1 with respect to the induced area form on M. We refer to points of H 1 (α) as translation surfaces. For the sake of brevity, we will often refer to ω itself as an element of H 1 (α).
We recall that there are well-defined local coordinates on a stratum, called period coordinates (e.g., see [FM14, Section 2.3] for details), such that all changes of coordinates are given by affine maps.In period coordinates, SL 2 (R) acts naturally on each copy of C.Moreover, the closure of any SL 2 (R) orbit is an affine invariant manifold [EMM15], i.e., a closed subset of H 1 (α) that is invariant under the SL 2 (R) action and looks like an affine subspace in period coordinates.Therefore, it is the support of an ergodic SL 2 (R) invariant probability measure. The action of the following one parameter subgroups of SL 2 (R) will be referred to throughout the article. We recall that the actions of g t , r θ , h s and ȟs correspond to the Teichmüller geodesic flow, the rotation of the flat surface by the angle θ, and the expanding and contracting horocycle flows, respectively. In this paper, we show that the Hausdorff dimension of the set of directions exhibiting a definite amount of deviation from the correct limit in (1.1) is strictly less than 1. Theorem 1.1.Suppose M ⊆ H 1 (α) is an affine invariant submanifold and ν M is the affine measure whose support is M.Then, for any bounded continuous function f on M and any ε > 0, there exist affine invariant submanifolds N 1 , . . ., N k , properly contained in M, and 0 < δ < 1, such that for all ω ∈ M\ ∪ k i=1 N i , the Hausdorff dimension of the set θ ∈ [0, 2π] : lim sup Remark 1.2.We note that the upper bound in Theorem 1.1 is uniform as the basepoint ω varies in the complement of finitely many proper affine invariant submanifolds in M. This, in particular, includes points ω whose SL(2, R) orbit is not dense in M. In Theorem 6.7, we obtain a version of Theorem 1.1 for discrete Birkhoff averages which is needed for later applications.It is worth noting that the exceptional sets considered in Theorem 1.1 are non-empty in most examples and can, in fact, have positive Hausdorff dimension.By using the results in [KW04], one can find a compact set K such that the Hausdorff dimension of trajectories which are contained completely in K is at least 1 − δ ′ for some 0 < δ ′ < 1.By taking f to be supported in the complement of K and to have ν M (f ) = 0, these bounded trajectories will belong to the exceptional set for all ε sufficiently small.A similar argument shows that directions in which geodesics diverge on average (Definition 1.6) belong to the exceptional sets of compactly supported function with nonzero average. Using the uniform dimension estimate in Theorem 1.1, we obtain the following corollary. Corollary 1.3.Suppose M ⊆ H 1 (α) is an affine invariant submanifold and ν M is the affine measure whose support is M.Then, for any bounded continuous function f on M and any ε > 0, there exist affine invariant submanifolds N 1 , . . ., N k , properly contained in M, and 0 < δ < 1, such that for all ω ∈ M\ ∪ k i=1 N i , the Hausdorff dimension of the set In particular, by a standard approximation argument, we see that for any non-empty open subset U of a connected component C of the stratum H 1 (α), the Hausdorff dimension of the set {x ∈ SL 2 (R) • ω : g t x / ∈ U for all t > 0} is strictly less than the dimension of SL 2 (R), which is 3.This being true uniformly over all Teichmüller curves SL 2 (R)•ω in the complement of finitely many lower dimensional invariant submanifolds of C. 
Oseledets' Theorem for the Kontsevich-Zorich Cocycle.The next object of our study is the Lyapunov exponents of the Kontsevich-Zorich cocycle.Consider the Hodge bundle whose fiber over every point (X, ω) ∈ H 1 (α) is the cohomology group H 1 (X, R).Let Mod(X) be the mapping class group, i.e. the group of isotopy classes of orientation preserving homeomorphisms of X. Fix a fundamental domain in the Teichmüller space for the action of Mod(X).Consider the cocycle à : SL 2 (R) × H 1 (α) → Mod(X), where for x in the fundamental domain, Ã(g, x) is the element of Mod(X) that is needed to return the point gx to fundamental domain.Then, the Kontsevich-Zorich cocycle A(g, x) is defined by where ρ : Mod(X) → Sp(2g, Z) is given by the induced action of Mod(X) on cohomology.We recall the notion of a strongly irreducible SL 2 (R) cocycle. Definition (Strongly Irreducible Cocycle).Let (X, ν) be a probability space admitting an action of a locally compact group G which leaves ν invariant.Let π : V → X be a vector bundle over X on which G acts fiberwise linearly.We say that V admits a ν-measurable almost invariant splitting if there exists n > 1 and for ν-almost every x, the fiber π −1 (x) splits into non-trivial subspaces V 1 (x), . . ., V n (x) satisfying V i (x) ∩ V j (x) = {0} for all i = j and gV i (x) = V i (gx) for all i, ν-almost every x ∈ X and for almost every g ∈ G with respect to the (left) Haar measure on G. And, finally, the map x → V i (x) is required to be ν-measurable for all i. The G action on V is said to be strongly irreducible with respect to ν if the G-action doesn't admit any ν-measurable almost invariant splitting. In this setting, we prove the following statement about deviations in the Lyapunov exponents of the Kontsevich-Zorich cocycle. Theorem 1.4.Suppose M ⊆ H 1 (α) is an affine invariant submanifold and ν M is the affine measure whose support is M. Let V be a continuous (on M) SL 2 (R) invariant sub-bundle of (some exterior power of ) the Hodge bundle.Assume that A V is strongly irreducible with respect to ν M , where A V is the restriction of the Kontsevich-Zorich cocycle to V .Then, for any ε > 0, there exist affine invariant submanifolds N 1 , . . ., N k , properly contained in M, and 0 < δ < 1, such that for all ω ∈ M\ ∪ k i=1 N i , the Hausdorff dimension of the set This complements a result in [CE15] where they show that under the same hypotheses, for every ω and for Lebesgue almost every θ ∈ [0, 2π], the following limit exists It is shown in [EM13,Theorem A.6] that the Kontsevich-Zorich cocycle is in fact semisimple, which means that, after passing to a finite cover, the Hodge bundle splits into ν Mmeasurable SL 2 (R) invariant, strongly irreducible subbundles.Moreover, it is shown in [Fil16] that such subbundles can be taken to be continuous (and in fact real analytic) in period coordinates.Additionally, it is well-known that the top Lyapunov exponent of the k th exterior power of the cocycle is a sum of the top k exponents of the cocycle itself.In this manner, we can deduce the deviation statement for all Lyapunov exponents by examining the top Lyapunov exponents of exterior powers of the cocycle.The following Corollary is the precise statement.For more details on this deduction, see the proof of Theorem 1.4 in [CE15]. Corollary 1.5.Suppose (M, ω) ∈ H 1 (α) and ν M is the affine measure whose support is M = SL 2 (R)ω.Let A be the Kontsevich-Zorich cocycle over M. 
Denote by λ i the Lyapunov exponents of A (with multiplicities) with respect to ν M .For any θ ∈ [0, 2π], suppose ψ 1 (t, θ) ≤ • • • ≤ ψ 2g (t, θ) are the eigenvalues of the matrix A * (g t , r θ ω)A(g t , r θ ω).Then, the Hausdorff dimension of the set Divergent Trajectories.The study of exceptional trajectories in Birkhoff's and Oseledets' theorems lends itself naturally to studying trajectories which frequently miss large sets with good properties.This problem is closely connected to studying divergent geodesics, i.e. geodesics which leave every compact subset of H 1 (α).Masur showed in [Mas92] that, for every translation surface ω, the set of directions θ for which g t r θ ω is divergent has Hausdorff dimension at most 1/2.Cheung [Che03] showed that this upper bound is optimal by constructing explicit examples for which this upper bound is realized. In this paper, we study divergent on average geodesics, i.e., geodesics that spend asymptotically zero percent of the time in any compact set.Definition 1.6.A direction θ ∈ [0, 2π] corresponds to a divergent on average geodesic g t r θ ω if for every compact set K ⊂ H 1 (α), lim Note that the set of divergent on average geodesics contains the set of divergent geodesics.Therefore, Theorem 1.7 below strengthens [Mas92]. Theorem 1.7.For any translation surface the Hausdorff dimension of the directions that correspond to divergent on average geodesics is at most 1/2.See also Theorem 3.2 where we consider the set of directions with a prescribed divergence behavior in open strata with finitely many invariant submanifolds removed, which may be of independent interest. Combining Theorem 1.7 with the results in [BN04], we derive the following bound on the dimension of non-weakly mixing interval exchange transformations (IETs) whose permutation is of type W .We refer the reader to Section 9 for detailed definitions. Corollary 1.8.Suppose π is a type W permutation.Then, the Hausdorff codimension of the set of non-weakly mixing IETs (with respect to the Lebesgue measure) with permutation π is at least 1/2.For d ∈ N, we say a permutation π on {1, . . ., d} is a rotation if π(i + 1) = π(i) + 1 mod d for 1 ≤ i ≤ d.Avila and Forni [AF07] showed that for any irreducible permutation, which is not a rotation, Lebesgue almost every IET is weakly mixing.In [AL16], this result was extended to show that for all such permutations, non-weakly mixing IETs have positive Hausdorff codimension.Thus, Corollary 1.8 is an improvement of [AL16] in the case of type W permutations.Moreover, it is shown in [CM18] that if π is the permutation (d, d−1, . . ., 1) for d ≥ 5, then the Hausdorff codimension of the set of non-weakly mixing IETs with permutation π is at most 1/2 (the case d = 4 was done in [AC15]).When d is odd, the permutation (d, d − 1, . . ., 1) is type W . Thus, we identify the exact Hausdorff dimension in this case. Outline of Proofs and Paper Organization.Our general approach is to deduce the desired results (Theorems 1.1, 1.4 and 1.7) from the analogous results for horocycle arcs (Theorems 2.1, 2.2 and 2.3, respectively).The reason is that horocycles are more convenient to work with as the geodesic flow normalizes the horocycle flow in SL 2 (R).This is carried out along with the proof of Corollary 1.3 in Section 2. 
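Returning to the rotation condition stated above (π(i + 1) = π(i) + 1 mod d), the check is mechanical. A small Python sketch, with the permutation written as the tuple of values π(1), . . ., π(d):

def is_rotation(pi):
    """True iff pi(i+1) = pi(i) + 1 (mod d) with values in {1, ..., d}.
    Checking consecutive indices suffices: the wrap-around follows automatically for a bijection."""
    d = len(pi)
    return all((pi[i] % d) + 1 == pi[i + 1] for i in range(d - 1))

print(is_rotation((2, 3, 4, 5, 1)))  # True: a rotation
print(is_rotation((5, 4, 3, 2, 1)))  # False: the permutation (d, d-1, ..., 1) with d = 5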
The strategy for proving Theorem 2.1 on deviations of Birkhoff averages consists of three main steps.First, we show that the convergence in (1.1) holds uniformly as the basepoint ω varies over compact sets in the complement of finitely many proper affine submanifolds.Theorem 5.1 is the precise statement.This result strengthens a result in [CE15] and may be of independent interest. Next, we show that the Hausdorff dimension of directions whose geodesics frequently miss large compact sets, chosen with the help of a height function, is bounded away from 1.This statement is made precise in Theorem 3.2 whose proof is the main content of Section 3. Using similar techniques, Theorem 2.3 is proved in Section 4. Theorem 2.1 is proved in Section 6.The idea is to treat a long orbital average as a sum of orbital averages over shorter orbit segments.With the help of Theorem 5.1, we show that most orbit segments which start from a suitably chosen large compact set with good properties, will have an orbital average close to the correct limit.Using Theorem 3.2, we control the dimension of those orbit segments which miss our good compact set. A key step is to show that the sum of such averages over orbit segments behaves like a sum of weakly dependent random variables, which is achieved by Lemma 6.5.This allows us to show that the measure of badly behaved long orbit averages decays exponentially. The proof of Theorem 2.2 treating deviations in Oseledets' theorem spans Section 7 and Section 8.It follows the same strategy as the one outlined above.It is shown in [CE15] that Oseledets' theorem holds uniformly in the basepoint over large open sets for random walk trajectories.Using Egorov's and Lusin's theorems, we translate these results into results about the Teichmüller geodesic flow.This relies on the classical fact that a random walk trajectory is tracked by a geodesic, up to sublinear error. Finally, we show that trajectories which frequently miss such a large set with good properties exhibit deviation in the discrete Birkhoff averages of its indicator function.The dimension of those trajectories is in turn controlled by Theorem 6.7. In Section 9 we prove Corollary 1.8.In Proposition 9.3 we relate the criterion for weak mixing of IETs with a type W permutation in [BN04] and recurrence of Teichmüller geodesics in a stratum.The combination of this relation and our Theorem 1.7 finishes the proof. In Section 10, we describe how to modify the proof of Theorem 1.1 to show that the Hausdorff dimension of abelian differentials ω for which ergodic integrals along their Teichmüller flow orbit exhibit a definite amount of deviation from the correct limit in Birkhoff's theorem is strictly less than the dimension of SL(2, R)ω. Acknowledgements.The authors would like to thank Jon Chaika for suggesting the problems addressed in this article and for generously sharing his ideas on the project.This work grew out of the AMS Mathematics Research Communities workshop "Dynamical Systems: Smooth, Symbolic, and Measurable" in June 2017, and we are grateful to the organizers.This material is based upon work supported by the National Science Foundation under Grant Number DMS 1641020.All authors are thankful for their support.This material is also based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant Number DGE-1144082.P.A. gratefully acknowledges their support.C.U. gratefully acknowledges support from the NSF grants DMS-1405146 and DMS-1510034. 
Recall that g t contracts ȟ− tan θ , i.e., g t ȟ− tan θ g −t = ȟ−e −2t tan θ , and g t g log cos θ = g t+log cos θ .Therefore, we have that in each theorem formulated in the introduction θ belongs to the exceptional set if and only if tan θ belongs to the exceptional set in the corresponding theorem formulated below.Finally, the bounds for the Hausdorff dimensions of the corresponding sets are preserved as the map θ → tan θ is bi-Lipschitz on [−π/4, π/4]. Theorem 2.1 (Analogue of Theorem 1.1).Suppose M ⊆ H 1 (α) is an affine invariant submanifold and ν M is the affine measure whose support is M.Then, for any bounded continuous function f on M and any ε > 0, there exist affine invariant submanifolds N 1 , . . ., N k , properly contained in M, and δ ∈ (0, 1), such that for all ω ∈ M\ ∪ k i=1 N i , the Hausdorff dimension of the set We remark that minor modifications of the proof of Theorem 2.1 also yield an upper bound on the Hausdorff dimension of the set of directions for which the lim inf is less than the correct limit by a definite amount.Moreover, the exceptional set in Theorem 1.1 can be written as Thus, Theorem 1.1 follows from the reduction to horocycles, Theorem 2.1, and its variant for the lim inf. Theorem 2.2 (Analogue of Theorem 1.4).Suppose (M, ω) ∈ H 1 (α) and ν M is the affine measure whose support is M = SL 2 (R)ω.Let V be a continuous (on M) SL 2 (R) invariant sub-bundle of (some exterior power of ) the Hodge bundle.Assume that A V is strongly irreducible with respect to ν M , where A V is the restriction of the Kontsevich-Zorich cocycle to V .Then, for any ε > 0, there exist affine invariant submanifolds N 1 , . . ., N k , properly contained in M, and δ ∈ (0, 1), such that for all ω ∈ M\ ∪ k i=1 N i , the Hausdorff dimension of the set for a sequence of subsets A n of the real line. In this section, we reduce the problem of finding an upper bound on the Hausdorff dimension of such sets to the problem of finding efficient covers of the A n (see Lemma 2.5). First, we recall the definition of the Hausdorff dimension.Let A be a subset of a metric space X.For any ρ, β > 0, we define Then, the β-dimensional Hausdorff measure of A is defined to be Definition 2.4.The Hausdorff dimension of a subset A of a metric space X is equal to The following lemma provides an upper bound on the Hausdorff dimension of a set for which we have efficient covers. Lemma 2.5.Let {A n } n≥1 be a collection of subsets of R. Suppose there exist constants C, C ′ , t > 0 and λ ∈ (0, 1) such that for each n, A n can be covered with Ce 2(1−λ)tn intervals of radius C ′ e −2tn .Then, the Hausdorff dimension of the set A = lim sup n→∞ A n is at most 1 − λ. Proof.Let β ∈ (1 − λ, 1) and H β denote the β-dimensional Hausdorff (outer) measure on R. We show that H β (A) = 0, and that implies the Lemma.For any ρ ∈ (0, 1), let n 0 = n 0 (ρ) be a natural number such that e −2tn < Cρ for all n ≥ n 0 .Notice that n 0 tends to infinity as ρ goes to 0. Denote by U n a cover of the set A n by Ce 2(1−λ)tn intervals of radius C ′ e −2tn .Then, U = n≥n 0 U n is a cover of A for which the following holds. where #U n is the number of intervals in the cover U n . Let us also recall some basic facts about Hausdorff dimension which will be useful for us.The first concerns the dimension of product sets. 
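Before discussing products, we record for concreteness the standard formulas behind Definition 2.4 and the estimate that completes the proof of Lemma 2.5; this is one common normalization (using diameters of the covering sets), and other equivalent conventions change only the constants:
\[
\mathcal H^{\beta}_{\rho}(A) \;=\; \inf\Big\{ \sum_{i}\big(\operatorname{diam} U_i\big)^{\beta} \;:\; A \subseteq \bigcup_i U_i,\ \operatorname{diam} U_i \le \rho \Big\},
\qquad
\mathcal H^{\beta}(A) \;=\; \lim_{\rho\to 0}\mathcal H^{\beta}_{\rho}(A),
\]
\[
\dim_H(A) \;=\; \inf\big\{ \beta \ge 0 \;:\; \mathcal H^{\beta}(A)=0 \big\}.
\]
With this normalization, the cover U = ∪ n≥n 0 U n in the proof of Lemma 2.5 yields, for any β ∈ (1 − λ, 1),
\[
\mathcal H^{\beta}_{\rho}(A) \;\le\; \sum_{n\ge n_0} \#\,\mathcal U_n \,\big(2C'e^{-2tn}\big)^{\beta}
\;\le\; C\,(2C')^{\beta} \sum_{n\ge n_0} e^{-2tn\,(\beta-(1-\lambda))} \;\longrightarrow\; 0
\]
as ρ → 0 (so that n 0 (ρ) → ∞), whence H β (A) = 0 and dim H (A) ≤ 1 − λ.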
If, in addition, the upper packing dimension of B is equal to its Hausdorff dimension, then

We remark that the lower bound on the dimension of the product is a classical fact, while the upper bound can be obtained directly when B is an open ball, which is the case we will be interested in.

2.3. Proof of Corollary 1.3. Using a simple approximation argument, we may assume that f is Lipschitz. By a similar argument to the one following Theorem 2.1, it suffices to prove that the following set has positive Hausdorff codimension in M.

Let δ > 0 and N 1 , . . ., N k be the affine invariant submanifolds properly contained in M which are provided by Theorem 2.1, depending on f and ε, and suppose ω ∈ M\ ∪ k i=1 N i . Since the action of SL(2, R) is locally free, we can find a small neighborhood of the identity O ω ⊂ SL(2, R) such that the map g → gω is injective on O ω . By making O ω smaller if necessary, we may assume that O ω is the diffeomorphic image of an open bounded neighborhood of 0 in the Lie algebra of SL(2, R) under the exponential map. In particular, there are bounded neighborhoods ω and write g = ȟ z g r h s . Since g t contracts ȟ z and commutes with g r , using the fact that f is Lipschitz, we see that s ∈ B(f, ε) u ω . Conversely, for all s ∈ B(f, ε) u ω and all (z, r)

In particular, we have the identification under the smooth coordinate map in (2.1). Thus, by Proposition 2.6, since the upper packing dimension of an open interval in R is equal to its topological and Hausdorff dimension, we get that

The above argument shows that the dimension of the intersection of

The Contraction Hypothesis and Analysis of Recurrence

In this section, we study the problem of the Hausdorff dimension of trajectories with prescribed divergence behavior. We prove an abstract result for SL 2 (R) actions on metric spaces which satisfy the Contraction Hypothesis (Definition 3.1) in the terminology of Benoist and Quint [BQ12, Section 2]. The results in this section closely follow the ideas in [KKLM17].

Let X be a manifold equipped with a smooth SL(2, R) action. For t, δ > 0, N ∈ N, Q ⊂ X a (compact) set and x ∈ X, define the following set

where χ Q denotes the indicator function of Q.

Definition 3.1 (The Contraction Hypothesis). Let Y be a proper SL(2, R)-invariant submanifold of X (Y = ∅ is allowed). The action of SL 2 (R) on X is said to satisfy the contraction hypothesis with respect to Y if there exists a proper, SO(2)-invariant function α : X → [1, ∞] satisfying the following properties:

(1)

(2) There is a constant σ > 0 such that for all x ∈ X\Y and all t > 0,

(3) There exists a constant b = b(Y ) > 0, such that for all a ∈ (0, 1) there exists t 0 = t 0 (a) > 1 so that for all t > t 0 and all x ∈ X\Y ,

We remark that the study of height functions as in Definition 3.1 originated in [EMM98] in the context of homogeneous spaces.

Throughout this section, X is a manifold equipped with a smooth SL(2, R) action which satisfies the contraction hypothesis with respect to Y , a proper SL(2, R)-invariant submanifold of X. Our goal in this section is to prove the following theorem.

The proof of Theorem 3.2 can be found in Section 3.3. It should be noted that the difference between this theorem and [KKLM17, Theorem 1.5] is the flexibility in the step size t. As a result, the upper bound on the Hausdorff dimension of the considered set depends on t. In fact, the proof of Theorem 3.2 gives an explicit value for λ as a function of t and δ. See also Remark 3.8 for an explicit choice of M 0 depending on δ.

3.1.
Estimates for integrals over horocycle orbits.In this section we obtain an integral estimate similar to (3.3) for integrals over an entire horocycle orbit. From the KAK decomposition of SL 2 (R), K-invariance of α and Property (2) in Definition 3.1, it is easy to see that there exists a positive constant c 0 ≥ 1, that is independent of t, such that for all θ ∈ [−π/4, π/4] and all x ∈ X, Thus, we get that Using a change of variable s = tan(θ) and noting that the Jacobian of this change of variable is uniformly bounded on [−π/4, π/4], we obtain That implies the lemma with b = 2c 0 b, a = ā 2c 0 ∈ (0, 1) and t0 = t 0 (a). Our next lemma replaces integration over a compact subinterval of the horocycle with an integral over the entire horocycle orbit against a Gaussian measure.This is a technical step needed to carry over some results from [KKLM17] to our setting.Lemma 3.4.Let α : X → [1, ∞] be a height function.Then, there is a constant b > 0, such that for all a ∈ (0, 1), there exists t 0 = t 0 (a) > 0 so that for all t > t 0 and all where dρ 1 (s) = e −s 2 ds is a mean 0, variance 1 Gaussian. Proof.By Property (2) in Definition 3.1, the KAK decomposition of SL 2 (R) and K-invariance of α, there exist constants σ > 0 and C > 1, such that for all q ∈ R, Let b and t 0 = t0 (ā) for ā ∈ (0, 1) be given by Lemma 3.3.For any n ∈ Z, we define Then, for any t > t 0 and x ∈ X we have the following. This completes the proof by setting Coverings and Long Excursions. In this section, we aim to find efficient coverings for the set of directions for which geodesics take long excursions outside of certain fixed compact sets.We closely follow [KKLM17, Section 5]. Throughout this section, we fix x in X\Y and use Z x (M, N, t) to denote the set Z x (X ≤M , N, t, 1) defined in (3.1).Moreover, let b > 0 and t 0 = t 0 (a) > 1 for a ∈ (0, 1) be as in Lemma 3.4. The following is the main result of this section. Proposition 3.5.There exist constants C 1 , C 2 ≥ 1 (independent of x and a) such that for all M > C 2 b/a, all t ≥ t 0 and all N ∈ N, Here we relax the restrictions on the height of x and on the dependence of M on t in comparison with [KKLM17, Proposition 5.1] Proof.The proof is the same as that of [KKLM17, Proposition 5.1] with minor modifications that we discuss now.Using Property (2) in Definition 3.1, let C 2 ≥ 1 be such that for all s ∈ [−2, 2], Consider M > C 2 b/a and t > t 0 .Let y ∈ X\Y be so that α(y) > b/a.By Lemma 3.4, we have R α(g t h s y) ds ≤ aα(y) + b ≤ 2aα(y). (3.7) Let N ∈ N and define the following set. On the last line, we applied Lemma 3.4, but since we don't insist that α(x) > b/a, we get the extra term with b.The rest of the proof is identical to that of [KKLM17, Proposition 5.1], where we take the constant C 1 to be the implicit constant in that proof depending only on the Radon-Nikodym derivative of the Lebesgue measure with respect to a certain bounded variance Gaussian on [−1, 1]. As a corollary, we obtain the following covering result. Corollary 3.6.There exists b > 0 such that for all a ∈ (0, 1), there exists t 0 > 1 such that for all x ∈ X, all M > C 2 2 b/a, all N ∈ N, and all t > t 0 , the set Z x (M, N, t) can be covered by 2C 1 C 2 (2a) N e 2tN max 1, α(x) M intervals of radius e −2tN , where C 1 , C 2 > 1 are the absolute constants in Proposition 3.5. Proof.By taking into account the different upper bound obtained in Proposition 3.5, the proof is identical to that of [KKLM17, Corollary 5.2]. 
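For orientation, the integral inequalities provided by Lemma 3.3 and Lemma 3.4, and used in the proof of Proposition 3.5 (in particular in (3.7)), have the following shape; this is a sketch with the constants a ∈ (0, 1), b > 0 and t 0 = t 0 (a) as in the statements above, and with ρ 1 the Gaussian measure fixed in Lemma 3.4:
\[
\int_{-1}^{1} \alpha(g_t h_s x)\,ds \;\le\; a\,\alpha(x) + b
\qquad\text{and}\qquad
\int_{\mathbb R} \alpha(g_t h_s x)\,d\rho_1(s) \;\le\; a\,\alpha(x) + b
\]
for all t > t 0 and all x ∈ X \ Y. When, in addition, α(x) > b/a, the right-hand side is at most 2a α(x), which is the form used in (3.7).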
Proposition 3.7 (cf.Theorem 1.5 in [KKLM17]).Suppose x ∈ X\Y .Then, for any δ, a ∈ (0, 1) there exist M 0 > 1 and t 0 > 1, depending only on a, such that for all M > M 0 , all t > t 0 and all N ∈ N, the set Z x (X ≤M , N, t, δ) can be covered with 2 N C N 1 (2a) δN e 2tN C(x) intervals of radius e −2tN , where C 1 > 1 is as in Corollary 3.6 and C(x) = max 1, α(x) M .Proof.We now describe the modifications needed on the proof of [KKLM17, Theorem 1.5] in order to prove the proposition.In the same notation as in Corollary 3.6, we take M 0 = C 2 2 b/a and let M > M 0 .Then, the rest of the proof follows the same induction scheme used in [KKLM17, Theorem 1.5] with the base case being Corollary 3.6.The only modification on the scheme is to skip the steps involving enlarging M depending on the largeness of the step size t and instead work directly with the bound on covers provided by the preceding corollary. In particular, in the second case of the inductive step in [KKLM17, Theorem 1.5], M is assumed large enough depending on t to apply their covering result [KKLM17, Corollary 5.2] which only applies to x ∈ X with α(x) sufficiently large.Since Corollary 3.6 above works for all x, such restriction on M is not needed. Let M 0 = M 0 (a) and t 0 = t 0 (a) be as in Proposition 3.7.Let M > M 0 and t > t 0 .Define Q = X ≤M , γ = − ln(2a)/2t and β = ln(2C 1 )/2t, i.e., 2C 1 = e 2βt .Then, by Proposition 3.7, for all N ∈ N, we can cover the set Z x (Q, N, t, δ) with C ′ e 2tN (1+β−δγ) intervals of radius e −2tN , for some constant C ′ depending only on x.Note that C ′ is finite by our assumption that x ∈ X\Y .Therefore, by Lemma 2.5, the Hausdorff dimension of the set lim sup N →∞ Z x (Q, N, t, δ) is at most 1 + β − δγ > 0. By the choice of a, this upper bound is strictly less than 1.Finally, we note that by definition of β and γ, our upper bound is uniform over all x ∈ X\Y . Remark 3.8.The proofs of Proposition 3.7 and Theorem 3.2 show that one can choose M 0 = c ′′ be c ′ /δ in the conclusion of Theorem 3.2, for some positive constants c ′ and c ′′ , where b is as in Definition 3.1. Hausdorff Dimension of The Divergent on Average Directions In this section we prove Theorem 2.3 which implies Theorem 1.7 (see Section 2). 
Proof of Theorem 2.3.Consider the set Notice that for all compact sets Q ⊂ H 1 (α), all 0 ≤ δ ≤ 1 and all t > 0, where Building on earlier work of [EM01], it is shown in [Ath06] that the SL 2 (R) action on X = H 1 (α) satisfies the contraction hypothesis with respect to Y = ∅.The precise statement is the following: Lemma 4.1 (Lemma 2.10 in [Ath06]).For every 0 < η < 1, there exists a function α η : X → R + satisfying item (1), ( 2) and (4) of Definition 3.1.Moreover, there are constants c = c(η), and t 0 = t 0 (η) > 0 so that for all t > t 0 , there exists b = b(t, η) > 0 such that for all x ∈ X, By using the integral estimate in (4.1) in place of the one in (3.3), we can prove the following analogue of Lemma 3.4.Corollary 4.2.For every 0 < η < 1, let α η : X → R + be as in Lemma 4.1.Let σ be as in Definition 3.1.Then, there are constants c ′ = c ′ (η, σ), and t 0 = t 0 (η) > 0 so that for all t > t 0 , there exists b where dρ 1 (s) = e −s 2 ds is a mean 0, variance 1 Gaussian.In particular, if As a result, we deduce the upper bound on dim H (Z) from a covering result for the sets Z ω (Q, N, t, δ), where Q will be a sublevel set of a height function α η for η ∈ (0, 1).Moreover, Equation (4.3) is the exact analogue of the integral estimate in [KKLM17, Corollary 4.2].Fixing any choice of the parameter η ∈ (0, 1), the rest of the proof of [KKLM17, Theorem 1.1] applies verbatim in our setting to get that dim 2 . By sending δ to 1 and η to 0, we get the theorem. Uniformity in Birkhoff's Theorem The purpose of this section is to prove a uniform version of [CE15, Theorem 1.1] due to Chaika and Eskin on the pointwise equidistribution of Teichmüller geodesics with respect to the Lebesgue measure on a horocycle arc.This step is crucial for our Hausdorff dimension estimates in the large deviation problems. Throughout this section, suppose M ⊂ H 1 (α) is a fixed SL 2 (R) invariant affine submanifold.For an affine invariant submanifold N ⊂ H 1 (α), we denote by ν N the unique SL 2 (R) invariant Lebesgue probability measure supported on N .For any bounded continuous function φ on H 1 (α), let ν N (φ) = H 1 (α) φ dν N .For any T > 0, s ∈ [−1, 1] and x ∈ H 1 (α), we denote by A T s (x) the measure defined by for any bounded continuous function ϕ on M. Similarly, for N ∈ N and l > 0, we define the measure S N s (x) in the following way. Notice that S N s (x) depends on the step size l, though we do not emphasize this in the notation. For any h ∈ SL 2 (R), we define hA T s (x) and hS N s (x) in the following way. The following theorem is the main result of this section. Theorem 5.1.Suppose f is a bounded continuous function on H 1 (α).Then, for any ε > 0 there exist finitely many proper affine SL 2 (R) invariant submanifolds of M, denoted by N 1 , . . ., N l such that for any compact set F ⊂ M\ ∪ l i=1 N i and any κ > 0, there exists The proof of Theorem 5.1 (see Section 5.3) is based on a combination of the techniques used to prove [CE15, Theorem 1.1] and [EMM15, Theorem 2.11], paying additional care to the unipotent invariance of limiting distributions.Following the same idea, we also prove the following discrete version of Theorem 5.1 (see Section 5.4 for the proof). Theorem 5.2.Suppose f is a bounded continuous function on H 1 (α).Then, for any ε > 0 there exist finitely many proper affine SL 2 (R) invariant submanifolds of M, denoted by N 1 , . . 
., N k such that for any compact set F ⊂ M\ ∪ k i=1 N i , any κ > 0 and all l > 0, there exists N 0 = N 0 (F, κ, l, ε, f ) > 0 such that for all x ∈ F and all N > N 0 , Theorems 5.1 and 5.2 are in the spirit of the results of [EMM15] and [DM93]. 5.1.Some Finiteness and Recurrence Results.In this section we formulate some facts that we use throughout Section 5. The following lemma will provide us with the finite exceptional collection of invariant submanifolds in Theorem 5.1. Lemma 5.3 (Lemma 3.4 in [EMM15]).Given ε > 0 and ϕ ∈ C c (H 1 (α)).There exists a finite collection C of proper affine invariant submanifolds of M with the following property: The following proposition shows that most geodesic trajectories avoid any given finite collection of proper submanifolds of M. Proposition 5.4 (Proposition 3.8 in [EMM15]).Given ε > 0 and any (possibly empty) proper affine invariant submanifold N , there exists an open neighborhood Ω N ,ε of N with the following property: the complement of Ω N ,ε is compact and for any compact set F ⊂ H 1 (α)\N , there exists T 0 = T 0 (F ) > 0, so that for any T > T 0 and any x ∈ F , where χ Ω N ,ε denotes the indicator function of the set Ω N ,ε . The following discrete version of Proposition 5.4 also holds. Proposition 5.5.Given ε > 0 and any (possibly empty) proper affine invariant submanifold N , there exists an open neighborhood Ω N ,ε of N with the following property: the complement of Ω N ,ε is compact and for any compact set F ⊂ H 1 (α)\N and any l > 0, there exists N 0 > 0, so that for any N > N 0 and any x ∈ F , where χ Ω N ,ε denotes the indicator function of the set Ω N ,ε . The proof of Proposition 5.5 is similar to that of Proposition 5.4, i.e., it is a consequence of the contraction hypothesis (see Definition Proof.By Proposition 2.13 in [EMM15], there exists a height function f N with X = H 1 (α) and Y = N (see Definition 3.1).Let m F = sup {f N (x) : x ∈ F }. Notice that m F ≥ 1 as, by definition, f N (x) ≥ 1 for any x ∈ H 1 (α).Then, by Lemma 3.3, there exists t 1 > 0 such that for all t > t 1 and all x ∈ F , Moreover, by Property (2) in Definition 3.1, there exists M = M(t 1 ) > 0 such that for all 0 ≤ t ≤ t 1 and all x, f N (g t h s x) ≤ Mf N (x). Let L > 0 be such that b+2 L < ε.Define a set Ω N ,ε in the following way. where {•} o denotes the interior of a set.Then, by Property (4) in Definition 3.1, Ω N ,ε is an open neighborhood of N with compact complement.Let N 0 ∈ N be sufficiently large so that Then, using the above estimates, we get (5.9) Notice that for any n ∈ N and s ∈ [−1, 1], we have (5.10) Therefore, by (5.9) and (5.10), we obtain 5.2.Effective Unipotent Invariance.In this section, we show a quantative version of [CE15, Proposition 3.1] (Proposition 5.7) regarding almost sure unipotent invariance of limit points of measures of the form (5.1).Also, we state an analogue of it for discrete averages (Proposition 5.8), whose proof is identical to the flow case.See [Kha17] for a generalization of this phenomenon to semisimple Lie group actions. (5.11) The following lemma formulated for horocycle arcs is an analogue of Lemma 3.3 in [CE15] which is proved for circle arcs. We note that the proof of Lemma 5.6 is identical to the proof of [CE15, Lemma 3.3] and simpler if one takes into account that the group of elements h s is normalized by g t . 
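For later reference, we record the shape of the averaging measures introduced at the beginning of this section and of the conclusions of Theorems 5.1 and 5.2; this is a sketch in our notation, with ε, κ, T 0 and N 0 as in the statements of those theorems:
\[
A^T_s(x)(\varphi) \;=\; \frac{1}{T}\int_0^T \varphi(g_t h_s x)\,dt,
\qquad
S^N_s(x)(\varphi) \;=\; \frac{1}{N}\sum_{n=0}^{N-1} \varphi(g_{nl} h_s x),
\]
and the uniformity assertions state that for every x ∈ F and every T > T 0 (respectively N > N 0 ),
\[
\Big|\big\{\, s\in[-1,1] \;:\; \big|A^T_s(x)(f)-\nu_{\mathcal M}(f)\big| > \varepsilon \,\big\}\Big| \;<\; \kappa,
\qquad
\Big|\big\{\, s\in[-1,1] \;:\; \big|S^N_s(x)(f)-\nu_{\mathcal M}(f)\big| > \varepsilon \,\big\}\Big| \;<\; \kappa,
\]
where | · | denotes Lebesgue measure on [−1, 1].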
Proposition 5.7 (Quantitative version of Proposition 3.1 in [CE15]). Suppose β ∈ R. Then, there exists a constant C > 0, such that for all T > 0, all x ∈ H 1 (α) and all ϕ ∈ C ∞ c (H 1 (α)), the Lebesgue measure of the set

The version of Proposition 5.7 for discrete averages is the following.

Proposition 5.8. Suppose β ∈ R. Then, there exists a constant C > 0, such that for all N > 0, all x ∈ H 1 (α) and all ϕ ∈ C ∞ c (H 1 (α)), the Lebesgue measure of the set

Proof of Proposition 5.7. By Fubini's theorem and Lemma 5.6, one has

where we used the facts that |f t (s)| ≤ 2||ϕ|| ∞ , the measure of the region |t 1 − t 2 | < T 1/2 is at most 2T 3/2 , and C 2 > 16C 1 is a constant such that for all T > 0, one has

Using Chebyshev's inequality, we obtain the proposition.

5.3. Proof of Theorem 5.1. Fix positive constants ε and κ. Let C be the finite collection of affine invariant submanifolds N 1 , . . ., N k of M given by Lemma 5.3 applied to the given function f and ε/2. Consider a compact subset F ⊂ M\ ∪ i N i . Let ε ′ > 0 be a sufficiently small number such that √ε ′ < min{κ/3, ε/(9‖f ‖ ∞ )}. By Proposition 5.4, since C is a finite collection, there exists an open neighborhood Ω C,ε ′ of ∪ i N i and T 0 > 0, depending on ε ′ and F , such that for all T > T 0 and all x ∈ F , we have

and hence, by Chebyshev's inequality, we get that the measure of the set

By Proposition 5.7, there exists a constant C > 0 such that for all T > 0 and all x, the measure of the sets

, where S(•) is a Sobolev norm (see (5.4)) and

We prove the theorem by contradiction. Suppose that the conclusion of the theorem does not hold for our choice of F and κ. Then, there exists a sequence x n ∈ F and T n → ∞ such that for each n ∈ N, the set

has measure at least κ. By our estimates on the measures of the sets in (5.12), (5.13) and (5.14), and the choice of ε ′ such that √ε ′ < κ/3, the following holds. For all n sufficiently large so that CT n −1/8 < κ/3, we have

where for a set A ⊂ [−1, 1], we use A c to denote its complement. Therefore, for all n sufficiently large we can choose a point s n that belongs to the intersection in (5.15). Since the space of Borel measures on H 1 (α) of mass at most 1 is compact in the weak-* topology, after passing to a subsequence if necessary, we may assume that there is a Borel measure ν such that

Note that a priori ν may be the 0 measure. We show that this is not the case. We claim that ν is SL 2 (R) invariant. By Theorem 1.4 due to Eskin and Mirzakhani [EM13], it is sufficient to show that ν is invariant by P , the subgroup of upper triangular matrices. Clearly, ν is invariant by g t for all t. Moreover, by the dominated convergence theorem, it suffices to show that ν is invariant by h β 1 and h β 2 , as they generate a dense subgroup of U = {h s : s ∈ R}. Since smooth functions are dense in the set of compactly supported continuous functions, it suffices to show that, for i = 1, 2 and for all ϕ k ∈ Φ, our countable dense collection of smooth compactly supported functions,
Fix some ϕ k ∈ Φ. Note that for all n sufficiently large, we have that

As a result, we have

We show that this is not possible. By Proposition 2.16 in [EMM15], there are countably many affine invariant submanifolds in H 1 (α). Thus, since ν is SL 2 (R) invariant, it has a countable ergodic decomposition of the form

where the sum is taken over all such proper (possibly empty) affine invariant submanifolds and a N ∈ [0, 1] for all N . Note that, since s n ∉ D(x n , T n , ε ′ ), we have

for all N not contained in any member of C, by definition of the collection C. Let |ν| := Σ N a N be the total mass of ν. Then, we have that

We get the desired contradiction by our choice of ε ′ .

5.4. Proof of Theorem 5.2. The proof is similar to the proof of Theorem 5.1 (the flow case), and relies on using Propositions 5.5 and 5.8 instead of Propositions 5.4 and 5.7, respectively. The proof also goes by contradiction. Assuming that the conclusion of the theorem does not hold, we construct an SL 2 (R) invariant measure ν. The analysis of its ergodic decomposition implies a contradiction as in Section 5.3.

The following lemma allows us to show that the constructed measure ν is SL 2 (R) invariant.

Lemma 5.9. Let l > 0 and let P l be the group generated by elements of the form g ln h s for n ∈ Z and s ∈ R. Then, ν is SL 2 (R) invariant if ν is a P l ergodic invariant probability measure on M.

Proof. Denote by ν̄ the measure defined by

where (g t ) * ν is the pushforward of ν. Then, ν̄ is invariant by the group of upper triangular matrices P . Notice that for any t ∈ (0, l) we have that (g t ) * ν is invariant by the group U = {h s : s ∈ R}, due to the fact that U is normalized by g t and ν is invariant by U. That implies that ν̄ is invariant by U, as it is a convex combination of U invariant measures. Similarly, we can show that ν̄ is invariant under the Z-action of g l . To show the invariance under the group A = {g t : t ∈ R}, we write t = ml + r for some m ∈ Z and r ∈ [0, l) and use the invariance by {g nl : n ∈ Z}. As a result, by [EM13, Theorem 1.4], ν̄ is SL 2 (R)-invariant. Thus, ν̄ has the following ergodic decomposition with respect to the SL 2 (R) action:

(5.17)

where each ν N is ergodic under the SL 2 (R) action. But, by Mautner's phenomenon, each ν N is ergodic under the action of h s for all s ≠ 0. On the other hand, (g t ) * ν is h s -invariant for all t and s. Hence, equations (5.16) and (5.17) give two decompositions of ν̄ for the action of h s , one of which is a countable decomposition into ergodic measures. Thus, by uniqueness of the ergodic decomposition, there exists a set A ⊆ [0, l] of positive Lebesgue measure |A| and an affine invariant manifold N so that a N = |A|/l and

But, by ergodicity of ν N under the action of h s , we have that (g t ) * ν = ν N for almost every t ∈ A. Since ν N is SL 2 (R) invariant, then so is ν.

Dimension of Directions with Large Deviations in Birkhoff's Theorem

The goal of this section is to prove Theorem 2.1. We also outline the modifications on the proof needed to prove Theorem 6.7 in Section 6.6. In what follows, M ⊆ H 1 (α) is a fixed affine invariant manifold. By a simple approximation argument, it is enough to prove Theorem 2.1 when f is a Lipschitz function. We let S(f ) denote the Sobolev norm (see (5.4)), and

Throughout this section we use the following notation. For any positive ε, N ∈ R, M ∈ N and a subset Q ⊆ M, we define the following sets.
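To fix notation, the sets just referred to have the following form. The description of Z ω below matches the verbal description given below in the proof of Lemma 6.1, while the formula for B ω is one natural reading of the definition (an assumption on our part), chosen so that it agrees with the exceptional set of Theorem 2.1 up to the comparison of averages over [0, T ] and over [0, MN ]:
\[
Z_{\omega}(Q, M, N, \delta) \;=\; \big\{\, s\in[-1,1] \;:\; \#\{\,0\le i< M : g_{iN}h_s\omega\notin Q\,\} \;\ge\; \delta M \,\big\},
\]
\[
B_{\omega}(f, N, \varepsilon) \;=\; \Big\{\, s\in[-1,1] \;:\; \limsup_{M\to\infty}\ \frac{1}{MN}\int_0^{MN} f(g_t h_s\omega)\,dt \;\ge\; \nu_{\mathcal M}(f) + \varepsilon \,\Big\}.
\]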
It is straightforward to check that B ω (f, N, ε) is equal to the exceptional set considered in Theorem 2.1.The sets Z ω (Q, M, N, ε) are the same as the ones defined in (3.1). Next, for any s ∈ [−1, 1], i ∈ N and positive β, N ∈ R, we define the corresponding functions and sets: Here, we drop the dependence on the basepoint ω from the notation for simplicity. Strategy.The strategy for proving Theorem 2.1 consists of two steps.The first step is to use Theorem 5.2 to control the measure of the sets F i (β).This is carried out in Lemma 6.2.The next step is to show that the sets F i (ε/2) behave like level sets of independent random variables (Proposition 6.5).This will allow us to bound the measure of finite intersections of these sets.The proof of this independence property also yields a mechanism for controlling the number of intervals needed to cover such finite intersection using its measure (Lemma 6.6). In order to apply Theorem 5.2, we need to insure that our trajectories land in a prechosen compact set.Hence, we are forced to run the above argument but restricted to the "recurrent directions".This restriction to recurrent directions is shown in Lemma 6.1.Applying Theorem 3.2, we control the Hausdorff dimension of the non-recurrent directions. 6.1.Sets and Partitions.For N > 0 and i ∈ N, let P i denote the partition of [−1, 1] into intervals of radius e −2iN .For a set Q ⊂ H 1 (α), define the following sub-partitions Here R signifies recurrence and D signifies divergence.We note that the definition of R i depends on the basepoint ω but we suppress this dependence in our notation. By (6.4), it suffices to show the following to prove the lemma. where for a set E ⊆ [−1, 1], E c denotes its complement.The set Z ω (Q, M, N, δ) was defined to be the set of directions s such that g iN h s ω / ∈ Q for at least δM natural numbers i < M. Hence, we get that Indeed, the right hand side describes the set of directions s which belong to J∈R j J for at least (1 − δ)M natural numbers j < M. By definition of R j , this certainly contains the set of directions s for which g jN h s ω ∈ Q for at least (1 − δ)M natural numbers j < M, that is the set on the left hand side. Notice that the following inclusions hold. where for the last inclusion we used the fact that for two sets A, B ⊆ {0, . . ., M − 1} with |A| > 2δM and |B| > (1 − δ)M, we have that |A ∩ B| > δM.Moreover, notice that This completes the proof. 6.2.Measure Bounds for F i .The next lemma allows us to control the measure of the proportion of a set F i in an element of the partition R i (Q) for a suitably chosen large compact set with good properties.This will be a direct application of Theorem 5.1. Let N 1 , . . ., N k be proper affine invariant submanifolds as in Theorem 5.1 applied to ε and f .By [EMM15, Proposition 2.13], for any i = 1, . . ., k there exist height functions f N i such that for all ℓ > 0, the sets The following is the main result of this section which is the form we will use Theorem 5.1 in.Recall the definition of the sets F i (β) in (6.3).Lemma 6.2.For all ℓ > 0 and all a > 0, there exists T 0 > 0 such that for all N > T 0 , β > ε, i ∈ N, all ω ∈ H 1 (α) and all J ∈ R i (C ℓ ), we have where ν is the Lebesgue probability measure on [−1, 1]. Proof.Denote by B 1 a neighborhood of radius 1 around identity in SL 2 (R).Fix ℓ > 0 and a > 0. 
Let ℓ ′ > ℓ be such that By Theorem 5.1 applied to f , ε, a and the compact set where N i are given by that theorem, there exists T 0 such that for all N > T 0 and x ∈ F , we have For any i ∈ N, we define R i := R i (C ℓ ).Fix J ∈ R i .Let s 0 ∈ J be such that g iN h s 0 ω ∈ C ℓ .By our choice of ℓ ′ , we have the following holds for any s ∈ J. In particular, the above holds for the center c 0 of the interval J. Let s ∈ J − c 0 be such that s + c 0 ∈ F i (β).Then, we get that Thus, we obtain the following. The following corollary is an immediate consequence of Lemma 6.2 and the fact that elements of R i are disjoint.Corollary 6.3.For all ℓ > 0 and all a > 0, there exists T 0 > 0 such that for all N > T 0 , β > ε, i ∈ N, we have that 6.3.Independence of the Sets F i .The goal of this section is to prove that the sets F i (β) behave as if they are independent.More precisely, we will prove that the measure of the intersection of such sets is bounded above by the product of their measures, up to controlled error.Recall the definition of partitions P i in Section 6.1. We start with the following simple but key observation. Lemma 6.4.Suppose i < j, where i and j are natural numbers, and β > 0. Let J ∈ P j be such that Proof.Let s ∈ J ∩ F i (β).Then, |s − η| ≤ e −2jN for any η ∈ J. Hence, since f is Lipschitz, we have that for all t ∈ [iN, (i + 1)N] where we use d(g, h) to be the metric on SL 2 (R) defined by the maximum absolute value of the entries of the matrix gh −1 − Id.Averaging the above inequality in t, we get that which implies the lemma. The following lemma is the main result of this section.Let the notation be the same as in Lemma 6.2.Lemma 6.5 (Independence Lemma).Suppose ε is given.Then, for all ℓ > 0 and all a > 0, there exists T 0 > 0 such that for all ω ∈ H 1 (α), N > T 0 , β > ε + S(f ) N and finite sets A ⊂ N, we have where |A| is the number of elements in A. For any β > ε + S(f ) N and i ∈ N, we define where we use R i to denote R i (C ℓ ).We proceed by induction on p. Since elements of R p are disjoint, we have ∅ and for any i = 1, . . ., p − 1 there exists J ′ ∈ R i such that J ∩ J ′ = ∅.Hence, by Lemma 6.4, By enlarging N if necessary, we may assume that P j is a refinement of P i for i ≤ j.Hence, we see that In particular, we obtain the following base step in our inductive procedure. Note here that our assumption that A = {1, . . ., p} maximizes the sum in the above inequality.In other words, our choice of β guarantees that the above inequality holds where the sum is taken over any set of natural numbers A of cardinality p. Hence, by induction on our base measure estimate in (6.8), via repeated application of Lemma 6.4, ≤ a p as desired. 6.4.A Covering Lemma.As a consequence of Lemma 6.5, we obtain the following bound on the number of intervals needed to cover intersections of the recurrent parts of the sets F i .More precisely, we obtain the following. where |A| is the number of elements in A. 
Proof.Fix ℓ and a.Let T 0 > 0 be as in Proposition 6.5, N > T 0 , M ∈ N and β > ε As in the proof of Lemma 6.5, a combination of Lemma 6.4 and the fact that the partitions P i form a refining sequence of partitions (which we may assume by enlarging N slightly if necessary) shows that for all J ∈ P M , In particular, for any Therefore, by our condition on β and Lemma 6.5, we get Recall that P M is a partition of [−1, 1] into intervals of radius e −2M N .In particular, for J ∈ P M , ν(J) = e −2M N .Combined with (6.9), this implies the lemma.6.5.Proof of Theorem 2.1.Let us fix the following parameters so that we can apply Lemmas 6.5 and 6.6.Fix ε > 0. Let δ, a > 0 be sufficiently small so that the following holds. and 2 < a −δ .(6.10) Let N 1 , . . ., N k be proper affine invariant submanifolds as in Theorem 5.1 applied to ε/50 and f .By [EMM15, Proposition 2.13], for any i = 1, . . ., k there exists a height functions f N i .For ℓ > 0, let The function α = k 1 f N i satisfies all the properties in Definition 3.1 (see [EMM15, Proposition 2.13]).Suppose ω ∈ M\ ∪ k i=1 N i .Thus, α(ω) < ∞.In particular, Theorem 3.2 applies and guarantees the existence of some ℓ = ℓ(δ) and t 0 > 0 so that for all t > t 0 , one has dim H (Z ω (C ℓ , t, δ)) 1. (6.11) where the bound is uniform over all ω ∈ M\ ∪ k i=1 N i .Let ℓ > 0 be such that (6.11) holds.Let T 0 > 0 be as in Lemma 6.6 applied to f , ε/50.Let N > max {T 0 , t 0 }.Over the course of the proof, we will enlarge N as necessary, depending only on ε, a and f .Fix some ω ∈ M\ ∪ k i=1 N i .Recall the definition of the sets F i (see (6.3)), partitions P i and R i := R i (C ℓ ) (see Section 6.1).By enlarging N if necessary, we may assume that P i form a refining sequence of partitions.For each i ∈ N and β > 0, define By Lemma 6.1, we get that Thus, by (6.11), it suffices to bound the Hausdorff dimension of the second set on the right hand side.Let M ∈ N and define The number of sets of the form A in the above union is at most M ⌈δM ⌉ .Moreover, we may assume N is large enough so that ε/2 > ε/50 + 2S(f ) N Hence, we may apply Lemma 6.6 with ε/50 in place of ε to get that when N is large enough, we have Let β = ln(2)/2N and γ = − 1 2N ln a δ .Then, (6.13) can be rewritten in the following way.# J ∈ P M : J ∩ F R M = ∅ ≤ e 2(1+β−γ)M N By Lemma 2.5, we get that the Hausdorff dimension of lim sup M F R M is at most 1 + β − γ.This bound is strictly less than 1 if and only if 2 < a −δ , which holds by our choice of a in (6.10).Finally, we note that our upper bound depends only on f and ε and is uniform in the choice of ω in M\ ∪ k i=1 N i .This completes the proof.6.6.Deviations of Discrete Birkhoff Averages.The same methods used in this section to prove Theorem 2.1 also imply the following analogous statement for discrete Birkhoff averages. Theorem 6.7.Suppose M ⊆ H 1 (α) is an affine invariant submanifold and ν M is the affine measure whose support is M.Then, for any bounded continuous function f on M and any ε > 0, there exist affine invariant submanifolds N 1 , . . ., N k , properly contained in M, and δ ∈ (0, 1), such that for all ω ∈ M\ ∪ k i=1 N i and all l > 0, the Hausdorff dimension of the set We note that by modifying the definition of the functions f i in (6.2) to be the rest of the proof of Theorem 6.7 follows verbatim as in the case of flows and as such we omit it. 
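For comparison with Theorem 2.1, we record the shape of the exceptional set in Theorem 6.7; this is a sketch using the same one-sided convention as in Theorem 2.1, with l > 0 the step size and δ ∈ (0, 1) as in the statement:
\[
\Big\{\, s\in[-1,1] \;:\; \limsup_{N\to\infty}\ \frac{1}{N}\sum_{n=0}^{N-1} f(g_{nl}h_s\omega) \;\ge\; \nu_{\mathcal M}(f) + \varepsilon \,\Big\},
\]
whose Hausdorff dimension is asserted to be at most δ, uniformly over ω ∈ M \ ∪ k i=1 N i .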
Random Walks and Oseledets' Theorem In this section, we recall some results on the growth of the Kontsevich-Zorich cocycle along random walk trajectories on H 1 (α) which were proved in [CE15].Using the fact that a typical random walk trajectory is tracked by a geodesic up to sublinear error, we translate such results to results concerning the Teichmüller geodesic flow.Suppose (M, ω) ∈ H 1 (α) and ν M is the affine measure whose support is M = SL 2 (R)ω.Let V be a continuous SL 2 (R)-invariant subbundle over H 1 (α) of (an exterior power of) the Hodge bundle.Denote by A V : SL 2 (R) × M → GL(V ) the restriction of the Kontsevich-Zorich cocycle to V .Let A V (•, •) be the Hodge norm on V (see [FM14, Section 3.4]). Denote by λ V the top Lyapunov exponent of this cocycle under the Teichmüller geodesic flow with respect to ν M .In particular, by Oseledets' multiplicative ergodic theorem, for ν M almost every x ∈ M, The cocycle A V satisfies the following (Lipschitz) property with respect to the Hodge norm: there exists a constant K ∈ N such that for all x ∈ M and all g ∈ SL 2 (R), where for g ∈ SL 2 (R), we use g to denote the norm of g in its standard action on R 2 .This follows from [For02, Lemma 2.1'] (see also [FM14,Corollary 30]).We note that the power K appears since we are considering the action of an exterior power of the cocycle.Moreover, Forni's variational formula for the derivative of the cocycle along geodesics implies (7.1) for general elements of SL 2 (R) by the KAK decomposition, the cocycle property and the fact that A(r θ , •) = 1 for all θ. Since A V (id, x) = id for all x, we see that A V (g, x) −1 = A V (g −1 , gx) for all g ∈ SL 2 (R) and x ∈ M. Hence, by (7.1), we get We shall need the following facts about matrix norms which follow from the KAK decomposition and the bi-invariance of • under K. (2) g −1 = g where d denotes the right invariant metric on SL 2 (R) and id is the identity element. 7.1.Random Walks.In the remainder of this section and the next section, we fix a compactly supported probability measure µ on SL 2 (R) which is SO(2) bi-invariant and absolutely continuous with respect to the Haar measure.Let SL 2 (R) N be the space of infinite sequences of elements in SL 2 (R) equipped with the probability measure µ N .For each n define the random variable The measure ν M is an ergodic µ-stationary measure i.e. it cannot be written as a non-trivial convex combination of other µ-stationary measures.By a variant of Oseldets' theorem, due to [GM89] in the setting of random walks, there exists λ µ V ∈ R such that for ν M -almost every x and for µ N almost every (g 1 , g 2 , . . . The following sets were introduced in [CE15] as a way to quantify uniformity in the above limit. The Sets E good (ε, L).Let ε > 0 and L ∈ N. Denote by E good (ε, L) the set of points y ∈ M such that for all v ∈ V , there exists a set The following lemma is an important part of our proof as it is a key step in the proof of the Oseledets part of [CE15]. 
Lemma 7.2 (Lemma 2.11 in [CE15]).For any fixed ε > 0, the sets E good (ε, L) are open and From Random Walks to Flows.Since we will be concerned with metric properties of the exceptional set, it will be important for us to translate random walk results into the language of Teichmüller geodesics.It is a classical fact that random walk trajectories induced by a stationary measure on SL 2 (R) tracks (up to sublinear error) a Teichmüller geodesic.This is made precise in the following: Lemma 7.3 (Lemma 4.1 in [CE15]).There exists λ > 0, depending only on µ, such that there exists a measurable map Θ : SL 2 (R) N → [−π/2, π/2] , defined µ N -almost everywhere, so that for µ N -a.e.g = (g 1 , g 2 , . . . Furthermore, Θ * µ N coincides with the normalized Lebesgue measure.In particular, for any interval Remark 7.4.The relationship between the Lyapunov exponent of the random walk λ µ V and the Lyapunov exponent of the Kontsevich-Zorich cocycle under the Teichmüller flow λ V is provided by the parameter λ in Lemma 7.3 as follows. The following Lemma uses Lemma 7.3 to show that geodesic trajectories which start within the sets E good (ε, L) also exhibit good properties with respect to the cocycle. For simplicity, throughout this section we use the notation A := A V Lemma 7.5.There exists a constant C > 0, depending only on the constants of the cocycle such that the following holds: for every ε > 0, there exists L 0 > 0 such that for all L ∈ N with L ≥ L 0 , for all y ∈ E good (ε, L) and all v ∈ V , there exists Proof.Let λ is as in Lemma 7.3.Using Egorov's theorem, we can find a set U ⊆ SL 2 (R) N with µ N (U) > 1 − ε so that the convergence in (7.3) is uniform over U.In particular, we can choose L ∈ N sufficiently large so that for all g ∈ U: as in the definition of E good (ε, L).We will regard H(v) as a cylinder subset of SL 2 (R) N in the natural way.The set H(v) will be essentially the image of H(v) ∩ U under Θ, except that Θ is only a measurable map.To go around this, we use Lusin's theorem to find a compact set Since Θ is continuous on K and H(v) is a Borel subset of K, we see that H(v) is Lebesgue measurable.Moreover, by Lemma 7.3, one has To see that H(v) satisfies the conclusion of the Lemma, let g ∈ H(v) ∩ U ∩ K.For all L sufficiently large so that (7.5) holds for all g ∈ U, define ε L ∈ SL 2 (R) by the following equation Then, using the cocycle property, we get Hence, since g ∈ H(v), by definition of the set E good (ε, L) and by (7.1), we get Dividing both estimates by λ and noting that by remark 7.4, λ V = λ µ V /λ, we get the desired conclusion. As a corollary, we obtain the following statement for horocycles. Corollary 7.6.There exists a constant C 2 > 0, depending only on the constants of the cocycle such that the following holds: for every ε > 0, there exists L 0 > 0 such that for all L ∈ N with L ≥ L 0 , for all y ∈ E good (ε, L) and all v ∈ V , there exists Proof.Fix ε > 0. Suppose L ∈ N is sufficiently large so that Lemma 7.5 holds, y ∈ E good (ε, L) and v ∈ V .Let H(v) ⊆ [−π/2, π/2] and C > 0 be as in the conclusion of Lemma 7.5.Consider We verify that the corollary holds for this set.Let For every θ ∈ H(v) ∩ [−ρ, ρ] we write r θ = ȟ− tan θ g log cos θ h tan θ .Then, using the cocycle property, we see the following. 
The purpose of this section is to prove Theorem 2.2, concerning the Hausdorff dimension of directions whose geodesics exhibit deviation of the top Lyapunov exponent for the Kontsevich-Zorich cocycle. The structure of the proof is very similar to that of Theorem 2.1. The idea is to relate the directions exhibiting deviation in Oseledets' theorem along a Teichmüller geodesic to the directions exhibiting deviation in Birkhoff's theorem for the indicator function of a large open set with good properties with respect to the cocycle. The proof is written in such a way so as to mirror the proof of Theorem 2.1 on deviations in Birkhoff's theorem.

Throughout this section we retain the notation from the previous section and also use the following. For any positive ε, L ∈ R and M ∈ N, we define the following sets.

Using the cocycle property, it is easy to check that for any L > 0

Moreover, for any s ∈ [−1, 1], β, L > 0 and i ∈ N, we define the corresponding functions and sets.

The functions a i and sets A i play the role of the functions f i (see (6.2)) and the sets F i (see (6.3)), respectively, in the proof of large deviations in Birkhoff's theorem.

8.1. Sets and Partitions. For L > 0 and i ∈ N, let P i denote the partition of [−1, 1] into intervals of radius e −2iL . By enlarging L if necessary, we may assume e L ∈ N and that P i+1 is a refinement of P i for all i. For ε > 0, define the following sub-partitions

Here E signifies recurrence to the set E good .

The following lemma is an analogue of Lemma 6.1.

Proof. First, we notice that for any ε > 0

Using the cocycle property and submultiplicativity of matrix norms, we have the following inequalities

From this point on, using (7.1) to bound ‖A(g L , •)‖, the proof is identical to that of Lemma 6.1.

8.2. Measure Bounds for A i . The goal of this section is to obtain a uniform bound on the measure of sets of the form A i ∩ J for any J ∈ E i and any i. This step is analogous to Lemma 6.2.

The following is the main result of this section. The key input in the proof is Lemma 7.5.

Proof. Let L 0 > 0 and λ > 0 be as in Corollary 7.6 and Lemma 7.3, respectively. Define L 1 := L 0 /λ. Suppose γ ∈ R and L ∈ N are such that γ ≥ 2C 2 ε and L ≥ L 1 . Let i ∈ N, J ∈ E i := E i (ε, L), and s 0 ∈ J be such that y 0 := g iL h s 0 ω ∈ E good (ε, L). Let v ∈ V and G(v) ⊆ [−2, 2] be as in Corollary 7.6. Choose η ∈ J − s 0 such that s 0 + η ∈ A i (γ) ∩ J. Then, we have

Note that e 2iL (J − s 0 ) is a subinterval of [−2, 2] of length 2. In particular, we get that

Thus, since the Lebesgue measure of G(v) is at least 4(1 − 30ε), we get the following measure estimate

This concludes the proof in the case L ∈ N. For L ≥ L 1 with L ∉ N, write L = ⌊L⌋ + {L}, where ⌊L⌋ is the largest natural number less than L and {L} = L − ⌊L⌋. Then, using the cocycle property, submultiplicativity of the norm and the Lipschitz property of the cocycle (7.1), we get

Thus, we see that the conclusion follows in this case from the case when L ∈ N by choosing L 1 sufficiently large depending on ε.

8.3. Independence of the Sets A i . As a consequence of the Lipschitz property of the cocycle (7.1), we are able to prove an analogue of Lemma 6.4.

Lemma 8.3. There exists a constant C 3 > 0, depending only on the constants of the cocycle A, so that the following holds. Suppose i < j, where i and j are natural numbers, L > 0, and γ > 0. Let J ∈ P j be such that

Then, by definition of the partition P j in Section 8.1, |s 0 − η| ≤ e −2jL for any η ∈ J. Using the cocycle property, we have the following.
Hence, by (7.1), (7.2) and Lemma 7.1, there exists a constant C 1 so that

which concludes the proof.

As a consequence, we obtain exponential decay in the measure of intersections of the sets A i , similarly to Lemma 6.5.

Lemma 8.4 (Independence Lemma for A i ). Let C 3 > 0 be as in Lemma 8.3 and C 2 > 0 be as in Corollary 7.6. Then, for all ε > 0, there exists L 1 > 0 such that for all L ≥ L 1 , all finite sets B ⊂ N and all γ > 2C 2 ε

Proof. The proof is identical to the proof of Lemma 6.5, which is a formal consequence of two results: Lemma 6.2, which gives an upper bound on the measure of F i , and Lemma 6.4. The analogues of those two results are Lemma 8.2 and Lemma 8.3, respectively.

8.4. A Covering Lemma. The following lemma shows the existence of efficient covers for intersections of the sets A i , similarly to Lemma 6.6.

where |B| is the number of elements in B.

Proof. The proof is a direct consequence of Lemma 8.4 and proceeds as in the proof of Lemma 6.6.

8.5. Proof of Theorem 2.2. Fix ε > 0. Suppose ε ′ > 0 is a sufficiently small number (depending only on ε). Define δ := ε/4K, where K is the exponent in (7.1). By Lemma 7.2, choose L > 0 large enough, depending on ε ′ , so that

Let χ E denote the indicator function of the open set E good (ε ′ , L). Then, using a variant of Urysohn's lemma, we can find a Lipschitz compactly supported continuous function

Moreover, we have that for all M ∈ N and all ω ∈ M,

where these sets are defined in (3.1) and (6.1) (for discrete Birkhoff averages). Note that δ − ν M (1 − f ) > 0 by (8.4). Thus, by Theorem 6.7, there exist 0 < η < 1 and finitely many proper affine invariant manifolds N 1 , . . ., N k ⊂ M, depending on f and ε, so that the following holds uniformly for all ω ∈ M\ ∪ k i=1 N i . These are the affine manifolds appearing in the conclusion of Theorem 2.2. Now, fix one such ω. Recall the definition of the sets A i and partitions P i in Section 8.1. By enlarging L if necessary, we may assume that P i form a refining sequence of partitions. For i ∈ N and γ > 0, define

Thus, it remains to control the Hausdorff dimension of the second set on the right side. We apply Lemma 8.5 to ε ′ in place of ε and γ = ε/2. By choosing ε ′ to be sufficiently small and L sufficiently large, we can ensure that ε/2 > 2C 2 ε ′ + 2C 3 L, where C 2 and C 3 are constants depending only on the cocycle as in the statement of Lemma 8.5. As a result, choosing L sufficiently large, we can apply Lemma 8.5 and proceed as in the proof of Theorem 2.1 to get that

By choosing ε ′ < 2 −1/δ /120 (thus depending only on ε), we get that this upper bound is strictly less than one. Moreover, observe that the parameters δ, ε ′ , L appearing in the upper bound above are independent of ω. This completes the proof.

Weak Mixing IETs

This section is dedicated to the proof of Corollary 1.8. We first recall some definitions and the results of [BN04] which connect weak mixing properties of IETs with the recurrence of Teichmüller geodesics in an appropriate stratum. Throughout this section, we fix a natural number d ≥ 2.
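For the reader's convenience, we recall the standard definition of an interval exchange transformation in one common normalization (conventions differ slightly between references): given an irreducible permutation π of {1, . . ., d} and a length vector λ ∈ R d + , the map T λ,π translates the subintervals of lengths λ 1 , . . ., λ d so that they are rearranged according to π,
\[
T_{\lambda,\pi}(x) \;=\; x \;-\; \sum_{j<i}\lambda_j \;+\; \sum_{\pi(j)<\pi(i)}\lambda_j,
\qquad
x \in \Big[\sum_{j<i}\lambda_j,\ \sum_{j\le i}\lambda_j\Big).
\]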
Given a permutation π on d An IET T λ,π has finitely many points of possible discontinuity Define (See [Vee78] and [MW14, Section 2.2]) an alternating bilinear form on R d × R d by its value on the standard basis elements e i as follows The cone R d + can be viewed as the space of IETs with a given permutation π with a natural euclidean metric and Lebesgue measure.IETs preserve the Lebesgue measure on the unit interval and we shall refer to ergodic properties (ergodicity, weak mixing, etc) of IETs with respect to it.9.1.A criterion for weak mixing.A permutation π on d letters {1, . . ., d} is irreducible if for every 1 ≤ j < d, π({1, . . ., j}) = {1, . . ., j} .Definition 9.1.Suppose π is an irreducible permutation on d letters.Define inductively a finite sequence {a p } p=0,1,...,l of natural numbers as follows. Following [BN04], we say that T λ,π satisfies IDOC (the infinite distinct orbit condition) if each discontinuity point β i has an infinite orbit under T λ,π and for i = j, the orbits of β i and β j are disjoint. Using the orbits of the points β i under the IET T λ,π , we define a sequence of partitions of [0, 1] as follows: for each n ≥ 1, P n denotes the partition into subintervals whose endpoints are the successive elements of the sets For each n, we define ǫ n (T λ,π ) to be the length of the shortest interval in the partition P n . The following criterion of weak mixing was proved in [BN04]. Motivated by this criterion, we will say that an IET T λ,π has short intervals if lim n→∞ nǫ n (T λ,π ) = 0 (9.2) 9.2.A Compactness criterion for strata.Suppose H is a stratum of abelian differentials. We recall here a description of standard compact subsets of H. Given ω ∈ H, denote by L ω the set of all of its saddle connections, i.e., the set of all flat geodesic segments joining a pair of the singularities of ω.Then, we can naturally regard L ω as a subset of vectors in C. 
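The standard compact subsets referred to in this subsection are the loci where no saddle connection is short; in the notation just introduced, and with the usual normalization (stated here for convenience, as an assumption consistent with the compactness statement recalled below), they are
\[
K_{\varepsilon} \;=\; \big\{\, \omega\in\mathcal H \;:\; |v| \ge \varepsilon \ \text{ for all } v\in L_{\omega} \,\big\}, \qquad \varepsilon>0.
\]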
Note that L ω is a discrete set.Moreover, using the standard action of SL(2, R) on C, for any g ∈ SL(2, R), the set L gω can be identified with g For any ε > 0, we use the following notation It is known that the sets K ε with ε > 0 are compact subsets of H and that any bounded subset of H is contained in K ε for some ε.9.3.Short intervals and recurrence.Given an abelian differential ω ∈ H on a surface S, there is a well-defined vector field given by the imaginary part Im(ω).This vector field is defined at all points in S except for the (finitely many) zeros of ω.This vector field defines a singular flow on S, called the vertical flow, by moving points at linear speed in the direction Im(ω).By fixing a straight line segment I (a geodesic segment in the flat metric defined by ω) which is transversal to the vertical flow lines, the first return map T : I → I of the flow defines an IET.One can pick I parallel to the real part of ω so that the resulting IET has 2g + k −1 intervals, where g is the genus of S and k is the number of zeros of ω.This remains true if the angle between I and the real part is sufficiently small.This process allows us to define a map from a neighborhood of ω in the stratum H to the space of IETs R 2g+k−1 + as follows.Pick a segment I parallel to the real part of ω as above and let π be the permutation associated to the IET on 2g + k − 1 intervals defined by the first return map of the vertical flow defined by ω to I.Then, we can find a sufficiently small open neighborhood U ω of ω in H so that for all x ∈ U ω , the first return map of the vertical flow defined by x to the segment I is an IET with 2g + k − 1 intervals and with the same permutation π. This defines a Lipschitz map T : U ω → R 2g+k−1 + (9.5) in the Teichmüller and Euclidean metrics respectively.Conversely, using Veech's zippered rectangles construction, one can find suspension of any IET using a piecewise linear roof function to obtain an abelian differential on a compact surface.However, this construction is not unique and in general the pre-image of an IET T λ,π under the map T is a positive dimensional subset of U ω . The following proposition allows us to relate the criterion in Theorem 9.2 to the recurrence of Teichmüller geodesics in strata.A similar result was obtained in [MW14, Proposition 7.2] using a slightly different proof.We include a proof here for completeness. Proof of Proposition 9.3.Suppose λ is as in the statement and let ω ∈ T −1 (λ).Fix some ε > 0 and let n 0 ≥ 1 be such that nǫ n (T λ,π ) < ε for all n ≥ n 0 .We construct a sequence of saddle connections v n in ω so that the length of g For this we use an argument similar to the one found in [Bos85, Section 10].Let P 1 , . . ., P k be a collection of polygons in the plane representing ω and let Ĩ be a lift of the transversal I under the covering map ∪P i → S which glues parallel sides by translations.We recall that S is the surface of genus g 1 that supports the abelian differentials in the proposition. For each n ≥ n 0 , denote by I n ⊂ I be a subinterval such that We use Ĩn to denote a lift of I n inside Ĩ. Denote by C the open cylinder consisting of the union of the vertical flow orbits of the points in the interior of I n up to the n th time these orbits hit the transversal I.By definition of the endpoints of the interval I n , the cylinder C contains no zeros of ω. 
Let C denote a lift of C to the complex plane which we unfold to a parallelogram in the following manner.Let x be an arbitrary point in the interior of Ĩn and denote by x t := x + it for t > 0. Define t(x, n) to be the time t > 0 corresponding to the n th return of x to I under the vertical flow. Next, we define a finite sequence of times q i ∈ (0, t(x, n)) and polygons L i with 1 ≤ i ≤ n by induction as follows.Let L 1 ∈ {P 1 , . . ., P k } denote the polygon containing x. Define q 1 = inf {0 < t < t(x, n) : x t meets a side of L 1 } As the endpoints of I n are discontinuities of the first return IET, the set on the right-hand side is necessarily non-empty.Let l 1 denote the side of L 1 such that x q 1 ∈ l 1 .Let r 1 denote the unique side of a polygon R 1 ∈ {P 1 , . . ., P k } which is identified to l 1 by a translation T 1 (which defines the gluing of parallel sides). Once (q j , L j , l j , r j , T j , R ) have been defined for all 1 ≤ j ≤ i − 1 < n, we define q i = inf {q i−1 < t < t(x, n) : x t meets a side of , let l i denote the side of L i such that x q i ∈ l i .Note that l i is the image of a side l ′ i of a polygon in {P 1 , . . ., P k } by a translation A, i.e., A brings l i back to a side l ′ i of one of the original polygons {P 1 , . . ., P k }.Denote by r i the unique side of a polygon R i ∈ {P 1 , . . ., P k } which is identified to l ′ i by a translation B. Define the i th translation T i by T i = A • B. Now, consider the parallelogram where Int( Ĩn ) denotes the interior of Ĩn .By definition of the endpoints of I n , each of the two vertical sides of P n necessarily meets a vertex of one of the polygons L 1 , . . ., L n .On the other hand, the interior of P n is free from the vertices of the polygons.In particular, if we let v n denote a straight line segment joining two of the vertices on the two vertical sides of P n , we see that v n represents a saddle connection for x which is contained entirely in P n . If we regard v n as a vector in C, we see that the imaginary part |Im(v n )| is at most the height of the parallelogram P n .Thus, in particular, we get that where the implied constant depends only on the lengths of the sides of the polygons P 1 , . . ., P k .Moreover, the real part |Re(v n )| satisfies where the implied constant here depends on the angle between the segment Ĩ and the horizontal axis, which, in turn, depends only on the neighborhood U ω .Therefore, we see that the length of the saddle connection g log(n/ √ ε) v n is ≪ √ ε as desired.9.4.Horocycles and Lines in the Space of IETs.It was shown by Minsky and Weiss in [MW14] that the image of short horocycle arcs under the map (9.5) is short line segments in R d + .This result was used in the work of Athreya and Chaika in [AC15] to relate the dimension of divergent directions for the Teichmüller flow to the dimension of non-uniquely ergodic IETs.We use a similar idea to obtain the following proposition. The idea of the proof of Proposition 9.4 is the following.First, we use Proposition 9.7 to relate the dimension of the set of interest to the dimension of its intersection with line segments.Then, Proposition 9.5 allows us to relate the dimension of sets on line segments to the dimension of subsets of horocycle arcs.Finally, using Proposition 9.3, we show that the sets of interest on the horocycle arcs correspond to points with divergent g t orbits.As a result, Theorem 2.3 concludes the argument. 
The suggested outline of the proof is a modified version of an argument given in [AC15, Section 6]. The main difference is the use of Lemma 9.6 to bypass the use of Rauzy induction (Lemma 6.5 in [AC15]), which we believe makes the approach more direct.

Proof of Proposition 9.4. Denote by A the set of λ ∈ R d + such that T λ,π is uniquely ergodic, IDOC, and has short intervals, and note that A is Borel measurable. Suppose that for some 0 < c < 1, we have codim H (A) ≤ c. Then, by Proposition 9.7, there exists a positive measure set L of lines in R d such that for each line ℓ ∈ L the set ℓ ∩ A has Hausdorff dimension at least 1 − c. By Lemma 9.6, we may assume that for each line ℓ ∈ L there exists some point λ ∈ ℓ so that Q(λ, b) ≠ 0, where b is a vector in R d parallel to ℓ. Let ℓ ∈ L be a line such that it passes through a point λ ∈ R d + and is parallel to b ∈ R d , i.e., ℓ = {λ + sb : s ∈ R}, and Q(λ, b) ≠ 0. By Lemma 9.8 ((1)⇒(2)), there exists a measure µ supported on ℓ ∩ A so that for all x ∈ ℓ and all r > 0, we have µ(B(x, r)) ≪ r^(1−c). Note that the linearity of Q implies that Q(λ + sb, b) ≠ 0 for all s ≠ −Q(λ, b)/Q(b, b), and for all s ∈ R if Q(b, b) = 0. Hence, since µ is not a Dirac mass, we can find x ∈ supp µ ⊂ ℓ ∩ A such that Q(x, b) ≠ 0. In particular, T x,π is uniquely ergodic, IDOC, and has short intervals. Notice that a priori λ ∈ ℓ may not belong to supp µ. By replacing b with −b if necessary, we may assume Q(x, b) > 0. Hence, by Proposition 9.5, we can find ε 0 > 0 and a local Lipschitz inverse q of the map T so that q(x + sb, b) = h s (q(x, b)) for |s| < ε 0 . But, by Proposition 9.3, the forward g t orbit of the set {q(x + sb, b) : |s| < ε 0 , x + sb ∈ A} is divergent (on average) in the stratum H.

It is shown in [Kea75] that if the components of λ are linearly independent over Q, then T λ,π is IDOC. In particular, the set of IETs which are not IDOC is contained in the intersection of the simplex R d + with countably many codimension 1 subspaces of R d which are defined over Q. This implies that the set of non-IDOC IETs has Hausdorff codimension at least 1.

Large Deviations in Birkhoff's Theorem in Strata - An Outline

The scheme suggested in this paper is quite flexible and can be applied to get similar results about the Hausdorff dimension in various settings. For example, using our approach for the proof of Theorem 1.1, one should be able to answer the following question affirmatively.

Question 10.1. Suppose M ⊆ H 1 (α) is an affine invariant submanifold and ν M is the affine measure whose support is M. Let f be a bounded continuous function on M and ε > 0. Is the Hausdorff dimension of the set {x ∈ M : lim sup_{T→∞} (1/T) ∫_0^T f(g t x) dt − ∫_M f dν M ≥ ε} strictly less than the dimension of M?

Note that, by a standard approximation argument, the affirmative answer to the above question implies that for any non-empty open subset U of a connected component C of the stratum H 1 (α), the Hausdorff dimension of the set {x ∈ C : g t x ∉ U for all t > 0} is strictly less than the dimension of C.

For clarity, we briefly outline how to apply our techniques to answer Question 10.1. The idea is to translate all the results on horocycle arcs obtained in Section 6 to results on open bounded subsets of the strong unstable manifold for the Teichmüller flow. Then, one obtains the desired result from the analogue of Theorem 1.1 for the Hausdorff dimension of the following set in the strong unstable leaf W su (ω) of ω ∈ H 1 (α): {x ∈ W su (ω) : lim sup_{T→∞} (1/T) ∫_0^T f(g t x) dt − ∫_M f dν M ≥ ε}, where ω satisfies SL(2, R)ω = M. We recall that W su (ω) = {x ∈ H 1 (α) : d H 1 (α) (g t ω, g t x) → 0 as t → −∞}, where d H 1 (α) denotes the Teichmüller metric. Any such leaf is foliated with orbits of the horocycle flow h s . In particular, we can locally find foliation charts for h s -orbits within a leaf of the unstable foliation, which also provide immersed local transversals for the horocycle orbits. As a result, we obtain that a neighborhood W su loc (ω) of a point ω inside W su (ω) has a product structure. This allows the disintegration of the probability measure of Lebesgue class on small bounded open sets of unstable leaves with parameter measures as conditionals along horocycles. One can then introduce subsets of W su loc (ω) and the corresponding functions analogous to (6.1), (6.2) and (6.3). Using Fubini's theorem, one should be able to translate the measure bounds on exceptional subsets of horocycle arcs (see Section 6.2) into bounds for exceptional subsets of W su loc (ω). The product structure of W su loc (ω) can be similarly used for translating the results for horocycles in Section 3 into results for W su loc (ω).
where λ V denotes the top Lyapunov exponent for A V with respect to ν M .
22,773.8
2017-11-28T00:00:00.000
[ "Mathematics" ]
Attention module improves both performance and interpretability of four‐dimensional functional magnetic resonance imaging decoding neural network Abstract Decoding brain cognitive states from neuroimaging signals is an important topic in neuroscience. In recent years, deep neural networks (DNNs) have been recruited for multiple brain state decoding and achieved good performance. However, the open question of how to interpret the DNN black box remains unanswered. Capitalizing on advances in machine learning, we integrated attention modules into brain decoders to facilitate an in‐depth interpretation of DNN channels. A four‐dimensional (4D) convolution operation was also included to extract temporo‐spatial interaction within the fMRI signal. The experiments showed that the proposed model obtains a very high accuracy (97.4%) and outperforms previous researches on the seven different task benchmarks from the Human Connectome Project (HCP) dataset. The visualization analysis further illustrated the hierarchical emergence of task‐specific masks with depth. Finally, the model was retrained to regress individual traits within the HCP and to classify viewing images from the BOLD5000 dataset, respectively. Transfer learning also achieves good performance. Further visualization analysis shows that, after transfer learning, low‐level attention masks remained similar to the source domain, whereas high‐level attention masks changed adaptively. In conclusion, the proposed 4D model with attention module performed well and facilitated interpretation of DNNs, which is helpful for subsequent research. | INTRODUCTION For many years, decoding the brain's activities has been one of the major topics in neuroscience. Inferring brain states consists of predicting the tasks subjects performed and identifying brain regions related to specific cognitive functions (Friston et al., 1994;Lv et al., 2015;McKeown et al., 1998;Norman, Polyn, Detre, & Haxby, 2006). Deep learning (DL) methods based on a variety of artificial neural networks have gained considerable attention in the scientific community for more than a decade, breaking benchmark records in several domains, including vision, speech, and natural language processing (Krizhevsky, Sutskever, & Hinton, 2017;LeCun, Bengio, & Hinton, 2015). In this context, deep neural networks (DNNs), especially convolutional neural networks (CNNs), have been recruited for brain decoding Li & Fan, 2018;Yin, Li, & Wu, 2020;Zhang, Tetrel, Thirion, & Bellec, 2021), and achieved high accuracy (>90%) in brain multiple state decoding (Nguyen, Ng, Kaplan, & Ray, 2020;X. Wang et al., 2020). It is important to note, however, several open challenges still need to be addressed while using deep learning to investigate functional magnetic resonance imaging (fMRI) data. The first challenge is the abstraction of complex temporo-spatial features within the fMRI time series. A fMRI time series is a fourdimensional (4D) data that consists of three-dimensional (3D) spatial and one-dimensional (1D) temporal information, which means brain regions engage and disengage in time during coherent cognitive activity (Chen, Kreutz-Delgado, Sereno, & Huang, 2019;Shine et al., 2016). Inspired by this, Mao et al. (2019) developed a model of 3D CNN stacks and a long short-term memory (LSTM) for spatial and temporal feature abstraction, respectively. 
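For orientation, the kind of stacked 3D-CNN plus LSTM decoder referenced above can be outlined as follows. This is an illustrative PyTorch sketch under assumed layer sizes and an assumed (batch, channel, frames, H, W, D) input layout, not a reproduction of the cited model.

```python
import torch
import torch.nn as nn

class CNN3dLSTM(nn.Module):
    """Schematic 3D-CNN + LSTM decoder of the kind referenced above.

    Each fMRI frame is encoded by a shared 3D CNN; the per-frame embeddings
    are then passed to an LSTM whose last hidden state is classified.
    All sizes are illustrative assumptions.
    """
    def __init__(self, n_classes=7, emb=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, emb),
        )
        self.lstm = nn.LSTM(emb, emb, batch_first=True)
        self.head = nn.Linear(emb, n_classes)

    def forward(self, x):                       # x: (batch, 1, frames, H, W, D)
        t = x.shape[2]
        feats = torch.stack([self.encoder(x[:, :, i]) for i in range(t)], dim=1)
        out, _ = self.lstm(feats)               # (batch, frames, emb)
        return self.head(out[:, -1])            # logits from the last time step
```

The separation into a spatial encoder followed by a temporal model is exactly what the jointly learned 4D convolution discussed next is meant to avoid.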
A bit more reasonable approach would be to jointly leverage the inherent spatial-temporal information in fMRI data (Ismail Fawaz, Forestier, Weber, Idoumghar, & Muller, 2019). However, designing and optimizing architectures for 4D fMRI decoding is difficult due to the lack of systematic comparisons of various spatiotemporal processing and the substantial explosion of computational and memory requirements. The second challenge is the researchers' requirement for a higher degree of accountability of the model, which is the core of the feasibility and reproducibility of brain decoding (Lindsay, 2020). Deep learning is regarded as a black-box model, and recent efforts have been made to develop an interpretable brain decoding model through feature ranking (Li & Fan, 2019), visualizing the convolutional kernels (Vu, Kim, Jung, & Lee, 2020), guided back-propagation (X. Wang et al., 2020), and so on. Improved DNN interpretability in fMRI analysis could lead to more accountable usage, better algorithm maintenance and improvement, and more open science (Tjoa & Guan, 2021). Another challenge is the conflict between the DNNs' requirement for large amounts of data and the relatively modest quantity of datasets in typical cognitive research (Yotsutsuji, Lei, & Akama, 2021). Most fMRI experiments comprise tens to hundreds of participants due to experimental costs or participant selection. It is natural to use transfer learning to alleviate the data scarcity problem in the target domain (e.g., small sample datasets) by utilizing the knowledge acquired in the source domain (e.g., large cohorts; Gao, Zhang, Wang, Guo, & Zhang, 2019;Svanera et al., 2019;Thomas, Müller, & Samek, 2019;X. Wang et al., 2020). The fMRI data vary across datasets (e.g., scanner, scanning parameters, task design, template space), so it remains an open question how far the DNN can transferlearn in fMRI. Inspired by these challenges, the main contributions to this article are threefold. First, we extended the problem of temporal modeling and spatial feature extraction to the 4D convolution module and compared various approaches to fMRI data processing. Second, we employed the mixed attention modules to improve the decoding performance, which not only enhanced the ability to distinguish and focus on specific features but also presented an in-depth interpretation of CNN. Third, we explored the benefits of transfer learning in fMRI analysis under different problem definitions and task design, demonstrating that the model that captures cognitive similarities can extend to distinguish individual trait differences. | Human Connectome Project dataset The minimally preprocessed 3T data from the S1200 release of the Human Connectome Project (HCP; Glasser et al., 2013) were used in this research. The present study included task fMRI of 1,034 subjects during seven tasks: emotion, gambling, language, motor, relational, social, and working memory (WM). The seven tasks, which lasted for about 20-30 frames under different conditions during each block, provided a high degree of brain activation coverage (Barch et al., 2013). Thus, the parameter estimates of the model trained on this dataset contained similarities to multiple cognitive domains and were utilized as the source domain in the transfer learning experiment. The HCP S1200 dataset has been preprocessed with the HCP functional pipeline and normalized to the Montreal Neurological Institute's (MNI) 152 space. According to the previous studies X. 
Wang et al., 2020), only one condition was selected for each task (Table 1), resulting in 14,821 fMRI 4D instances across all subjects and tasks. To save computing memory, a bounding box with a size of [80, 96, 88] voxels was applied to each fMRI volume, and the blank parts that did not contain brain tissue were cropped out. | BOLD5000 dataset The BOLD5000 dataset (Chang et al., 2019) was also used for transfer learning of the proposed model. The dataset used an event-related design paradigm to investigate visual perception, collecting fMRI data from four participants while they viewed 5,000 real-world images. Each image was presented for 1 s and followed by a 9 s blank screen with a fixation cross. Thus, a single trial lasted five frames (repetition time, TR = 2 s). Two conditions of stimulus images were employed in this study: Scene, containing whole scenes, and ImageNet, focusing on a single object. Implicit image attributes can provide category selectivity in high-level visual regions. Using fMRIPrep (Esteban et al., 2017), preprocessing of the fMRI data was applied, including motion correction, distortion correction, and co-registration to the corresponding T1w image. Each volume was then also cropped to the size of [80, 96, 88] voxels, and each segmented fMRI input covered the entire trial plus two extra TRs extended forward and backward. | The proposed neural network The proposed model consists of a 4D convolution layer and four 3D attention modules, followed by a fully-connected layer (Figure 1a). | 4D convolution The 4D convolution kernel K ∈ ℝ^(k_l × k_h × k_w × k_d × k_c) was applied to the input x ∈ ℝ^(l × h × w × d × c), where l is the temporal length, h is the height, w is the width, d is the depth, and c is the length of the channels. The 4D convolution operation, Conv4D, was implemented by two loops over the native 3D convolution operation, Conv3D, of PyTorch (Paszke et al., 2019), where s_t is the temporal stride (s_t = 1, 2, …) and Conv3D employed 3D convolution with a spatial stride of s = 2. A stride of >1 leads to down-sampling in the designated dimension. After the 4D convolution, the temporal dimension was squeezed and flattened into the channel dimension of the subsequent 3D attention module. | The attention module The attention mechanism in a DNN selects focused regions and thus enhances the discriminative representation of objects (Vaswani et al., 2017). The attention module is also beneficial for optimization, serving as a gradient update filter that suppresses noisy gradients. Naive dot production of the two branches, however, degrades the value of features. Attention residual learning is used to ease this problem by constructing the attention branch as an identical mapping. Formally, the output of the attention module x_(i+1), which serves as the input of the next layer, is modified by attention residual learning. What's more, the attention mask branch can be viewed as an identical mapping that changes adaptively as layers go deeper. What the neural network learns at each level can be demonstrated by the distribution of attention. The attention masks of each channel were visualized to present an in-depth interpretation of the network by upsampling the feature map corresponding to A(x) and mapping it to T1w. | Training and evaluation The implementation of the different model variants is based on the PyTorch framework. Training was performed on an NVIDIA GTX 1080Ti graphics card.
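As a rough illustration of the looped Conv3D implementation described in the 4D convolution subsection, the sketch below assembles a 4D convolution from one Conv3d per temporal kernel offset. The class name, kernel sizes, strides, and (batch, channel, time, H, W, D) layout are assumptions made for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class Conv4d(nn.Module):
    """Minimal 4D convolution assembled from native Conv3d calls.

    Input is (batch, channels, length, height, width, depth); one Conv3d is
    created per temporal offset of the 4D kernel and their responses are
    summed over the temporal window (an illustrative sketch only).
    """
    def __init__(self, in_ch, out_ch, k_t=3, k_s=3, s_t=1, s_s=2):
        super().__init__()
        self.k_t, self.s_t = k_t, s_t
        self.convs = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=k_s, stride=s_s, padding=k_s // 2)
            for _ in range(k_t)
        ])

    def forward(self, x):
        b, c, l, h, w, d = x.shape
        out_len = (l - self.k_t) // self.s_t + 1
        frames = []
        for i in range(out_len):                      # loop over output time steps
            t0 = i * self.s_t
            # sum the k_t 3D convolutions applied to consecutive input frames
            acc = sum(self.convs[j](x[:, :, t0 + j]) for j in range(self.k_t))
            frames.append(acc)
        return torch.stack(frames, dim=2)             # (b, out_ch, out_len, h', w', d')

# Example: 7 frames of cropped volumes, as in the HCP inputs described above
vol = torch.randn(1, 1, 7, 80, 96, 88)
print(Conv4d(1, 8)(vol).shape)
```

With a temporal stride of 1 and a spatial stride of 2, each call halves the spatial dimensions while preserving temporal resolution, consistent with the down-sampling behaviour described above.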
To conduct a fair comparison, the batch size was set to 16 and each model was trained for 60 epochs using the Adam algorithm with the standard parameters (β 1 = 0.9 and β 2 = 0.999). The learning rate was initialized at 0.0001 and decayed by a factor of 5 when the validation loss plateaued after 15 epochs. The loss converged well and overfitting was not observed during validation experiments. Our validation strategy employed a fivefold cross-validation across subjects and the dataset was categorized into subsets as follows: training set (70%), validating set (10%), and testing set (20%). Control experiments were conducted on various model variants (Table 2) to verify whether the 4D convolution and attention modules brought a substantial improvement. We also analyzed a set of 4DResNet consisting of different sizes of 4D kernels and presented comparison results using different frames as input. A segment of k continuous frames, which was randomly split from each instance, was used as input for training. During the testing stages, the predictions for all segmentations of one instance are summed up, and the task label with the majority vote is predicted to represent the final class of the instance. | Transfer learning Transfer learning describes a process in which a network is trained on a source dataset and subsequently reuses the parameters of the The key idea of this workflow is similar to that mentioned above. We fine-tuned the model to decode binary types of stimulus images (scene vs. object) seen by subjects and employed the leaveone-subject-out (LOSO) cross-validation, which means that the data from three subjects was used to train and one to test. | Performance evaluation on HCP dataset The performance of various models was compared by the mean and SD of accuracy ( Table 2). All of the proposed models effectively distinguished seven tasks, with the 4DResNet-Att outperforming the others with an accuracy of 97.4% ± 0.4% (mean ± SD). Figure 2a shows the decoding performance of 4DResNet-Att on seven cognitive tasks, and the confusion matrix shows a nice block diagonal architecture. The cognitive tasks were accurately identified with the accuracy of: Emotion (96.2 ± 0.2%), gambling (99.4 ± 0.3%), language (98.7 ± 0.4%), motor (96.0 ± 0.4%), relational (93.6 ± 0.9%), social (99.4 ± 0.3%), and WM (98.9 ± 0.4%). Furthermore, the confusion matrix showed misclassifications of the relational and the gambling, the emotion and the gambling, the motor and the gambling, and the relational and the WM. The superior performance of the 4DResNet-Att model in comparison to the 3DResNet (X. Wang et al., 2020) and other recent researchers is possibly due to the capability to handle complex spatiotemporal dynamics in fMRI series via 4D convolution operations and the use of the attention mechanism to adaptively select a focused location. Specifically, the 4DResNet is able to capture dynamic changes in hemodynamic response on temporal dimension and to integrate these representations from interconnected brain regions on spatial dimension. To evaluate whether 4DCNN brings a substantial improvement over 3DCNN, the 4DResNet-Att model was compared with the 3DResNet-Att model on the same brain decoding tasks using different lengths of frames as input (Figure 2b). Overall, the 4DResNet substantially enhanced classification performance compared to the 3DResNet, except for the 7-frame condition. 
The low performance at shorter fMRI input could be caused by two factors: (1) few information in short input, especially in series shorter than a hemodynamic response; (2) the 4DResNet tends to measure the relative dynamic change over a long range. Besides, we also evaluated a set of 4DResNet consisting of different sizes of 4D kernels to decode brain activity. Our results revealed that decoders with a short 4D-kernel size achieved lower decoding performance than decoders using a relatively longer 4D-kernel ( Figure 2c). Furthermore, to establish whether the use of attention mechanisms could enhance fMRI decoding, we compared the 4DResNet with attention modules and the naive 4DResNet. Figure 2c shows the The bolded values indicate the highest accuracy of different models. | Visualization of attention mask on the HCP dataset Previous studies have employed some visualizations to build an interpretable brain decoding model in fMRI analysis (Vu et al., 2020; X. Wang et al., 2020;Yin et al., 2020). Here, we visualized the focused regions of the attention module in each convolution layer to present an in-depth interpretation of the DNN. Each channel obtained seven attention masks for different tasks, which were averaged across all of the input samples from all of the subjects. Overall, the resulting attention masks at the low-level (first and second stages) have excellent coverage of the brain and prefer to highlight the areas containing the useful BOLD signal, such as the whole brain structure (Figure 3a), and diminish the noise areas like the brainstem or cerebrospinal fluid areas ( Figure S1b,c). The masks also focused on some functional networks and cerebral cortex related to different cognitive functions ( Figure S1), such as the default mode network, sensorimotor network, temporal lobe, and occipital lobe. The enhancement of gray matter areas helped to preserve the important features that could be further refined to distinguish between different cognitive states at high-level. The attention masks at the high-level (third and fourth stages) are getting more focused to cover task-specific brain areas (Figure 3c). It is notable, however, the focused layouts of the attention masks varied across different tasks and were remarkably task-specific. A channel could generate specific focused regions for different tasks, such as the left motor cortex areas in motor task, the ventral lateral prefrontal cortex and both superior and inferior temporal cortex in language task, the prefrontal cortex in relational task, and the temporal parietal junction and superior temporal cortex regions in social task (Figures S2 and S3). At the fourth stage, the attention masks become more abstract due to the stride in the convolution operation (Figure 3d), and the weights of attention have a narrower range, which could be due to the fact that the masks also serve as gradient update filters. A small range of attention weights in the high-level feature map could prevent some gradient problems. | Transfer learning Two different approaches were used to explore the benefits of transfer learning in fMRI analysis under different problem definitions or task design. What's more, the initial model, which used the same architecture and was trained from scratch by initializing random weights achieved a lower correlation coefficient in prediction (r s = .306, p < .001). The comparisons of predictions between different models were shown in Table 3. 
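The attention-mask visualizations discussed in this section rely on upsampling the attention activations A(x) back to the input volume, as described in the attention module subsection. Below is a minimal sketch of that step, assuming the activations have already been captured (e.g., with a forward hook) and that a subject T1w affine is available for overlay; neither assumption reflects the authors' exact pipeline.

```python
import torch
import torch.nn.functional as F
import nibabel as nib
import numpy as np

def save_attention_mask(mask, affine, out_path, size=(80, 96, 88)):
    """Upsample one attention mask to the cropped volume size and save as NIfTI.

    `mask` is a single-channel activation A(x) of shape (1, 1, h, w, d) captured
    from an attention branch; `affine` is the affine of the T1w image used for
    overlay. Both are assumptions for illustration.
    """
    up = F.interpolate(mask, size=size, mode='trilinear', align_corners=False)
    vol = up.squeeze().detach().cpu().numpy().astype(np.float32)
    nib.save(nib.Nifti1Image(vol, affine), out_path)

# Example with a dummy 10x12x11 mask and an identity affine
save_attention_mask(torch.rand(1, 1, 10, 12, 11), np.eye(4), 'attention_mask.nii.gz')
```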
Furthermore, the visualization analysis shows that low-level attention masks remained distributed similarly to the source domain, whereas high-level attention masks changed adaptively as knowledge transferred from group similarities to individual differences ( Figure 4b). Second, the pretrained model from the HCP dataset was finetuned to decode different types of stimulus images on BOLD5000. The knowledge learned from the source domain is highly applicable to the target domain, and the transferred model achieved 77.6 ± 3.4% (4DResNet-Att), 73.5 ± 2.1% (4DResNet), and 64.3 ± 3.8% (3DResNet-Att) accuracy. However, all initial models trained from scratch failed to converge to satisfactory accuracy (<60%) across a wide range of choices of hyper-parameters. Furthermore, the visualizations demonstrated that the attention masks changed adaptively to fit individual subjects' brain structures, despite the fact that the fMRI data were registered to the corresponding T1w space rather than the standard MNI152 space ( Figure 5). As the model was fine-tuned to decode visual tasks, the attention masks from the high-levels also changed adaptively to reweight task-related brain regions. CNNs and passed these latent features to an LSTM network to take into account the temporal dependencies within task-evoked brain activity. The model we proposed includes a 4D convolution layer to detect temporo-spatial features, and puts the features into the channel dimension of the following 3D layers to reduce memory consumption. The above results suggest that the proposed model has a good balance | Attention module and interpretation of networks The attention mechanism helps humans to mainly focus on the most useful information in the human perception process. Inspired by this, attention mechanisms have been studied extensively in many deep learning fields (Vaswani et al., 2017;F. Wang et al., 2017;Woo et al., 2018). In this research, the proposed 3D mixed attention module consisted of a main branch and an attention branch and considered both channel and spatial features. The experimental results demonstrate that attention modules have many advantages. For example, the architecture with attention modules was trained to converge faster and more easily and achieve better performance, which could be due to the attention mechanism reweighting the focused areas to enhance discriminative features. The attention module is also beneficial for optimizing during back-propagation, which serves as a gradient update filter to prevent noisy gradients and enhance gradients from important regions. What's more, the attention modules not only improve decoding performance but also serve as a visualization tool to investigate how neural networks work in fMRI decoding. Cognitive neuroscience research requires a higher degree of accountability, while an end-to-end trainable network has always been regarded as a black-box in neuroscience. Presenting an in-depth interpretation of a method can demonstrate the feasibility and reproducibility of fMRI studies (Li & Fan, 2019;Vu et al., 2020). A good visual explanation should not only be treated as a localization method but also allow researchers to investigate how the neural network works. The analysis shows that the low-level masks provide excellent coverage of the brain to highlight useful structures while pruning noisy areas. As the layers go deeper, the attention masks get finer to cover various specific cortexes. 
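For reference, the mixed attention block discussed in this section can be sketched as a trunk branch modulated by a sigmoid mask branch with attention residual learning. The (1 + A(x)) * F(x) form follows the residual attention formulation of F. Wang et al. (2017) that the text's description suggests; the branch depths and channel counts are assumptions.

```python
import torch
import torch.nn as nn

class ResidualAttention3d(nn.Module):
    """Sketch of a 3D attention block with a trunk branch and a mask branch.

    The mask branch produces per-voxel, per-channel weights A(x) in (0, 1); the
    output uses attention residual learning, (1 + A(x)) * F(x), so the block
    behaves like an identity-style mapping when A(x) is small.
    """
    def __init__(self, channels):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
        )
        self.mask = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.trunk(x)
        a = self.mask(x)
        return (1 + a) * f
```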
The high-level attention masks varied across different tasks, re-weighting more attention to the areas related to the specific target task. What's more, the attention masks adapted to fit different subjects' brain structures. This also suggests that our architecture could be a suitable approach to avoid individual variability across subjects in the raw and minimally preprocessed fMRI series without spatial normalization. Besides, the attention areas that could present biologically meaningful interpretations of cognitive neuroscience demonstrated that the proposed CNN decoded states from task-related activations but not from nuisance variables. | Transfer learning Transferability has been demonstrated to be a significant advantage of DL methods over traditional methods in fMRI decoding (Gao et al., 2019;X. Wang et al., 2020). To this end, we explored the benefits of transfer learning under various conditions. The transferred regression model yielded significant predictions of individual trait differences and achieved better Spearman's correlation coefficient than the previous study (Greene et al., 2018). This could be due that the previous study relied on the discriminative power of feature selections, and not all connectivity parameters are relevant for prediction, while the transferred model could automatically capture the full range of individual trait differences. This also suggests that the group cognitive similarities among intrinsic brain states could generally be reused to predict individual differences, which is important for precision medicine in clinical research. Furthermore, previous studies most commonly applied transfer learning between the block-design dataset. On the BOLD5000, the pretrained model from the HCP dataset was fine-tuned to decode different visual tasks and obtained 77.6%. Despite the fact that the model was trained using the block-design dataset, the internal properties of human hemodynamic responses contained in the parameters are consistent and could be reused in the event-design dataset. | Limitations and future applications In this project, the proposed model outperformed other architectures. Despite the 4D convolution processing dynamic changes more efficiently, some limits remain, such as a substantial increase in computational and memory requirements. What's more, we only chose one condition for each cognitive domain in order to be comparable to previous studies, while the BOLD signals might be a mixture of hemodynamic responses evoked by different task events. A decoding model with fine cognitive granularity would generalize similarities and differences among task-induced brain states from multiple cognitive domains, which is important for transfer learning. The visualization result demonstrated that the high decoding performance was driven by the response of biologically meaningful brain regions. However, the statistical property of the attention mask remains unclear. We could have the results of qualitative analysis and should be cautious until further investigations into its reliability and statistical properties. The transfer learning method, which successfully extended similarities in brain activity to individual differences, showed potential for research in psychiatry and neurology. The pretrained model based on cognitive state can serve as a brain information retrieval system to distinguish differences in neurologic diseases and classify different psychiatric categories. 
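A minimal sketch of the transfer learning workflow discussed above: load the pretrained weights, swap the task head for the new target, optionally freeze low-level blocks whose representations transfer largely unchanged, and fine-tune with a small learning rate. The attribute name `head` and the names of any frozen submodules are assumptions about the decoder made purely for illustration.

```python
import torch
import torch.nn as nn

def prepare_for_finetuning(model, pretrained_path, n_targets, freeze=()):
    """Prepare a pretrained decoder for a new target task.

    Loads weights trained on the source tasks, swaps the final linear head for a
    new output size (e.g., 2 stimulus classes or 1 regression output), and
    freezes any named low-level submodules. `model.head` and the names passed in
    `freeze` are illustrative assumptions, not the authors' attribute names.
    """
    model.load_state_dict(torch.load(pretrained_path, map_location='cpu'))
    for name, module in model.named_children():
        if name in freeze:
            for p in module.parameters():
                p.requires_grad = False
    model.head = nn.Linear(model.head.in_features, n_targets)   # new task head
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    return model, optimizer
```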
| CONCLUSION In this study, we designed a 4DResNet with attention modules for brain decoding. After investigating the efficacy of several alternative classifiers, the proposed 4DResNet-Att achieved 97.4% accuracy on the HCP dataset. We further demonstrated the model's transferability to a variety of tasks and datasets and presented an in-depth interpretation of the network. The visualization analysis of attention distributions illustrated the hierarchical emergence of task-specific masks with depth. After transfer learning, the adaptively changed attention distributions demonstrated that the learned representation could be generally extended from cognitive similarities to individual differences.
5,219.2
2022-02-25T00:00:00.000
[ "Computer Science" ]
Landscape genetics reveal broad and fine‐scale population structure due to landscape features and climate history in the northern leopard frog (Rana pipiens) in North Dakota Abstract Prehistoric climate and landscape features play large roles structuring wildlife populations. The amphibians of the northern Great Plains of North America present an opportunity to investigate how these factors affect colonization, migration, and current population genetic structure. This study used 11 microsatellite loci to genotype 1,230 northern leopard frogs (Rana pipiens) from 41 wetlands (30 samples/wetland) across North Dakota. Genetic structure of the sampled frogs was evaluated using Bayesian and multivariate clustering methods. All analyses produced concordant results, identifying a major east–west split between two R. pipiens population clusters separated by the Missouri River. Substructuring within the two major identified population clusters was also found. Spatial principal component analysis (sPCA) and variance partitioning analysis identified distance, river basins, and the Missouri River as the most important landscape factors differentiating R. pipiens populations across the state. Bayesian reconstruction of coalescence times suggested the major east–west split occurred ~13–18 kya during a period of glacial retreat in the northern Great Plains and substructuring largely occurred ~5–11 kya during a period of extreme drought cycles. A range‐wide species distribution model (SDM) for R. pipiens was developed and applied to prehistoric climate conditions during the Last Glacial Maximum (21 kya) and the mid‐Holocene (6 kya) from the CCSM4 climate model to identify potential refugia. The SDM indicated potential refugia existed in South Dakota or further south in Nebraska. The ancestral populations of R. pipiens in North Dakota may have inhabited these refugia, but more sampling outside the state is needed to reconstruct the route of colonization. Using microsatellite genotype data, this study determined that colonization from glacial refugia, drought dynamics in the northern Great Plains, and major rivers acting as barriers to gene flow were the defining forces shaping the regional population structure of R. pipiens in North Dakota. Amphibians of the northern Great Plains provide an opportunity to evaluate prehistoric climate signatures on genetic variation. This region was partially glaciated and has been characterized by glacial retreat (Mickelson et al., 1983) followed by cycles of drought and wet periods throughout the Holocene (~11 kya -present; Valero-Garcés et al., 1997;Xia, Haskell, Engstrom, & Ito, 1997). While the region southwest of the Missouri River was not glaciated, the region to the north and east was glaciated in the late Pleistocene until a rapid glacial retreat (~13 kya; Mickelson et al., 1983). This led to northward range expansions by a wide variety of species from southern refugia into newly deglaciated habitats (Masta, Laurent, & Routman, 2003;Wisely, Statham, & Fleischer, 2008;Yansa, 2006). The region remained cool and wet in the early Holocene following the glacial retreat (~11 kya-9 kya) before going through a prolonged extreme drought period (~9 kya-6 kya; Valero-Garcés et al., 1997;Xia et al., 1997). 
Throughout the mid and late Holocene to the present, the climate of the northern Great Plains has been characterized by milder oscillations in precipitation, ending in a current wet period following drought during the Little Ice Age and the Medieval Warm Period (950-750 BP;Fritz, Engstrom, & Haskell, 1994;Xia et al., 1997). The modern-day northern Great Plains region encompasses a number of distinct ecoregions. The semi-arid northwestern Great Plains is located to the south and west of the Missouri River and includes badlands, steppes, and shortgrass prairie with relatively low wetlands densities (Bryce et al., 1998;Euliss & Mushet, 2004). To the east of the Missouri River, there are the glaciated plains, which include the Prairie Pothole Region (PPR), an area with millions of depressional wetlands embedded primarily in shortgrass prairie that stretches from Saskatchewan to Nebraska and Iowa (Bryce et al., 1998;Tiner, 2003). Farther east, the shortgrass prairie transitions into tallgrass prairie and prairie pothole wetlands become less abundant on the Lake Agassiz Plain, an area formed after the draining of glacial Lake Agassiz ~8 kya (Bryce et al., 1998;Barber et al., 1999). Much of the native prairie and pothole wetlands in the Lake Agassiz Plain and the glaciated northern plains have been converted into small-grain and row-crop agriculture (~50% in North Dakota; Dahl, 1990), and are further threatened by continued agricultural expansion in the eastern part of the PPR and a potential shift to drier conditions due to climate change throughout much of the western part of the PPR (Carter Johnson et al., 2005, 2010. The northern leopard frog (Rana pipiens) is a ranid frog species that is widely distributed throughout the temperate regions of North America (Hammerson et al. 2004). Rana pipiens use a variety of habitats throughout their life cycle. Breeding ponds are used by adults in the spring and by tadpoles throughout the summer (Dole, 1971). Adults and metamorph juveniles extensively use terrestrial habitats for foraging and migration between other habitats during the summer (Dole, 1971;Pope, Fahrig, & Merriam, 2000). Rana pipiens also require well-oxygenated deep or flowing water habitats for overwintering hibernation (Cunjak, 1986;Emery, Berst, & Kodaira, 1972), which are particularly important for survival during the winter months on the northern Great Plains (Mushet 2010). Such habitats are rare during prolonged droughts, which limit the spatial distribution of northern leopard frog populations (Mushet 2010). Rana pipiens can migrate across relatively large distances among these different habitats, commonly moving 800 m, with reported ranges up to 5 km (Dole, 1971;Knutson, Herner-Thogmartin, Thogmartin, Kapfer, & Nelson, 2018). The genetic population structure and phylogeography of R. pipiens have been extensively studied and revised since the 1970s. Numerous described species were synonymized in the 1940s, but since the 1970s many species have been described based on morphology and/or genetics data (Hillis, 1988). Currently, R. pipiens has two distinct evolutionary lineages separated by the Mississippi River that were previously described as separate species (Cope, 1889;Hoffman & Blouin, 2004a;O'Donnell & Mock, 2012). Populations of R. pipiens east of the Mississippi River are typically more stable, have greater genetic diversity, and larger effective population sizes than populations west of the Mississippi River (Hoffman et al. 2004b;Philipsen et al. 
2011, but see Mushet et al., 2013). Populations on the western edge of the range have become critically endangered and, in some cases, have already been extirpated (Corn & Fogleman, 1984; Rogers & Peacock, 2012). North Dakota lies in the transition zone between the more secure eastern populations and the imperiled western populations (NatureServe, 2017). | Sample collection, DNA extraction, and microsatellite genotyping Forty-one populations of R. pipiens were sampled throughout North Dakota (Figure 1). Potential sampling sites were selected a priori as permanent or semipermanent wetlands as classified by the US Fish & Wildlife Service National Wetland Inventory (USFWS, 2012; Fisher, 2015). The distance between a sampling site and its nearest neighbor was at least 30 km and no greater than 85 km. At each site, neonate and adult specimens were captured by actively searching the wetland perimeter. Sampled specimens were spaced throughout the wetland perimeter to reduce the likelihood of sampling related individuals. Rana pipiens toe clips were collected from 30 individuals from each sampling site following NDSU IACUC protocol #A10047. Toe clips were stored in individually marked vials containing 95% ethanol.

FIGURE 1 Sampling sites (N = 41) where genetic material was collected around the state of North Dakota. Colors represent the major river basins where Rana pipiens populations clustered together. The Turtle Mountain ecoregion is also outlined, as the R. pipiens population that was sampled in that region clustered separately from the other sampled populations in the Souris basin.

| Analysis of genetic diversity and population structure The genotype dataset was used to derive metrics of genetic diversity and population structure in North Dakota R. pipiens. Linkage disequilibrium (LD) and deviations from Hardy-Weinberg equilibrium (HWE) were assessed using GENEPOP'007 (Rousset, 2008). Expected heterozygosity and allelic richness for each sample site were calculated using the adegenet package (Jombart, 2008) in R v.3.4.1 (R Core Team 2017). Nei's genetic distance (G ST ) was calculated between each pair of sampling sites and was used to construct an Unweighted Pair Group Method with Arithmetic Mean (UPGMA) tree using the phangorn package (Schliep, 2011). Population structure was examined using two Bayesian clustering algorithms, STRUCTURE 2.3 (Pritchard, Stephens, & Donnelly, 2000) and BAPS 3.2 (Corander, Sirén, & Arjas, 2008), and a multivariate method using K-means clustering following principal component analysis (PCA) of the microsatellite dataset in R. These three methods represent three statistically distinct approaches to describing population structure, allowing consistent patterns of structure to be distinguished from statistical artifacts of the clustering method. STRUCTURE and BAPS rely on different Bayesian clustering algorithms, the main difference being that BAPS has been optimized for incorporating spatial data into the clustering algorithm as a model parameter (Corander et al., 2008). The multivariate method does not rely on Bayesian inference, but instead uses dimensional reduction through PCA and subsequent K-means clustering to determine the number of populations and classify individuals into populations (Corander et al., 2008). STRUCTURE analysis consisted of an admixture model with correlated allele frequencies for each potential number of clusters (K). Each analysis consisted of 200,000 simulations after an initial burn-in of 20,000 simulations.
The analysis was run for K values ranging from 1 to 41 possible clusters with 10 independent runs each. The ΔK method (Evanno, Regnaut, & Goudet, 2005) was used to identify the best-supported K value, which was determined based on the K value with the greatest ratio of change in the posterior probabilities of two sequential K values. If inconsistent results in K values were found compared to BAPS, additional nested STRUCTURE analyses were performed individually within each K group identified by the previous STRUCTURE analysis (Breton, Pinatel, Médail, Bonhomme, & Bervillé, 2008;Pereira-Lorenzo et al., 2010). These additional STRUCTURE analyses allowed for identification of potential substructuring that may have been missed. BAPS analyses were initially performed by both "clustering of individuals" and "spatially clustering of individuals" models within the population mixture analysis. These analyses were performed using K max values ranging from 1 to 41 with 10,000 iterations per run to estimate the admixture coefficients of each sample. Ten replicates were performed for each value of K max to investigate the consistency of results for all values of K. The adegenet package in R was used to perform multivariate analysis of population structure (Jombart, 2008). Multivariate analysis of the microsatellite data first required the reduction of dimensions using PCA, which was performed by the dudi.pca function from the ade4 R package (Chessel, Dufour, & Thioulouse, 2004). The first 100 principal components (PCs) explaining >95% of the variation in the microsatellite dataset were retained to use in K-means clustering. Clustering for each K value was evaluated using BIC (Bayesian information criterion), and the K value with the lowest BIC value was selected for further evaluation. Sample sites were classified into populations based on which cluster the majority (>50%) of individuals from the sample site were assigned. For sample sites where no cluster represented a majority of individuals, the top two clusters within that sample site were combined into one cluster and this was applied to all samples across the dataset. This process was repeated until all sample sites contained a majority of individuals from one cluster. The identified population clusters were used as predefined groups for discriminant analysis of principal components (DAPC; Jombart, Devillard, & Balloux, 2010) using the adegenet package in R (Jombart, 2008). This method reduces the dimensionality of the genetic variation between groups using PCA and then uses the PCs produced from this analysis in a linear discriminant analysis (LDA) to create discriminant functions representing a linear combination of correlated alleles that describe the greatest amount of variation in the genetic dataset. The optim.a.score function was used to determine the optimal number of PCs to retain to best describe the population structure without overfitting the discriminant functions. Population assignment probabilities were calculated for the optimized DAPC model, the hierarchical STRUCTURE model, and the BAPS model to assess how clearly populations were discriminated using each method. | Landscape genetic analysis Pairwise F ST (Nei, 1973) values were calculated for each pair of sampling sites as a measure of genetic distance between the frogs from each sampling site. Linear geographic distance between each pair of sampling sites was also calculated. 
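The genetic distances referred to above were computed in R; as an illustration of the quantity involved, the following NumPy sketch evaluates Nei's G_ST between two sites at a single locus from allele-frequency vectors. A real analysis would average across all 11 microsatellite loci and use established population genetics software; this is only a sketch of the calculation.

```python
import numpy as np

def pairwise_gst(freq_a, freq_b):
    """Nei's G_ST between two populations at one locus.

    `freq_a` and `freq_b` are allele-frequency vectors over the same alleles
    (each summing to 1). G_ST = (H_T - H_S) / H_T, where H_S is the mean
    within-population expected heterozygosity and H_T is the expected
    heterozygosity of the pooled allele frequencies.
    """
    freq_a, freq_b = np.asarray(freq_a, float), np.asarray(freq_b, float)
    h_s = np.mean([1 - np.sum(freq_a ** 2), 1 - np.sum(freq_b ** 2)])  # within-pop
    p_bar = (freq_a + freq_b) / 2
    h_t = 1 - np.sum(p_bar ** 2)                                       # total
    return (h_t - h_s) / h_t

# Two sites with modest allele-frequency differences at a 4-allele locus
print(pairwise_gst([0.5, 0.3, 0.1, 0.1], [0.2, 0.4, 0.2, 0.2]))
```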
A Mantel test including the linearized F ST values [F ST /(1-F ST )] and geographic distances was performed to find evidence for isolation by distance in the R. pipiens sampling sites across the state using the mantel function from the vegan package in R (Oksanen et al. 2017). A partial Mantel test was also performed to control for the variation associated with geographic distance and test the effect of the Missouri River as a barrier. Sample sites were coded into a barrier matrix with a binomial variable representing sites on the same side (0) or opposite sides (1) of the Missouri River. To test for global and local population spatial structure, a spatial principal component analysis (sPCA) was performed on the R. pipiens genetic dataset (Jombart, Devillard, Dufour, & Pontier, 2008). The sPCA works by maximizing the product of variance in allele frequencies and spatial autocorrelation (Moran's I) to find groups of alleles that are correlated with each other through space. An inverse square distances connection network was established to characterize the spatial relationships between each sampling site in the sPCA. The abilities of the eigenvalues produced by the sPCA to explain spatial population structure were assessed using global and local Monte Carlo tests . Three principal components with the largest positive eigenvalues were retained for further analysis. Landscape features affecting ecological suitability and the ability of R. pipiens to move between populations were included in a redundancy analysis (RDA; Legendre & Legendre, 2012) using the three retained sPCA principal components as a measure of genetic variation of the populations across the state. Landscape factor variables included in the RDA model were land use type, the Missouri River as a barrier, and the 6-digit hydrologic unit code (HUC-6) basin within which the sampling site was located. The land use type variable was created by drawing a 15 km buffer around each sampling site in ArcMap v10.5 (ESRI). This distance represents an area where high levels of gene flow (≥1 migrant/generation) would be expected between ponds within the buffer based on the distance R. pipiens can disperse within one generation (Dole, 1971;Knutson et al., 2018). Land use types from the National Land Cover Database (Homer et al., 2015) were reclassified into six classes: open water, urban/developed land, forest, scrubland/grassland, agriculture, and wetlands. The area of these classes within each 15 km buffer was calculated and converted into proportions to use as the land use variable in the RDA model. The Missouri River variable was created by classifying sampling sites as being east or west of the Missouri River. Finally, the basin variable was created by classifying sampling sites based on their HUC-6 location. A full RDA model including latitude and longitude positions of each sampling site along with all landscape features was run to determine whether any landscape variables explained a significant amount of variation in the sPCA axes. A partial RDA model conditioned on the latitude and longitude of sampling sites was run with all of the landscape feature variables to partial out variation in the sPCA axes due to isolation by distance to determine whether the selected landscape variables explained a significant amount of the remaining variation. 
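The Mantel and partial Mantel tests described at the start of this subsection were run with the vegan R package; the NumPy sketch below shows the core permutation procedure for a simple Mantel test between two distance matrices, purely for illustration (the partial Mantel and RDA steps are not reproduced here).

```python
import numpy as np

def mantel_test(dist_x, dist_y, n_perm=999, rng=None):
    """Mantel correlation between two square distance matrices with a permutation p-value.

    Rows and columns of one matrix are permuted jointly, and the correlation of
    the upper-triangle entries is recomputed for each permutation.
    """
    rng = np.random.default_rng(rng)
    dist_x, dist_y = np.asarray(dist_x, float), np.asarray(dist_y, float)
    iu = np.triu_indices_from(dist_x, k=1)          # upper-triangle entries only

    def corr(mat):
        return np.corrcoef(dist_x[iu], mat[iu])[0, 1]

    r_obs = corr(dist_y)
    count = 0
    for _ in range(n_perm):
        order = rng.permutation(dist_y.shape[0])
        if corr(dist_y[np.ix_(order, order)]) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy example: genetic distance loosely tracking geographic distance at 5 sites
geo = np.abs(np.subtract.outer(np.arange(5.0), np.arange(5.0)))
gen = geo + np.random.default_rng(0).normal(0, 0.3, geo.shape)
gen = (gen + gen.T) / 2
np.fill_diagonal(gen, 0)
print(mantel_test(geo, gen))
```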
Additional partial RDA models were run with each of the landscape factors as a single constrained variable conditioned on the other landscape factors, partitioning the variance in sPCA scores explained by each landscape factor on its own. A permutational ANOVA (PERMANOVA; Legendre, Oksanen, & Braak, 2011) was performed on each partial RDA model result with 999 permutations to determine whether the single unconditioned landscape factor in the partial RDA model explained a significant amount of the variance in the sPCA axis scores. | Historical population coalescence and paleoclimate modeling Population structure is strongly influenced by population demographic history, so the times of coalescence between the identified populations were estimated. DIYABC v2.0.4 (Cornuet et al., 2008) was used for this analysis. A stepwise mutation model with an average mutation rate of 10^−3 to 10^−4 was used to describe mutation dynamics across the whole set of microsatellite markers, while individual marker mutation rates ranged from 10^−2 to 10^−5, representing a range of mutation rates commonly seen in vertebrates, including amphibians (Bulut et al., 2009; Guillemaud, Beaumont, Ciosi, Cornuet, & Estoup, 2010; Storz & Beaumont, 2002). Uniform priors were set for population parameters. Model validation for all models was carried out using a k-fold sampling scheme. Points were randomly assigned a K value of one through four, and four separate model optimization trials were conducted. In each trial, the subset of points assigned one of the K values was left out to validate the model after it was fit, and the subset of points with the three other K values was used as a training dataset to build the model. This process was iterated four times for each model. | Genetic diversity and population structure No null alleles, significant deviations from Hardy-Weinberg equilibrium, or evidence of linkage disequilibrium was observed for any of the 11 loci. Allelic richness for each locus varied from 3 to 28 with an average of 17.8 alleles per locus (Table 1). | Landscape effects on population structure The Mantel test indicated there was significant isolation by distance (IBD) among sample sites (r = 0.515, p < 0.001). Geographic distance was positively correlated with genetic distance (Figure 5). The partial Mantel test corroborated the east-west divide around the Missouri River seen in the population structuring analyses. There was a significant isolation-by-barrier effect between sites on opposite sides of the Missouri River (r = 0.582, p < 0.001). Spatial PCA found strong global structuring in the genetic variation across all sampling sites (global Monte Carlo test; r_obs = 0.146, p < 0.001), and there was no evidence of local structuring within sampling sites (local Monte Carlo test; r_obs = 0.036; p = .906). The top three sPCA axes that were retained explained 84.6% of the spatial genetic structure. The first sPCA axis explained 47.3% of the spatial genetic structure and indicated a stark split between populations on opposite sides of the Missouri River (Figure 6a). The second sPCA axis explained 23.0% of the spatial variation and showed similarities between the Souris and Sakakawea population cluster and the Little Missouri and Lower Yellowstone cluster. The second sPCA axis also shows a stark division between the

The full RDA model explained 97.6% of the variation in the first three sPCA axes (pseudo-F = 48.7, p = 0.001).
The partial RDA conditioned on the geographic coordinates of sites found a significant effect of landscape variables after removing the effects of isolation by distance (pseudo-F = 20.7, p = 0.001). The IBD effect accounted for 60.8% of the variance in the sPCA axes, 36.8% was explained by the landscape factors, and 2.4% was unexplained. Partial RDA models focusing on individual landscape variables found two variables that explained a significant amount of variation in the sPCA axes (Table 3). HUC-6 basin explained 21.0% of the variance (pseudo-F = 21.0, p = 0.001), and the Missouri River explained 4.0% of the variance (pseudo-F = 35.7, p = 0.001). None of the land use types explained a significant amount of variance. | Population coalescence DIYABC indicated that median coalescence times among all 10 population clusters varied from 638 to 18,100 generations for the Beaumont model (Figure 7a; Table 4) and 588 to 13,600 generations for the Cornuet-Miller model (Figure 7b; Table 4). The median coalescence time for the southwestern and northeastern clusters separated by the Missouri River was 13,600 generations and 18,100 generations for the Cornuet-Miller and Beaumont models, respectively (Figure 7; Table 4). | Species distribution modeling and paleoclimate projections Sixteen models were included in the final averaged model (Table 5). For the binomial logistic regression models, there was a maximum of two models with ΔBIC < 2, which were then averaged together. All of the averaged binomial logistic regression models included the same six bioclimatic variables, three related to temperature and three related to precipitation. Two precipitation variables, precipitation during the driest month and precipitation seasonality, were present in all of the best-performing models and were most important in defining the suitable range of R. pipiens. Precipitation during the driest month was positively associated with R. pipiens occurrence, which also had a consistent negative relationship with precipitation seasonality, suggesting (Table 5)

FIGURE 3 Results of hierarchical STRUCTURE population clustering. The initial STRUCTURE run split the populations of Rana pipiens in North Dakota into two main clusters separated by the Missouri River; Populations 1-12 and 14 were to the southwest, whereas all remaining populations were northeast of the Missouri. The populations west of the Missouri River broke into four clusters when analyzed separately. The populations east of the Missouri River clustered into two large groups, one in the northwestern part of the state and one in the southern and eastern parts of the state. Each of these two groups further clustered into three clusters largely separated by basin.

FIGURE 4 Results from DAPC population clustering analysis. (a) The first two discriminant functions explained 37.6% and 21.5% of the genetic variation in Rana pipiens from the sampled sites. Each node represents the genotype of an individual frog connected to a centroid of the cluster the frog was assigned to based on K-means clustering of the DAPC scores. (b) DAPC determined the sampled individuals were optimally clustered into ten groups, with 99.6% of individuals being assigned to one of these clusters with Q > 0.5.

| Population structure, landscape effects, and prehistoric climate influences on population differentiation There is strong evidence of population structuring of R. pipiens in North Dakota.
The populations are primarily split into a western clade to the southwest of the Missouri River and an eastern clade to the north and east of the Missouri River. The eastern clade is structured into six distinct clusters, and the western clade is structured into four distinct clusters. These patterns of population structure were consistent across three structuring methods with different underlying mechanisms and demonstrate the strengths of using multiple clustering methods to analyze population genetic data.

FIGURE 7 Coalescence trees of the ten major clusters of Rana pipiens in North Dakota based on the Beaumont (a) and Cornuet-Miller (b) models. Both models show the major split between the populations east and west of the Missouri River occurred during the late Pleistocene (18-13 kya), while most of the subdivision within these major populations occurred during the dry period of the Holocene (11-7 kya).

River basins and the Missouri River were the most important landscape features in determining the spatial pattern of population structuring, along with isolation by distance. Acting as a barrier, the Missouri River has prevented gene flow between R. pipiens populations since the retreat of glaciers from North Dakota at the end of the Wisconsin glaciation. Both a partial Mantel test and the RDA variance partitioning (Table 3) supported the Missouri River as a barrier to gene flow. The region southwest of the Missouri River was not glaciated (Mickelson et al., 1983), but the bioclimatic modeling indicated that this region was not suitable during the glacial maximum. However, following glacial retreat, the area west of the Missouri was likely the first area colonized by R. pipiens within North Dakota. Subsequently, the Missouri River has allowed the R. pipiens populations that colonized the southwest portion of North Dakota to remain genetically distinct from the R. pipiens populations that colonized the region to the east of the Missouri River.

Hoffman and Blouin (2004a) following the retreat of glaciers. The colonization route these lineages

Most of the population subdivision associated with river basins is most likely to have occurred during the mid-Holocene (11-6 kya), a period when the northern Great Plains region went through extreme drought cycles (Valero-Garcés et al., 1997; Xia et al., 1997). These drought periods would have severely restricted the breeding and overwintering aquatic habitats that R. pipiens require, possibly confining them to major riparian areas or areas with comparatively higher annual rainfall (i.e., the Turtle Mountain ecoregion; Bryce et al., 1998). In fact, river basins, which accounted for over 20% of the spatial genetic structure (Table 3), have been found to be an important factor in the genetic structuring of other frog species (Lind, Spinks, Fellers, & Shaffer, 2011; Murphy, Dezzani, Pilliod, & Storfer, 2010). Additionally, in the Prairie Pothole Region, overwintering habitats impose the greatest constraints on the distribution of R. pipiens (Mushet 2010). During droughts, the permanent deepwater and flowing water habitats required for overwintering are clustered in low-lying areas (Cohen et al., 2016; Van Meter & Basu, 2015). The connectivity of amphibian populations between basins is limited during dry periods when upland temporary wetlands are dry and R. pipiens are confined to overwintering sites in clusters of suitable wetlands in low-lying areas of basins (Mushet et al., 2013).
Recent work has shown landscape-level declines of northern leopard frogs during severe droughts, when populations are concentrated around the limited remaining winter refugia; following the droughts, populations rapidly expanded (Mushet, 2010). The ABC models indicate that the most recent population divergence occurred approximately 600 years BP, during the Little Ice Age, when the northern Great Plains had a relatively arid climate (Fritz et al., 1994; Laird, Fritz, Grimm, & Mueller, 1996; Xia et al., 1997), though the confidence intervals for this divergence also include other somewhat arid periods during the late Holocene. The dry period associated with the Little Ice Age lasted until approximately 200 years BP before shifting to the relatively wet current climate in eastern North Dakota (Fritz et al., 1994; Laird et al., 1996), a shift that likely restored connectivity and allowed the high amount of gene flow observed between the two subpopulations of R. pipiens located in the Red River basins.

Isolation by distance was the final landscape factor underlying the genetic variation of R. pipiens populations in North Dakota. Isolation by distance was strongly supported by Mantel tests (Figure 5) and was the most important single factor explaining the spatial genetic structure of R. pipiens in North Dakota according to the RDA variance partitioning analysis (Table 3; see also Hoffman, Schueler, & Blouin, 2004). Rana pipiens commonly disperse distances of 800 m from their natal ponds (Bartlet & Klaver, 2017; Knutson et al., 2018) and are capable of dispersing >5 km in some landscapes (Dole, 1971). Rana pipiens are more capable of long-distance dispersal than many other amphibians, so patterns of isolation by distance may be difficult to detect in smaller areas with high wetland density due to high levels of gene flow (Mushet et al., 2013).

Land use did not appear to influence population structure of R. pipiens at the scale analyzed in this study. North Dakota has a fairly homogeneous landscape, largely composed of grassland, pasture, and cultivated crops. Intensive agriculture and cattle grazing can have negative effects on the dispersal ability of some amphibians (Mushet, Euliss, & Stockwell, 2012; Rothermel & Semlitsch, 2002; Vos, Goedhart, Lammertsma, & Spitzen-Van der Sluijs, 2007). However, there is evidence that R. pipiens can use cultivated crop fields as well as native grasslands if the amount of cover and level of soil moisture in agricultural fields are comparable to those of native grasslands, allowing frogs to avoid desiccation (Bartlet & Klaver, 2017; Pope et al., 2000). Agricultural development may play a subtler role in influencing connectivity within R. pipiens metapopulations than could be detected at the spatial scale used in this study. Finer-scale sampling using higher-resolution genetic markers may be able to detect local effects of land use on the biotic connectivity between prairie wetlands.

| CONCLUSION

Rana pipiens populations in North Dakota are highly structured, typical of amphibian populations at regional geographic scales. The patterns of genetic population structure reflect the colonization history of the state following the retreat of glaciers. Rana pipiens populations southwest of the Missouri River, though currently stable, are characterized by lower genetic diversity (Stockwell et al., 2016) and smaller effective population sizes. These populations are at the highest risk of experiencing population declines similar to those reported on the western edge of the R. pipiens range.
Conservation of wetlands and riparian areas across the region will be important, both for securing at-risk R. pipiens populations on the Missouri Plateau and Badlands, and for maintaining the genetic diversity found throughout the R. pipiens populations on the glaciated northern Great Plains. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

ACKNOWLEDGMENTS

We thank Jacob Mertes for assistance with fieldwork; Patrick

CONFLICT OF INTEREST

None declared.

AUTHOR CONTRIBUTIONS

CAS, JDLF, KP, and DM designed the project. JDLF performed fieldwork and laboratory work. KP and CAS provided guidance on data analyses. JW and JDLF conducted data analyses. CAS supervised the study. All authors participated in writing the manuscript.
6,792.8
2019-01-15T00:00:00.000
[ "Biology", "Environmental Science" ]
Symmetry, Integrability and Geometry: Methods and Applications

Object-Image Correspondence for Algebraic Curves under Projections

This paper is a contribution to the Special Issue "Symmetries of Differential Equations: Frames, Invariants and Applications". The full collection is available at http://www.emis.de/journals/SIGMA/SDE2012.html

We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of the number of real parameters that need to be eliminated in order to establish the existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

Introduction

Identifying an object in three-dimensional space with its planar image is a fundamental problem in computer vision. In particular, given a database of images (medical images, aerial photographs, human photographs), one would like to have an algorithm to match a given object in 3D with an image in the database, even though the position of the camera and its parameters may be unknown. Since the defining features of many objects can be represented by curves, obtaining a solution for the identification problem for curves is essential.

A central projection from R^3 to R^2 models a pinhole camera pictured in Fig. 1. It is described by a linear fractional transformation

x = \frac{p_{11} z_1 + p_{12} z_2 + p_{13} z_3 + p_{14}}{p_{31} z_1 + p_{32} z_2 + p_{33} z_3 + p_{34}}, \qquad y = \frac{p_{21} z_1 + p_{22} z_2 + p_{23} z_3 + p_{24}}{p_{31} z_1 + p_{32} z_2 + p_{33} z_3 + p_{34}},    (1)

where (z_1, z_2, z_3) denote coordinates in R^3, (x, y) denote coordinates in R^2, and p_{ij}, i = 1, ..., 3, j = 1, ..., 4, are real parameters of the projection, such that the left 3 × 3 submatrix of the 3 × 4 matrix P = (p_{ij}) has a non-zero determinant. The parameters represent the freedom to choose the center of the projection, the position of the image plane and the (in general, non-orthogonal) coordinate system on the image plane. In the case when the distance between a camera and an object is significantly greater than the object depth, a parallel projection provides a good camera model. A parallel projection has 8 parameters and can be described by a 3 × 4 matrix of rank 3, whose last row is (0, 0, 0, 1). We review various camera models and related geometry in Section 2 (see also [14, 20]).
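As a concrete illustration of the map (1) and of the parallel (affine) camera just described, the following numpy sketch applies a 3 × 4 camera matrix to a point of R^3 via homogeneous coordinates; the two matrices are made-up examples rather than matrices taken from the paper.

```python
import numpy as np

def project(P, z):
    """Apply the 3x4 camera matrix P to a 3D point z via homogeneous
    coordinates, as in equation (1); returns the image point (x, y)."""
    zh = np.append(np.asarray(z, dtype=float), 1.0)   # [z1, z2, z3, 1]
    w = P @ zh                                         # homogeneous image point
    return w[:2] / w[2]

# A central projection: the left 3x3 block is nonsingular.
P_central = np.array([[1., 0., 0., 0.],
                      [0., 1., 0., 0.],
                      [0., 0., 1., 1.]])
print(project(P_central, (2., 4., 8.)))   # x = z1/(z3+1), y = z2/(z3+1) -> [0.222..., 0.444...]

# A parallel (affine) camera: last row (0, 0, 0, 1).
P_parallel = np.array([[1., 0., 0.5, 0.],
                       [0., 1., 0.5, 0.],
                       [0., 0., 0.,  1.]])
print(project(P_parallel, (2., 4., 8.)))  # x = z1 + 0.5*z3, y = z2 + 0.5*z3 -> [6., 8.]
```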
In most general terms, the object-image correspondence problem, or the projection problem, as we will call it from now on, can be formulated as follows: Problem 1. Given a subset Z of R 3 and a subset X of R 2 , determine whether there exists a projection P : R 3 − → R 2 , such that X = P (Z)? 2 A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. In the case when Z and X are finite lists of points, a solution based on the straightforward approach can be found in [20]. For curves and surfaces under central projections, this approach is taken in [15]. However, internal parameters of the camera are considered to be known in that paper and, therefore, there are only 6 camera parameters in that study versus 12 considered here. The method presented in [15] also uses an additional assumption that a planar curve X ⊂ R 2 has at least two points, whose tangent lines coincide. An alternative approach to the problem in the case when Z and X are finite lists of points under parallel projections was presented in [1,2]. In these articles, the authors establish polynomial relationships that have to be satisfied by coordinates of the points in the sets Z and X in order for a projection to exists. Our approach to the projection problem for curves is somewhere in between the direct approach and the implicit approach. We exploit the relationship between the projection problem and equivalence problem under group-actions to find the conditions that need to be satisfied by the object, the image and the center of the projection 3 . In comparison with the straightforward approach, our solution leads to a significant reduction of the number of parameters that have to be eliminated in order to solve Problem 1 for curves. All of the theoretical results of this paper are valid for arbitrary irreducible algebraic curves (rational and non-rational), but the algorithms are presented for rational algebraic curves, i.e. Z = {Γ(s) | s in the domain of Γ} and X = {γ(t) | t in the domain of γ} for rational maps Γ : R− → R 3 and γ : R− → R 2 . A bar above a set denotes the Zariski closure of the set 4 . Throughout the paper, we assume that Z is not a straight line (and, therefore, its image under any projection is a one-dimensional constructible set). Since, in general, P (Z) is not Zariski closed we must relax the projection condition to P (Z) = X . Under those conditions, Problem 1, for central projections, can be reformulated as the following real quantifier elimination problem: Reformulation 1 (straightforward approach). Given two rational maps Γ and γ, determine the truth of the statement: where U is the open subset of the set of 3 × 4 matrices defined by the condition that the left 3 × 3 minor is nonzero 5 . Real quantifier elimination problems are algorithmically solvable [30]. A survey of subsequent developments in this area can be found, for instance, in [22] and [11]. Due to their high computational complexity (at least exponential) on the number of quantified parameters, it is crucial to reduce the number of quantified parameters. The main contribution of this paper is to provide another formulation of the problem which involves significantly smaller number of quantified parameters. 
We first begin by reducing the projection problem to the problem of deciding whether the given planar curve X is equivalent to a curve in a certain family of planar curves under an action of the projective group in the case of central projections, and under the action of the affine group in the case of parallel projections. The family of curves depends on 3 parameters in the case of central projections, and on 2 parameters in the case of parallel projections. Then we solve these group-equivalence problems by an adaptation of differential signature construction developed in [9] for solving local equivalence problems for smooth curves. We give an algebraic formulation of the signature construction and show that it leads to a solution of global equivalence problems for algebraic curves. For this purpose, we introduce a notion of a classifying set of rational differential invariants and obtain such sets of invariants for the actions of the projective and affine groups on the plane. Following this method for the case of central projections, when Z and X are rational algebraic curves, we define two rational signature maps S| X : R− → R 2 and S| Z : R 4 − → R 2 . Construction of these signature maps requires only differentiation and arithmetic operations and is computationally trivial. Then Problem 1 becomes equivalent to Reformulation 2 (signature approach). Given two rational maps S| X and S| Z , determine the truth of the statement: where U is a certain Zariski open subset of R 3 . Note that Reformulations 1 and 2 have similar structure, but the former requires elimination of 14 parameters (p 11 , . . . , p 34 , s, t), while the latter requires elimination of only 5 parameters (c 1 , c 2 , c 3 , s, t). The case of parallel projection is treated in the similar manner and leads to the reduction of the number of real parameters that need to be eliminated from 10 to 4. 4 Recall that a set W ⊂ R n is Zariski closed if it equals to the zero set of a system of polynomials in n variables. The complement of a Zariski closed set is called Zariski open. A Zariski open set is dense in R n . A Zariski closure W of a set W is the smallest (with respect to inclusions) Zariski closed set containing W . 5 Note that, in Reformulation 1, we decide whether P (Z) ⊂ X , which appears to be weaker than P (Z) = X . However, they are actually equivalent. Since we assumed that Z is not a line, the set P (Z) is one-dimensional. Since X is rational algebraic curve, it is irreducible. Hence P (Z) ⊂ X ⇐⇒ P (Z) = X . Although the relation between projections and group actions is known, our literature search did not yield algorithms that exploit this relationship to solve the projection problem for curves in the generic setting of cameras with unknown internal and external parameters. The goal of the paper is to introduce such algorithms. The significant reduction of the number of parameters in the quantifier elimination problem is the main advantage of such algorithms. A preliminary report on this project appeared in the conference proceedings [8]. The current paper is significantly more comprehensive and rigorous, and also includes proofs omitted in [8]. Although the development of efficient implementation lies outside of the scope of this paper, we made a preliminary implementation of an algorithm based on signature construction presented here and an algorithm based on the straightforward approach over complex numbers. The Maple code and the experiments are posted on the internet [7]. 
The existence of a projection over complex numbers provides necessary but not sufficient condition for existence of a real projection. The paper is structured as follows. In Section 2, we review the basic facts about projections and cameras. In Section 3, we prove projection criteria that reduce the central and the parallel projection problems to a certain modification of the projective and the affine group-equivalence problems for planar curves. This criteria are straightforward consequences of known camera decompositions [20]. In Section 4, we define the notion of a classifying set of rational differential invariants and present a solution of the global group-equivalence problem for planar algebraic curves based on these invariants. This is an algebraic reformulation of a solution of local groupequivalence problem for smooth curves [9]. In Section 5, combining the ideas from the previous two sections, we present and prove an algorithm for solving the projection problem for rational algebraic curves and give examples. In Section 6, we discuss possible adaptations of this algorithm to solve projection problem for non-rational algebraic curves and for finite lists of points. We discuss the subtle difference between the discrete (with finitely many points) and the continuous projection problems, showing that the solution for the discrete problem does not provide an immediate solution to the projection problem for the curves represented by samples of points. This leads us into the discussion of challenges that arise in application of our algorithms to reallife images, given by discrete pixels, and of ideas for overcoming these challenges. In Appendix A, we give explicit formulae for affine and projective classifying sets of rational invariants. Projections and cameras We embed R n into projective space PR n and use homogeneous coordinates on PR n to express the map (1) by matrix multiplication. Notation 1. Square brackets around matrices (and, in particular, vectors) will be used to denote an equivalence class with respect to multiplication of a matrix by a nonzero scalar. Multiplication of equivalence classes of matrices A and B of appropriate sizes is well-defined by With this notation, a point (x, y) ∈ R 2 corresponds to a point [x, y, 1] = [λx, λy, λ] ∈ PR 2 for all λ = 0, and a point (z 1 , z 2 , z 3 ) ∈ R 3 corresponds to [z 1 , z 2 , z 3 , 1] ∈ PR 3 . We will refer to the points in PR n whose last homogeneous coordinate is zero as points at infinity. In homogeneous coordinates projection (1) is a map [P ] : PR 3 → PR 2 given by where P is 3 × 4 matrix of rank 3 and superscript T denotes transposition. Matrix P has a 1dimensional kernel. Therefore, there exists a point [z 0 1 , z 0 2 , z 0 3 , z 0 4 ] ∈ PR 3 whose image under the projection is undefined (recall that [0, 0, 0] is not a point in PR 2 ). Geometrically, this point is the center of the projection. In computer science literature (e.g. [20]), a camera is called finite if its center is not at infinity. A finite camera is modeled by a matrix P , whose left 3 × 3 submatrix is non-singular. Geometrically, finite cameras correspond to central projections from R 3 to a plane. On the contrary, an infinite camera has its center at an infinite point of PR 3 . An infinite camera is modeled by a matrix P whose left 3 × 3 submatrix is singular. An infinite camera is called affine if the preimage of the line at infinity in PR 2 is the plane at infinity in PR 3 . An affine camera is modeled by a matrix P whose last row is (0, 0, 0, 1). 
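The facts just stated, that P has a one-dimensional kernel whose projective class is the center of the projection, and that a finite camera is one whose left 3 × 3 submatrix is non-singular, translate directly into a small numerical check; the matrix below is only an illustrative example.

```python
import numpy as np

def camera_center(P):
    """Homogeneous center of the camera P: the 1-dimensional kernel of P,
    read off as the right singular vector for the zero singular value."""
    _, _, vt = np.linalg.svd(P)
    return vt[-1]                     # [z1, z2, z3, z4], defined up to scale

def is_finite_camera(P):
    """A camera is finite (a central projection) iff its left 3x3 submatrix
    is nonsingular."""
    return abs(np.linalg.det(P[:, :3])) > 1e-12

P = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 1.]])
c = camera_center(P)
print(c / c[-1])              # [0, 0, -1, 1]: the center is the point (0, 0, -1)
print(is_finite_camera(P))    # True
```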
In this case map (1) becomes Geometrically, affine cameras correspond to parallel projections from R 3 to a plane 6 . Eight degrees of freedom reflect a choice of the direction of a projection, a position of the image plane and a choice of linear system of coordinates on the image plane. In fact, by allowing the freedom to choose a non-orthogonal coordinate system on the image plane, we may always assume that we project on one of the coordinate planes. Definition 1. A set of equivalence classes [P ] , where P is a 3 × 4 matrix whose left 3 × 3 submatrix is non-singular, is called the set of central projections and is denoted CP. A set of equivalence classes [P ], where P has rank 3 and its last row is (0, 0, 0, λ), λ = 0, is called the set of parallel projections and is denoted PP. Equation (1) determines a central projection when [P ] ∈ CP and it determines a parallel projection when [P ] ∈ PP. Sets CP and PP are disjoint. Projections that are not included in these two classes correspond to infinite, non-affine cameras. These are not frequently used in computer vision and are not considered in this paper. 3 Reduction to the group-equivalence problem Definition 2. We say that a curve Z ⊂ R 3 projects to X ⊂ R 2 if there exists a 3 × 4 matrix P of rank 3 such that X = P (Z), where Recall that for every algebraic curve X ⊂ R n there exists a unique projective algebraic curve [X ] ⊂ PR n such that [X ] is the smallest projective variety containing X (see [17] Definition 3. The projective group 7 PGL(n + 1) is a quotient of the general linear group GL(n + 1), consisting of (n + 1) × (n + 1) non-singular matrices, by a 1-dimensional abelian subgroup λI, where λ = 0 ∈ R and I is the identity matrix. Elements of PGL(n + 1) are equivalence classes [B] = [λB], where λ = 0 and B ∈ GL(n + 1). The equi-affine group SA(n) is a subgroup of A(n) whose elements [B] have a representative B ∈ GL(n + 1) with determinant 1 and the last row equal to (0, . . . , 0, 1). In homogeneous coordinates, the standard action of the projective group PGL(n + 1) on PR n is defined by multiplication The action (2) induces linear-fractional action of PGL(n + 1) on R n . 8 The restriction of (2) to A(n) induces an action on R n consisting of compositions of linear transformations and translations. Definition 4. We say that two curves X 1 ⊂ R n and X 2 ⊂ R n are PGL(n + 1)-equivalent if there exists [A] ∈ PGL(n + 1), such that [ where G is a subgroup of PGL(n + 1), we say that X 1 and X 2 are G-equivalent and write Before stating the projection criteria, we make the following simple, but important observations. Proposition 1. (i) If Z ⊂ R 3 projects to X ⊂ R 2 by a parallel projection, then any curve that is A(3)equivalent to Z projects to any curve that is A(2)-equivalent to X by a parallel projection. In other words, parallel projections are defined on affine equivalence classes of curves. (ii) If Z ⊂ R 3 projects to X ⊂ R 2 by a central projection then any curve in R 3 that is A(3)equivalent to Z projects to any curve on R 2 that is PGL(3)-equivalent to X by a central projection. Theorem 1 (central projection criterion). A curve Z ⊂ R 3 projects to a curve X ⊂ R 2 by a central projection if and only if there exist c 1 , c 2 , c 3 ∈ R such that X is PGL(3)-equivalent to a planar curvẽ Proof . (⇒) Assume there exists a central projection [P ] such that X = P (Z). Then P is a 3×4 matrix, whose left 3 × 3 submatrix is non-singular. 
Therefore there exist c 1 , c 2 , c 3 ∈ R such that p * 4 = c 1 p * 1 + c 2 p * 2 + c 3 p * 3 , where p * j denotes the j-th column of the matrix P . We observe that where A is the left 3 × 3 submatrix of P , 8 Linear-fractional action of PGL(n + 1) on R n is an example of a rational action of an algebraic group on an algebraic variety. General definition of a rational action can be found in [ , where B and [P 0 C ] are given by (5). We note that the map is a projection centered (−c 1 , −c 2 , −c 3 ) to the plane z 3 = 1 with coordinates on the image plane induced from R 3 , namely, x = z 1 and y = z 2 . We call (6) the canonical projection centered at (−c 1 , −c 2 , −c 3 ). It follows from decomposition (4) that any central projection is a composition of a translation in R 3 (corresponding to translation of the camera center to the origin), the canonical projection P 0 C centered at the origin, and a projective transformation on the image plane. Remark 2 (CP is a homogeneous space). It is easy to check that the map (4) shows that this action is transitive. The stabilizer of the canonical projection P 0 C centered at the origin is a 9-dimensional group The set of central projections CP is, therefore, diffeomorphic to the homogeneous space Theorem 2 (parallel projection criterion). A curve Z ⊂ R 3 projects to a curve X ⊂ R 2 by a parallel projection if and only if there exist c 1 , c 2 ∈ R and an ordered triplet Proof of rank 3. Therefore there exist 1 ≤ i < j ≤ 3 such that the rank of the submatrix p 1i p 1j p 2i p 2j is 2. Then for 1 ≤ k ≤ 3, such that k = i and k = j, there exist c 1 , c 2 ∈ R, such that Since Observe that [A] ∈ A(2) and the direct statement is proved. (⇐) To prove the converse direction we assume that there exist [A] ∈ A(2), two real numbers c 1 and c 2 , and a triplet of indices such that , where a planar curveZ i,j,k c 1 ,c 2 is given by (8). Let B be a matrix defined in the first part of the proof. A direct computation shows that Z is projected to X by the parallel Remark 3 (PP is a homogeneous space). The map Ψ : (A(2) × A(3)) × PP → PP defined by (7) for [P ] ∈ PP and ([A], [B]) ∈ A(2) × A(3) is an action of the product group A(2) × A(3) on the set of parallel projections PP. Decomposition (9) shows that this action is transitive. The stabilizer of the orthogonal projection P 0 P is a 10-dimensional group The set of central projections PP is, therefore, diffeomorphic to the homogeneous space The families of curvesZ i,j,k c 1 ,c 2 given by (8) have a large overlap. The following corollary eliminates this redundancy and, therefore, is useful for practical computations. Corollary 1 (reduced parallel projection criterion). A curve Z ⊂ R 3 projects to X ⊂ R 2 by a parallel projection if and only if there exist a 1 , a 2 , b ∈ R such that the curve X is A(2)-equivalent to one of the following planar curves: Proof . We first prove that for any permutation (i, j, k) of numbers (1, 2, 3) such that i < j, and for any c 1 , -equivalent to one of the sets listed in (10). We can reverse the argument and show that any curve given by (10) is A(2)-equivalent to a curve from family (8). Then the reduced criteria follows from Theorem 2. We note that the map is a parallel projection onto the z 1 , z 2 -coordinate plane in the direction of the vector (−a 1 , −a 2 , 1) with coordinates on the image plane induced from R 3 , namely, x = z 1 and y = z 2 . We call (11) the canonical projection in the direction (−a 1 , −a 2 , 1). 
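The displayed formulas (6) and (11) are not legible in this extraction, so the explicit expressions in the sketch below are assumptions: the canonical central projection is written as the composition of the translation by (c1, c2, c3) with the projection x = z1/z3, y = z2/z3, which is consistent with the worked example x = z1/(z3 + 1), y = z2/(z3 + 1) for c = (0, 0, 1) appearing in Section 5, and the canonical parallel projection is obtained by intersecting the line through z in the direction (-a1, -a2, 1) with the plane z3 = 0.

```python
import sympy as sp

z1, z2, z3, c1, c2, c3, a1, a2, s = sp.symbols('z1 z2 z3 c1 c2 c3 a1 a2 s')

# Assumed explicit form of the canonical central projection centered at
# (-c1, -c2, -c3): translate by (c1, c2, c3), then project from the origin
# onto the plane z3 = 1 with image coordinates x = z1, y = z2.
canonical_central = ((z1 + c1) / (z3 + c3), (z2 + c2) / (z3 + c3))

# Canonical parallel projection in the direction (-a1, -a2, 1) onto the
# z1,z2-coordinate plane: z + lam*(-a1, -a2, 1) meets z3 = 0 at lam = -z3.
canonical_parallel = (z1 + a1 * z3, z2 + a2 * z3)

# Restricting the canonical central projection to a spatial curve Gamma(s)
# produces the family of planar curves used in the projection criteria;
# Gamma below is only an illustrative choice.
Gamma = {z1: s, z2: s**2, z3: s**3}
family = [sp.cancel(expr.subs(Gamma)) for expr in canonical_central]
print(family)    # [(c1 + s)/(c3 + s**3), (c2 + s**2)/(c3 + s**3)]
```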
The map x = z 1 + b z 2 , y = z 3 is a projection onto the z 1 , z 3 -coordinate plane in the direction of the vector (−b, 1, 0) with coordinates on the image plane induced from R 3 , namely, x = z 1 and y = z 3 , and finally the map x = z 2 , y = z 3 is the orthogonal projection onto the z 2 , z 3 -plane. Solving the group-equivalence problem Theorems 1 and 2 reduce the projection problem to the problem of establishing group-action equivalence between a given curve and a curve from a certain family. In this section, we give a solution of the group-equivalence problem for planar algebraic curves. In Section 4.1, we consider a rational action of an arbitrary algebraic group on R 2 and define a notion of a classifying set of rational differential invariants. In Section 4.2, we define a notion of exceptional curves with respect to a classifying set of invariants and define signatures of non-exceptional curves. We then prove that signatures characterize the equivalence classes of non-exceptional curves. In Section 4.3, we produce explicit formulae for classifying sets of rational differential invariants for affine and projective groups. In Section 4.4, we specialize our signature construction to rational algebraic curves and provide examples of solving group-equivalence problem for such curves. We note that differential invariants have long been used for solving the group-equivalence problem for smooth curves. Classical differential invariants were obtained with the moving frame method [10], which most often produces non-rational invariants. Signatures based on classical differential invariants were introduced in [9]. For smooth curves, the equality of signatures of two curves implies that there are segments of two curves that are group-equivalent (in other words, these curves are locally equivalent), but the entire curves may be non-equivalent. This is well illustrated in [25]. In a recent work [21], a significantly more involved notion of the extended signature was introduced to solve global equivalence problem for smooth curves. The rigidity of irreducible algebraic curves allows us to use simpler signatures to establish global equivalence. Rationality of the invariants as well as explicit characterization of exceptional curves allows us to solve equivalence problem using standard computational algebra algorithms. Definition of a classifying set of rational dif ferential invariants A rational action of an algebraic group G on R 2 can be prolonged to an action on the n-th jet space J n = R n+2 with coordinates (x, y, y (1) , . . . , y (n) ) as follows 9 . For a fixed g ∈ G, let (x,ȳ) = g · (x, y). Thenx,ȳ are rational functions of (x, y) and g · x, y, y (1) , . . . , y (n) = x,ȳ,ȳ (1) , . . . ,ȳ (n) , whereȳ Here d dx is the total derivative, applied under assumption that y is function of x. 10 We note that a natural projection π n k : J n → J k , k < n is equivariant with respect to action (12). For general theory of rational actions see [28] and for general definitions and properties of the jet bundle and prolongations of actions see [26]. Definition 5. A function on J n is called a differential function. The order of a differential function is the maximum value of k such that the function explicitly depends on the variable y (k) . A differential function which is invariant under action (12) is called a differential invariant. Remark 4. 
Due to equivariant property of the projection π n k : J n → J k , k < n, a differential invariant of order k on J k can be viewed as a differential invariant on J n for all n ≥ k. Definition 7 (classifying set of rational differential invariants). Let r-dimensional algebraic group G act on R 2 . Let K and T be rational differential invariants of orders r − 1 and r, Jets of curves and signatures In this section, we assume that X ⊂ R 2 is an irreducible algebraic curve, different from a vertical line. Let F (x, y) be an irreducible polynomial, whose zero set equals to X . Then the derivatives of y with respect to x are rational functions on X , whose explicit formulae are obtained by implicit differentiation From the definition of the prolonged action (12), it follows that for all g ∈ G and p ∈ X the following equality holds, whenever both sides are defined. Definition 9. A restriction of a rational differential function Φ : J n − → R to a curve X is a composition of Φ with the n-th jet of curve, i.e. Φ| X = Φ • j n X . If defined, such composition produces a rational function X − → R. Definition 10. Let I = {K, T } be a classifying set of rational differential invariants for G-action (see Definition 7). Then a point p ∈ X is called I-regular if: (1) p is a non-singular point of X ; An algebraic curve X ⊂ R 2 is called non-exceptional with respect to I if all but a finite number of its points are I-regular. Lemma 1. Let I = {K, T } be a classifying set of rational differential invariants (see Definition 7). Let X ⊂ R 2 be a non I-exceptional curve defined by an irreducible implicit equation (1) K| X and T | X are rational functions on X and therefore there exist polynomials k 1 , k 2 ∈ R[x, y] with no non-constant common factors modulo F , and polynomials t 1 , t 2 ∈ R[x, y] with no non-constant common factors modulo F , such that (2) The Zariski closure S X of the image of the rational map S| X : (3) dim S X = 0 if and only if K X and T X are constant functions on X and dim S X = 1 otherwise. In the latter case, S X is an irreducible algebraic planar curve, i.e. a zero set of an irreducible polynomialŜ X (κ, τ ). (1) A function K| X : X − → R is a composition of rational maps (see Definitions 8 and 9). Since X is non-exceptional this composition is defined for all but finite number of points on X and therefore K| X is a rational function. The same argument shows that T | X is a rational function. Since X is defined by F (x, y) = 0, where F is irreducible, there exist polynomials k 1 , k 2 ∈ R[x, y] with no non-constant common factors modulo F , and polynomials t 1 , t 2 ∈ R[x, y] with no non-constant common factors modulo F , such that (14) holds. (2) By definition, and therefore is the projection of the variety defined by X, given by (15), to the κ, τ -plane. It is the standard theorem in the computational algebraic geometry (see, for instance, [12,Chapter 5]) that the Zariski closure S X of this projection is the variety of the elimination ideal X ∩ R[κ, τ ]. (3) It is not difficult to prove, in general, that the Zariski closure of the image of an irreducible variety under a rational map is an irreducible variety. The dimension of this closure is less or equal than the dimension of the original variety. Thus S X is an irreducible variety of dimension zero when the signature map S| X = (K| X , T | X ) is a constant map and of dimension one otherwise. In the latter case S X is an irreducible algebraic planar curve and, therefore, is a zero set of a single irreducible polynomial. 
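The implicit differentiation invoked just before Definition 9 can be carried out mechanically: y^(1) = -F_x/F_y, and higher derivatives follow by applying the total derivative d/dx g = g_x + g_y y^(1) along the curve. A minimal sympy sketch, with the unit circle as a toy example (not a curve from the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')

def implicit_jets(F, order=2):
    """Derivatives of y with respect to x along F(x, y) = 0, by implicit
    differentiation: y^(1) = -F_x/F_y, and d/dx g = g_x + g_y * y^(1)."""
    y1 = sp.cancel(-sp.diff(F, x) / sp.diff(F, y))
    jets = [y1]
    for _ in range(order - 1):
        jets.append(sp.cancel(sp.diff(jets[-1], x) + sp.diff(jets[-1], y) * y1))
    return jets                      # rational functions of (x, y) on the curve

F = x**2 + y**2 - 1                  # the unit circle
y1, y2 = implicit_jets(F)
print(y1, sp.simplify(y2))           # -x/y and -(x**2 + y**2)/y**3 (= -1/y**3 on the curve)
```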
Definition 11. Let I = {K, T } be a classifying set of rational differential invariants with respect to G-action and X be non-exceptional with respect to I. (1) The rational map S| X : X − → R 2 defined by S| X (p) = (K| X (p), T | X (p)) for p ∈ X is called the signature map. (2) The image of S| X is called the signature of X and is denoted by S X . Theorem 3 (group-equivalence criterion). Assume that irreducible algebraic curves X 1 and X 2 are non-exceptional with respect to a classifying set of rational differential invariants I = (K, T ) under the G-action. Then X 1 and X 2 are G-equivalent if and only if their signatures are equal: Remark 5. Assume that two curves X 1 and X 2 have non-constant signature maps, and so the closures of their signatures are zero sets of polynomialsŜ X 1 (κ, τ ) andŜ X 2 (κ, τ ), respectively. The equality of signatures S X 1 = S X 2 , implies thatŜ X 1 (κ, τ ) is equal up to a constant multiple toŜ X 2 (κ, τ ). The converse is true over C, but not over R, because the latter is not an algebraically closed field (see Example 1 below and [12] for general results on implicitization). Proof of Theorem 3. Direction =⇒ follows immediately from the definition of invariants. Below we prove ⇐=. We notice that there are two cases. Either K| X 1 and K| X 2 are constant maps on X 1 and X 2 , respectively, and these maps take the same value. Otherwise both K| X 1 and K| X 2 are non-constant rational maps on X 1 and X 2 , respectively. Case 1: There exists κ 0 ∈ R such that K| X 1 (p 1 ) = κ 0 and K X 2 (p 2 ) = κ 0 for all p 1 ∈ X 1 and for all p 2 ∈ X 2 . Since X 1 and X 2 are non-exceptional, we may fix I G -regular points p 1 = (x 1 , y 1 ) ∈ X 1 and p 2 = (x 2 , y 2 ) ∈ X 2 . Then, due to separation property of the invariant K, ∃ g ∈ G such that j r−1 We consider a new algebraic curve X 3 = g · X 2 . Then due to (13), we have Since p 1 is a I-regular point of X 1 , it follows from (16) that it is also a I-regular point of X 3 and, in particular, is non-singular. Let F 1 (x, y) = 0 and F 3 (x, y) = 0 be implicit equations of X 1 and X 3 , respectively. We may assume that Functions y = f 1 (x) and y = f 3 (x) are local analytic solutions of differential equation with the same initial condition f 3 (x 1 ), k = 0, . . . , r − 1 prescribed by (16). From the I-regularity of p 1 , we have that ∂K ∂y (r−1) p (r−1) = 0 and so (17) can be solved for y (r−1) : where function H is smooth in a neighborhood p (r−1) ∈ J r−1 . From the uniqueness theorem for the solutions of ODEs, it follows that f 1 (x) = f 3 (x) on an interval I x 1 . Since X 1 and X 3 are irreducible algebraic curves it follows that X 1 = X 3 . Therefore, X 1 = g · X 2 . Case 2: K| X 1 and K| X 2 are non-constant rational maps. Then S X 1 = S X 2 is a one-dimensional set that we will denote S. LetŜ(κ, τ ) = 0 be the implicit equation for S (see Lemma 1). We know that ∂Ŝ ∂τ (κ, τ ) = 0 for all but finite number of values (κ, τ ), because, otherwise, K| X 1 and K| X 2 are constant maps. Therefore, since the curves are non-exceptional, there exists I-regular points p 1 = (x 1 , y 1 ) ∈ X 1 and p 2 = (x 2 , y 2 ) ∈ X 2 such that Due to separation property of the set I G = {K, T }, ∃ g ∈ G such that j r X 1 (p 1 ) = g · [j r X 2 (p 2 )]. We consider a new algebraic curve X 3 = g · X 2 . 
Then due to (13), we have From (18), (19) and I-regularity of the point p 1 ∈ X 1 it follows that Since p 1 is a I-regular point of X 1 , it follows from (19) that it is also a I-regular point of X 3 and, in particular, is non-singular. Let F 1 (x, y) = 0 and F 3 (x, y) = 0 be implicit equations of X 1 and X 3 , respectively. We may assume that analytic on an interval I x 1 , such that F 1 (x, f 1 (x)) = 0 and F 3 (x, f 3 (x)) = 0 for x ∈ I 1 . Then functions y = f 1 (x) and y = f 3 (x) are local analytic solutions of differential equation S K x, y, y (1) , . . . , y (r−1) , T x, y, y (1) , . . . , y (r) = 0 (21) with the same initial condition f (k) 3 (x 1 ), k = 0, . . . , r, dictated by (19). Since ∂Ŝ ∂τ (κ 0 , τ 0 ) = 0 and ∂T ∂y (r) p (r) = 0 (see (18) and (20)), equation (21) can be solved for y (r) : where function H is smooth in a neighborhood p (r) ∈ J r . From the uniqueness theorem for the solutions of ODE it follows that f 1 (x) = f 3 (x) on an interval I x 1 . Since X 1 and X 3 are irreducible algebraic curves it follows that X 1 = X 3 . Therefore, X 1 = g · X 2 . From the proof of Theorem 3 we may deduce the following: Then S| X 2 = S| X 1 if and only if K| X 2 (p) = κ 0 for all p ∈ X 2 . Classifying sets of invariants for af f ine and projective actions In this section, we construct a classifying set of rational differential invariants for affine and projective actions. We will build them from classical invariants from differential geometry [4,10]. We start with Euclidean curvature which is, up to a sign 11 , a Euclidean differential invariant of the lowest order. Higher order Euclidean differential invariants are obtained by differentiating the curvature with respect to the Euclidean arclength ds = 1 + [y (1) dκ dx , κ ss = dκs ds , . . . . Equi-affine and projective curvatures and infinitesimal arclengths are well known, and can be expressed in terms of Euclidean invariants [13,24]. In particular, SA(2)-curvature µ and infinitesimal SA(2)-arclength dα are expressed in terms of their Euclidean counterparts as follows By considering effects of scalings and reflections on SA(2)-invariants, we obtain two lowest order A(2)-invariants They are of order 5 and 6, respectively, and are rational functions in jet variables. PGL(3)-curvature η and infinitesimal arclength dρ are expressed in terms of their SAcounterparts The two lowest order rational PGL(3)-invariants are of differential order 7 and 8, respectively Explicit formulae for invariants in terms of jet coordinate are given by (46) and (47) (2) The set I PGL = {K P , T P } given by (24) is classifying for the PGL(3)-action on R 2 . Proof . We start by introducing differential functions (1) We note that dim A(2) = 6. We will prove the separation property of I A = {K A , T A } given by (46) on a Zariski open subset W 6 = p (6) ∈ J 6 y (2) = 0 and ∆ 1 = 0 of J 6 , where ∆ 1 is given by (25), and the separation property of K A on W 5 = π 6 5 (W 6 ) ⊂ J 5 . An affine transformation can be written as a product of a Euclidean, an upper triangular area preserving linear transformation, a scaling and a reflection where c 2 + s 2 = 1, = ±1 and h = 0. 11 The sign of κ changes when a curve is reflected, rotated by π radians or traced in the opposite direction. A rational function κ 2 is invariant under the full Euclidean group. (2) We note that dim PGL(3) = 8. We will prove the separation property of I P = {K P , T P } given by (47) where c 2 + s 2 = 1, e = 0 and g = 0. 
In the first part of the proof, we have shown that the two matrices on the right can bring a point p (8) ∈ W 8 to the point p 2 , y . A direct computation shows that We observe that y can be uniquely determined from the values of invariants K P and T P and therefore we can complete the proof of the separation property of K P on W 7 and the separation property of {K P , T P } on W 8 by an argument similar to the one presented in part 1 of the proof. Theorem 4, in combination with Theorem 3, leads to a solution for the projective and the affine equivalence problems for non-exceptional curves. The following proposition describes exceptional curves. Proof . In the affine case, we note that j 5 X (p) ∈ J 5 \ W 5 and j 6 X (p) ∈ J 6 \ W 6 if an only if κ| X (p) = 0 or µ| X (p) = 0. If κ| X (p) = 0 for more than finite number of points on X , then X is a line, if µ| X (p) = 0 for more than finite number of points on X then it is a parabola (see Proposition 3). From the explicit formulae (46) we see that These rational functions are not identically zero on X if neither ∆ 2 | X = 0 nor y (2) | X = 0, or equivalently X is not a line or a conic. Therefore, if X is not a line or a conic, it is {K A , T A }regular. In the projective case, we note that j 7 X (p) ∈ J 7 \ W 7 and j 8 X (p) ∈ J 8 \ W 8 if an only if κ| X (p) = 0 or µ α | X (p) = 0. If µ α | X (p) = 0 for more than finite number of points on X then X is a conic (see Proposition 3). From the explicit formulae (47) and (48), we see that ∂K P ∂y (7) p (7) = 0 and ∂T P ∂y (8) p (8) = 0 for all p (7) ∈ W 7 and all p (8) ∈ W 8 . Therefore, if an algebraic curve is not a line or a conic it is {K P , T P }-regular. Remark 6. It is well known (and easy to prove) that the set of all lines constitutes a single equivalence class (an orbit) under both A(2, R)-action and PGL(3, R)-action. Under the A(2, R)action the set of all conics splits into three orbits: the set of all parabolas, the set of all hyperbolas and the set of all ellipses, although under the A(2, C)-action the set of all hyperbolas and ellipses comprise a single orbit. All conics constitute a single orbit under PGL(3, R)-action, see [3,Section II.5]. Therefore an I A -exceptional algebraic curve is not A(2, C)-equivalent to a non I Aexceptional algebraic curve and an I PGL -exceptional algebraic curve is not PGL(3, C)-equivalent to a non I PGL -exceptional algebraic curve. The projective and the affine equivalence problems for exceptional curves can be easily solved using the above remark and the following proposition. Proposition 3. Let X be an irreducible planar algebraic curve. Then (1) X is a line ⇐⇒ κ| X = 0; (2) X is a parabola ⇐⇒ µ| X = 0; (3) X is a conic ⇐⇒ µ α | X = 0, where "= 0" means that a corresponding rational function is zero at every point of X . The above statements are true for both real and complex algebraic curves. If X is a real algebraic curve, then it is a hyperbola if and only if µ| X is a negative constant, while X is an ellipse if and only if µ| X is a positive constant. The proof of part (1) of Proposition 3 follows immediately from (22). Proofs of the other statements can be found in [18,Section 7.3]. The following corollary is obtained from Proposition 3 using explicit formulae for equi-affine invariants where ∆ 1 and ∆ 2 are given by (25) and (26). Corollary 3. Let X be an irreducible planar algebraic curve. Assume that X is not a vertical line. Let ∆ 1 and ∆ 2 be given by (25) and (26). 
Then (1) The restrictions ∆ 1 | X and ∆ 2 | X are rational functions on X . (2) ∆ 1 | X is a zero function if and only if X is a line or a parabola. Otherwise, restrictions of A(2)-invariants K A | X and T A | X are rational functions of X . 13 (3) ∆ 2 | X is a zero function if and only if X is a line or a conic. Otherwise K P | X and T P | X are rational functions on X . Signatures of rational curves In this section, we adapt the signature constructions to rational algebraic curves and give examples of solving the affine and projective equivalence problems using signatures. We adapt Definition 9 to rational curves as follows. Let X is a rational curve parameterized by γ(t) = (x(t), y(t)), such that x(t) is not a constant function 14 . Make a recursive definition of the following rational functions of t: 13 KA|X and TA|X are defined and are both zero functions when X is either an ellipse or a hyperbola. We know, however, that ellipses and hyperbolas are not A(2, R)-equivalent. There is no contradiction with Theorem 3, because ellipses and hyperbolas are IA-exceptional per Definition 10. 14 Equivalently, X is not a vertical line. where˙denotes the derivative with respect to the parameter. Let Φ be a rational differential function. Then the restriction of Φ| γ is computed by substituting (27) into Φ. If defined, Φ| γ is a rational function of t. Recalling Definition 11 of signature and Corollary 3, we conclude that: Proposition 4. Let X be an irreducible planar algebraic curve parameterized by a rational map γ(t). Assume where ∆ 2 is given by (26). Then The signatures can be computed either using inductive formulae for invariants given by (23) and (24), or explicit formulae given by (46) The projective signature of X 1 is parameterized by invariants while the signature of X 2 is parameterized by invariants (s + 1) 3 (s 6 + 6s 5 + 15s 4 + 19s 3 + 12s 2 + 3s + 1) 3 (s 2 + 3s + 3) 8 s 8 , Although it is not obvious, the curves defined by parameterizations (31) and (32) satisfy the same implicit equation This is a sufficient condition for the equality of signatures S X 1 and S X 2 over complex numbers, but it is not sufficient over reals. We can look for a real rational reparameterization t = φ(s) by solving a system of two equations K P | α (t) = K P | β (s) and T P | α (t) = T P | β (s) for t in terms of s. One can check that t = s + 1 provides a desired reparameterization. Thus S X 1 = S X 2 and hence, by Theorem 3 Reparameterization t = s + 1 allows us to find pairs of points on X 1 and X 2 which can be transformed to each other by PGL(3)-transformation that brings X 1 to X 2 . Since four of such pairs in generic position uniquely determines a transformation we can compute that X 2 can be transformed to X 1 by a transformation It turns out that cubic X 3 has constant PGL(3)-invariants and therefore its signature degenerates to a point. Thus, by Theorem 3, X 3 is not PGL(3)equivalent to either X 1 or X 2 . To underscore the difference between the solution of the equivalence problems over real and over complex numbers, we will consider one more cubic X 4 pictured on Fig. 3, whose rational parameterization is given by The signature of X 4 is parameterized by invariants Invariants K P | δ and T P | δ satisfy the implicit equation (33). Since the signatures of X 1 and X 2 satisfy the same implicit equation, we can conclude that, over the complex numbers, X 4 is projectively equivalent to both X 1 and X 2 . 
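The recursive definition referred to above (formula (27)) is garbled in this extraction; it is the standard chain-rule recursion y^(1) = ẏ(t)/ẋ(t), y^(k+1) = (d/dt y^(k))/ẋ(t). The sketch below applies this restriction to the Euclidean curvature κ = y^(2)/(1 + (y^(1))^2)^{3/2} of the parabola (t, t^2) purely as a sanity check; κ itself is not one of the rational invariants K_A, T_A, but it is the classical building block used in Section 4.3.

```python
import sympy as sp

t = sp.symbols('t')

def jets_on_rational_curve(x_t, y_t, order):
    """Restriction of the jet coordinates to gamma(t) = (x(t), y(t)):
    y^(1) = y'(t)/x'(t) and y^(k+1) = (d/dt y^(k)) / x'(t)."""
    xdot = sp.diff(x_t, t)
    jets = [y_t, sp.cancel(sp.diff(y_t, t) / xdot)]
    for _ in range(order - 1):
        jets.append(sp.cancel(sp.diff(jets[-1], t) / xdot))
    return jets                      # [y, y^(1), ..., y^(order)]

# Sanity check on the parabola (t, t^2): y^(1) = 2t, y^(2) = 2, and the
# Euclidean curvature is 2 / (1 + 4t^2)^(3/2).
y0, y1, y2 = jets_on_rational_curve(t, t**2, 2)
kappa = y2 / (1 + y1**2)**sp.Rational(3, 2)
print(y1, y2, sp.simplify(kappa))
```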
In fact, we can find that for (where we are free to choose any of the three cubic roots) the complex projective transformation transforms X 4 to X 2 . Our attempt to solve T P | δ (u) = T P | β (s) and T P | δ (u) = T P | β (s) for u in terms of s gives a rather involved rational complex reparameterization that transforms signature map of δ into the signature map of β, but no real reparameterization. Therefore Example 2 (A(2)-equivalence problems). We can again consider three cubics pictured on Fig. 2, but now ask if they are A(2)-equivalent. Recalling that A(2) is a subgroup of PGL(3) we can immediately conclude from the previous example that X 3 is not A(2)-equivalent to either X 1 or X 2 . To resolve the equivalence problem for X 1 and X 2 we need to compute their affine signatures. The affine signatures of X 1 is parameterized by invariants It turns out, that restrictions of both invariants, K A | β (s) and T A | β (s), are non-constant functions of s. Hence, X 1 and X 2 have different affine signatures. Therefore In fact, affine signatures for all four curves X 1 , X 2 , X 3 and X 4 have different implicit equations and therefore no two of them are affine equivalent neither over real numbers, nor over complex numbers. Algorithm and examples The algorithms for solving projection problems are based on a combination of the projection criteria of Section 3 and the group equivalence criterion of Section 4. Central projections The following algorithm is based on the central projection criterion stated in Theorem 1 and the group-equivalence criterion stated in Theorem 3. In the algorithm, we compute restrictions of differential functions ∆ 2 , K P and T P to a curve parameterized by γ(t) and to a family of curves parameterized by (c, s), where c = (c 1 , c 2 , c 3 ) determines a member of the family and s serves to parameterize a curve in the family. These restrictions are computed by substitution of (27) into formula (26) for ∆ 2 and into (47) and (48) for K P and T P , respectively. When the restrictions to (c, s) are computed, derivatives in (27) are taken with respect to s. One can use general real quantifier elimination packages, such as Reduce package in Mathematica, to perform the steps involving real quantifier elimination problems. To make an efficient implementation, one needs to take into account specifics of the problems at hand. This lies outside of the scope of the current paper and is a subject of our future work. Output: The truth of the statement: ∈ CP, such that X = P (Z). Steps: 1. [If X is a line, then determine whether Z is coplanar.] 6. [Compute the rational invariants.] compute K P | γ , K P | , T P | γ and T P | using the formulae (47), (48) in Appendix A. 7. [Determine whether, for some c, the signature of the Zariski closureZ c of the curve parameterized by (c, s) equals to the signature of X .] if K P | γ is a constant rational function, then return the truth of the statement else return the truth of the statement ∃ c ∈ R 3 : c is generic ∧ K P | is not a constant rational function, where we define Proof of Algorithm 1. On the first step of Algorithm 1, we consider the case when X is a line. Then Z can be projected to X if and only if Z is coplanar. Both conditions can be checked by computing determinants of certain matrices. If X is not a line we define, on Step 2, a rational map that parameterizes a family of curves. On Step 3, we compute restrictions of differential function ∆ 2 to γ(t) and (c, s). 
We remind the reader, that in the latter case derivatives are taken with respect to s. Since on this step we know that X is not a line and there are values of c for whichZ c is not a line, these restrictions are defined. On Step 4, we consider the case when X is a conic. Equivalently, by Corollary 3, 0. Then Z can be projected to X if and only if ∃ c such that the Zariski closureZ c of the curve parameterized by (c, s) is projectively equivalent to X and therefore is a conic (see Remark 6). Equivalently, 0. If X is not a conic, we reach Step 5, where we check for a possibility thatZ c is a conic for all parameters c (equivalently ∆ 2 | = R(c,s) 0) and therefore it can not be projected to X , which at this step is known to have a higher degree. If that is not the case, we proceed to Step 6, where we compute restrictions of differential invariants K P and T P to γ(t) and (c, s). Since on this step we know that X is of degree greater than 2 and there are values of c for whichZ c is of degree greater than 2, these restrictions are defined. On Step 7, where we know that X is non-exceptional and decide if there exists c ∈ R 3 such that: (1)Z c is non-exceptional which is equivalent to condition (39); (2) the signatures of the algebraic curves X andZ c are the same. On Step 6, we computed rational functions K P | (c, s) and T P | (c, s). We need to show that if we substitute a specific value c = c 0 ∈ R 3 into these functions we obtain the same rational functions of s as we would obtain by computing restrictions of the invariants to the curve parameterized by (c 0 , s). For generic values of c defined by (39), we can show that this is true. Indeed, it is well known that taking derivatives with respect to one of the variables and specialization of other variables are commutative operations. From condition (39) it follows that the curveZ c is not a line and so the denominators of (27) are not annihilated by such specialization. Therefore the restriction of jet variables to a curve parameterized by (c, s) commutes with a specialization of c. From condition (39) it also follows that the denominators of (47) and (48) are not annihilated by a generic specialization. Therefore, for a generic c 0 , rational functions K P | (c 0 , s) and T P | (c 0 , s) equal to the restrictions of the invariants K P and T P to (c 0 , s). To decide equality of signatures of the algebraic curve parameterized by γ(t) and the curveZ c we use Corollary 2 with (37) analyzing the case of constant invariant K P | X and (38) the case of non-constant invariant K P | X . Remark 7 (reconstruction). If the output is true, then, in many cases, in addition to establishing the existence of c 1 , c 2 , c 3 in Step 4 or 7 of the Algorithm 1, we can find at least one of such triplets explicitly. We then know that Z can be projected to X by a projection centered at (−c 1 , −c 2 , −c 3 ). We can also, in many cases, determine explicitly a transformation [A] ∈ PGL(3) that maps X to the Zariski closureZ c of the image of the map (c, s). We then know that Z can be projected to X by the projection projects to any of the four cubics planar cubics described in Example 1. We start with cubics X 1 , X 2 pictured on Fig. 2, whose parameterizations are given by (28) and (29), respectively. Since these two cubics are PGL(3)-equivalent then the twisted cubic can and X 2 , centered at (−1, 0, 0), there is a complex projection centered at (−1, 0, 0) from Z to X 4 . 
We also established, in Example 1 that and, therefore, there is no real projection centered at (−1, 0, 0) from Z to X 4 which, as we have seen, does not preclude the existence of a real projection with a different center (−1, −1, 0). Finally we consider X 3 , pictured on Fig. 2 with parameterization (30). From Example 1 we know that invariants for X 3 are constants, see (35). Following Algorithm 1, we need to decide whether there exists c ∈ R 3 , such that (c, s) does not parameterize a line or a conic and This is, indeed, true for c 1 = c 2 = 0 and c 3 = 1. This is sufficient to conclude the existence of a real projection. We can check that Z can be projected to X 3 by the a central projection x = z 1 z 3 +1 , y = z 2 z 3 +1 . The above example underscores Remark 1: although the twisted cubic can be projected to each of the planar curves X 1 , X 2 , X 3 and X 4 , the planar curve X 3 is not PGL(3, C)-equivalent to X 1 , or X 2 , or X 4 . Also X 4 is not PGL(3, R)-equivalent to X 1 or X 2 . Since all conics are PGL(3)-equivalent, we established that the twisted cubic can be projected to any conic. Moreover, we established that the twisted cubic is projected to a conic if and only if the center of the projection lies on the twisted cubic. So far in all our examples the outcome of the projection algorithm was true. Below is an example with false outcome. Example 5. We will show that the twisted cubic (40) can not be projected to the quintic ω(t) = (t, t 5 ). The signature of the quintic is parameterized by a constant map K P | ω (t) = 1029 128 and T P | ω (t) = 0, ∀ t. Following Algorithm 1, we need to decide whether there exists c ∈ R 3 , such that (c, s) does not parameterize a line or a conic and Substitution of several values of s in the above equation yields a system of polynomial equations for c 1 , c 2 , c 3 ∈ R that has no solutions. We conclude that there is no central projection from Z to ω(t) = (t, t 5 ). This outcome is, of course, expected, because a cubic can not be projected to a curve of degree higher than 3. Parallel projections The algorithm for parallel projections is based on the reduced parallel projection criterion stated in Corollary 1. This algorithm follows the same logic but has more steps than Algorithm 1, because we need to decide whether a given planar curve is A(2)-equivalent to a curve parameterized by α(s) = (z 2 (s), z 3 (s)), or to a curve parameterized by β(b, s) = (z 1 (s) + bz 2 (s), z 3 (s)) for some b ∈ R, or to a curve parameterized by δ(a 1 , a 2 , s) = (z 1 (s) + a 1 z 3 (s), z 2 + a 2 z 3 (s)) for some a = (a 1 , a 2 ) ∈ R 2 . Since the affine transformations are considered, projective invariants are replaced with affine invariants (see (46)). Due to its similarity to Algorithm 1, we refrain from writing out the steps of the parallel projection algorithm and content ourselves with presenting examples. A Maple implementation of the parallel projection algorithm over complex numbers is included in [7]. Example 6. As a follow-up to Example 3, it is natural to ask whether the twisted cubic can be projected to any of the cubics considered in that example by a parallel projection. Our implementation of the parallel projection algorithms [7] provides a negative answer to this question, the twisted cubic can not be projected to X 1 , or X 2 or X 3 , or X 4 under a parallel projection even over complex numbers. There are plenty of rational cubics to which the twisted cubic can be projected by a parallel projection. 
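Before continuing with the parallel-projection examples, the claim above, that the twisted cubic projects to a conic exactly when the center of the projection lies on the cubic, can be spot-checked with sympy. The parameterization (t, t^2, t^3) of the twisted cubic is an assumption (equation (40) is not reproduced here), the projection onto the plane z3 = 0 through a center on the curve is just one convenient central projection with that center, and the second check reuses the projection x = z1/(z3 + 1), y = z2/(z3 + 1) stated in the text.

```python
import sympy as sp

t, w, x, y = sp.symbols('t w x y')
Z = (t, t**2, t**3)   # assumed parameterization of the twisted cubic

def implicitize(px, py):
    """Eliminate t from (x, y) = (px(t), py(t)) with a Groebner basis; the
    auxiliary variable w excludes the zero locus of the denominators."""
    nx, dx = sp.fraction(sp.cancel(px))
    ny, dy = sp.fraction(sp.cancel(py))
    G = sp.groebner([sp.expand(nx - x*dx), sp.expand(ny - y*dy),
                     sp.expand(1 - w*dx*dy)], t, w, x, y, order='lex')
    return [g for g in G.exprs if not ({t, w} & g.free_symbols)][0]

def project_from(center, z):
    """Project the point z from `center` onto the plane z3 = 0, with image
    coordinates (z1, z2)."""
    lam = center[2] / (center[2] - z[2])
    return (center[0] + lam*(z[0] - center[0]),
            center[1] + lam*(z[1] - center[1]))

# Center on the twisted cubic (its point at parameter value 1): image is a conic.
F_on = implicitize(*project_from((1, 1, 1), Z))
print(sp.total_degree(F_on, x, y), sp.factor(F_on))     # 2, a conic

# Center off the curve: the projection x = z1/(z3+1), y = z2/(z3+1) from the text.
F_off = implicitize(Z[0]/(Z[2] + 1), Z[1]/(Z[2] + 1))
print(sp.total_degree(F_off, x, y), sp.factor(F_off))   # 3, a cubic
```

The first image has implicit degree 2 and the second has degree 3, in line with the discussion above; we now return to parallel projections.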
For example, the orthogonal projection to the z 1 , z 3 -plane projects the twisted cubic to (s 3 , s). Example 7. As a follow-up to Example 4, we consider the problem of the parallel projection of the twisted cubic to a conic. To answer this question we first consider a curve α(s) = (z 2 (s), z 3 (s)) = (s 2 , s), which is a parabola, and therefore the twisted cubic can be projected to any parabola. We then define a one-parametric family of curves β(b, s) = z 1 (s) + bz 2 (s), z 3 (s) = s 3 + bs 2 , s and a two-parametric family δ(a, s) = z 1 (s) + a 1 z 3 (s), z 2 + a 2 z 3 (s) = s 3 + a 1 s, s 2 + a 2 s . Obviously, there are no parameters a or b such that a curve in those two families becomes a conic (we can check this formally by computing rational functions ∆ 2 | α and ∆ 2 | β and seeing that there are no values of a or b that will make them zero functions of s). Thus we conclude that the twisted cubic can not be projected to either a hyperbola or an ellipse by a parallel projection. For a less obvious example and to finally get away from the twisted cubic we consider the following: Example 8. We would like to decide whether the spatial curve Z parameterized by Γ(s) = s 4 , s 2 , s , s ∈ R can be projected to X parameterized by The signature of X is parameterized by invariants K A | γ (t) = −1600 (24t 5 + 51t 4 + 57t 3 + 33t 2 + 9t + 1) 2 (24t 3 + 32t 2 + 24t + 5) 3 , T A | γ (t) = −40 448t 7 + 1304t 6 + 1956t 5 + 1735t 4 + 915t 3 + 287t 2 + 51t + 4 (24t 3 + 32t 2 + 24t + 5) 2 . is not A(2)-exceptional and its invariants are given by Independently of the value of b all curves in the family have the same signature equation which is different from the implicit equation for the signature for X , and therefore the curves from this family are not A(2)-equivalent to X . We finally consider a two-parametric family δ(a, s) = z 1 (s) + a 1 z 3 (s), z 2 + a 2 z 3 (s) = s 4 + a 1 s, s 2 + a 2 s and find out that for a 1 = 20 and a 2 = 2 the implicit equations of the signatures of X and the curve parameterized by δ(a, s) are the same. Thus we conclude that Z projects to X by a parallel projection over the complex numbers. By solving equations K A | δ (20, 2, s) = K A | γ (t) and T A | δ (20, 2, s) = T A | γ (t) for s in terms of t, we find a real reparameterization s = 1 + 4t which matches the signature maps of two curves, i.e. S| γ (t) = S| δ (20, 2, 1 + 4 t). Thus the signatures of X and the curve parameterized by δ(20, 2, s) are identical and, therefore, Z projects to X by a parallel projection over the real numbers. We proceed to find a projection. Since S| γ (t) = S| δ (20, 2, 1 + 4t), we not only know that the exists A(2)-transformation A that maps γ(t) to δ 20, 2, (1 + 4t) , but that for any value of t the transformation A maps the point γ(t) to the point δ(20, 2, (1 + 4t). Three pairs of points in general position are sufficient to recover an affine transformation. Using three pairs of points corresponding to t = 0, 1, 2 we find the affine transformation x = 256x + 96y + 21,ȳ = 16y + 3 On Step 2 of Algorithm 1, where we are describing the family of curvesZ c 1 ,c 2 ,c 3 , we must produce all three possible implicit equations A c (x, y) = 0, when c 3 3 − c 1 = 0 and c 2 3 + c 2 = 0, B c (x, y) = 0, when c 3 3 − c 1 = 0, but c 2 3 + c 2 = 0 and C c (x, y) = 0, when c 3 3 − c 1 = 0 and c 2 3 + c 2 = 0. Then the rest of the algorithm should run for each of these cases with appropriate conditions on c. 
We found that, for the majority of the examples, producing the set of all possible implicit equations for the curves Z_{c_1, c_2, c_3} (3) from the given implicit equations of an algebraic curve Z ⊂ R^3 is a very challenging computational task.

Projection problem for finite lists of points

The projection criterion of Theorem 1 adapts to finite lists of points as stated in Theorem 5. The proof of Theorem 5 is a straightforward adaptation of the proof of Theorem 1. The parallel projection criteria for curves, given in Theorem 2 and Corollary 1, are adapted to finite lists in an analogous way. The central and the parallel projection problems for lists of m points are therefore reduced to a modification of the problems of equivalence of two lists of m points in PR^2 under the action of the PGL(3) and A(2) groups, respectively. A separating set of invariants for lists of m points in PR^2 under the A(2)-action consists of ratios of certain areas and is listed, for instance, in Theorem 3.5 of [27]. Similarly, a separating set of invariants for lists of m ordered points in PR^2 under the PGL(3)-action consists of cross-ratios of certain areas and is listed, for instance, in Theorem 3.10 of [27]. In the case of central projections we, therefore, obtain a system of polynomial equations in c_1, c_2 and c_3 that has solutions if and only if the given set Z projects to the given set X, and an analog of Algorithm 1 follows. The parallel projections are treated in a similar way. Details of this adaptation appear in the dissertation [6]. We note, however, that there are other computationally efficient solutions of the projection problem for lists of points. In their book [20], Hartley and Zisserman describe algorithms that are based on a straightforward approach: one writes a system of equations that relates pairs of the corresponding points in the lists Z ⊂ R^3 and X ⊂ R^2 and determines whether this system has a solution. The book also describes algorithms for finding parameters of the camera that produce an optimal (under various criteria) but not exact match between the object and the image. In [1, 2], the authors present a solution to the problem of deciding whether or not there exists a parallel projection of a list Z = (z_1, . . . , z_m) of m points in R^3 to a list X = (x_1, . . . , x_m) of m points in R^2, without finding a projection explicitly. They identify the lists Z and X with elements of certain Grassmannian spaces and use the Plücker embedding of Grassmannians into projective spaces to explicitly define the algebraic variety that characterizes pairs of sets related by a parallel projection. They also define an object/image distance between lists of points Z ⊂ R^3 and X ⊂ R^2, such that the distance is zero if and only if there exists a parallel projection that maps Z to X. As illustrated by Fig. 7, a solution of the projection problem for lists of points does not provide an immediate solution to the discretization of the projection problem for curves. Indeed, let Z = (z_1, . . . , z_m) be a discrete sampling of a spatial curve Z and X = (x_1, . . . , x_m) be a discrete sampling of a planar curve X. It might be impossible to project the list Z onto X, even when the curve Z can be projected to the curve X.

Applications: challenges and ideas

A discretization of projection algorithms for curves will pave the way toward real-life applications and is a topic of our future research. Such algorithms may utilize invariant numerical approximations of differential invariants presented in [5, 9].
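For parallel projections, the "straightforward" system of equations mentioned above is linear in the unknown camera, so existence can be tested directly. The following Python sketch is an illustration of that approach, not the Grassmannian object/image construction of [1, 2]: it fits an affine camera x_i = P z_i + t by least squares and reports that a parallel projection exists when the residual vanishes. The sample data assumes the twisted cubic parameterized as z(s) = (s^3, s^2, s) and its image (s^3, s) from Example 6.

import numpy as np

def parallel_projection_exists(Z, X, tol=1e-9):
    # Decide whether an affine (parallel) camera maps the ordered list Z (m x 3)
    # onto the ordered list X (m x 2), i.e. whether x_i = P z_i + t for some 2x3 P and t.
    Z = np.asarray(Z, dtype=float)
    X = np.asarray(X, dtype=float)
    m = Z.shape[0]
    A = np.hstack([Z, np.ones((m, 1))])           # unknowns: the rows of [P | t]
    sol, *_ = np.linalg.lstsq(A, X, rcond=None)   # least-squares solution, 4 x 2
    residual = np.linalg.norm(A @ sol - X)
    P, t = sol[:3].T, sol[3]
    return residual < tol, P, t

# Sample the twisted cubic and its orthogonal projection to the z1, z3-plane (Example 6).
s = np.linspace(-1.0, 1.0, 7)
Z = np.stack([s**3, s**2, s], axis=1)
X = np.stack([s**3, s], axis=1)
print(parallel_projection_exists(Z, X)[0])        # expected: True

Note that this tests exact consistency only; [20] also treats the noisy case, where one seeks an optimal rather than exact match between object and image.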
Differential invariants and their approximations are highly sensitive to image perturbations and, therefore, pre-smoothing of the data is required to use them. Since affine and projective invariants involve high order derivatives, this approach may not be practical. Other types of invariants, such as semi-differential (or joint) invariants [27,32], integral invariants [16,19,29] and moment invariants [31,33] are less sensitive to image perturbations and may be employed to solve the group-equivalence problem. One of the essential contributions of [1,2] is the definition of an object/image distance between ordered sets of m points in R 3 and R 2 , such that the distance is zero if and only if these sets are related by a projection. Since, in practice, we are given only an approximate position of points, a "good" object/image distance provides a tool for deciding whether a given set of points in R 2 is a good approximation of a projection of a given set of points in R 3 . Defining such object/image distance in the case of curves is an important direction of further research.
16,681
0001-01-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
Coarse-Grained Modeling of On-Surface Self-Assembly of Mixtures Comprising Di-Substituted Polyphenyl-Like Compounds and Metal Atoms of Different Sizes We use coarse-grained molecular dynamics simulations to investigate the phase behavior of binary mixtures of di-substituted polyphenyl-like compounds and metal atoms of different sizes. We have estimated the possible on-surface behavior that could be useful for the target design of particular ordered networks. We have found that due to the variation of system conditions, we can observe the formation of the parallel, square, and triangular networks, Archimedean tessellation, and “spaghetti wires.” All of these structures have been characterized by various order parameters. INTRODUCTION Fabrication of two-dimensional materials attracts considerable attention, owing to their possibility to exhibit different features from their bulk counterparts. This field has begun with the discovery of graphene and characterization of its properties, especially in the electronic field. 1 From this date, a variety of different two-dimensional (2D) materials have been synthesized, and two main routes have been established. The first one is a top-down approach that benefits from the general knowledge of the three-dimensional (3D) materials such as covalent or metal-organic frameworks (COFs and MOFs, respectively) and is supposed to exfoliate a layered crystal due to applied external forces to form a single layer of the smallest thickness as possible. The second protocol is a bottom-up approach, which can be applied on the surfaces such as highly oriented pyrolytic graphite (HOPG) or coinage metals (Au, Ag, Cu) or in the air/water or liquid/liquid interfaces. The obtained single nanolayers have already been used as membranes for separation in both liquid and gas phases, 2 batteries, 3 molecular sieves, 4 and insulin delivery. 5 The on-surface synthesis performed either in ultrahigh vacuum or liquid conditions generally has proven to be the successful and most conventional routine for the preparation of well-ordered networks. To date, a variety of compounds of different geometry have been investigated, and it has been found that they can form small clusters 6 up to extended porous structures, 7 as well as Kagomépatterns, 8,9 rhombus tilings, 10 and five-vertex Archimedean tessellations. 11,12 The latter is particularly interesting since it has been obtained in a mixture of dicarbonitrile polyphenyl compounds with rare-earth metal atoms, whereas the first reports of such structures appeared in alloy particles 13 or chalcogenides. 14 In this paper, we wanted to further explore the conditions on how di-substituted polyphenyl-like (linker) molecules behave with mixtures of metal atoms. Unlike in the references, 11,12 we have changed not only the mixture concentration but also the metal atom sizes. For this purpose, we have designed a coarsegrained model and performed comprehensive molecular dynamics simulations. We believe that the protocol used in the course of this study can give a very helpful insight for the experimentalists owing to the fact that computer modeling is a very convenient substitute to the exploration of problems of interest and can reasonably complement experimental findings. Although there are other methods that have been widely used such as quantum density functional theory 15 METHODS In this paper, the geometry of the linear linker molecules has been devoted to reflecting the behavior of di-substituted polyphenyl compounds, as shown in Figure 1. 
In its structure, each of the gray segments mimicked one phenyl group, whereas red entities were the active interaction centers. The size of every linker's segment has been set to σ l = σ, while the active sites have been five times smaller, σ a = 0.2σ l . The segments in the former have been tangentially jointed with one another; therefore, the bonding distance has been set to σ l . The active sites have been entirely embedded into both terminal units of the linear linker, and the bonding distance has been abbreviated as d = 0.36σ l . In our previous paper, we have already shown that both the size and the bonding distance d can provide structures of 3-to 6-fold symmetries. 22 This approach, however, lacks the possibility to change the concentration of the mixtures since the second component has been treated implicitly. Therefore, we wanted to fill this gap, and metal atoms in our simulations have been treated explicitly, and their size varied between σ m = 0.5 − 1.0σ l . In molecular dynamics simulations, all of the objects have been treated as flat and rigid objects, and all of the necessary bonds have been maintained by harmonic binding potentials The interparticle potential employed in our simulations was (12,6) Lennard-Jones potential, which has been appropriately shifted to ensure the continuity of both the potential and of its first derivative 25 (5) where U LJ (r) = 4ε ij [(σ ij /r) 12 − (σ ij /r) 6 ] and U′ LJ (r cut ) is the first derivative of U LJ (r) at r = r cut . The Lennard-Jones potential parameters, σ l = σ and ε ll = ε, have been set to be the units of length and energy, respectively. The reduced time and temperature are equal to The energies of the linker−linker and the linker−active site interactions have been set to ε ll = ε aa = ε and ε ma = 5.0ε. The linker-site diameter and the energy of the linker-site interactions have been set to σ al = (σ a + σ l ) /2 and ε al = ε, respectively. The cutoff distance of the interactions between the active site and the metal atom has been set to r cut,ma = 2σ ma , whereas the remaining ones are r cut,ij = σ ij , where ij = aa, al, ll, lm, and mm. This has been done to assume that the only attraction in the system is due to the metal-organic coordination, whereas the remaining are the soft-core interactions. We did not use any solvent explicitly, but rather by means of presented interparticle potential, we modeled the system so that the interactions other than the active site-metal atoms are screened due to the solvent presence. The harmonic potential constants k al ≡ k ll have been set to 1000ε/σ 2 and k θ = 1000ε/(rad) 2 . Such high values of harmonic constants have been set to reduce the range of fluctuations and, in consequence, to maintain the rigidity of the assumed geometries. All of the molecular dynamics simulations have been performed in the NVT ensemble, using LAMMPS simulation package. 27,28 The velocity Verlet integration scheme has been used with the reduced time step of the order of t = 0.001τ. The number of linker molecules and metal atoms varied from 1600 to 8000 and 1600 to 4800, respectively. However, one has to note that the total number of atoms varied, depending on the concentration. This amount is sufficient for most of the selfassembly systems, which is simultaneously large enough to form ordered networks and small enough to form those structures in a reasonable time frame. The simulation scheme involved preliminary runs in the NPT ensemble to establish the desired density. 
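As an illustration of the interaction model described above, the sketch below implements one common force-shifted Lennard-Jones form that is consistent with the description of Eq. (5), in which both the potential and its first derivative go continuously to zero at r_cut. The metal-active-site values used in the example (σ_ma = 0.6σ, r_cut = 1.2σ) assume σ_m = 1.0σ and are illustrative only; ε_ma = 5ε follows the text.

import numpy as np

def u_lj(r, sigma=1.0, eps=1.0):
    # Plain (12,6) Lennard-Jones potential.
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def du_lj(r, sigma=1.0, eps=1.0):
    # First derivative dU_LJ/dr.
    sr6 = (sigma / r) ** 6
    return -24.0 * eps * (2.0 * sr6**2 - sr6) / r

def u_shifted(r, sigma=1.0, eps=1.0, r_cut=1.0):
    # Force-shifted LJ: U and dU/dr both vanish continuously at r_cut.
    # This is a common shifting scheme consistent with the description of Eq. (5);
    # the paper's exact expression is not reproduced here.
    r = np.asarray(r, dtype=float)
    u = (u_lj(r, sigma, eps) - u_lj(r_cut, sigma, eps)
         - (r - r_cut) * du_lj(r_cut, sigma, eps))
    return np.where(r < r_cut, u, 0.0)

r = np.linspace(0.9, 2.0, 5)
print(u_shifted(r, sigma=1.0, eps=1.0, r_cut=1.0))   # linker-linker: soft-core repulsion only
print(u_shifted(r, sigma=0.6, eps=5.0, r_cut=1.2))   # metal-active site (sigma_m = 1.0*sigma assumed)

With r_cut,ij = σ_ij the attractive tail is removed and only soft-core repulsion remains, while r_cut,ma = 2σ_ma keeps the metal-organic attraction, mirroring the interaction scheme described above.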
Next, equilibration runs for 5 × 10 6 times steps using Berendsen thermostat, 29 with the damping constant equal to τ B = 10τ have been performed. Further equilibration for 5 × 10 7 as well as production runs have been performed using Nose−Hoover chain algorithm, 30 with the damping constant equal to τ NH = 10τ and the number of chains set to N chain = 3. Every system has been cooled down from temperatures where we did not observe any order, up to the point where self-assembled networks have been distinct. The temperature grid was set to Δ T* = 0.01. RESULTS AND DISCUSSION Let us start from the description of the binary mixture with metal atoms 2-fold smaller than the diameter of core's segments, i.e., σ m = 0.5σ. The results for the system with an equal amount of linker and metal entities (χ = 0.5) can be found in Figure 2a. One can see the formation of a network with square symmetry with distinct imperfections in its structure. If one increases the number of linker molecules three times (χ = 0.75), the formation of a nearly perfect square lattice can be observed, as it has been shown in Figure 2b. To better understand the development of this network, we wanted to investigate the arrangement of metal atoms. In part (c) of Figure 2, we can see their layout for the mixture composition χ = 0.5. In this case, two atoms tend to glue with one another, despite their soft-core interactions. On the contrary, for χ = 0.75, metal atoms are entirely separated (cf. Figure 2d). To verify if observations from snapshots are correct, we have calculated the radial distribution function with respect to metal atoms, which can be found in the inset to Figure 2d. For the smaller molar fraction χ = 0.5, the most prominent peak is around r ≈ 0.5, which means that those entities are glued one to another. On the other hand, for higher χ = 0.75, this peak almost vanished, and the most prominent distance is around r ≈ 3.5. Moreover, we have computed the number of dimers in both cases, which is approximately 90 and 5% for mixture compositions χ = 0.5 and 0.75, respectively. Another quantity that we used to characterize the formation of a highly ordered, square network was the two-dimensional bond-orientational order parameter (BOOP), calculated with respect to metal atoms, which is defined as 31 where i runs over all metal atoms of the system, j runs over all neighbors of i, ϕ ij denotes the angle between the bond connecting particles i and j and an arbitrary but fixed reference axis, N bond denotes the number of bonds in the system, and k = 2, 3, 4, 5, 6. For the square network, we have assumed that two metal atoms are neighbors if their distance is less than 3.8σ, which is the second minimum extracted from the radial distribution function (cf. inset to Figure 2d). The bondorientational order parameter can take the values between 0 and 1 for the disordered and the ordered structures of a particular symmetry, respectively. To corroborate the observations from snapshots and radial distribution function, we have calculated this parameter for the aforementioned mixture compositions. In the first case, i.e., χ = 0.5, the 2D BOOP is approximately Q 4 = 0.202 ± 0.05, which indicates that there is an order to some extent, however, the presence of imperfections in the network is noticeable, which in consequence decreases its value. On the other hand, for the composition χ = 0.75, this parameter takes a value of Q 4 = 0.915 ± 0.03, which corresponds to a nearly perfect structure of 4-fold symmetry. 
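As an illustration, the sketch below computes the standard global form of the two-dimensional bond-orientational order parameter, Q_k = |(1/N_bond) Σ_bonds exp(i k φ_ij)|, which matches the verbal definition above. The 3.8σ neighbor criterion is taken from the text; the lattice constant in the sanity check is an assumed value, each bond is counted once, and periodic boundaries are ignored.

import numpy as np

def boop(points, k=4, r_neigh=3.8):
    # Global 2D bond-orientational order parameter Q_k.
    # Two particles are neighbors when their distance is below r_neigh.
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    phases = []
    for i in range(n):
        for j in range(i + 1, n):
            d = pts[j] - pts[i]
            r = np.hypot(d[0], d[1])
            if r < r_neigh:
                phi = np.arctan2(d[1], d[0])       # bond angle w.r.t. a fixed reference axis
                phases.append(np.exp(1j * k * phi))
    if not phases:
        return 0.0
    return float(abs(np.mean(phases)))

# Sanity check on a perfect square lattice: Q_4 should be close to 1.
xx, yy = np.meshgrid(np.arange(10, dtype=float), np.arange(10, dtype=float))
square = np.stack([xx.ravel(), yy.ravel()], axis=1) * 3.0   # lattice constant ~3 sigma (assumed)
print(boop(square, k=4, r_neigh=3.8))

On this ideal lattice the printed Q_4 equals 1, while the reported values Q_4 ≈ 0.20 and Q_4 ≈ 0.92 for χ = 0.5 and χ = 0.75 reflect, respectively, a defective and a nearly perfect square network.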
This analysis demonstrates that the increase in the number of linker molecules in the system stabilizes the formation of a square network. We have also examined the mixture composition of χ = 0.25, which means that there are 3-fold more metal atoms than linker molecules. In this case, we have found that the formation of "spaghetti-like" strings (cf. Figure 2e). Similarly, as in the case of χ = 0.5, metal atoms are gluing one to another and are forming "dimers". An increase of the density does not lead to the increase of order in the system, and those strings do not start to align in one direction (cf. Figure 2f). The radial distribution function calculated with respect to metal atoms shown in the inset to Figure 2f shows that the most prominent peak is around r ≈ 0.5, which confirms that metals tend to form dimers. Moreover, we have computed the number of dimers in both cases, which is approximately 63 and 53% for the densities ρ* = 0.2 and 0.5, respectively. Next, we proceed to the description of a binary mixture with metal atoms of size equal to σ m = 0.8σ. The results for χ = 0.5 can be found in Figure 3a. One can see that we are not able to distinguish any network of a particular symmetry. Linker molecules connect with metal atoms quite randomly, and multiple pore shapes can be observed. The arrangement of metal atoms as shown in Figure 3b shows that for this mixture composition, they tend to glue one to another and form dimers, as for smaller metal sizes. As previously, it leads to the disturbance in the formation of any ordered network. The results for the system with mixture composition χ = 0.75 can be found in Figure 3c. In this case also, the formation of multiple pore shapes can be observed; however, this pattern resembles the 3 2 .4.3.4 Archimedean tiling with several visible imperfections. For better visualization, we have colored the particular polygons belonging to this semiregular tessellation. The arrangement of metal atoms, as shown in Figure 3d, shows that they are separated, as it has been observed in a previous case (cf. Figure 2). The radial distribution function inserted to part (d) of this figure corroborates with the observations from the snapshots. Likewise, as for smaller σ m , we have evaluated the average amount of dimers in the system, which is approximately 73% (χ = 0.5) and 4% (χ = 0.75). We conclude that the increase of the number of linker molecules leads to the stabilization of ordered networks of a particular symmetry. Similarly, as for the previous metal size, we have examined the mixture composition of χ = 0.25. The formation of similar spaghetti stripes has been found, as in the case of σ m = 0.5σ. The results have been omitted for the sake of brevity. Let us proceed to the description of a binary mixture with metal atoms of size equal to σ m = 1.0σ. The results for the system with mixture composition χ = 0.5 can be found in Figure 4a. In this case, we can see the formation of a network with both positional and orientational order. To prove the former, we have calculated the two-dimensional structure factor 24 with respect to linker molecules, which can be found in the inset to Figure 4a. Moreover, to demonstrate the orientational order, we have calculated the nematic order parameter 32 with respect to linker molecules, defined as where b α (i) is the α-th coordinate of the unit vector b, specifying the orientation of the molecule i, and δ αβ is the Kronecker delta function. The corresponding eigenvalues of Q are ±S. 
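As an illustration, a minimal sketch of the two-dimensional nematic tensor form consistent with the description above (Q built from the orientation unit vectors b(i), with eigenvalues ±S) is given below; the orientation samples are synthetic and only demonstrate the limiting behaviors.

import numpy as np

def nematic_order(angles):
    # 2D nematic order parameter S from molecular orientation angles (radians).
    # Q_ab = (1/N) * sum_i (2 * b_a(i) * b_b(i) - delta_ab); S is the positive eigenvalue of Q.
    b = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit orientation vectors, N x 2
    Q = 2.0 * (b.T @ b) / len(b) - np.eye(2)
    return float(np.max(np.linalg.eigvalsh(Q)))

rng = np.random.default_rng(0)
aligned = rng.normal(0.3, 0.05, size=2000)        # nearly parallel linkers -> S close to 1
isotropic = rng.uniform(0.0, np.pi, size=2000)    # random orientations      -> S close to 0
print(nematic_order(aligned), nematic_order(isotropic))

The positive eigenvalue S extracted this way is the nematic order parameter discussed next.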
This function takes values between 0 and 1 in disordered and perfectly ordered phases, respectively. In real systems, it is very difficult to reach the value of S equal to 1, owing to the possible imperfections of the ordered structure or rotation of differently oriented domains. One can see that for this structure, the value of this order parameter is around S ≈ 0.95 in the lowest temperatures. This value proves the formation of a highly ordered network of a single orientation. The temperature relation of this quantity also indicates that the structure remains until T* = 0.56. The results for the system with mixture composition χ = 0.75 can be found in Figure 4c. We can see the formation of similar 3 2 .4.3.4 Archimedean tiling, as for smaller metal size. It is noteworthy that this structure has more visible imperfections compared to the previous case. Further increase of linker molecules in relation to metal atoms, i.e., χ = 0.83, not only leads to the formation of 3 2 .4.3.4 Archimedean tiling but also a network with triangular symmetry can be observed. An increase in the density of the system shows that the semiregular tessellation vanished, and the latter structure is only present. The BOOP for this network takes high values and is around Q 6 = 0.96. This indicates that the Archimedean tessellation in this system is not a stable structure, and the formation of a triangular network is favored. However, it is worth mentioning that the same situation can be observed in experiments, where the formation of various different patterns can be observed, and the determination of which of them is thermodynamically stable is not so trivial. Finally, we proceed to the examination of the system with the mixture composition of χ = 0. 25. Surprisingly, we do not observe the formation of spaghetti wires, as for previous cases, but the parallel network remains. The only effect that the increase of the number of metal atoms caused is that there are two differently ordered domains in the system. In this case, due to the observation of two differently oriented domains, the nematic order parameter takes values around S ≈ 0.55. However, it is worth mentioning that if one would compute this quantity separately for each of those clusters, the situation would reflect the one observed in Figure 4a. The corresponding snapshots have been omitted for the sake of brevity. CONCLUSIONS In this paper, we have investigated the phase behavior of binary mixtures of di-substituted polyphenyl-like molecules and metal atoms. We considered the influence of metal atoms' size and the mixture composition of the self-assembly behavior. To deepen our discussion, we summarize the results in a more systematic way. In Figure 5, we present the overview of the structures observed for the systems with different metal atom sizes σ m and mixture compositions χ. We have found that for σ m = 0.5σ, depending on the mixture composition χ, the formation of two distinct networks can occur, which are spaghetti wires (SW) (cf. Figure 2e,f) and a nearly perfect square network (SN2) (cf. Figure 2b). The imperfect square structure (SN1) (cf. Figure 2a) is quite similar to SN2, but due to the concentration χ, the metal atoms form dimers, which in consequence, result in the deterioration of the formed square lattice. We conclude that the mixture composition below a certain amount of linker molecules enforces gluing metal atoms with one another, which may lead to a bigger amount of possible orientations on how linker molecules can interact with them. 
This corroborates with the observation that a nearly perfect square network SN1 is formed in higher linker concentrations due to the separation of metal atoms. An increase of metal size to σ m = 0.8σ leads to the formation of similar spaghetti wires as for σ = 0.5σ; however, the ordered network is completely different. We have observed the development of 3 2 .4.3.4 Archimedean tessellation (AT1) for the mixture concentrations of χ = 0.75 and above (cf. Figure 3c). Similarly, as for the previous case, the formation of the ordered network was only possible if the mixture composition enforced the separation of metal atoms. For the further increase to σ m = 1.0σ, we observe the occurrence of a parallel network (PN) for mixture composition of χ = 0.25 and 0.5 (cf. Figure 4a). In higher concentrations of linker molecules, we can see two different types of ordered structures. The first one is similar to 3 2 .4.3.4 Archimedean tessellation; however, we observe significant imperfections in its structure (AT2) (cf. Figure 4c), and the second is a nearly perfect triangular lattice (TN) (cf. Figure 4f). The general conclusions which can be extracted from our simulations are as follows: (i) the increase of metal atom size, σ m , changes its maximum coordination number due to the geometric effects. (ii) the mixture composition can "change" the maximum coordination number of the metal atom owing to the possibility of soft-interactive "gluing" with one another. This, in consequence, leads to the deterioration of the observed ordered structures. Based on our observations, we can estimate the possible onsurface behavior of di-substituted polyphenyl-like compounds with metal atoms in different conditions. We have shown the possible paths on how molecules can assemble. We believe that those findings can be very useful for experimentalists to design future experimental conditions for a target development of particular networks of interest.
4,622.8
2021-09-21T00:00:00.000
[ "Materials Science" ]
5G Network Slicing: Methods to Support Blockchain and Reinforcement Learning With the advent of the 5G era, due to the limited network resources and methods before, it cannot be guaranteed that all services can be carried out. In the 5G era, network services are not limited to mobile phones and computers but support the normal operation of equipment in all walks of life. There are more and more scenarios and more and more complex scenarios, and more convenient and fast methods are needed to assist network services. In order to better perform network offloading of the business, make the business more refined, and assist the better development of 5G network technology, this article proposes 5G network slicing: methods to support blockchain and reinforcement learning, aiming to improve the efficiency of network services. The research results of the article show the following: (1) In the model testing stage, the research results on the variation of the delay with the number of slices show that the delay increases with the increase of the number of slices, but the blockchain + reinforcement learning method has the lowest delay. The minimum delay can be maintained. When the number of slices is 3, the delay is 155 ms. (2) The comparison of the latency of different types of slices shows that the latency of 5G network slicing is lower than that of 4G, 3G, and 2G network slicing, and the minimum latency of 5G network slicing using blockchain and reinforcement learning is only 15 ms. (3) In the detection of system reliability, reliability decreases as the number of users increases because reliability is related to time delay. The greater the transmission delay, the lower the reliability. The reliability of supporting blockchain + reinforcement learning method is the highest, with a reliability of 0.95. (4) Through the resource utilization experiment of different slices, it can be known that the method of blockchain + reinforcement learning has the highest resource utilization. The resource utilization rate of the four slices under the blockchain + reinforcement learning method is all above 0.8 and the highest is 1. (5) Through the simulation test of the experiment, the results show that the average receiving throughput of video stream 1 is higher than that of video stream 2, IOT devices and mobile devices, and the average cumulative receiving throughput under the blockchain + reinforcement learning method. The highest is 1450 kbps. The average QOE of video stream 1 is higher than that of video stream 2, IOT devices and mobile devices, and the average QOE is the highest under the blockchain + reinforcement learning method, reaching 0.83. Introduction Relieving users' network congestion, reducing network latency, and offloading the network are the top priorities for 5G networks. As a core technology, the 5G network slicing technology can effectively solve the challenges of business creation and exclusive network access for different users, as well as the coexistence of multiple application scenarios. e 5G network is expected to meet the different needs of users [1]. 5G network slicing may be a natural solution [2]. A wide range of services required for vertical specific use cases can be accommodated simultaneously on the public network infrastructure. 5G mobile networks are expected to meet flexible demands [3]. erefore, network resources can be dynamically allocated according to demand. Network slicing technology is the core part of 5G network [4]. 
e definition of 5G network slicing creates a broad field for communication service innovation [5]. e vertical market targeted by 5G networks supports multiple network slices on general and programmable infrastructure [6]. e meaning of network slicing is to divide the physical network into two virtual networks so that they can be flexibly applied to different network scenarios. e future 5G network will also change the mobile network ecosystem [7]. e 5G mobile network is expected to meet the diversified needs of a variety of commercial services [8]. 5G mobile networks must support a large number of different service types [9]. Network slicing allows programmable network instances to be provided to meet the different needs of users. Blockchain can establish a secure and decentralized resource sharing environment [10]. Blockchain is a distributed open ledger [11] and is used to record transactions between multiple computers. Reinforcement learning algorithms can effectively solve large state spaces [12]. Reinforcement learning is mainly used to solve simple learning tasks [13]. 5G networks are designed to support many vertical industries with different performance requirements [14]. Network slicing is considered an important factor in enhancing the network and has the necessary flexibility to achieve this goal. Network slicing is considered one of the key technologies of 5G network [15]. You can create virtual networks and provide customized services on demand. [16]. When facing the different needs of different users, the network is divided into many pieces to meet customer needs. Moreover, it provides targeted services and assistance. Network Slice Classification. e ultimate goal of 5G network slicing is to organically combine multiple network resource systems to form a complete network that can serve different types of users. Network slices can be divided into independent slices and shared slices as shown in Table 1: e application scenarios of 5G networks are divided into three categories: mobile broadband, massive Internet of ings, and missioncritical Internet of ings [17]. e details are shown in e blockchain consists of a shared, fault-tolerant distributed database, and a multi-node network [18]. Blockchain Structure. e block chain is composed of a block header and a block body, which forms into a chain structure through the hash of the parent block [19]. e structure is shown in Figure 1: e structure contains the parent block hash, timestamp, random number, difficulty, and the Merkle root [20]. Its functions are shown in Table 3: Blockchain Properties. Blockchain technology has three attributes of distribution, security, and robustness [21], as shown in Table 4 Reinforcement Learning Process. In the process of reinforcement learning, the agent needs to make decisions on the information in the environment [22]. At the same time, the environment will also reward the agent for the corresponding behavior, and the agent will enter a new state after the behavior. e process is shown in Figure 2: Model Design. e 5G network slicing architecture is composed of network slicing demander, slice management (business design, instance orchestration, operation management), slice selection function, and virtualization management orchestration. e process of the 5G network slicing model is as follows: network services enter the slice manager through the network slice demander, and the slice manager includes business design, instance arrangement, and operation management. 
After the slice manager enters the slice selection function, it is divided into shared slice function and independent slice special function, and it can also enter the virtualization management orchestration as shown in Figure 3: Scalability within Shards. In the process of verifying the block consensus, the scalability within the shard [23]is as follows: Among them, b I is the average transaction size, B Ih is the block header size, and K is the number of shards. Scalability of Directory Fragmentation. Assuming that the average transaction size is b F , the block header size is B H , and the scalability of the directory fragmentation is as follows: Independent slice A slice with a logically independent and complete network function. e slice includes a user data plane, a network control plane, and various user business function films, which can provide a logically independent end-to-end private network service for a specific user group. If necessary, only part of the services of specific functions can be provided. Shared slice A shared slice is a specific network slice whose network resources can be used by different independent slices. e slice can provide end-to-end services, and when necessary, it can also only provide partial sharing functions. Distributed e blockchain connects the participating nodes through a peer-to-peer network to realize resource sharing and task allocation between peer nodes. Each network node does not need to rely on the central node and can directly share and exchange information. Each peer node can not only be an acquirer of services, resources, and information but can also be a provider thereof, which reduces the complexity of networking while improving the fault tolerance of the network. Scalability of Sharded Blockchain. e scalability of the entire sharded blockchain is composed of the internal scalability of the shards and the scalability of the catalog shards [24]. Assuming that the block packing time within the fragment and the directory fragment is the same as T I ′ and the block header size is the same as B H ′ , the formula is as follows: Value Function Method. e value function method is to give an estimate of the value for different states. 0 is the given value, and V π (s) starts from state V π (s). e formula is as follows: (4) e optimal strategy π * has a corresponding state-value function V * (s), which is expressed as follows: In the RL setting, it is difficult to obtain the state transition function P. So, a state-action value function is constructed. Safety Blockchain can use encryption technology to asymmetrically encrypt the transmitted data information. e task request for writing data in the blockchain needs to be accompanied by the private key signature of the task initiator. e changed signature is broadcasted together with the task request among participating nodes in the network. Each node can verify its identity, so the task request is not allowed for forgery and tampering. At the same time, the blockchain data structure in the blockchain further ensures that the content in the block cannot be tampered with at will. Even if some nodes in the chain are maliciously forged, tampered with, or destroyed. It will not affect the normal operation of the entire blockchain. Robustness e consensus mechanism determines the degree of agreement between the voting weight and computing power between subjects. 
e entire blockchain system uses a special incentive mechanism to attract more miners to participate in the process of generating and verifying data blocks, perform mathematical calculations in a distributed system structure, use consensus algorithms to select a node, and then create a new one. e effective block of is added to the entire blockchain, and the entire process does not rely on a third-party trusted institution. (6) Given Q π (s, a), in each state, the optimal strategy argmax a Q π (s, a) can be adopted. Under this strategy, V π (s) can be defined by maximizing Q π (s, a)as follows: At present, mature deep learning methods such as SARSA and offline Q learning can all be used to solve the value function. SARSA: Offline Q learning: Strategy Method. e strategy method is to directly output the action by searching for the optimal strategy π * . e objective function J(θ) is defined as the cumulative expected reward. e policy parameter ∇ θ J(θ) is estimated in the discounted cumulative expected reward gradient θ and obtained based on a certain learning rate (α l ). e formula of the strategy gradient method is as follows: MDP. MDP mainly solves the problem of learningrelated experiences in the interaction between the agent and the environment to achieve the goal [25]. Assuming that the state space is S, it is defined as follows: Among them, h represents the state of all wireless channels in the 5G network slice, H represents the channel state space, and H is represented as follows: Among them, h m represents the channel state and H m represents the channel state space. x means connection status, X means connection status space. X is defined as follows: d represents the state of all data transmission rates in the slice, and D represents the data transmission rate state space. D is defined as follows: φ represents the topological state of the physical network, and ψ represents the topological state space in the physical network. ψ is defined as follows: A r means that the action space is allocated for unlimited resources, which is defined as follows: A r � a r,1 , a r,2 , . . . , a r,|U| |∀u ∈ U, a r,u ∈ A r,u . Among them, a r,u is the 5G network radio resource allocation action, and A r,u is its corresponding network action space, expressed as follows: Among them, v u,m ′ represents occupied wireless resources. A i � a 1 , a 2 , . . . , a n , the calculation level of A i is denoted as S i � s 1 , s 2 , . . . , s n , and the link set composed of nodes is denoted as L n � l 1 , l 2 , . . . , l n . e first dynamic dispatch queue state transition function is as follows: Model Building. Suppose the weighted undirected graph of the physical network is C � (A i , S i ), where the set of network nodes is denoted as e second dynamic scheduling queue state transition function is as follows: Combining the above analysis, the 5G network slicing model, the formula is expressed as below: Variation of Time Delay with the Number of Slices. is article mainly studies 5G network slicing methods to support blockchain and reinforcement learning. First, we will test the model and compare the blockchain + reinforcement learning method with the blockchain, reinforcement learning, and unused methods. e results are shown in Figure 4. Computational Intelligence and Neuroscience e comparison results show that the delay increases with the increase of the number of slices, but the blockchain + reinforcement learning method has the lowest delay and can maintain the minimum delay. 
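As an illustration of the value-function methods mentioned above, the sketch below applies the standard tabular SARSA and off-policy Q-learning updates to a toy environment. The state space, action space, reward, and transition used here are hypothetical placeholders standing in for the slice-resource-allocation MDP (S, A_r) described in the text.

import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 8, 4          # placeholder: coarse channel/queue states x radio-resource actions
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount factor, epsilon-greedy exploration

def step(s, a):
    # Toy environment: random next state; reward favors matching action parity to state parity.
    s_next = rng.integers(n_states)
    reward = 1.0 if (a % 2) == (s % 2) else 0.0
    return s_next, reward

def eps_greedy(Q, s):
    return int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))

Q_sarsa = np.zeros((n_states, n_actions))
Q_qlearn = np.zeros((n_states, n_actions))
s = 0
a = eps_greedy(Q_sarsa, s)
for _ in range(20000):
    s_next, r = step(s, a)
    a_next = eps_greedy(Q_sarsa, s_next)
    # SARSA (on-policy): bootstrap with the action actually taken next.
    Q_sarsa[s, a] += alpha * (r + gamma * Q_sarsa[s_next, a_next] - Q_sarsa[s, a])
    # Q-learning (off-policy): bootstrap with the greedy action.
    Q_qlearn[s, a] += alpha * (r + gamma * np.max(Q_qlearn[s_next]) - Q_qlearn[s, a])
    s, a = s_next, a_next

print(np.round(Q_sarsa, 2))
print(np.round(Q_qlearn, 2))

SARSA bootstraps with the action actually selected by the behavior policy, while Q-learning bootstraps with the greedy action; either can serve as the value-function component of the slicing model described above.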
When the number of slices is 3, the delay is 155 ms. e overall delay of the blockchain is lower than the delay of reinforcement learning because the blockchain will give priority to nodes with rich resources and strong data processing capabilities when selecting nodes and link mappings, so the delay is lower. Delay Comparison of Different Slice Types. Under different slice types, set the number of users to 30 and compare the delays generated by several methods. We compare 5G network slicing, 4G network slicing, 3G network slicing, and 2G network slicing in blockchain + reinforcement learning, blockchain, reinforcement learning, and unused methods. e results are shown in Figure 5. rough the comparison results, it can be seen that the latency of 5G network slicing is lower than that of 4G, 3G, and 2G. 5G network slicing has the lowest latency of only 15 ms in the method of blockchain and reinforcement learning. is is because the greater the number of VNFs, the more nodes that the slice will pass through to process the same data packet, the longer the link that passes, and the greater the delay. System Reliability. System reliability is an indispensable step before the experiment. We will compare the system reliability of different methods (blockchain + reinforcement learning, blockchain, reinforcement learning) under different numbers of users. e comparison result is shown in Figure 6: It can be seen from the graph that the reliability decreases with the increase of the number of users because reliability is related to delay. e greater the transmission delay, the lower the reliability. e reliability of the supporting blockchain + reinforcement learning method is the highest, with a reliability of 0.95. is means that 5G network slicing that supports blockchain + reinforcement learning methods can provide services for more businesses. Resource Utilization of Different Slices. is article studies the methods that support blockchain and reinforcement learning. We will study the resource utilization of blockchain and reinforcement learning for different slices. Set up 4 slices and perform three tests on each slice, namely, blockchain + reinforcement learning, blockchain and reinforcement learning, and finally compare their resource utilization experiment results as shown in Figure 7: According to the experimental results, it can be concluded that the method of blockchain + reinforcement learning has the highest resource utilization rate. e resource utilization rate of the four slices under the blockchain + reinforcement learning method is all above 0.8, and Computational Intelligence and Neuroscience e experiment will compare 4 types of equipment using three methods: blockchain + reinforcement learning, blockchain, and reinforcement learning. By comparing the average cumulative received throughput (kpbs), which method is better is decided. roughput refers to the number of requests processed by the system in a unit of time. e results are shown in Table 5. e result is plotted as a histogram, and the result is shown in Figure 8. According to the experimental results, the average receiving throughput of video stream 1 is higher than that of video stream 2, IOT devices, and mobile devices, and the average cumulative receiving throughput is the highest under the blockchain + reinforcement learning method, reaching 1450 kbps. Average QOE. Under three different methods, compare the average QOE of different devices to prove which method is more suitable for 5G network slicing. 
QOE refers to the user's comprehensive experience of the quality and performance of the network system. e results are shown in Table 6: e result is plotted as a histogram, and the result is shown in Figure 9. According to the experimental results, the average QOE of video stream 1 is higher than that of video stream 2, IOT devices, and mobile devices, and the average QOE is the highest under the blockchain + reinforcement learning method, reaching 0.83. Conclusion With the advent of the 5G era, current technologies can no longer meet the needs of users. Network congestion and slow network speeds are major problems currently facing. In order for users to use network services more smoothly, network services are more convenient. is article designs 5G network slicing: a method model supporting blockchain and reinforcement learning. is model will perform better distribution management of the network, increase the transmission rate of users in the business, and reduce the transmission delay. e research results of the article are given below: (1) In the model testing stage, the results of the study on the variation of the delay with the number of slices show that the delay increases with the increase of the number of slices, but the blockchain + reinforcement learning method has the lowest delay and can maintain the minimum delay When the number of slices is 3, the delay is 155 ms. (2) e comparison of the delay of different slice types shows that the delay of 5G network slicing is lower than that of 4G, 3G, and 2G. 5G network slicing has the lowest delay in the method of blockchain and reinforcement learning, only 15 ms. (3) In the detection of system reliability, reliability decreases as the number of users increases. is is because reliability is related to delay. e greater the transmission delay, the lower the reliability. Supporting blockchain + reinforcement learning method has the highest reliability. (4) In the resource utilization experiment of different slices, it can be known that the method of blockchain + reinforcement learning has the highest resource utilization. e resource utilization rate of the four slices under the blockchain + reinforcement learning method is all above 0.8 and the highest is 1. (5) rough the simulation test of the experiment, the results show that the average receiving throughput of video stream 1 is higher than that of video stream 2, IOT devices, and mobile devices, and the average cumulative receiving throughput under the blockchain + reinforcement learning method e volume is the highest, reaching 1450 kbps. e average QOE of video stream 1 is higher than that of video stream 2, IOT devices, and mobile devices, and the average QOE is the highest under the blockchain + reinforcement learning method, reaching 0.83. Although the results of this experiment are obvious, it has certain limitations and is limited to the use of 5G network slicing. A lot of research is needed in the future to enhance its universality and apply it to more scenarios. In future research, the methods for supporting blockchain and reinforcement learning proposed in this article can be improved, so that blockchain and reinforcement learning methods can be realized in the network service requirements with more goals. Data Availability e experimental data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest e authors declare that they have no conflicts of interest regarding this work. 
4,783.8
2022-03-24T00:00:00.000
[ "Computer Science", "Engineering" ]
Towards an understanding of global brain data governance: ethical positions that underpin global brain data governance discourse Introduction The study of the brain continues to generate substantial volumes of data, commonly referred to as “big brain data,” which serves various purposes such as the treatment of brain-related diseases, the development of neurotechnological devices, and the training of algorithms. This big brain data, generated in different jurisdictions, is subject to distinct ethical and legal principles, giving rise to various ethical and legal concerns during collaborative efforts. Understanding these ethical and legal principles and concerns is crucial, as it catalyzes the development of a global governance framework, currently lacking in this field. While prior research has advocated for a contextual examination of brain data governance, such studies have been limited. Additionally, numerous challenges, issues, and concerns surround the development of a contextually informed brain data governance framework. Therefore, this study aims to bridge these gaps by exploring the ethical foundations that underlie contextual stakeholder discussions on brain data governance. Method In this study we conducted a secondary analysis of interviews with 21 neuroscientists drafted from the International Brain Initiative (IBI), LATBrain Initiative and the Society of Neuroscientists of Africa (SONA) who are involved in various brain projects globally and employing ethical theories. Ethical theories provide the philosophical frameworks and principles that inform the development and implementation of data governance policies and practices. Results The results of the study revealed various contextual ethical positions that underscore the ethical perspectives of neuroscientists engaged in brain data research globally. Discussion This research highlights the multitude of challenges and deliberations inherent in the pursuit of a globally informed framework for governing brain data. Furthermore, it sheds light on several critical considerations that require thorough examination in advancing global brain data governance.
Introduction Advances in neuroscience and the study of the brain have continued to generate large scale high-quality big brain datasets.These datasets which are essential for advancing neurotechnologies and developing new treatments are generated in different jurisdictions and consists of datasets from multiple disciplines, organisms, while also existing in multiple formats (Landhuis, 2017;Rommelfanger et al., 2018;Adams et al., 2020;Eke D. O. et al., 2021).This complexity and the sensitivity of brain data raises several challenges in the collection, processing and sharing of brain data.Some of these challenges include privacy, informed consent, security, confidentiality, and ownership.While the existence of an appropriate global governance mechanism for brain data would help to curtail some of these challenges, this has not been the case as currently a global framework for the governance of brain data is non-existent.This has given rise to calls for the development of an international data governance framework for brain data to foster data sharing and collaboration.The development of such of a governance framework should be culturally informed (Ienca et al., 2022) while acknowledging the pluralistic nature of ethical and legal principles that exist in various jurisdictions (Eke D. O. et al., 2021;Ienca et al., 2022).These ethical considerations and implications can provide tools for navigating the ethical and moral hurdles that exist in the management of brain data.Furthermore, acknowledgment of ethical principles and the embedding of ethics in policies and practices can promote large scale collaboration and data sharing in neuroscience projects (Stahl et al., 2018). While the development of a global framework that is culturally informed will advance collaborations, several challenges currently exist in the implementation of such global frameworks.These challenges span across technical, regulatory, and ethical boundaries which influence the collection, processing, sharing and storage of brain data.Although there is a current acknowledgment of the importance of ethical considerations in the governance of brain data, various ethical and legal principles which influence brain data governance currently exist as identified in our previous study (Ochang et al., 2022).These principles such as privacy and consent are very visible in discussions (but multidimensional) while other principles and concepts such as neurorights and data retention and destruction are still less visible which shows that more work has to be done on their global conceptualisation and visibility.Also, discussions around the applicability of the principles that exist in the governance landscape are very multidimensional which calls for standardisation, agreements, and clear guidelines.These are but a few observations when exploring the ethical and legal landscape of brain data governance which can create hurdles in developing a governance framework. Furthermore, as the boundaries between jurisdictions and legal systems (e.g., GDPR and HIPAA) are mostly territorial while moral and ethical perceptions are proving to be inconsistent (Friedman, 1969;Stahl, 2012) varying between societies, pluralism in ethics and legal principles will continue to influence brain research (Emerging Issues Task Force and International Neuroethics Society, 2019;Eke D. O. 
et al., 2021).Researchers hailing from different geographical regions could face challenges in comprehending the prerequisites for legal and ethical reciprocity when sharing brain data.Moreover, those accessing data from diverse origins, possibly spanning various jurisdictions, might lack clarity regarding whether deidentified, coded, unlinked, or pseudonymised data holds the same equivalence as reversibly anonymised data (Dove, 2015).Furthermore, although universal declarations such as the Universal Declaration of Human Rights (UDHR) (United Nations General Assembly, 1949) attempt to generalise ethical and legal principles which should be upheld, no declaration will ever be exhaustive in guiding the practices of key actors in brain data research.This is because universally accepted declarations which embody ethical and legal principles will always go hand in hand with the cultural and moral diversity of various regions.Also storing big brain datasets across multiple repositories and infrastructure raises technical challenges due to the fact that brain data researchers are no longer dealing with terabytes but petabytes of data due to the large-scale datasets generated by a few minutes of neural activity (Landhuis, 2017;Ochang et al., 2022).These are some of the challenges that exist in relation to developing a global governance framework. Although recommendations have been made regarding developing a responsible framework (Fothergill et al., 2019) that is culturally informed, practical steps to develop such a framework using the perceptions of key stakeholders conducting brain research in different regions to understand their contextual perceptions has been limited.Also, neuroethical approaches, as illustrated by Farah (2015), often lack direct integration with data governance.Similarly, data governance research, as noted by Nielsen (2017), tends to overlook ethical dimensions.In developing a global framework for the governance of brain data that is culturally informed, various contextual challenges, issues, and concerns (regulatory, ethical, practical, and technical) must be understood.Such governance frameworks should also be dynamically responsive to the peculiar issues and challenges in the conduct of brain data research in various regions.In the current landscape of brain data governance which embodies several ethical and legal principles, these principles are underpinned by various ethical positions when applied in practice.When applied in practice they generate different issues and concerns which might be peculiar or contextual to the region of application.They also generate various standpoints, recommendations, and justifications.However, a clear understanding of these ethical positions of key stakeholders which can provide theoretical and practical insights for advancing a contextually aware global brain data governance framework, especially when applying current governance principles is currently lacking.This is one of the motivations for undertaking this study. 
It is here that we situate our justification and research question which asks what ethical positions underpin current brain data governance discourse?In the application of ethical and legal principles, stakeholders are bound to make moral justifications based on, for example, virtues, duties, consequences, or what they consider the greater good.This generates various ethical insights, perspectives and recommendations around the issues and concerns inherent in current principles by stakeholders in different regions which can only be understood by attempting to explore ethical positions.Other factors that have influenced this study include the need to capture the insights of brain data researchers in regions, including Africa, thereby promoting inclusivity in current discussions.Having spoken to key neuroscientists to understand the application of ethical and legal principles in our previous research, this paper attempts to find out what ethical positions can be found in their practices especially around duties, virtues, consequences, and the need to agree on guidelines, regulations, and other binding practices. To the best of our knowledge this is the first study which attempts to understand global brain data governance by bringing together neuroscientists with a global representation to provide discussions which are underpinned by various ethical positions to provide insights, justifications, and recommendations.The findings of the study revealled various issues, concerns and recommendations which are supported by various deontological, consequentialist and virtue positions.Some of the insights provide reasons to endorse and comply with fundamental laws, institutions and principles which are underpinned by the social contract theory. This paper makes a dual contribution to the existing body of knowledge.Firstly, it provides a unique perspective on brain data governance by elucidating theoretical frameworks within the context of brain data.This elucidation aids in comprehending the ethical stances of stakeholders in the realm of brain data governance.Secondly, the paper addresses the diversity within the brain data governance landscape by convening prominent researchers.It acts as a catalyst for discussions that reflect the diverse aspects of brain data governance. . Conceptual background . . 
Brain data

Brain data is multidisciplinary, uniting researchers from diverse fields of study. The collaboration among these disciplines yields diverse brain data types through various techniques and modalities. This contributes to its inherent complexity, further compounded by data from multiple species and organisms (Landhuis, 2017; Abbott, 2020), culminating in extensive brain data often termed big brain data. Notably, a mere 20-minute recording of neural activity generates ∼500 petabytes of data (Landhuis, 2017). This large volume underscores that scientists grapple not only with the intricacy of brain data but also with its magnitude. These attributes also satisfy key characteristics of big data (volume, velocity, variety, and veracity) (L'Heureux et al., 2017; Fothergill et al., 2019), resulting in brain data sometimes being referred to as big brain data (Landhuis, 2017; Kellmeyer, 2018). Given its diverse nature, reflecting processes in both human and multiple specimen brains, the term "brain data" lacks precise conceptual clarity and is often used ambiguously. To enhance conceptual precision, this paper encompasses both human and animal brain data. Furthermore, the paper argues that brain data has transcended raw measurements to include derived data and metadata (data about data). Therefore, to provide conceptual clarity, this paper refers to brain data as data that directly or indirectly (including metadata) pertains to the brain structure, activity, and function of humans, animals, and other organisms. Examples include Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Magnetoencephalography (MEG) and Near-Infrared Spectroscopy (NIRS) data (Liu et al., 2006; Crosson et al., 2010) and metadata in brain datasets. The inclusion of metadata in the definition of brain data is important as metadata offers descriptive or structural information that provides context and details about brain data.

Brain data governance and its potential dimensions

Brain data governance focuses on the policies, procedures, and systems that are established to oversee the collection, storage, use, sharing and management of brain data. It can be defined as the policies and strategies that define the responsibilities of accountable stewardship, which include acquiring, aggregating, de-identifying, processing, curation, retention, deletion, use and the overall availability, usability, integrity, security, and privacy of data in alignment with ethical, legal, and social obligations (Ochang et al., 2022). This definition acknowledges the ethical, legal, and social implications (ELSI) of collecting, processing, storing, and sharing brain data, resulting in the need to consider key dimensions in the governance of brain data that are different from those of other traditional forms of data. Key areas of brain data governance include ethics, policies, regulations and guidelines (binding laws), human rights, innovation (development of medical devices and neurotechnologies), and participatory governance (Ienca et al., 2022; Ochang et al., 2023), as shown in Figure 1. This also involves assessing technical provisions (e.g., security and data protection) to align with regulatory and ethical prerequisites, and prioritising alignment with research environments rather than corporate or conventional information systems environments.
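To make the inclusion of metadata and governance-relevant context concrete, the following minimal sketch (in Python, with hypothetical field names; it is not an established neuroimaging or brain data schema) illustrates how a single brain data record might pair a reference to the measurement with descriptive metadata such as modality, consent scope, licence, and de-identification status, which are the kinds of attributes governance processes act upon.

```python
# Illustrative sketch only: hypothetical field names, not an established brain data schema.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class BrainDataRecord:
    """Pairs a reference to a brain measurement with governance-relevant metadata."""
    subject_id: str          # pseudonymous identifier, never a direct identifier
    species: str             # e.g., "human" or "mouse"
    modality: str            # e.g., "MRI", "PET", "MEG", "NIRS"
    acquisition_site: str    # originating laboratory or institution
    data_uri: str            # pointer to the raw measurement, not the data itself
    metadata: Dict[str, str] = field(default_factory=dict)  # consent scope, licence, provenance, etc.

# Example record: the metadata travels with the data reference so that downstream
# users can check consent scope and licence before reuse.
record = BrainDataRecord(
    subject_id="sub-0001",
    species="human",
    modality="MRI",
    acquisition_site="example-lab",
    data_uri="repository://example/sub-0001/anat.nii.gz",
    metadata={"consent_scope": "broad-research", "licence": "CC-BY-4.0", "deidentification": "defaced"},
)
```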
The relationship between ethical theories and data governance

Ethics

Ethics, which stems from the Greek word "ethos", focuses on the moral foundations and reflections on which actions, behavioral judgements, and moral evaluations are made (Stahl, 2012; Fieser, 2018). Discussions around ethics span millennia, as ethics is firmly rooted in moral systems such as customs, religion, and law. The use of ethical theories to underpin current brain data governance discourse has been limited if not non-existent, whereas in the context of big data and information systems there is a reasonable application of ethical theories in an attempt to understand or advance data ethics (Mittelstadt and Floridi, 2016; Herschel and Miori, 2017; Hand, 2018; Bezuidenhout and Ratti, 2021). Ethical theories can provide explanatory mechanisms in the quest for understanding the ethical reasoning around the collection, use and sharing of data. For example, in the field of big data some researchers have used traditional ethical theories to analyse new power distributions and to show that the characteristics of big data shift the nature of the ethics debate because they redefine power dynamics and the extent to which the element of free will exists in data utility (Zwitter, 2014). Also, traditional ethical theories used in data ethics underpin the notion of moral responsibility in the ever-changing nature of our data ecosystem and have provided the foundation for advances such as network ethics (Floridi, 2009), big data ethics (Herschel and Miori, 2017; Someh et al., 2019), and computer and informatics ethics. Therefore, traditional ethical theories provide instruments to help in the framing of moral issues and recommendations that exist in the sharing and usage of data.

Key ethical theories in data ethics

Ethical theories used in data ethics share a common property: they all provide instruments to make logical, reasoned, and persuasive arguments based on the principles of the theory in use (Herschel and Miori, 2017). Prominent ethical theories that are usually applied in data ethics include Deontology (Kantianism or Kantian ethics), Consequentialism (utilitarianism), Virtue Ethics and Social Contract Theory (Stahl, 2012; Zwitter, 2014; Herschel and Miori, 2017). Deontology, derived from the Greek word deon, which stands for duty or obligation (Stahl, 2012; Farah, 2015), focuses on the duties and obligations of the moral agent and the duties that individuals have toward one another. Deontology focuses on the characteristics embedded in the action of the moral agent and attempts to evaluate the ethical quality of the individual's action based on rules. From a deontological perspective, what makes an action right is conformity to a moral norm, and a major underpinning perspective is that no matter how morally good some consequences turn out to be, the choices that lead up to those good outcomes may be morally forbidden (Kant, 2011). Consequentialism, or consequentialist ethics, on the other hand focuses on the outcomes of the actions of a moral agent. Because in consequentialism outcomes are usually all that matters, the moral agent must act in the way that achieves the best measurable outcomes (Bentham et al., 1996; Mill, 2001; Card and Smith, 2020). Virtue ethics focuses on the character of an individual rather than on the duties or consequences of the moral agent's actions. It focuses on the virtues one should possess, on what actions are good or bad, and on how actions should be modeled after such behavior (Jantavongso and
Fusiripong, 2021). Social Contract Theory, usually attributed to the works of Thomas Hobbes, John Locke, Jean-Jacques Rousseau, and John Rawls, focuses on the social agreements that are established to allow individuals to coexist in a society (Boucher and Kelly, 2004; Nimbalkar, 2011).

Based on the theoretical tradition in data ethics, the general choices made by a moral agent are usually centered on these traditional theories as the dominant approaches, as they capture much of our moral intuition. This is not to say that there is not a wealth of other ethical theories in the landscape of data ethics that can provide instruments for ethical analysis, but many of them combine the dominant theories above to address several weaknesses (Ross, 1930; Stahl, 2012).

Ethical theories meet data governance

The interplay between ethical theories and data governance is crucial in guiding how key stakeholders, brain research projects, and organisations collect, store, use, and share data responsibly and ethically. Ethical theories provide the philosophical frameworks and principles that help inform the development and implementation of data governance policies and practices, and they can provide the foundation for ethical decision-making within data governance. By integrating these theories into their practices, key stakeholders (e.g., researchers) and organisations who collect, store, and share brain data can ensure that their data handling processes are aligned with moral principles and societal values, ultimately leading to responsible and trustworthy data governance.

Research design

The research aims to investigate the ethical perspectives that form the basis of discussions on brain data governance. Uncovering the ethical considerations guiding the decisions, actions, or viewpoints of key brain data stakeholders can offer deeper insights into comprehending the diverse contextual foundations. This, in turn, can contribute to a better understanding of the broader landscape of global brain data governance.

Method

This study makes use of a secondary analysis (Creswell and Creswell, 2018) of interviews used for a previous study. The interviews involved 21 neuroscientists drawn from the International Brain Initiative (IBI), the LATBrain Initiative and the Society of Neuroscientists of Africa (SONA) who are involved in various brain projects globally, as shown in Table 1. The primary focus of the interviews was to find out the ethical and legal principles or issues that could arise in brain data research. The previous study focused on understanding the practical experiences relating to the influence ethical and legal principles had on brain data governance. The primary analysis and results identified statements around practical ethical and legal principles and issues that could arise in brain data research, which mainly centered around human rights, research ethics, participatory governance, and policies, regulations, and guidelines.
The richness of the data collected in our previous research informed this study. The fact that the neuroscientists who were respondents expressed various ethical and moral dimensions around the application of ethical and legal principles and the issues that arise from brain research created the need to carry out further exploration to understand their ethical positions. Therefore, a secondary analysis of the primary data was carried out to identify interesting findings that are independent of the original research. For data collection, we used neuroscientists as key stakeholders rather than other stakeholders such as policymakers and research participants, because neuroscientists involved in brain data research are faced with different ethical decisions which put their moral compass into question through practical decision making. Therefore, neuroscientists are at the forefront of the application of ethical and legal principles in brain data research.

Transcripts derived from the interviews were analysed using NVivo qualitative analysis software (QSR International, 2021). The transcripts were read in their entirety to understand the meaning of sentences and phrases through which there was an expression of ethical values relating to an ethical principle. Thematic analysis (Kiger and Varpio, 2020) was then used to categorise the meaning of statements under the underpinning theories that could be used to explain the underlying statements containing an ethical position.

Results

The analysis provided results which describe the theoretical underpinnings of the statements of the respondents. Through the analysis, the ethical positions of the neuroscientists are presented below in the form of deontological, consequentialist, virtue and social contract positions.

Deontological positions

The participants emphasised certain duties and rules which should be applied toward data subjects and brain data. Participants also expressed rights and duties irrespective of the consequences.

Participants emphasised the need to maintain the privacy, protection and confidentiality of both research subjects and their data because they see this as an obligation and a core issue in brain data research, especially when sharing data. Respondents also believe Ethics Research Boards (ERBs) are heavily influenced by the need to maintain privacy and that this is prioritised and fundamental to ethics approval. Although participants see maintaining privacy, confidentiality, and the protection of participants as an obligation, there is a lack of agreement on best practices, as different words are used to describe the maintenance of privacy and confidentiality. For example, participants used words like "blinding", "coded", "defaced" or "de-identified". Participants also have a perception that they have a duty or obligation to inform research subjects when incidental or abnormal findings occur in brain research; however, maintaining privacy and confidentiality often conflicts with such obligations.
With regards to consent the participants expressed their obligations regarding processing consent as a rule for the collection of brain data.Participants also expressed the inadequacies of current consent models which calls for extra obligations.For example, a participant emphasised that the obligation to collect consent extends to providing communication and clarity to data subjects to understand the potential downstream uses of data that might not be anticipated as at the time of collecting consent.Also, in the use of invasive technologies there is a perception that the duty to collect consent should dynamically occur during the entire process due to the ability of deep brain simulations and invasive technologies to alter a subject's brain chemistry. "I attended this one seminar recently where an individual had Parkinson's and had this, like life saving deep brain stimulation technique that in 10 years, he would might have to get removed or, you know depending on how the trial goes, and someone brought up the question of, are you the same person as when you first signed your informed consent to 10 years later when your entire brain chemistry has changed?And at that point in time, you know, who does the informed consent lie on?You know, things change over 10 years when you're receiving this really invasive technology".(North America 4) The respondents also raised deontological perceptions around engagement and the need to acquire brain data to carry out research.This necessity was in relation to the duty to educate data subjects on the need to acquire brain data to create interventions around brain diseases.One participant highlighted the stigma associated with brain diseases as compared to other diseases and stressed the importance of engaging and educating the society on the need to acquire brain data which might change the perceptions of data subjects to easily provide brain data for research. "because there's a stigma around that, the data is not readily available and presented. . . . . ...you have to let people understand from an educational point of view, and the fact that we need this data, the fact that we can intervene".(Africa 1) Respondents expressed the need for data subjects to have control over the decision-making process around their data.Words such as "free will" and "human rights" were used in reference to independence and autonomy.Some respondents expressed the need for large corporations to express more ways for data subjects to have control over their data. Deontological views about fairness and transparency were expressed in relation to providing fair and equitable access to data which involved setting up requirements to allow data subjects to access and use their data.Also, respondents expressed concerns on how to make research results and the benefits of research available in a simple and understandable format to data subjects who make brain data available both for research and for the development of neurotechnologies.These concerns are driven by a sense of duty to promote accountability and some of the respondents pointed out that data subjects might be hit by paywalls when they attempt to access research outputs. Perceptions around integrity were expressed around the rules and duties of data repositories and researchers.Views on data quality, open access and meeting FAIR (Findable, Accessible, Interoperable and Reusable) requirements were expressed with some participants raising arguments on how some datasets fulfill journal requirements but lack reusability. 
"So what I often see when they say, yes, all the data is on our website and open access.And then you see the kind of data they put are basically, they are unusable.But they fulfill the formal requirements for open access for the journal, but when you actually use it, forget it.You can't use it.And we had some big names there".(North America 7) These views about the duty and rules regarding data integrity were further expressed by another participant with regards to brain data repositories who stated that the duty of ensuring that data is ethically acquired and meets FAIR rules is reflected in the open data commons of the repository and this basically outlines the liability of the data submitter.Therefore, the liability to ensure that data meets ethical requirements is shifted to the data submitter rather than those in charge of the repository. Some participants emphasised on the need to protect their intellectual property (IP) as their perceptions illustrate their views on ownership.The perceptions of some participants show that there is currently a need on how best to acknowledge the owners of research outputs which is seen as a duty.There is also a perception that the ways of acknowledging people for their research outputs has evolved significantly but more strategies might need to be developed as one respondent pointed out that copyrights are the only way to protect intellectual output while the other stated that there are issues in acknowledging ownership. "They're kind of broader issues like provenance and how do you cite the people who did all the work to get the data and licensing issues and things like that".(North America 6) "the only thing we know is to protect your property you get a copyright".(Africa 2) Some perceptions regarding the responsibility and accountability of parties involved in data submission were made in reference to the liability of repositories.It appears that there is a need to provide more clarity around the liability of improperly acquired data residing in repositories as one participant pointed that in most cases improperly acquired data is usually taken down without holding repositories liable.These perceptions hold some deontological views as to the duties of repositories in ensuring that data is properly acquired and used in accordance with guidelines. "I have become a little bit more concerned about repositories and their liability for breaches, right?Thus far, nobody has held the repository liable for data that was improperly acquired, the repository is expected to take it down".(North America 3) With regards to the legal basis for the collection, processing, storage and sharing of brain data, while some participants explicitly mentioned laws that underpin their activities around brain data research, some participants pointed out the lack of laws and the non-clarity of guidelines, policies, and procedures.This non clarity results in conflicting views and sometimes lack of guidance on the rules and duties in brain data research from a deontological perspective. 
"we want to you know, stick to the regulation and the law, but the regulation and the law is not completely clear.So, what is personal information is very straightforward in some field, like legal fields, but in neuroscience data, it is kind of difficult to define what is personal information, because we are dealing with brain imaging data of a person or a patient".(Asia 3) Most of the respondents expressed the need to prevent harm and to promote the welfare of data subjects and research participants.This is also included situations where incidental findings occur and one participant pointed out that researchers have an obligation to pass the data of subjects to trained clinicians when incidental findings surface.The use of fundamental human rights was acknowledged by participants as a good approach to promote beneficence and non-maleficence.However, one respondent emphasised that while researchers are obliged to protect and promote the welfare of participants, current regulations that criminalise brain diseases and mental health may cause harm to data subjects. Deontological views expressed around the retention and destruction of brain which involves complying with data management procedures shows that the respondents adopt different rules in the retention and deletion of brain data.Some of the respondents agree that destruction of data is the right of a data subject.However, the perceptions of the various respondents also show that there are major divisions in the arguments of the respondents around if brain data should be destroyed and when it should be destroyed or if it should be retained continuously. "So having a little bit of standard, like, keep your data at least for 10 years so that people can reproduce your results".(Europe 3) "I don't think data over ten years can be necessarily helpful or useful for research anymore because of the advancement in research".(North America 5) "the better way to destroy this data is to, but this is very, very tricky, is to confirm to be very, very sure that the anonymization of the data is being done properly".(Latin America 1) . . 
Consequentialist positions

Participants emphasised the consequences of not having appropriate measures to ensure privacy, confidentiality and data protection when sharing brain data. Some of these consequentialist arguments arise especially during the sharing of brain data, particularly in the form of brain images, as there is a perception that current privacy, confidentiality, and protection techniques do not fully address re-identification. For example, in terms of personalised medicine through modeling, a respondent had a perception that participants can be identified because human imaging data is used to build models. Also, the misuse of brain data is considered one of the most feared points by neuroscientists, as pointed out by a respondent. The potential for misuse is also expressed in terms of the reuse of brain data. Although the reuse of brain data can provide valuable insights when combined with other data to form big data, a consequentialist argument raised by one of the respondents points to the fact that it jeopardises the privacy of the individual, especially with regards to broad reuse.

"there's a real tension between very broad reuse, and having approval for broad reuse, where you can ask many possible questions or you can link the data with other data, which is a major challenge, because potentially it jeopardizes the privacy of the individual and it also alters the types of questions that can be answered". (North America 2)

This perception relates to another consequentialist argument about privacy, confidentiality and protection, which is based on the lack of definition of what is classified as personal data or personal information, and which comes from the perception that the shape of the brain could be a fingerprint used to identify an individual from their brain imaging data.

"Yeah, and there is no restriction of the sharing of brain image data in terms of personality you know as personal information. . . .one possibility is that the shape of the brain could be a kind of personal information, because as you know, the brain has a very complex shape different people have different gyral patterns or sulcal patterns.So that is kind of a fingerprint.So, its maybe possible to identify a person from his MRI data that is possible". (Asia 2)

Consequentialist arguments raised by participants around consent suggest that consent is a tool for research subjects to control their data and that this control leads to a trade-off between privacy and data utility. According to participants, consent should be limited by law as data subjects can never fully understand the implications of consent around data sharing.

"I don't think it's entirely possible, I think people can't completely understand what could be the implications of sharing, and that this consent should also be limited by law.So, it shouldn't be possible to consent to anything". (North America 1)

This argument is reflected by another participant who pointed out that consent limits data utility, especially when sharing data across borders, because consent forms cannot be modified during cross-border sharing.
Perceptions around bias and discrimination expressed consequentialist views on using biased brain data samples for the development of algorithms. Some participants argued that although bias in brain data and algorithms can be unconscious or unintentional, it generates risks, and these risks will always surface regarding the testability and reliability of algorithmic decisions. Some participants also believe that the US and EU regions will have to deal with more issues of bias and discrimination due to their prominent level of ethnic diversity.

Respondents also raised views around engagement which center on ERBs. Some respondents pointed out that ERBs, which are supposed to review and approve research to ensure compliance, usually lack the necessary education and expertise to understand data-related issues and concepts, which results in barriers in brain data research. Therefore, more engagement and education of ERBs might be needed to provide exposure to some of the data-related concepts, as a lack of this expertise might prevent fairness in the ethics review process. This is also stressed by another participant who pointed out that while ERBs are necessary, there is currently a lack of sufficient competence in turning complex legal and ethical frameworks into practice, resulting in consequences that affect research approval.

"And in general, you know, what is a big big issue is that these ethics review boards, and there's called different things, IRB, or REBs, etc.But these are, very rarely actually have the education necessary to understand, for example, the real issues around the data and the data use and reuse.And so I think there's a critical need for education of these ethics review boards, to understand that you know, data and data related issues". (North America 2)

Consequentialist views around autonomy and independence pointed to the need to structure ERBs so that they are free from institutional control, especially in the process of ethics approval. Participants emphasised that independent ERBs free from institutional control are important, as a lack of such independent and autonomous ERBs can stall research projects. A practical example was illustrated by a participant who narrated how ethics approval for a project was stalled because the responsible institution went on strike due to an industrial dispute.

"You could submit an approval today, tomorrow, you come back and hear that the teaching hospital is on strike and everywhere is shut down.There is nothing you can do about it and it can take one full year and you can't do anything about it, your research will suffer". (Africa 3)

The consequentialist views around fairness and transparency focused on the lack of procedural fairness in ethics approval by ERBs. There is also a perception by some respondents that ethics approvals are control instruments and create a sense of lack of procedural fairness due to the evolving and complex nature of ethics approval. For example, a participant highlighted a case where the use of the word "drone" required further clarification, and one respondent observed that "open access was supposed to be a tool for everybody in a more democratic way for everybody to have access to science, but then it starts to become a business. . . . .
.I have seen a lot of those examples and that's very bad and that is also something that affect directly countries where we don't have good funding". (Latin America 3)

Although the consequentialist views on the legal basis for processing brain data focused on the complications that arise in brain data sharing among jurisdictions, participants also placed emphasis on the lack of regulations tailored toward brain data, which might result in unintended violations of procedures and guidelines. This lack of regulations was mostly highlighted by participants in the African and Latin American regions, and one participant pointed out that the lack of brain data regulations can result in misuse, while another participant from the Asian region referenced the GDPR (General Data Protection Regulation), stating that it provides clarity and prevents such unintended consequences in the management of data.

"But I do say that I respect the procedure of the EU that they have or they clearly state some global law for the whole of Europe and then get consensus and then researchers try to follow such a global law.This is very important for helping not only the people but also the researchers.We don't need to think too much". (Asia 3)

Respondents also expressed views on the balancing of the risks and benefits of research by ERBs, which can be considered the principle of proportionality. One respondent explained that while there are varying ethical and legal principles which create challenges in the analysis of risks by ERBs, there has been too much emphasis on the risks of brain data research rather than the benefits. These risks mainly focus on data reuse and data linkage, thereby creating a lack of proportionality in the ethics approval process by ERBs.

"The other aspects that I think is so, so critical, is, there's so much emphasis on risk in a negative sense, right, and the risk of data sharing and data reuse and data linkage.It's the big focus for many of these ethics boards is on risk". (North America 2)

Strong consequentialist views were expressed with regards to neurorights. Although many of the respondents had not heard of neurorights, it appears that some neuroscientists believe that neurorights will promote transparency, openness, protection, and privacy, which results in the overall good of data subjects and the responsible use of neurotechnologies. Also, respondents argue that due to the rapid evolution of neurotechnologies such as brain-computer interfaces and neuronal implants, neurorights might assist in curbing both intentional and unintentional consequences of such technologies. Although some respondents believe that neurorights might have positive consequences, some also expressed views showing that neuroscientists believe neurorights might create additional hurdles in navigating the already complex set of ethical and legal guidelines. Other arguments point to the fact that neurorights might stifle the development of neurotechnologies and the advancement of neuroscience research. One clear example was raised by one of the respondents, who argued persuasively that search engines and social media modulate the brain in a more effective and better understood way than neurotechnologies, and questioned why special rights have not been proposed in such areas.
"do you realize how much your search questions to Google actually tell you about your brain?and how much social media actually modulates your brain in a far more efficacy way and in a far better understood way, than any neurotechnology today?And do we have anything controlling that?Do we have any rights there?No, not really".(North America 2) The consequentialist views about the destruction and retention of data as a principle was expressed especially around the need to retain data without destruction and the consequences of the deletion of data which reduces the greater good in brain data research.For example, a respondent pointed out that in terms of reproducing brain data research findings, the ability to combine multiple datasets to achieve big brain data can provide neuroscientists with valuable insights which will be beneficial to the society.Therefore, the constant deletion of data prevents such achievements.This is also expressed by another respondent who stated that deletion of data might violate the rights of data subjects and society to a good healthcare system because AI can be used with such large datasets. "the rights of society for a good health system and the rights of patients to get the best possible treatment, which especially if we talk about new health applications that are based on AI, for example, require the availability of large amounts of health data".(Europe 4) . . Virtue positions Respondents emphasised on the need for neuroscientists to express attributes that promote research integrity especially with regards to open access and data sharing.There is a perception that researchers are usually unwilling to share data due to the need for competitive advantage.However, a participant pointed out that developing regions tend to display more attributes with regards to open data sharing as it might be a prerequisite to gaining access to data. Responsibility and accountability were expressed as professional characteristics that neuroscientists involved in brain Frontiers in Big Data frontiersin.orgdata should possess.These were expressed in different dimensions around the sharing of data and around the development of Artificial Intelligence (AI) using algorithms derived from brain data.One respondent pointed out that the responsibility of brain data researchers does not end when data leaves a jurisdiction because poorly acquired data might have strong ethical and legal implications.Also, there were strong views on the responsibility and accountability of neuroscientists involved in the development of AI under government frameworks.These views focused on the need to express responsibility in defining the interaction and relationship that is established between end users of AI which might also include animals with some highlighting the need to have reflexivity when developing AI for militarisation as operating under a government framework may be legal but unethical. "we will have to take into account that these autonomous machines will interact with people. . ..it could be also animals. . ... if you programme a robot to do some military stuff, and it could be okay legally, because you are on a government framework, but it cannot be ethical in terms of your profession".(Latin America 3) . . 
Social contract positions

With regards to the privacy of data, there is a perception that there needs to be an agreement regarding the level of privacy, protection, and security of data in infrastructures. This view comes from a need to balance the privacy and utility of data, as sometimes the request to implement privacy, security, and data protection procedures in infrastructures reduces the usefulness of data. There also appears to be a need for an agreement on the sharing of face data with human imaging data, as these considerations will increase privacy and confidentiality and reduce the risk of re-identification.

"The apparent concern still, is if we can share people's face, which is often included in human imaging data, and we should remove his or her face from MRI data that is there is an active discussion going on and theoretically, we should remove face data from the MRI data that is almost established as a fact and now we need to follow that idea". (Asia 2)

Social contract theory underpins consent based on the need to develop or agree on appropriate consent models, as respondents pointed out that current consent models cannot handle the peculiarities of brain data research. Participants appear to be highly influenced by the need to carefully manage consent during data sharing due to different translations of consent and the nature of brain data.

Using the focal lens of social contract theory shows that some participants expressed the need for a common ground of rules regarding the oversight applied to the research community and to the private sector around data collection and data sharing. This is expressed in the form of the research community having a closed-market approach while industry has a free-market approach when it comes to data sharing. Therefore, having a common ground of rules will reduce a sense of discrimination while promoting equity and fairness.

"I think we should have a common ground rule of ethical principles that would apply to both researchers and in the industry.What can a researcher do in terms of data, collection and sharing shouldn't be different from what a company can do with data collection and sharing". (North America 5)

Respondents emphasised the need for more engagement regarding the change of certain existing legal frameworks that criminalise brain diseases and mental health. For example, a participant pointed out that some countries currently have laws which portray attempted suicide as a criminal offense, whereas attempted suicide might be related to a brain condition. As a result, individuals with brain-related conditions are treated as criminals.

Using the focal lens of social contract theory, perceptions were deduced regarding the validation of data generated by simulators in the field of brain data research to promote data integrity. Based on this view, neuroscientists are influenced by the need to develop rules and agreements on how to measure and validate the correctness of simulated brain data. This comes from the view that experimental data might be easily validated, but there appear to be limited ways to measure simulated brain data.
"of course, with simulation data, the correctness of the data itself is harder to validate".(Europe 1) "How do we check or how do we guarantee this data generated from simulators are okay?".(Latin America 1) Some perceptions regarding data ownership highlight the necessity for a common set of rules concerning the open sharing of data to align with FAIR principles and meet the requirements of funding agencies.Currently, there exists tension in balancing the need for open access and the expectations of funding agencies, which may be the owners of the research data.This is in relation to the need to develop laws that can protect ownership of brain data especially in repositories as stated by a respondent pointed to the fact that current licensing structures in repositories do not provide appropriate mechanisms for apprehending persons who or abuse or misuse data. "We don't have an apparatus to go after people who are illegally using our data.So we may as well just give it to them, because we're not going to go after them and we know we're not going to go after them, right.And I think that those legal issues are really important because the repositories that acquire this data, and that's always where I come from, are not in a position to do this".(North America 3) Strong emphasis was placed around developing a common structure for handling ethical and legal compliance in the academic and private sectors.This perception comes from providing clarity around the legal basis for processing brain data and participants pointed out that in academic research the collection and processing of brain data is well regulated with visible structures such as ethics review boards as compared to the private sector which has more brain data in its possession. Respondents expressed perceptions around trust which majorly focused on the protection of intellectual property and reciprocity in brain data sharing.One respondent pointed out that there is a certain level of suspicion which accompanies data from other regions majorly because there is a lack of knowledge of the ethical and legal procedures used in obtaining brain data in other regions.Therefore, this lack of knowledge might also be accompanied by a lack of guidance on ethical and legal equivalence between two data brain sharing parties in different regions. "we are more suspicious of data that is submitted from other countries than our own, simply because we understand the rules in our own country, and we don't understand them and the others, and we don't have the staff to go and say, oh, yeah, no, this is equivalent and this is okay.But I think that sort of guidance is a problem".(North America 3) Although there were perceptions on the consequences of the adoption of neuororights, there was a high level of agreement that further discussions around the conceptualisation of neurorights is required.Assumptions and suggestions were made on what neurorights should entail and guiding questions were also proposed by some respondents.Some of the suggestions involved engaging the wider community and different stakeholders in the conceptualisation of neurorights.Questions such as what is the scope of neurorights?does consciousness come into play?is it only for research or the private sector? were also raised by some respondents which shows the need for clarity and agreements. "I'm curious to see what it entails.Right?It seems a bit more off the top of my head. . 
...Like, should everyone have access to their own thoughts?What does that mean?What's a thought?Right?does consciousness come into play?Should we have something about the untouchable rights to an identity?What is an identity?". (Europe 2)

Although the respondents expressed various perceptions which underline the fact that they operate under different brain data retention and destruction guidelines, there were also expressions which showed that there is a lack of clarity on the guidelines around data retention and destruction. Some of the participants provided various suggestions on how to provide clarity in such guidelines. One participant suggested that, to provide such clarity, principal investigators, laboratories, funders, and journals need to come to an agreement with regards to the criteria for data retention and destruction.

Discussion

The ethical analysis above presents various ethical and legal principles reflected in the discourse and the underpinning ethical positions. These discussions reflect the contextual perceptions in the conduct of brain data research and the management of brain data. In Figure 3 we present an analytical illustration of the various principles in the discussions and how they are underpinned by various ethical positions.

Summary of insights and critical recommendations

In Table 2 we offer a condensed overview of the observations derived from the contextual stances and perspectives of the neuroscientists involved in this study. Table 2 below also reveals key insights that need to be considered in facilitating the development of a global framework that encompasses contextual considerations. The key insights in Table 2 are grouped under deontological, consequentialist, social contract, and virtue positions:

Privacy, Protection, and Confidentiality: Insights show an emphasis on the obligation to maintain the privacy, protection, and confidentiality of research subjects and their data in brain data research, especially during brain data sharing. However, there is a lack of agreement on best practices, with different terms like "blinding", "coded" or "de-identified" used to describe such obligations, which calls for standardisation.

Consent and Data Utility: Key insights around consent suggest that consent is a tool for research subjects to control their data and that this control sometimes leads to trade-offs between privacy and data utility, especially when sharing brain data across borders where consent forms cannot be modified. Some insights suggest consent should be limited by law as data subjects can never fully understand the implications of consent around data sharing.

Balancing Privacy and Data Utility: Participants emphasise the need for an agreement on the level of privacy, protection, and security of data in infrastructures while considering the balance between privacy and data utility. This insight highlights the importance of implementing privacy and security measures without compromising the usefulness of data.

Ethical Considerations in AI Algorithm Development: Virtue positions provide ethical considerations surrounding the development of AI algorithms that utilise brain data under different legal frameworks. The insight indicates that researchers and developers need to be mindful of the ethical implications of AI algorithm development, even if certain practices are legally permissible.
Consent and Communication: Participants recognise the duty to process consent for the collection of brain data.They express concerns about the inadequacies of current consent models, calling for extra obligations.Clear communication with data subjects is highlighted, ensuring they understand the potential downstream uses of their data, especially when using invasive technologies that can alter brain chemistry of brain data subjects. Appropriate Privacy and Protection Measures: Participants emphasise the need for appropriate measures to ensure privacy, confidentiality, and data protection while sharing brain data especially in the form of brain data images.The insights obtained show that current techniques may not fully address re-identification risks, especially in the sharing of brain data images and in personalised medicine modeling due to the use of human imaging data to build models.This can also be linked to the lack of clarity in what can be classified as personal information in the context of brain data. Development of Appropriate Consent Models: The perception by participants underscores the practical need to develop appropriate consent models in brain data research, as current models may not address the unique aspects of brain data.Along with development of appropriate consent models insights also emphasise the significance of carefully managing consent during data sharing to ensure ethical practices. Importance of Ethical Virtues: Insights show that key stakeholders in brain data research should possess certain ethical virtues, including integrity, reflexivity, responsibility, and accountability.This insight emphasises the practical significance of these virtues in guiding ethical decision-making and behavior in the context of brain data research. Citizen Engagement and Education: Key insights obtained emphasise the duty to engage and educate data subjects about the necessity of acquiring brain data for research, especially in creating interventions around brain diseases. Addressing the stigma associated with brain diseases is crucial in changing perceptions and encouraging data subjects to willingly provide brain data for research. Education and Independence of Review Ethics Boards (REBs): The insights highlight the importance of engaging and educating ERBs to understand data-related issues and concepts and lack of expertise in these areas may hinder fairness in the ethics review process.Practical insights also stress the need for independent REBs free from institutional control to ensure efficient ethics approval as the lack of autonomous REBs can delay research projects, affecting their progress. Establishing Common Ground Rules for Research and Industry Oversight: Insights point toward the need for common rules and agreements regarding data collection, processing and sharing practices between the research community and the industry or private sector.Such common ground would promote equity and fairness in practices involving brain data.This insight also highlights the importance of clarity in processing brain data within different sectors. Necessity of Responsible Research Conduct: Insights about the necessity of responsible and ethical conduct in brain data research are also deduced. It underscores the importance of adhering to ethical principles, being accountable for research actions, and maintaining transparency in data sharing, all of which are practical measures to ensure the ethical progression of brain data research. 
Data Subjects' Control: Theoretical and practical insights stress the importance of data subjects having control over their data, promoting independence and autonomy. The need for large corporations to offer more control over data is emphasised, as data subjects should not be treated as a means to an end but as autonomous individuals. Therefore, this calls for more frameworks that permit citizens or data subjects to have access to and control over their brain data, which is essential in the governance of brain data.

Revising Legal Frameworks that Criminalise Brain Diseases: Participants express perceptions on the need for more engagement to change existing legal frameworks that criminalise certain brain-related conditions. This insight highlights the necessity of revising laws to provide adequate care for individuals with brain conditions rather than criminalising them, which can promote the welfare of brain data research participants and promote human rights.

Fair and Transparent Access to Brain Data Research Outputs: Deontological views highlight the duty to provide fair and equitable access to data, allowing data subjects to access and use their data. Ensuring research results and benefits are accessible in a simple and understandable format is crucial to promote accountability and avoid hindrances like paywalls for data subjects trying to access research outputs. This also promotes participatory brain data governance.

Misuse of Brain Data: The potential misuse of brain data is a major concern among neuroscientists. The reuse of brain data, while offering valuable insights when combined with other data, is viewed as potentially jeopardising individual privacy, especially in cases of broad reuse.

Validating Simulated Brain Data: The insights show that neuroscientists emphasise the need to develop rules and agreements to validate the correctness of simulated brain data, as it may lack standardised measurement methods compared to other forms of brain data.

The study reveals the underlying tensions and internal contradictions when considering current ethical and legal frameworks and how they influence the conduct of neuroscience. Some of the views of the respondents reflect concerns, issues and possible recommendations which can serve as a catalyst for understanding current contextual challenges in brain data research. When considering the views of the respondents, it can be observed that there are currently internal contradictions and tensions between the need to advance neuroscience, the rights of data subjects, the obligations of neuroscientists, the current structure of ERBs, and data repositories.
It appears that the need to maintain privacy, confidentiality, and protection sometimes conflicts with the need to share brain data, as current safeguards might be considered inadequate in handling issues that surface during sharing and processing. For example, a participant expressed the possibility of re-identification even with the use of anonymisation during sharing. This relates to several practices used in enhancing privacy and data protection. Certain agreements need to be established to determine whether blinded, coded or pseudonymised data is equivalent to anonymised data. Also, these agreements have to be in conjunction with an underpinning law overseeing such privacy-enhancing techniques in the relevant context. Such agreements might need to go in tandem with a global consensus around situations such as the sharing or removal of face data in human imaging data and what constitutes personal information. This is in line with the argument raised by a respondent asserting that the shape of the brain can be classified as personal information, since people have different sulcal and gyral patterns which can serve as fingerprints. This is supported by the GDPR (Feiler et al., 2018), which states that if an individual can be identified from the information being processed then such information can be classified as personal information. Such arguments show that different contexts give rise to different expectations and preferences in relation to data protection, privacy, and confidentiality (Nissenbaum, 2004). Furthermore, while the deontological view of protecting the data of a data subject is essential, as prescribed by several laws and guidelines, consequentialist views raise tensions around the utility of brain data when advanced privacy-enhancing techniques are applied without recognising the need to enhance utility. Therefore, creating a balance between the utility of data and increasing access is essential (Eke D. et al., 2021), as a reduction in the utility of data through overprotection reduces the overall benefit of brain data, which can indirectly reduce the benefit of such data to science and society.
The need to create such a balance between the rights of data subjects and the rights of society to benefit from brain data research is also reflected in the broad reuse of brain data. Such practice involves combining multiple brain datasets to achieve big brain data, which provides valuable insights. Although respondents provided views which promote the broad reuse of data, consequentialist concerns enable us to identify the risks involved with broad reuse in two dimensions: first, the risk of re-identification, which involves jeopardising the privacy of the data subject, and second, the risk of voiding consent, because broad reuse alters the original questions in the design of the research and the consent. With the advent of commercial actors and interest in AI, these risks will tend to increase, as such AI technologies or neurotechnologies rely on substantial amounts of brain data for the training datasets used to develop them. This is in line with deontological views on the inadequacies of current consent models in handling brain data research and the call for better consent models (Ochang et al., 2022; Wiertz and Boldt, 2022). However, the consequentialist view raises internal contradictions around consent which create tensions with broad reuse. For example, one of the respondents pointed to the fact that research participants do not usually understand the implications of consent and that consent should therefore be limited by law. This shows that some researchers see consent as a tool used by data subjects to control their data and minimise risks, which reduces the ability to share data and to use data for broad reuse. This can be pictured as causing harm indirectly because it prevents the ability to utilise brain data for the benefit of society and sometimes the data subject.
Therefore, the role of ERBs involves analysing brain research projects and balancing such risks during approval in order to enhance proportionality, as pointed out by some of the views. This involves having the necessary expertise in review boards to carry out such reviews without placing undue emphasis on risks in a negative sense. Some of the views point to the fact that ERBs do not have the necessary expertise to carry out such balancing of risks, as there is currently a lack of knowledge of data-related issues and of converting complex ethical and legal frameworks into practice. A typical example expressed by one of the respondents was the concept of a GUID (globally unique identifier) used by the NIH to make a linkage between studies where the same individual participates in multiple studies. This makes it possible to create an anonymous identifier that allows the linkage of individuals across datasets without storing their personal health information or their personally identifying information. However, the respondent states that many ERBs or review boards are not familiar with these concepts. This raises perceptions and views around a lack of procedural fairness. Therefore, deontological and consequentialist concerns provide views that show the need to engage and educate ERBs, which will ensure that decisions about risks are well informed. This is also in relation to the concerns around the structure of review boards due to the evolving landscape of research and the need for alternate review board models (Grady, 2015). Consequentialist views point to the fact that review boards need to be independent and autonomous, as a lack of these might stall research in situations where industrial disputes arise. Also, having patients, family members, and people with lived experience, for example, involved in the governance of ERBs can enable ERBs to carry out a fair evaluation of brain data research projects during review, as these are stakeholders who can make judgements and conclusions on the merits and demerits of brain data research based on past experiences. With commercial actors now becoming increasingly active in the use of brain data, the existence of commercial review boards (Lemmens and Freedman, 2000) as visible oversight structures might also be necessary in ensuring that brain data is used responsibly, which might also promote trust. This is because the views of some of the respondents show that commercial actors have less visible oversight structures compared to the research sector.
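The GUID concept described above can be given a rough, illustrative shape. The sketch below is not the actual NIH GUID algorithm or tool; the choice of attributes, the keyed-hash construction, and the key-management arrangement are assumptions made purely to show how the same individual can be linked across studies without their identifying attributes ever being stored.

```python
# Rough illustration of a GUID-style linkage identifier (NOT the NIH GUID algorithm).
import hmac
import hashlib
import unicodedata

def pseudonymous_id(first_name: str, last_name: str, date_of_birth: str, secret_key: bytes) -> str:
    """Derive a stable identifier from personal attributes without retaining them.

    The same person enrolled in two studies yields the same identifier, so records
    can be linked across datasets while the attributes themselves are discarded.
    """
    def norm(text: str) -> str:
        # Normalise case, whitespace and Unicode form so trivial variations still match.
        return unicodedata.normalize("NFKC", text).strip().lower()

    message = "|".join([norm(first_name), norm(last_name), date_of_birth]).encode("utf-8")
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Two studies using the same key (hypothetically held by a trusted intermediary)
# derive the same identifier for the same person despite formatting differences.
key = b"shared-secret-held-by-a-trusted-party"
study_a = pseudonymous_id("Ada", "Lovelace", "1815-12-10", key)
study_b = pseudonymous_id(" ada", "LOVELACE", "1815-12-10", key)
assert study_a == study_b
```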
The deontological and consequentialist views also allowed us to gain insights into the perceptions around brain data repositories, intellectual property, data integrity and FAIR. Deontological concerns revealed the issues around the responsibilities of repositories in ensuring data integrity, provenance tracking and intellectual property. To promote FAIR, repositories use data use agreements (DUAs) (Eke D. et al., 2021) and licenses such as CC0 or CC BY as ethical and legal instruments to ensure that brain data is obtained and submitted ethically (Hrynaszkiewicz and Cockerill, 2012). DUAs also ensure compliance in repositories. The deontological views around the inability to understand how to enforce licenses and DUAs in repositories, and how to take legal action against the misuse of brain data obtained from repositories or the submission of unethically acquired brain data, show that some of these instruments fall short in addressing issues of misuse, as they do not provide strong mechanisms for enforcement. It also raises arguments around the need to promote open access and the need to promote data integrity. Although increasing procedural hurdles around repositories can be viewed as draconian, given the procedural challenges already encountered in the collection, sharing, processing and storage of brain data, it calls for an understanding of the implications of putting these licenses and data use agreements in place, of how they are going to be enforced, and of whether they are going to be enforced at all. This is in line with the views of some of the respondents who pointed out that some of the data in repositories meet the FAIR and open access requirements of journals but are unusable. Such views call for clear and visible structures which define liability, and for the development of visible global structures for the verification of ethics approval for brain research. With the advancement of brain data research and the development of neurotechnologies, there have been recent discussions about neurorights, which focus on establishing guidelines for protecting human rights as neurotechnology advances. While there have been many arguments in the literature, the ethical positions of the respondents, which are mainly consequentialist in nature, add to the current debate on the framing of neurorights (Ienca, 2021; Rommelfanger et al., 2022). There were mixed views regarding the framing of neurorights, and most of the respondents had not heard of the concept of neurorights. This raises a fundamental concern in the framing of neurorights, as the respondents, who are neuroscientists involved in large-scale projects, had not heard of neurorights. The consequentialist views of the respondents also justify some of the claims of previous research regarding the framing and conceptualisation of neurorights. The contextual views regarding what neurorights should entail provide important insights and show that neuroscientists, who are important stakeholders, should be involved in the framing of such rights. For example, some of the views asked whether thoughts, consciousness, and identity should also be considered and whether there is a hard border between what is considered as "neuro" and the rest of the body. Also, the respondents have the perception that neurorights will add to the already complex landscape of ethical and legal issues and might prevent the collection of brain data for research and the advancement of neurotechnology, which can benefit society and data subjects. Some of the respondents from the European region also pointed out that brain data is well regulated there and the rights of data subjects are well protected, and that such new rights will have more effect in other regions where regulations such as the GDPR and the AI Act are lacking. This is in line with the deontological and consequentialist positions around the legal basis for the collection and processing of brain data. The deontological positions expressed the lack of clarity in
the laws, which results in the inability to follow rules and perform duties, while the consequentialist views expressed the consequences of such inability to follow rules due to the non-clarity or non-existence of brain data laws and guidelines. Some of the concerns show that existing ethical and legal frameworks in some regions do not provide the necessary guidance to ensure compliance when carrying out brain data research, which results in both intentional and unintentional misuse of data and even violations of human rights. For example, some laws still criminalise attempted suicide, which can be a result of a brain condition (Lew et al., 2022). Furthermore, the lack of clarity in the legal basis can be observed in the contextual views relating to the retention and destruction of brain data, as different respondents propose different retention and destruction timelines for brain data. In addition to deontological and consequentialist concerns, the study also highlights virtue ethics, which focuses on the moral virtues and traits that key actors should possess and express. Some of these include being accountable for data even after sharing, ensuring data integrity and quality, and showing responsibility and accountability around the use of brain data for commercial AI and neurotechnologies. Some of the positions highlight the fact that researchers and other data users might sometimes prioritise the benefits of brain data research to society and scientific curiosity above the fundamental rights of both human and animal subjects. Sometimes this involves operating under a framework that is legal but unethical. Also, in the development of AI and neurotechnologies much emphasis has been placed on human subjects and their interaction with neurotechnologies, but one of the concerns raised by the respondents highlights the need to also address the interaction of such neurotechnologies with animals, which can violate animal rights. This is in line with arguments around the limitations of the 3Rs tenet (Replacement, Reduction, Refinement) and the call to embed responsibility and reflexivity in line with Responsible Research and Innovation (RRI) (McLeod, 2015).
In addition to deontological, consequentialist, and virtue-ethical positions, the study also highlights the need for key actors to agree on the basis of common interest, which is underpinned by social contract theory. Social contract theory aims to provide reasons why members of society have reason to endorse and comply with fundamental laws, institutions and principles in a particular context (D'Agostino et al., 2021). Having a harmonised set of ethical and legal requirements for governing data ensures that neuroscientists and key actors involved in the management of brain data are acting ethically and are legally compliant. Based on the results, the social contract perspective encompasses the development of processes, guidelines and rules around several areas of concern, such as: balancing privacy and utility; agreements on the procedure for sharing brain images and the definition of personal information in the context of brain data; the development of consent models; common rules regarding the oversight applied to the research community and to the commercial sector; decriminalising brain diseases; rules and agreements on how to measure and validate the correctness of simulated brain data; rules regarding the open sharing of data to meet FAIR requirements while meeting the requirements of funding agencies and institutions; refining licensing structures in repositories to clarify liability and enforcement; a common structure for handling ethical and legal compliance in the academic and private sectors; sharing models built around trust; agreements around the conceptualisation of neurorights; and agreement on data retention and destruction guidelines.

Conclusion

The findings of the study offer valuable insights into the ethical stances held by key actors involved in brain data research. This research highlights the myriad challenges and deliberations inherent in the pursuit of a globally informed framework for governing brain data. Moreover, it illuminates several critical considerations that demand thorough examination in the advancement of global brain data governance. The study provides essential insights, considerations, and recommendations that align with the various dimensions of brain data governance, encompassing human rights, participatory governance, regulations, policies, guidelines, and the promotion of ethical innovation. Employing ethical theories, this research exemplifies how these theories facilitate the balancing of interests among stakeholders across different regions while emphasising the importance of value alignment. Furthermore, it furnishes normative guidance by synthesising diverse positions, principles, and values that should serve as the foundation for data governance decisions and actions. The insights derived from this research also underscore the role of ethical theories in informing the assessment of potential risks and benefits associated with the processing of brain data, as perceived by neuroscientists.
This research has demonstrated that improving the understanding of the ethical stances held by pivotal stakeholders can bring greater nuance to discussions in brain data governance. There is a need to establish day-to-day practices and routines that bring clarity, thus aiding essential stakeholders in effectively navigating the ethical and legal complexities inherent in brain data research. Some crucial considerations encompass fundamental concepts such as data protection by design and by default, and privacy by design (Eke D. et al., 2021). Moreover, ethics by design (European Commission, 2021) and by default stands as another pivotal concept capable of infusing ethics into everyday decision-making processes, thereby fostering ethical adherence. This is particularly valuable as certain perspectives from this study have emphasised the necessity of seamlessly integrating contextual perceptions and concerns into the practical implementation of data governance frameworks. The findings show the need to move from recommendations and discussions to practical implementation of solutions to address concerns and to provide ethical and legal clarity, which will advance the navigation of current ethical and legal hurdles and advance discussions in the development of a global governance framework.

FIGURE: Brain data governance dimensions.
FIGURE: Ethical theories and brain data governance relationship. In Figure 2 we illustrate the relationship between ethical theories and data governance. Ethical theories can provide the foundation for ethical decision-making within data governance. By integrating these theories into their practices, key stakeholders (e.g., researchers) and organisations who collect, store, and share brain data can ensure that their data handling processes are aligned with moral principles and societal values, ultimately leading to responsible and trustworthy data governance.

… justification for the research, while the use of the word robot was seen as less sensitive. Perceptions regarding data integrity were expressed in terms of open access and requirements for FAIR. The underpinning consequentialist view, as portrayed by a respondent, is that while public funding bodies, data repositories and publishers promote open access, there is currently a non-democratic effect regarding open access, especially in regions with little or low access to funding. This is because the structure of open access usually requires payments that might stall open access and sharing in regions with low research funding.

FIGURE: Ethical and legal principles and their underpinning ethical positions.
TABLE: Demographics of neuroscientists in the study. Although subjective interpretations were used to analyse the statements, standardisation of the analysis was achieved through consistency checks and a process of deliberative mutual adjustment known as reflective equilibrium (de Maagt, 2017).
16,833.6
2023-11-09T00:00:00.000
[ "Law", "Political Science", "Philosophy" ]
Determinants of Occupational Health Problems of Industrial Workers in Coimbatore, Tamil Nadu

The problems of industrial workers are a matter of concern for the partners of industry, research scholars, academicians, policy makers, planners, labor leaders, and social workers. Recently there has been growing awareness of the existence, importance, and needs of unprotected workers. Unprotected workers are, by definition, disadvantaged workers, though the degree varies from section to section. There is a lot of research in the field of unprotected workers, but very little research has been carried out on the unprotected industrial workers belonging to engineering and foundry units, who form a sizeable proportion of the total labor force. There is an immediate need to focus on occupational safety for formal/organized industrial workers. In India, occupational health and safety has so far benefited only workers of the formal sector. Against this background, the current study attempted to identify the determinants of occupational health problems of industrial workers in Coimbatore, Tamil Nadu. The study found that the likelihood of the sample industrial workers suffering from occupation-related health problems increased with an increase in their current age; such likelihood is also noted as moderately higher among those who are in debt, but, unexpectedly, such suffering from health problems is also found to be higher among those who own assets.

Introduction

Labor plays a vital role in the industrial system and in the economic growth of a country. Hence, there is a need for a clear understanding of the different labor problems. The problems of industrial workers are a matter of concern for the partners of industry, research scholars, academicians, policy makers, planners, labor leaders, and social workers. Recently there has been growing awareness of the existence, importance, and needs of unprotected workers. Unprotected workers are, by definition, disadvantaged workers, though the degree varies from section to section. There is a lot of research in the field of unprotected workers, but very little research has been carried out on the unprotected industrial workers belonging to engineering and foundry units, who form a sizeable proportion of the total labor force. There is an immediate need to focus on occupational safety for formal/organized industrial workers. In India, occupational health and safety has so far benefited only workers of the formal sector. Against this background, the current study attempted to identify the determinants of occupational health problems of industrial workers in Coimbatore, Tamil Nadu.

Methodology

The causes of industrial health hazards that pertain to unsafe conditions can include insufficient workspace conditions such as poor lighting, excessive noise, slippery or unsafe flooring, extreme temperature exposure, inadequate protection when working with machinery or hazardous materials, unstable structures, electrical problems, machine malfunction or failure, and more. The causes of industrial hazards can occur in the environment around the workplace or within the work environment. External causes of industrial hazards may include fires, chemical spills, toxic gas emissions, or radiation. The causes of industrial hazards in these cases might include organizational errors, human factors, and abnormal operational conditions.
The internal causes of industrial accidents can involve equipment or other work-related tangibles, harmful materials, toxic chemicals, and human error (Ezzati M et al., 2002). It is clear from the above that workers in the unprotected segment do not enjoy job security, work security, or social security. The study is confined to Coimbatore, which is one of the most industrialized cities in Tamil Nadu and is known as the textile capital of South India, or the Manchester of South India. Here, an attempt is made to find out the major determinants of the number of occupational health problems from which the respondents suffered, making use of multiple linear regression analysis. Such multivariate analysis allows an accurate assessment of each of the explanatory variables (background characteristics) by taking into account the potential confounding effects of the other variables used in the model. For this purpose, the number of occupation-related health problems is considered here as the dependent variable, which is measured as a discrete count (i.e., the actual number of occupational health problems experienced by the respondents). The independent (explanatory) variables considered for analysis were chosen based on their theoretical importance and their levels of significance with the dependent variable. Of the ten variables included in the model, six are treated as discrete, and the other four are of the dummy-variable type (two categories only; for details see the table above). Results based on the multiple linear regression analysis (Table 1) highlight that the ten variables included in the model together explained 25.1 percent of the variation in the number of work-related health problems suffered by the respondents during the 12 months preceding the survey (F = 11.338; p < 0.001). Controlling for all the variables included in the model, the likelihood of respondents suffering from occupation-related health problems appeared to increase in a highly significant way with their current age (β = 0.192; p < 0.001). It is conspicuous to note that respondents who have debt have shown a fairly higher propensity to suffer from a greater number of work-related health problems than those who do not have any debt, and statistically this finding turned out to be moderately significant (β = 0.111; p < 0.05). Conversely, the probability of suffering from one or more work-related health problems decreases significantly with an increase in the respondents' satisfaction with the factory environment (β = −0.169; p < 0.001) and with an increase in their level of education (β = −0.154; p < 0.01). Another eye-catching finding is that respondents whose nativity is urban have exhibited significantly lower odds of suffering from occupational health problems. Next to these, it is also striking to note that social status background (caste as social strata) has demonstrated a net effect, in a negative direction, on the extent of suffering from work-related health problems (β = −0.127; p < 0.01). This finding supports the observation that members belonging to higher caste groups (BCs in the present context) are likely to suffer from a smaller number of health problems, closely followed by MBCs, whereas respondents who belonged to SC/ST (lower in the social strata) reported suffering from a higher number of such health problems.
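For readers who wish to reproduce this type of model, the sketch below shows how such a multiple linear regression could be specified in Python with statsmodels. The file name and column names are hypothetical stand-ins for the survey variables described above (the original analysis may well have been run in different software), so treat this as an illustration of the model structure rather than the authors' exact procedure.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per worker, with the discrete count of
# occupational health problems in the last 12 months and the ten predictors.
df = pd.read_csv("workers_survey.csv")

model = smf.ols(
    "n_health_problems ~ age + education + factory_env_satisfaction"
    " + caste_rank + labor_skill + monthly_earnings"          # discrete/ordinal predictors
    " + has_debt + owns_assets + urban_native + female",      # 0/1 dummy variables
    data=df,
).fit()

print(model.summary())                # coefficients, t-statistics, p-values
print("R-squared:", model.rsquared)   # share of explained variation (reported as 25.1%)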
Another noteworthy finding is that as the respondents' type of labor moves from unskilled to semi-skilled and then to skilled, there seems to be a decrease in the number of health problems they suffered, and statistically such association is moderately significant (β = −0.114; p < 0.05). Yet another conspicuous finding is that female respondents have shown a lower tendency to suffer from occupation-related health problems than their male counterparts, to a moderately significant extent (β = −0.107; p < 0.01). All these findings are along the expected lines. Contrary to expectation, respondents who own assets reported suffering from a higher number of occupation-related health problems than those who do not own such assets (β = 0.154; p < 0.01). It is also worth noting that although the likelihood of suffering from occupational health problems appears to decrease with the respondents' average earnings per month, the t-statistic did not turn out to be significant (β = −0.154; p < 0.01).

Conclusion

In sum, from the results above, it is conspicuous that the likelihood of the sample industrial workers suffering from occupation-related health problems increased with an increase in their current age, and such likelihood is also noted as moderately higher among those who are in debt; but, unexpectedly, such suffering from health problems is also found to be higher among those who own assets. On the other hand, the tendency to suffer from different work-related health problems decreased with an increase in the respondents' satisfaction with the factory environment, level of education, social status (caste background: SC/ST, MBC, and BC), and type of labor (unskilled, semi-skilled and skilled). Likewise, the odds of suffering from one or more occupation-related health problems seem to be lower among those whose nativity is urban and who are female.
1,820
2020-10-01T00:00:00.000
[ "Economics" ]
Coral larvae are poor swimmers and require fine-scale reef structure to settle

Reef coral assemblages are highly dynamic and subject to repeated disturbances, which are predicted to increase in response to climate change. Consequently there is an urgent need to improve our understanding of the mechanisms underlying different recovery scenarios. Recent work has demonstrated that reef structural complexity can facilitate coral recovery, but the mechanism remains unclear. Similarly, experiments suggest that coral larvae can distinguish between the water from healthy and degraded reefs; however, whether or not they can use these cues to navigate to healthy reefs is an open question. Here, we use a meta-analytic approach to document that coral larval swimming speeds are orders of magnitude lower than measurements of water flow both on and off reefs. Therefore, the ability of coral larvae to navigate to reefs while in the open ocean, or to settlement sites while on reefs, is extremely limited. We then show experimentally that turbulence generated by fine-scale structure is required to deliver larvae to the substratum even in conditions mimicking calm back-reef flow environments. We conclude that structural complexity at a number of scales assists coral recovery by facilitating both the delivery of coral larvae to the substratum and settlement. Reef corals have evolved in a highly dynamic environment repeatedly subject to many types of disturbances; in particular storms and floods 1 and, more recently, coral bleaching and mortality caused by global warming 2 . The scale and severity of many of these disturbances are predicted to increase in response to climate change 3 , and consequently there is an urgent need to improve our understanding of the mechanisms underlying reef recovery. For example, while reef structural complexity is often associated with increased rates of recovery, the precise mechanism is unknown 4 . The recovery of reef coral assemblages from catastrophic disturbance is generally dependent on larval replenishment from other reefs 5,6 (but see ref. 7). Therefore, any process that increases larval supply should also assist recovery. For example, reefs with high levels of connectivity should recover more quickly than reefs isolated by distance or currents from sources of larvae 8,9 . Similarly, reefs that are more effective at capturing larvae from the plankton should recover more quickly than other reefs 10 . Recovery is also dependent on successful settlement and recruitment. For example, the role of reef micro-structure, such as the crevices excavated by echinoderms while grazing 11,12 , in providing a refuge from predation [13][14][15] and thereby enhancing post-settlement survivorship is well established. In contrast, very little is known about the process of settlement on reefs. Under controlled laboratory conditions, numerous factors can influence whether or not a coral larva settles, including chemical cues and phototaxis 16,17 . However, there is a paucity of direct observations of coral larval settlement in complex topographical and hydrodynamic environments 18,19 . The entrapment of larvae by reefs, their interaction with complex reef structure, and settlement are all likely to be influenced by larval swimming speeds and sensory capacity. An important historical theme in marine ecology has been the tendency to underestimate the capacity of larvae to influence their fate.
Marine larvae were initially considered to be passive particles with little capacity to sense or respond to their environment [20][21][22] . These ideas led to the paradigm of massive export of larvae from the reef of origin followed by dispersal over a large spatial scale 23,24 . However, subsequent research demonstrated that the larvae of many marine taxa can respond to a diverse array of environmental and chemical cues that give them some capacity to influence their fate 18,25 . For example, crustacean larvae are able to remain in estuaries by performing tidally-synchronized vertical migrations 26 , and reef fish larvae can smell nearby reefs 27 and swim towards them for sustained periods 28 . Similarly, patterns of dispersal among coral species are influenced by aspects of their biology, such as rates of larval development 29,30 and larval response to settlement cues 31,32 . Nonetheless, the capacity of marine larvae to influence patterns of dispersal and settlement through swimming behavior is likely to be limited because the larvae of many taxa, in particular scleractinian corals, are very poor swimmers 20 . For example, even if coral larvae can distinguish between waters from healthy and degraded reefs 33 , it is highly unlikely they will be able to navigate to healthy reefs if their swimming speeds are less than currents in the open ocean. Here, we first compare data from the literature on coral larval swimming speeds with empirical measurements of water flow both on and off reefs to explore the capacity of swimming ability to influence dispersal and settlement. We then test the ability of coral larvae to settle in a flume using current speeds that mimic the flow regime on reefs and explore the role of micro-structure in affecting settlement.

Methods

Larval swimming speeds. We included all data on coral larval swimming speeds based on our knowledge of the literature, because most of these data are inaccessible to current search engines, such as the work of Japanese scientists in Palau in the 1930s and 1940s. Swimming speeds were measured as distance covered per unit time in all studies. The only data we filtered were from Harrigan 34 , where swimming speeds for only the first 7 days were used to allow direct comparisons with the other studies. The only data excluded were those of Hodgson 35 , because we do not accept that it is possible to identify coral larvae collected in plankton tows to species. Our meta-analysis included 9 studies representing over 450 measurements of individual larvae from eight coral species (Table 1). Using R 36 , an ANCOVA was run on the log-transformed swimming speeds to test for an effect of larval size and swimming direction (horizontal, upwards and downwards) on swimming speeds. Larval size did not have a significant effect on swimming speed, and removing it greatly improved the model based on Akaike's Information Criterion. Differences among swimming directions in the final model were assessed using Tukey Honestly Significant Differences (TukeyHSD).

Table 1 (caption): Swimming speeds (in mm s−1) for hermatypic scleractinian coral larvae. n is the number of larvae; SE is standard error; ^ is the mean calculated as the average of the maximum (max) and minimum (min) values; * is the mean calculated from larvae aged 2 to 7 days old; n/a is not available.

… depth layer, and these were averaged over the layers in the top 20 m of the water column to give depth-averaged horizontal and vertical current speeds.
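The ANCOVA, model selection and Tukey comparison described above were carried out in R; the sketch below is a rough Python equivalent using statsmodels, with a hypothetical input file and column names, intended only to illustrate the steps (log-transform, drop larval size if AIC favours the simpler model, then pairwise comparison of swimming directions).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical records of individual swimming-speed measurements (mm/s),
# with swimming direction and larval size as candidate predictors.
speeds = pd.read_csv("coral_larval_speeds.csv")
speeds["log_speed"] = np.log(speeds["speed_mm_s"])

full = smf.ols("log_speed ~ C(direction) + larval_size", data=speeds).fit()
reduced = smf.ols("log_speed ~ C(direction)", data=speeds).fit()
# Keep the simpler model if it has the lower AIC, as in the original analysis.
print("AIC with size:", full.aic, "  AIC without size:", reduced.aic)

# Pairwise comparison of horizontal, upward and downward swimming.
print(pairwise_tukeyhsd(speeds["log_speed"], speeds["direction"]))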
Measuring water motion on the reef. Water motion over the reef was measured on the fringing reef of Lizard Island, Australia, between South and Palfrey Islands (14.700°S, 145.449°E), known as Trimodal Reef. An acoustic Doppler velocimeter (ADV) was deployed from November 16 to 24, 2013, recording samples at 8 Hz in a burst of 2,048 samples once every 20 min. Data were summarized by recording the mean velocity components of each burst in the toward-shore (u), along-shore (v) and vertical (w) directions. The spectra of individual bursts were examined to determine the dominant period of oscillation. Water velocities were measured using particle image velocimetry (PIV) 38 . A vertical plane of water parallel to water motion was illuminated by a laser sheet (300 mW, 532 nm). Waterborne particles within the laser sheet were filmed at 30 fps using a digital video camera in an underwater housing with a 532 nm band pass filter. Both laser and camera were attached to an aluminum tripod frame 40 × 40 × 30 cm (L × W × H) in size, and a black felt curtain was extended ~60 cm behind the laser sheet to reduce background light. Eight sites were chosen along a transect perpendicular to the reef crest and in line with the ADV (±2 m to either side along-shore), from the crest to 50 m towards shore. At each site, 2 min of video were recorded in an 8 × 5 cm (W × H) field of view (FOV) over a relatively flat area of the site, approximately central to the shoreward and seaward edges of the raised feature. The FOV was directly above the substrate, including the upper 0.5 to 1 cm of the substrate. Footage from two sites (located 1.0 m and 3.2 m behind the reef crest) was chosen for subsequent PIV analysis. Video footage was stabilized using the Deshaker software 39 in VirtualDub 40 , and 30 s of each two-min clip was then analyzed using PIVlab 41 in Matlab (Mathworks). The velocity measurements (u, w) of each site were recorded as a single vertical velocity profile located centrally in the FOV for each frame, starting approximately 1 mm from the substrate and increasing in 1.3 mm height increments. For each site, mean toward-shore velocity (u) as a function of distance from substrate (h) was calculated across all processed frames. The velocity gradient was assumed to be linear between the substrate (where u = 0) and the closest velocity measurement to the substrate, a conservative estimate. Assessing settlement behavior of Isopora cuneata larvae. The study organism, Isopora cuneata (Family Acroporidae), a brooding coral, was chosen because of its high abundance on Trimodal Reef and the fast swimming speeds of congeners (Table 1). Branches of I. cuneata colonies were collected in the field and were placed in an outdoor flow-through seawater tank. Flow was suspended overnight and larvae were collected in the morning by pipette and kept in 0.2 μm filtered seawater (FSW) until used in the experiments. To assess larval settlement behavior in flow conditions similar to those found on coral reefs, I. cuneata larvae were placed in a recirculating flume capable of generating oscillating water motion (Fig. 1a and b). The oscillating flume was composed of acrylonitrile-butadiene-styrene pipe (7.6 cm outer diameter; 45 × 30 cm, L × H) with a clear, rectangular plexiglass working section (15 × 2.8 × 5 cm inner L × W × H). Flow recirculated in a closed vertical loop, driven by a propeller located in the vertical arm of the flume downstream of the working section. The propeller was attached to a servomotor.
The rotation rate of the servomotor was controlled by an amplified analog voltage signal output by a custom Matlab script and transduced by a data acquisition card (National Instruments, model NI USB-6211). Flow-straightening grids were placed on either end of the working section. The middle of the working section was illuminated from above by light from an LED source (LED Lenser®, model P14) passed through a narrow slit (3 mm) sitting atop the working section that spanned the length of the chamber across the middle of the working section. Larval swimming behavior was initially measured in still water in the presence of substrates containing a settlement cue. Cue-laden substrates were prepared in two ways:
1. Slide treatment: Crustose coralline algae (CCA) chips and attached coral matrix collected from the field were dried, pulverized, and then secured to a standard glass microscope slide with silicone adhesive. The slide was cured for 12 h before being placed on the floor of the flume working section.
2. Tile treatment: A rectangular fragment of a brick settlement tile (deployed onto the reef ~3 months prior and collected immediately prior to the experiment) containing live CCA and other algae (7 × 2.5 × 1 cm, L × W × H) was deposited directly onto the floor of the flume working section.
Larvae were exposed to each substrate separately and were not reused for any trials in this experiment. For each treatment, the flume was filled with FSW, and the water was allowed to stabilize for ~10 min. Twelve I. cuneata larvae were placed in the chamber with a pipette, and water motion was allowed to stabilize for 1 min. Larvae were filmed for 10 min at 30 fps across a 4 × 2 cm (W × H) FOV. Kinematic data (position, velocity, orientation, rotation) of individual larvae were tracked from the recorded footage using a custom Matlab script. Footage of larvae that did not remain in the illuminated midsection of the camera's FOV was excluded from analysis. Flow velocities and oscillation periods for the flume experiment were set based on field flow measurements on the reef. These were calculated by manually tracking 10 particles in field PIV footage recorded near the reef crest (3 m behind the ADV) and on the back reef (27 m behind the ADV). Particles approximately 2 cm above the substrate were tracked using the MTtrackJ plugin 42 in ImageJ (NIH). Near the crest, water motion oscillated between 0 and 11 cm s −1 while flows on the back reef oscillated between 0 and 5 cm s −1 . These two velocity ranges simulated high-flow (crest) and low-flow (back-reef) flume conditions, with a 3 s oscillation period. Flow patterns were calibrated by manually tracking video footage of neutrally buoyant hydrated Artemia cysts (serving as passive particles) and adjusting the analog voltage signal to the servomotor until the velocity ranges matched the above values. Initial trials showed that larvae exposed to the high-flow regime had no chance of successful attachment. As a result, only low-flow conditions were used in all trials of this experiment (Fig. 1c). The field of view covered the entire block surface. The three treatments (slide, tile and block) represent conditions of increasing local turbulence within the FOV as a result of increasing topographical or structural complexity, which was predicted to increase larval contact with the substratum. Before each treatment, the flume was emptied and rinsed of larvae and particles, then filled with fresh FSW. For each treatment, approximately 60 I. cuneata larvae were introduced into the flume via pipette while the water was in motion.
The experimental setup was allowed to stabilize for 2 min, then the working section of the flume was filmed for ~1 h. Subsequently, ~100 neutrally buoyant Artemia cysts (passive particles) were introduced by pipette and filmed for approximately 10 min to characterize flow conditions. Successful adhesion (attachment to the substrate for at least 5 min after initial contact) was observed solely in the block treatment. The kinematic data of larvae were tracked using a custom Matlab script, and incidences of contact with the substrate (either followed by successful attachment or immediate detachment) were recorded. Analysis of larval contact and attachment. To estimate statistical differences among topography treatments and live and dead larvae, we used bias-reduced logistic multiple regression to avoid complete separation in non-block treatments that had no or few successes 43 . Bias-reduced generalized linear models with a logit link function were run for larval contact and attachment separately using R 36 .

Results and Discussion

Our meta-analysis of coral larval swim speeds [46][47][48][49][50][51][52][53] (Table 1) shows that swimming speeds are much lower than the speeds of the tidal currents and orbital wave motions found in and around coral reefs (Table 2). Coral larval swimming speeds were not associated with larval size but did vary with the direction of swimming (F 2,15 = 13.72, p < 0.001). Larvae swam faster when heading downwards than in the horizontal or upwards direction (Fig. 2). Mean swimming speeds for each species ranged from 1.57 mm s −1 for Heliofungia actiniformis, swimming horizontally, to 4.79 mm s −1 for Pocillopora damicornis, swimming downward (Table 1). The range of swimming speeds among all 450 measurements was 0.08 mm s −1 to 6.49 mm s −1 (Table 1). Horizontal water current speeds in the ocean were 1-4 orders of magnitude greater than larval horizontal swimming speeds, and vertical water current speeds were 1-3 orders of magnitude greater than larval vertical swimming speeds (Tables 1 and 2). In addition, periods of slack water (i.e., the time when current speeds are potentially slow enough to allow larvae to make headway; less than 5 mm s −1 based on maximum swimming speeds in Table 1) were extremely limited over this two-week period (Table 2). Horizontal current speeds never dropped below this 5 mm s −1 threshold, and vertical currents were greater than the threshold for over 90% of the time (Table 2). We conclude that coral larval swimming speeds 20 are orders of magnitude lower than measurements of water flow both on and off reefs. Therefore, the ability of coral larvae to navigate to reefs while in the open ocean, or to settlement sites while on reefs, is extremely limited. Even if coral larvae can distinguish between waters from healthy and degraded reefs 33 , they will not be able to navigate to healthy reefs because their swimming speeds are far too low to overcome currents. Our meta-analysis indicated that the genus Isopora contains particularly fast-swimming larvae (Table 1), and this was confirmed in the flume, where I. cuneata larvae ranked among the fastest larvae, with swim speeds up to 5.8 mm s −1 (Fig. 3). However, this high swimming capacity did not translate into a high capacity for settlement. Of the 95 I. cuneata larvae tracked in the flume, only one made contact with the substratum in the tile treatment and none became attached (Fig. 4).
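The pattern just described, with contact essentially confined to the block treatment, is exactly the kind of quasi-separation that motivated the bias-reduced (Firth-type) logistic regression used in the methods. As a rough stand-in, the sketch below fits a ridge-penalised logistic regression (scikit-learn) on synthetic data that mimics that separation; it is not the method used in the paper, and all numbers are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 180

# Hypothetical per-larva records: dummy-coded substrate treatment
# (slide is the baseline) and live/dead status.
is_tile = rng.integers(0, 2, n)
is_block = (1 - is_tile) * rng.integers(0, 2, n)
is_live = rng.integers(0, 2, n)

# Contact essentially only happens in the block treatment (quasi-separation),
# which makes unpenalised maximum-likelihood coefficients blow up.
p_contact = np.clip(0.02 + 0.5 * is_block + 0.05 * is_block * is_live, 0, 1)
contact = rng.binomial(1, p_contact)

X = np.column_stack([is_tile, is_block, is_live])
fit = LogisticRegression(penalty="l2", C=1.0).fit(X, contact)  # ridge keeps coefficients finite
print(dict(zip(["tile", "block", "live"], fit.coef_[0])), float(fit.intercept_[0]))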
Attachment only occurred when a protruding structure in the form of a block was introduced to break up the flow (Fig. 4b). For the block treatment, contact and attachment also occurred regardless of whether larvae were dead or alive (Fig. 4), although both contact and attachment were higher for live larvae (Table 3). Indeed, our estimates of current velocities on the reef substratum (Fig. 3b) suggest that turbulence generated by the boundary layer 19 is insufficient to enable coral larvae to settle. We suggest that without the additional turbulence and eddies generated by complex micro-structure, coral larvae will not be able to navigate to the substratum even when exposed to low-flow, back-reef flow conditions. We conclude that this fine-scale structure assists coral recovery from disturbance by facilitating both the delivery of coral larvae to the substratum and settlement. We further hypothesize that structural complexity at a larger scale 4 works in a similar way by creating turbulence to capture larvae from the water column as they flow across the reefs. In conclusion, despite the well-documented sensory capacity of coral larvae 16,17 , their swimming speeds are much too low to enable them to navigate to suitable settlement sites under most conditions. When in the open ocean, and even on reefs, coral larvae are essentially lost at sea 10 . Furthermore, because flows that are sufficiently benign for coral larvae to swim directly to the substratum are very rare, some form of topographic structure is required to generate turbulence to capture larvae from the plankton and to deliver them to the substratum. We suggest that this is one mechanism by which structural complexity promotes reef recovery following disturbance 4 . Consequently, maintaining structural complexity at a number of scales on reefs is vitally important in terms of aiding reef recovery. Relevant management actions include limiting factors that reduce complexity, such as destructive fishing practices 54 , and promoting factors that enhance complexity, such as herbivory 11,12 .

Figure caption (Fig. 3): (a) Box plots show the median, interquartile range, and ±1.5 times the interquartile range, with outliers in red. (b) Mean water velocities above the substratum at two positions back from the reef edge at Lizard Island (Great Barrier Reef, Australia), measured using particle image velocimetry. Grey bands represent 99% confidence intervals.
Figure caption (Fig. 4): Settlement success of coral larvae in the oscillating flume tank. Proportion of Isopora cuneata larvae making (a) contact with settlement substrates covered with CCA and forming (b) attachment for more than 5 min. Substrate treatments were: microscope slide, flat settlement tile and block. For substrates where contact was observed, flume experiments were run separately with live and dead larvae. Standard errors illustrate statistical differences; also see Table 3.
Table note: Years for which data were not available are marked as n/a.
4,639.6
2017-05-22T00:00:00.000
[ "Environmental Science", "Biology" ]
A Note on Large N Thermal Free Energy in Supersymmetric Chern-Simons Vector Models

We compute the exact effective action for N = 3 U(N)_k and N = 4, 6 U(N)_k × U(N′)_{−k} Chern-Simons theories with minimal matter content in the 't Hooft vector model limit, under which N and k go to infinity holding N/k and N′ fixed. We also extend this calculation to the N = 4, 6 mass-deformed cases. We show that these large N effective actions, except in the mass-deformed N = 6 case, precisely reduce to that of the N = 2 U(N)_k Chern-Simons theory with one fundamental chiral field, up to an overall multiplicative factor. Using this result we argue that the thermal free energy and self-duality of the N = 3, 4, 6 Chern-Simons theories, including the N = 4 mass term, reduce to those of the N = 2 case under the limit.

Introduction

The rest of this paper is organized as follows. In Section 2, we compute the exact effective action of N = 3 U(N)_k and N = 4, 6 U(N)_k × U(N′)_{−k} Chern-Simons theories, including the N = 4, 6 mass terms, by taking the 't Hooft vector model limit. In Section 3, using the result obtained in Section 2, we discuss the thermal free energy and self-duality of the supersymmetric Chern-Simons matter theories. Section 4 is devoted to summary and discussion. In the Appendix, the supersymmetric Chern-Simons matter actions are written in our conventions.

2 Exact large N effective action

2.1 Fundamental matter fields

In this preliminary section we study U(N)_k Chern-Simons theory coupled to M fundamental scalar and fermionic fields in the 't Hooft limit, in which N and k go to infinity with λ = N/k fixed. We denote the M fundamental scalar fields and fermionic ones by q_A, ψ_A respectively, where A = 1, 2, · · · , M. The case M = 1 was studied in detail in [8]. We are interested in a situation where the theory has U(M) flavor symmetry, which we assume in what follows. The main purpose of this section is to demonstrate how to generalize the previous result to M copies of the matter fields, reviewing the technique employed in the previous study. For this purpose let us start with a generic action S = ∫ d³x (κ L_cs[A] + L_m). (2.1) Here κ is related to the Chern-Simons level k by κ = k/(4π), and D_µ is the covariant derivative acting on the fields as in (2.4). V_m represents a gauge-invariant potential in this system, given by a function of bilinears of the elementary fields q_A, ψ_B in a flavor-singlet way. We suppress the contraction of fundamental gauge indices for notational simplicity. A specific example is N = 3, whose action in our notation is given in A.3. Firstly we separate the gauge field into its U(1) part and SU(N) part. The Chern-Simons coupling of the U(1) gauge field is given by Nk, which means that the propagator of the U(1) gauge field has an extra 1/N factor compared to that of the SU(N) part. Therefore the contribution of the U(1) part of the gauge field is sub-leading in the large N limit, so let us focus on the case when the gauge group is SU(N). In order to determine the exact effective action we fix the gauge degrees of freedom by the (Euclidean) light-cone gauge [1]. This gauge fixing gets rid of the cubic interaction of the gauge field, which enables us to integrate it out. From the equation of motion for A_+ we obtain the constraint (2.5), where T^a is a generator of the SU(N) gauge group.
A solution in Fourier space is given by (2.6). Plugging the solution into the action (2.1), we find the effective action (2.7) of [8], whose interaction terms are governed by kernels C_1(P, q_1, q_2) and C_2(P_1, P_2, q_1, q_2, q_3), proportional to 2πiN/k and 4π²N²/k² respectively and built out of the light-cone and third components of the momenta; their explicit forms are (2.11) and (2.12), and the cubic term contracts C_2 with χ_A{}^B(P_1, q_1) χ_B{}^C(P_2, q_2) χ_C{}^A(−P_1 − P_2, q_3). The ellipsis in (2.7) represents 1/N correction terms and terms which contain η_{AB}, η̄_{AB}. The next step is to introduce auxiliary bilocal fields so that the interaction terms disappear in the action. We can add the terms (2.15) without changing the dynamics. Adding this to (2.7) gives (2.17), where the ellipsis contains 1/N sub-leading terms and γ_{AB}, γ̄_{AB} terms. Since this is quadratic in the elementary fields q_A, ψ_A, they can be integrated out by Gaussian integration, which results in (2.18). Our interest is in the leading behavior in the large N limit. For this purpose we shall focus on evaluating this at the saddle point. A natural ansatz for the saddle point equations is that the solutions respect translational and rotational invariance and covariance with respect to flavor indices. To proceed further we need to specify the potential of the matter fields; we shall do a case study using the N = 3 Chern-Simons theory in the next subsection.

In this subsection we apply the result obtained in the previous section to the N = 3 U(N)_k Chern-Simons theory with minimal matter content. The matter content is two fundamental complex scalar fields q_A and fermionic fields ψ_A, where A = 1, 2. The N = 3 U(N)_k Chern-Simons Lagrangian is given by (A.11), from which the potential of the matter fields can be read off. We again contract gauge indices using bracket notation, in which m denotes a gauge index of the fundamental representation. Therefore S_m in (2.13) is given accordingly, and under the assumption (2.19) it simplifies to (2.25). One will soon notice that this is twice that of the N = 2 U(N)_k Chern-Simons theory with minimal matter content. To show this, let us read off the matter potential in the N = 2 case from (A.10). In the same way we can compute S_m (2.28), and under the assumption (2.19) this proves the relation (2.26). Taking account of (2.20), the total large N effective action of the minimal N = 3 Chern-Simons theory is exactly twice that of the minimal N = 2 Chern-Simons theory in the 't Hooft limit. One might wonder why the large N effective action is insensitive to the difference between the N = 2 Chern-Simons theory and the N = 3 one. To understand this, let us consider the N = 2 Chern-Simons theory with one pair of chiral/anti-chiral fields (Q, Q̄) perturbed by a superpotential of the form W_0 = a(Q̄ T^b Q)², where a is a small positive number. It was shown in [22] that this N = 2 Chern-Simons matter theory with the superpotential flows in the infra-red (IR) to the same N = 2 theory, except that the superpotential coefficient takes a fixed value a_IR of order 1/κ, and N = 2 supersymmetry is enhanced to N = 3 in the IR, so that the IR theory becomes the same as the N = 3 Chern-Simons theory considered above. On the other hand, in the large N limit, large N factorization occurs, so that the leading contribution of the superpotential is given by W = a_IR (Q̄Q)², which vanishes on the SU(2)-symmetric vacuum (2.19).
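The large N factorisation invoked here can be stated schematically as follows; this is a sketch of the standard large N counting rather than a model-specific derivation. For a flavour- and colour-singlet bilinear \mathcal{O} built out of Q and Q̄, connected correlators are 1/N-suppressed, so

\langle \mathcal{O}^{2} \rangle \;=\; \langle \mathcal{O} \rangle^{2} \;+\; O(1/N),

so that at leading order the quartic superpotential contributes only through the expectation value of the bilinear evaluated on the symmetric vacuum configuration (2.19).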
This is the reason why the large N effective action cannot see the difference between the N = 3 Chern-Simons theory and the N = 2 one with the same matter content (in rewriting the form of the superpotential we used the identity Σ_b (T^b)_m{}^n (T^b)_p{}^q = δ_m{}^q δ_p{}^n).

2.2 Bi-fundamental matter fields

In this section we study U(N)_k × U(N′)_{−k} Chern-Simons theory coupled to M bi-fundamental matter fields, taking N and k to infinity and holding λ = N/k and N′ fixed. We denote the M bi-fundamental scalar fields and fermions by q_A, ψ_A respectively, where A = 1, 2, · · · , M. A generic form of the action of this class of Chern-Simons theories is given by (2.30), where L_m is given by (2.31); the covariant derivative acts on the bi-fundamental fields accordingly, and V_m represents a gauge-invariant potential of the matter fields in this system. Specific examples are the N = 4 and 6 Chern-Simons-matter theories, whose actions in our notation are given in A.4 and A.5. Firstly we separate the U(1) gauge fields as in (2.33), where b_µ, b′_µ are the trace parts and A_µ, A′_µ are the traceless parts. After this replacement A_µ, A′_µ always represent the SU(N), SU(N′) gauge fields. Plugging this into the matter action gives an expression in which L_m on the right-hand side is the same as (2.31) except that A_µ, A′_µ are now SU(N) and SU(N′) gauge fields; D_µ is the covariant derivative of the SU(N) × SU(N′) gauge group. Only the relative combination of the U(1) × U(1) gauge fields, b⁻_µ, couples to the matter fields. Let us turn to the Chern-Simons term and also separate out the U(1) part of the gauge fields. Substituting (2.33) into the Chern-Simons term we obtain (2.35), in which the term ε^{µνρ} b⁻_µ ∂_ν b⁻_ρ cancels. Since this b⁺_µ does not couple to the matter fields, one can integrate it out by solving its equation of motion. Plugging the solution back into (2.35) and collecting all the terms, the whole action (2.30) takes a form in which the first line, containing b⁻_µ, comes from the U(1) × U(1) part and the second from the SU(N) × SU(N′) part. Now let us fix the gauge degrees of freedom by the light-cone gauge for A_µ, A′_µ, b⁻_µ and integrate them out as done in the previous section. While the equation of motion for A_+ has the same form as (2.5), those for the other gauge fields take an analogous form, where T^{a′} is a generator of the SU(N′) gauge group and Tr_{N′} is the trace over N′ × N′ matrices. In Fourier space they become constraints involving the bilinears χ_A{}^B, ξ⁻_A{}^B, defined analogously to (2.8), (2.10), which are now N′ × N′ matrices. Solving these and substituting back into (2.41) we find the analog of (2.7), which now also contains the contribution from the gauge fields b⁻, A′. Then we introduce auxiliary fields to eliminate all the interactions by adding terms which have the same form as (2.15), except that the auxiliary fields are now N′ × N′ matrices with suitable contractions of SU(N′) indices. By this manipulation we essentially trade χ, ξ for α, β; for example, the constraint equations become (2.45). After this treatment we can integrate out the elementary fields q_A, ψ_A and obtain the analog of (2.18), which contains the contribution from b⁻, A′. Then we evaluate the action at saddle points to study the leading expression in the large N limit. We assume the same ansatz for the saddle points, namely translational and rotational invariance and covariance of flavor indices. In addition to these we also naturally expect the saddle points to satisfy covariance of the SU(N′) fundamental indices.
In other words, we treat the SU(N′) gauge symmetry as a flavor symmetry under the 't Hooft vector model limit. We set those for the fermionic fields to zero. Under this ansatz, consistent solutions for b⁻_3, A^{a′}_3 in (2.45) become trivial. This is because under the ansatz (2.46) the right-hand side of (2.45) becomes proportional to δ³(q) while the left-hand side is proportional to q⁻, which requires b⁻_3, A^{a′}_3 to vanish. Note that A^{a′}_3 = 0 is also required by the gauge index contraction of the right-hand side, since Tr_{N′} T^{a′} = 0. Accordingly the right-hand side of (2.45) also has to vanish, so the condition (2.47) is required to be satisfied. This has to be checked after solving the saddle point equations for α, β, but we can check it now because we already know that the solutions for α, β are given by the exact propagators of the scalar and fermion, respectively, of the form (2.48) [8], where c_{B,0}, c_{F,0} are the pole masses of the scalar and fermion respectively. Plugging (2.48) into (2.47) one can see that the left-hand side vanishes upon performing the angular integral. As a result the U(N′) sector does not contribute at all under the limit. In other words, the U(N′) gauge factor is so weakly gauged that it decouples from the leading contribution under the 't Hooft vector model limit. Thus the result for the effective action is essentially the same as that of the case with one gauge group (2.20). Let us apply this result to the N = 4 Chern-Simons-matter theory with minimal matter content.

In this subsection we apply the result (2.49) to the N = 4 U(N)_k × U(N′)_{−k} Chern-Simons theory, whose matter content is two bi-fundamental complex scalar fields q_A and fermionic fields ψ_A, where A = 1, 2. This Chern-Simons-matter Lagrangian is given by (A.17). The potential of the matter fields reduces, under the assumption (2.46), to 2N′ times that of the N = 2 U(N)_k Chern-Simons theory with minimal matter content. By taking (2.49) into account, the total large N effective action of the N = 4 Chern-Simons theory with minimal matter content is exactly 2N′ times that of the minimal N = 2 Chern-Simons theory in the 't Hooft limit.

Mass-deformed N = 4 case

In this subsection we investigate the large N exact effective action of the previous N = 4 Chern-Simons matter theory, deforming the theory by a mass term that keeps N = 4 supersymmetry as well as the SO(4) R-symmetry [23]. The N = 4 mass term is given by (A.24), where µ is a mass parameter. Since the mass term does not break the global symmetry, the global symmetry of the vacuum is unchanged and thus the ansatz (2.46) holds. Under the ansatz this N = 4 mass term becomes completely the same as the term obtained from the N = 2 mass term (A.8) with w = 1, reduced under the assumption (2.46), up to the overall multiplicative factor 2N′. As a result, the relation (2.53) between the exact effective actions for N = 2 and N = 4 is unchanged under the N = 2 and N = 4 mass deformations. Again, the N = 4 effective action reduces to that of the N = 2 theory with an appropriate factor, including the mass terms that keep the same amount of supersymmetry.

In this subsection we apply the result (2.49) to ABJ theory, whose matter content is four bi-fundamental complex scalar fields Y_A and fermionic fields Ψ_A, where A = 1, 2, 3, 4. This theory possesses SU(4) R-symmetry and U(1)_b global symmetry. The Lagrangian of ABJ theory is given by (A.27).
The potential of the matter fields is 4N′ times that of the N = 2 U(N)_k Chern-Simons theory with minimal matter content. By taking (2.49) into account, the total large N effective action of ABJ theory is exactly 4N′ times that of the minimal N = 2 Chern-Simons theory in the 't Hooft limit. The reason why the large N effective action of ABJ theory reduces to that of the N = 2 one will be the same as in the N = 3 case discussed in Section 2.1.1. The ABJ(M) action can be constructed by using the N = 2 superfield formulation [24]. In the notation of [24], the superpotential of the N = 6 theory is of the form W_{N=6} ∼ ε_{AC} ε_{BD} Tr(Z_A W_B Z_C W_D) up to some overall factor, where Z_A, W_B (A, B = 1, 2) are bi-fundamental and anti-bi-fundamental chiral superfields, respectively. Under the 't Hooft vector model limit, the contribution of the superpotential to the effective action is expressed in terms of a bilinear which is an N′ × N′ matrix. However, this contribution vanishes under the SU(4)-symmetric vacuum configuration (2.46), which explains the reduction observed above.

Mass-deformed ABJ case

In this subsection we study the large N exact effective action in the ABJ model deformed by a mass term that keeps N = 6 supersymmetry [25]. The N = 6 mass term is given by (A.35), where µ is a mass parameter and M^B{}_A = diag(1, 1, −1, −1). This mass term breaks the SU(4) × U(1)_b global symmetry to SU(2) × SU(2) × U(1) × Z_2 and thus changes the vacuum structure. A plausible ansatz respecting the global symmetry can then be written down; plugging it into the effective action, which now includes the mass term, and rewriting it in terms of suitable variables, we observe a splitting of the mass term in the effective action, due to the fact that the N = 6 mass term breaks the SU(4) R-symmetry to two SU(2)s. Determining the saddle point equations and solving them is beyond the scope of this paper. As a trivial check we can see that in the case µ = 0 this effective action reduces to that of the massless ABJ theory, because there is a solution with α^{(+)} = α^{(−)}, β^{(+)} = β^{(−)}.

3 Comments on thermal free energy and duality

In the previous section we obtained the large N exact effective actions for the N = 3, 4, 6 Chern-Simons matter theories. Once one obtains large N exact effective actions one can compute exact large N thermal free energies at an arbitrary temperature by performing a Wick rotation of the time direction and compactifying the Euclidean time on a circle whose circumference is the inverse temperature. Due to the appearance of the circle one has to take care of boundary conditions and the holonomy. We set the boundary conditions on this circle such that the scalar fields are periodic and the fermionic fields are anti-periodic, in order to study the thermal canonical ensemble of the system. According to the boundary conditions, we replace the integration over the time component of momentum with a summation over the discrete Fourier modes satisfying the respective boundary conditions. The holonomy is the zero mode of the gauge field on the circle, and it can be taken into account by implementing a constant shift by the holonomy of the thermal-time component of momentum appearing in the propagators [10]. We normalize the thermal free energy in such a way that it vanishes at zero temperature.
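The replacement of the momentum integral by a discrete sum can be written out explicitly. In one common convention (a sketch only; the precise way the holonomy enters depends on the conventions of the paper), with inverse temperature β and holonomy angle u, the thermal-time component p_3 is replaced as

\int \frac{dp_{3}}{2\pi}\, f(p_{3}) \;\longrightarrow\; \frac{1}{\beta} \sum_{n \in \mathbb{Z}} f\!\left(\frac{2\pi n + u}{\beta}\right) \quad \text{(periodic scalars)},
\qquad
\frac{1}{\beta} \sum_{n \in \mathbb{Z}} f\!\left(\frac{2\pi (n + \tfrac{1}{2}) + u}{\beta}\right) \quad \text{(anti-periodic fermions)}.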
For holonomy configuration determined by minimizing the free energy, a crucial argument was made in [15] that each eigenvalue of holonomy matrix obeys the fermionic statistics in the high temperature limit so that the holonomy configuration does not cramp but spread around the origin with the width 2πλ and height 1 2πλ in the 't Hooft large N limit for U(N) level k Chern-Simons theory with one fundamental boson, fermion or both. This can be confirmed not only from the canonical formalism but also from the path integral formalism [17]. Taking account of this holonomy effect one can see three dimensional duality of this class of the theories at a high temperature of order √ N. Let us consider the holonomy distribution in the situations of this paper. First let us consider U(N) level k Chern-Simons theory with any finite number of fundamental fields. Under the 't Hooft limit holding the number of matter fields fixed the holonomy distribution clearly becomes the same as that in one fundamental flavor case. This implies, from the calculation in the previous section, the large N free energy of U(N) k N = 3 Chern-Simons theory with one pair of fundamental and anti-fundamental chiral fields (quark and antiquark) precisely reduces to twice of that of U(N) k N = 2 Chern-Simons theory with one chiral fundamental multiplet. Since this N = 2 Chern-Simons theory is self-dual under the exchange of λ and λ−sgn(λ) [14,15], this result suggests the minimal N = 3 Chern-Simons theory is also self-dual under the same transformation of λ. One may discuss this self-duality of N = 3 in the following way. For this purpose we first consider N = 2 U(N) k Chern-Simons theory with N F quark flavors (Q i , Q j ) with no superpotential. We call this electric theory for convenience. The dual of this theory, which we will call magnetic theory, is known as N = 2 U(N F +|k|−N) k Chern-Simons theory with N F dual quark flavors denoted by (q i , q j ) and gauge-singlet fields M j i with superpotential W 0 = q j M j i q i . These two theories are considered to be equivalent in the infra-red fixed point [13]. Now consider the case with N F = 1. Let us add a (marginally) relevant double trace chiral term in the superpotential ∆W = ( QQ) 2 in the electric theory and flow it to the N = 3 Chern-Simons theory [22] as discussed in Section 2.1.1. What is the corresponding deformation in the magnetic side? The answer is to add the superpotential of the form ∆ W = M 2 , since M corresponds to the mesonic field in the electric side [13]. Clearly this gives the mass term for the field M, which decouples in the IR. Integrating M out gives a double trace chiral term in the superpotential of the magnetic theory. Therefore the resulting IR theory of the magnetic side also achieves N = 3 supersymmetry by using the argument of [13], which will account for the self-duality of the minimal N = 3 theory. Next we consider the holonomy distribution for U(N) k ×U(N ′ ) −k Chern-Simons theory with any finite number of (bi-)fundamental fields. One has to take care of holonomy not only for U(N) but also U(N ′ ) in general N. But under the 't Hooft large N limit keeping N ′ and number of (bi-)fundamental fields fixed the contribution of holonomy for U(N ′ ) −k reduces to trivial one and that for U(N) k becomes the same as that for Chern-Simons theory with one fundamental flavor in the leading of large N limit. 
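For concreteness, the width 2πλ and height 1/(2πλ) quoted above correspond to a top-hat eigenvalue density of the following schematic form (written here with |λ| so that it applies for either sign of the 't Hooft coupling):

```latex
% High-temperature holonomy eigenvalue density in the 't Hooft limit (schematic).
\rho(u) \;=\;
\begin{cases}
\dfrac{1}{2\pi|\lambda|}, & |u| \le \pi|\lambda|,\\[2mm]
0, & \pi|\lambda| < |u| \le \pi,
\end{cases}
\qquad
\int_{-\pi}^{\pi}\rho(u)\,du = 1 .
\]
```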
Therefore, as happened in N = 3 case, the free energy of N = 4 Chern-Simons theory (including the N = 4 mass term) and that of ABJ theory reduce to those of N = 2 Chern-Simons theory with one chiral multiplet (including the N = 2 mass term) up to overall integral factor. This suggests self-duality of N = 4 theory including the N = 4 mass term and ABJ theory. The self-duality of ABJ theory was already discussed in the original paper [26]. Their claim is N = 6 theories with gauge group U(N) k × U(N ′ ) −k and U(N ′ ) k × U(2N ′ + |k| − N) −k are equivalent. Under the 't Hooft large N limit with other parameters fixed this claim tells us that the physical quantities become the same under exchange of λ with λ − sgn(λ), which is the same self-duality transformation as that of N = 2 case. Our result gives strong evidence for this conjecture in a non-supersymmetric situation by confirming match of the large N thermal free energy under the duality transformation. One may presumably perform analogous discussion of the self-duality of N = 4 Chern-Simons theory including the case of finite N, but we leave further discussion in future. The same holonomy distribution is also the case to mass-deformed ABJ theory under the limit. From the calculation in the previous section we observed the large N effective action does not precisely reduce to that of N = 2 with one chiral field, so neither does the large N thermal free energy. Therefore it is not obvious to see self-duality of ABJ model with N = 6 mass term from our calculation. It is intriguing to explore this more by using not only the large N thermal free energy but also other tools such as three-sphere partition function. We leave detailed analysis to future work. Discussion In this paper we have computed the effective actions and thermal free energies for N = 3 U(N) k and N = 4, 6 U(N) k ×U(N ′ ) −k Chern-Simons theories with minimal matter content including the N = 4, 6 mass term exactly in the 't Hooft large N limit with the other parameters fixed. Under this limit all of them have reduced to the effective action or thermal free energy for N = 2 with one chiral multiplet with the overall factor MN ′ , where M is the number of a chiral or anti-chiral field (N ′ = 1 for N = 3 case), except the mass-deformed ABJ case. We have demonstrated that the self-duality of N = 3, 4, 6 Chern-Simons theories (including the N = 4 mass term) reduces to that of N = 2 with one chiral field (including the N = 2 mass term). In Section 2.2 we have shown that there is no leading contribution of the U(N ′ ) gauge fields under the 't Hooft vector model limit. As a result we observed that the resulting thermal free energy showed expected duality in Chern-Simons matter theories in the limit. This result also supports the prescription given in [22] to deal with gauge fields in the large number of flavor limit in study of thermal free energy in Chern-Simons matter theories. However, at a finite Chern-Simons level k there will be non-trivial contribution of U(N ′ ) gauge fields. Especially the contribution of U(1) part of the gauge fields will be important to see the relation between a Chern-Simons matter theory and the dual M-theory because the dual scalar field obtained by dualizing the U(1) gauge field represents M-circle of the dual M-theory with the radius of order 1/k. There is a straight-forward generalization of the results of this paper by including chemical potential as done in [10,19]. 
Under the duality transformation chemical potential for scalar fields exchanges with that for fermionic fields. But physics by including chemical potential is not so simple because it possibly gives rise to condensation of bosonic fields known as Bose-Einstein condensation and Fermi surface of fermionic fields, which is perhaps unstable by something like the Cooper instability [27]. It was observed that the duality works in the region where both bosonic and fermionic theories are in the uncondensed phase but the duality becomes unclear in the condensed phase [19]. It is interesting to explore the duality structure beyond the uncondensed phase. A technical but important issue is to calculate the next sub-leading correction of these theories by concurrently taking large M or large N ′ limit keeping M/N or N ′ /N fixed. (Some perturbative calculation was done in [28].) Especially the large N ′ limit is worthwhile to study properties beyond the vector model limit of this class of Chern-Simons matter theories. Under the large N ′ limit one has to take care of not only non-planar diagrams but also the sub-leading correction of holonomy distribution for U(N ′ ) as well as U(N). It is quite non-trivial to check whether the three dimensional duality holds up to the next leading of large N limit. Since the Chern-Simons system reduces to U(N ′ ) matrix model under the limit, the Vandermonde measure factor will play an important role to determine the correct holonomy distribution. 4 It is interesting to study how the 1/N corrected holonomy distribution and the behavior thereof under the duality transformation is modified from that found in [17]. It is of interest to explore mass-deformed Chern-Simons vector models as in [19,29]. In particular, a mass-deformed theory is free from infra-red divergence so that one can safely consider a scattering matrix. It is interesting to determine S-matrix of elementary particles perturbatively and exactly in the 't Hooft large N limit as done in correlation functions of conserved currents [4,5,3,6,7]. (See [30] for a recent computation of supersymmetric correlation functions.) It is also interesting to study a gravity theory dual to Chern-Simons vector models in the context of AdS 4 /CF T 3 correspondence, which is conjectured as a parity violating Vasiliev theory [31] on AdS 4 background with suitable boundary conditions [1,32]. (The original proposal was done in [33]. Related studies are, for example, [34,35,36,37,38]. See also [39,40,41,42,43] for reviews and recent computations of higher spin theories.) According to [32], a higher spin gravity theory dual to an U(N) k ×U(N ′ ) −k bi-fundamental Chern-Simons theories such as ABJ theory is constructed from higher spin fields with U(N ′ ) gauge indices. Therefore one can expand the bulk theory by a new bulk 't Hooft coupling N ′ /N by taking the large N ′ , N limit with their ratio fixed. This indicates new confinement/deconfinement transition for higher spin fields with respect to the U(N ′ ) gauge interaction. The field theory analysis by using a toy model in [32] suggested that the U(N ′ ) gauge deconfinement happens at temperature of order one while the Hawking-page one occurs at temperature of order N/N ′ . This implies that as N ′ goes to N, the higher spin fields become heavier so that the U(N ′ ) confinement/deconfinement phase transition point and the Hawaking-Page one for higher spin fields coalesce into the Hawking-Page one in non-higher spin gravity theory on a certain AdS 4 background. 
5 To realize the bulk picture proposed in [32], it is important to clarify the confining mechanism of U(N ′ ) gauge symmetry in the bulk, which may be different from that in usual QCD. This is simply because the bulk 't Hooft coupling N ′ /N will not get renormalized. As a result the dimensional transmutation which is often expected to happen in four dimensional Yang-Mills theories may not happen here. It is of interest to study how to compute the dynamical scale in the bulk U(N ′ ) gauge theory and obtain the phase diagram thereof. We hope this note will become useful to address these issues in the future. A Supersymmetric Chern-Simons-matter action In this section we present N = 1, 2, 3, 4, 6 supersymmetric Chern-Simons-matter action of the minimal matter content with the mass term preserving the same amount of supersymmetry in our convention. A.1 N = 1 N = 1 U(N) k Chern-Simons-matter action with one chiral multiplet (q, ψ) in the fundamental representation of the gauge group is given in [8]. The action is given by where D = γ µ D µ , Wq q is a superpotential and W ′ x = dWx dx . κ is related to the Chern-Simons level k by κ = k 4π . The covariant derivative acts as (2.4). The contraction of gauge indices is understood by our bracket notation. For example, (qψ) =q m ψ m , where m is the fundamental gauge index. Supersymmetry transformation rule is Hereafter we shall suppress the spinor indices α, β. Superconformal action can be obtained by restricting the superpotential to be quadratic. where w is a real number. By putting W ′ qq = − w 2κq q, W ′′ qq = − w 2κ above, we obtain the superconformal N = 1 action and supersymmetry transformation. On the other hand, the N = 1 mass term can be obtained by adding a linear term in the superpotential. W mass (qq) = −µqq, which is of the following form in the action: Accordingly one has to add the following term in the fermionic supersymmetry transformation A.2 N = 2 N = 2 superconformal Chern-Simons-matter theory with one chiral multiplet was studied in [22]. The action with the gauge group U(N) turns out to be obtained from N = 1 superconformal action with the superpotential (A.6) by setting w = 1. For convenience we write down the explicit form of the action for U(N) case. When the gauge group is U(N), it is possible to add a mass term keeping N = 2 supersymmetry, which is of the form (A.8) with w = 1. 6 This is because in U(N) case one can turn on an FI D-term, which generates a mass term by integrating out auxiliary adjoint fields. A.3 N = 3 Let us consider N = 3 U(N) k Chern-Simons-matter theory with minimal matter content, which is one fundamental hyper-multiplet. We denote two complex scalar by q A and its super-partners by ψ A in the hyper-multiplet, where A = 1, 2. The action is given by where ε 12 = ε 21 = 1 and the covariant derivative acts as (2.4). Notice that the action has manifestly SU(2) R-symmetry, which accounts for N = 3 supersymmetry. The supersym-metry variation rule is given by where a supersymmetry parameter ω AB is in the symmetric representation in SU(2) Rsymmetry: ε AB ω AB = 0. We also use the following notation. 16) A.4 N = 4 N = 4 Chern-Simons-matter theory with minimal matter content [21] is given by a U(N) k × U(N ′ ) −k Chern-Simons theory with one bi-fundamental hyper-multiplet denoted by q A for two complex scalar and ψ A for their super-partners, where A = 1, 2. The action is given by The covariant derivative acts on the fields by (2.32). 
This action has SU(2) × SU(2) Rsymmetry, which explains N = 4 supersymmetry. The supersymmetric transformation rule is given by where ǫ AB is a supersymmetry parameter with two independent SU(2) indices and ǫ AB := ε AC ǫ CD ε DB = (ǫ AB ) * . (A.23) A mass term preserving not only N = 4 but also SO(4) R symmetry was constructed in [23]. In our notation, it is given by Accordingly we add the following variation in the fermionic supersymmetry variation. 25) A.5 N = 6 We consider N = 6 U(N) k ×U(N ′ ) −k Chern-Simons theory with four complex bi-fundamental scalars denoted by Y A and its super-partner Ψ A , where A = 1, 2, 3, 4 [44,26]. 7 The action is given by Here ε 1234 = ε 1234 = 1 and the covariant derivative acts on the fields by (2.32). Note that SU(4) R-symmetry is explicitly seen and thus N = 6 supersymmetry. The explicit 7 In the terminology of superfield, the matter content of N = 6 theory is one bi-fundamental hypermultiplet (q A , ψȦ) and anti-bi-fundamental (twisted) hyper-multiplet (qȦ, ψ A ). The relation between these fields and (Y A , Ψ A ) is given by Y A =(q A , q †Ȧ ), Y † A = (q † A , qȦ) Ψ †A =(ε AB ψ B , ψ †Ḃ εḂȦ), Ψ A = (ψ †B ε BA , εȦḂψḂ), ξ AB = 0 ǫ AĊ εĊḂ ε BC ǫ CȦ 0 (A. 26) where we use the notation A, B representing SU (4) indices only here. supersymmetry variation rule is where a supersymmetry parameter ξ AB is in the anti-symmetric representation in SU(4) R-symmetry: ξ AB = −ξ BA . We also use the following notation. and where the bracket means the normalized anti-symmetrization: X [AB] = 1 2 (X AB − X BA ). It is known that one can tern on a mass term keeping N = 6 supersymmetry [25]. The mass term is given by where M B A = diag(1, 1, −1, −1). The supersymmetry transformation is corrected so that one has to add the following variation in the fermionic supersymmetry transformation rule.
Biomechanics of a cemented short stem: a comparative in vitro study regarding primary stability and maximum fracture load Purpose In total hip arthroplasty, uncemented short stems have been used more and more frequently in recent years. Especially for short and curved femoral implants, bone-preserving and soft tissue-sparing properties are postulated. However, indication is limited to sufficient bone quality. At present, there are no curved short stems available which are based on cemented fixation. Methods In this in vitro study, primary stability and maximum fracture load of a newly developed cemented short-stem implant was evaluated in comparison to an already well-established cemented conventional straight stem using six pairs of human cadaver femurs with minor bone quality. Primary stability, including reversible micromotion and irreversible migration, was assessed in a dynamic material-testing machine. Furthermore, a subsequent load-to-failure test revealed the periprosthetic fracture characteristics. Results Reversible and irreversible micromotions showed no statistical difference between the two investigated stems. All short stems fractured under maximum load according to Vancouver type B3, whereas 4 out of 6 conventional stems suffered a periprosthetic fracture according to Vancouver type C. Mean fracture load of the short stems was 3062 N versus 3160 N for the conventional stems (p = 0.84). Conclusion Primary stability of the cemented short stem was not negatively influenced compared to the cemented conventional stem and no significant difference in fracture load was observed. However, a clear difference in the fracture pattern has been identified. Introduction Cemented total hip arthroplasty (THA) has a long history of success, being a safe strategy for the treatment especially for elderly patients with potentially reduced bone quality [1]. Registry data from Sweden, Norway and England show a better long-term survivorship of cemented compared to cementless implant fixation [2][3][4]. Data from the national registries in Australia and New Zealand characterize a lower revision rate of cemented compared to cementless stems, especially in female patients over 75 years [5,6]. Femoral periprosthetic fractures following THA remain one of the leading causes of early failure requiring revision surgery [7][8][9]. In this regard, the main risk factors are reduced bone quality, advanced age and female gender [8,10]. Consequently, cemented femoral stem fixation is strongly correlated with a decreased risk of early periprosthetic fractures of the femur, particularly in female and elderly populations [11,12]. In the last decade, there has been a trend towards the development of shorter cementless femoral implants, aiming to enable a more bone-and soft tissue-sparing implantation technique [13][14][15]. Most implants of the latest generation 1 3 provide a curved stem design, which allows the implantation without compromising the trochanteric region and thus the pelvitrochanteric structures. Promising medium-and longterm term data already exist for several shorter cementless implant models [16,17]. Some authors confirm advantageous results regarding perioperative blood loss and a lower intraoperative complication rate compared to standard implants [18][19][20]. On the other hand, some authors propagate a limitation of this implant group, especially in poor bone quality, due to the shorter and ostensibly metaphyseal fixation [21,22]. 
A markedly reduced bone quality was seen to be associated with a dramatically increased risk for postoperative periprosthetic femoral fractures using a cementless calcar-guided short stem [23]. Taking this contraindication into account, implant survival rates up to 100% at 8 years have been reported [24]. Currently, efforts are being made to transfer the postulated potential advantages of uncemented short-stem THA to the concept of a cemented short stem, providing the same philosophy, to extend the range of indications to a patient collective with reduced bone quality [25]. To date, no curved short stem, providing cemented fixation, is officially available on the market. The aim of this in vitro study was to compare primary stability and fracture load of a newly developed, cemented curved short stem with an already well-established cemented conventional stem [26]. Implants The prototype of the cemented optimys short stem ( Fig. 1) is based on the design of the uncemented implant, which is available on the market since 2010 (optimys, Mathys Ltd., Bettlach, Switzerland). According to the concept of many successful cemented conventional stems on the market, the prototype is made of polished wrought high nitrogen stainless steel for implants based on ISO 5832-1. With 13 selectable sizes the stem length is between 80 and 118 mm. As a reference, the well-established cemented conventional straight twinSys stem (ODEP 7A*) was used in this study (twinSys, Mathys Ltd., Bettlach, Switzerland). The stem is available in 8 sizes with lengths between 140 and 170 mm (Fig. 1). Both implants provide a triple taper design, which converts shear forces into compression forces and thus allows the stem to wedge into the cement mantle. Two different offset versions are available for both implants, standard and lateralized, for offering a broad offset range to reconstruct the individual femoral offset. An earlier study of our group showed no inferiority in terms of primary stability and maximum fracture load of the cemented short-stem prototype, using a line-to-line cementation technique, compared to a standard technique using an undersized stem [25]. Thus, a line-to-line cementation technique was used. In contrast, the design of the conventional stem used for this study, is undersized by 1 mm compared to the final rasp and thus, offers a minimal space for a preferably homogeneous cement mantle. Preparation of cadaver femurs After institutional review board approval, six osteoporotic pairs of fresh-frozen human femurs were obtained via Sci-enceCare (Phoenix, AZ, USA). All donors were female, with a mean age of 71 years (range 63-81 years) and a mean Body Mass Index (BMI) of 30.2 kg/m 2 (range 18.9-42.6 kg/m 2 ) ( Table 1). Radiographs in two planes ruled out any malignant neoplasia or fractures. Digital 2D templating, using the original stem templates, estimated the size and positioning of the required femoral implants as well as the height of the neck resection. Minor bone quality was confirmed using dual-energy X-ray absorptiometry (DEXA) measurement [mean T-score: − 1.8 (range: − 3.0 to − 0.7)]. Specimen preparation included soft tissue removal and shortening to an equal length of 37 cm below the tip of the greater trochanter. Before cutting the femoral condyles, neck anteversion was recorded for subsequent orientation. Finally, specimens were fixed in a steel cup using Polymethylmethacrylate (Technovit 3040; Heraeus Kulzer, Wehrheim, Germany). 
The femur was tilted laterally by 8° in the frontal plane and by 6° dorsally in the sagittal plane to simulate single-leg stance and to create bending and torsional moments as previously described [25,27] (Fig. 2). Implantation and cementation technique The implantation of the investigated implants was performed alternating, either in the right or the left of six pairs of femurs, by an experienced orthopedic surgeon (TF) according to the manufacturer's specifications. A third-generation cementing technique was used. A cement restrictor (Bone-Plug PE, Mathys Ltd., Bettlach, Switzerland) was inserted, to occlude the femoral canal, providing 1 cm distance to the tip of the stem. Before implantation, cleaning of the femoral cavity was performed using a Jet Lavage system (InterPulse, Stryker Corp., USA). It was then thoroughly dried. One unit (40 g) of Palacos R + G bone cement (Heraeus Medical, Hanau, Germany) was vacuum mixed and applied in retrograde fashion via cement gun and pressurized using a femoral seal [28]. The implants were inserted manually and pressure was maintained until the cement was set. Measurement of primary stability under dynamic loading For measurement of relative motion between the implant and the cortical bone, two inductive miniature displacement transducers (HBM WI/5 mm-T; HBM, Darmstadt, Germany) with a precision of 1 µm were attached to the cortical bone. Relative axial implant-bone motion was measured at transducer S1 at the shoulder of the prosthesis (Fig. 2). Rotational stem motion was captured at transducer S2, which was attached perpendicular to the neck of the implant (Fig. 2). The measured micromotions were calculated into rotation around the femoral axis by gauging the distance between the tip of the transducer and the longitudinal axis of the femoral diaphysis [29]. The femur was mounted in a servo hydraulic material-testing machine (Instron, Typ 8871, Pfungstadt, Germany), which applied a vertical load. A ball bearing was attached between the device and the load cell to achieve a moment-free introduction of the load (Fig. 2). The material-testing machine applied 100,000 dynamic sinusoidal load cycles at a frequency of 2 Hz between 100 and 1600 N to simulate the load of the first 6 weeks in vivo [30]. Reversible implantbone motion was captured every 500 cycles at the two measurement points for all samples. Furthermore, irreversible implant migration in axial direction (S1) was calculated by the displacement between the initial implant position and the position at the end of 100,000 loading cycles. In the same way, irreversible torsion around the femoral axis was calculated from the displacement assessed at transducer S2. Assessment of fracture load and fracture pattern After dynamic loading, repeated radiographs in two planes were performed to exclude periprosthetic fractures of specimens. Subsequently, specimens were transferred to the testing machine and linearly loaded at a rate of 100 N/s under load control until a fracture occurred. The fracture load (Fmax) was assessed and fracture pattern was analyzed using the Vancouver Classification [32]. Statistical analysis Statistical analysis was performed using SAS 9.4 software (SAS Institute, Cary, NC). Normality testing indicated that the data were non-parametric in nature and so testing was performed using Wilcoxon signed-rank to analyze differences of reversible and irreversible micromotions as well as of fracture loads between the two implants. Significance was assumed for p ≤ 0.05. Fig. 
2 The test setup. S1 and S2 demonstrate the locations of the two miniature displacement transducers Reversible micromotion measurement After 100,000 loading cycles mean micromotion amplitudes at both transducer locations did not display any statistical differences between both femoral implants. Mean axial micromotions were 5.3 µm (SD 3.9 µm) for the short stem in comparison to 9.3 µm (SD 6.6 µm) for the conventional stem (p = 0.23). The calculated rotation around the femoral axis was in direction of retroversion for both stems, with values of 0.03° (SD 0.02°) for the short stem and 0.04° (SD 0.02°) for the conventional stem (p = 0.44; Table 2). Irreversible migration measurement Mean axial migration after 100,000 loading cycles was − 20.4 µm (SD 38.3 µm) for the short stem and − 61.4 µm (SD 92.8 µm) for the conventional stem (p = 0.22). Only minor rotation towards retroversion was measured, with 0.003° (SD 0.04°) for the short stem and 0.09° (SD 0.12°) for the conventional stem (p = 0.09) with a tendency towards less retroversion in the group of the short stems (Table 2). Maximum fracture load and fracture pattern Mean maximum fracture load (Fmax) to induce a periprosthetic fracture was 3062 N (SD 332 N) in the short-stem group, whereas Fmax load of 3160 N (SD 544 N) induced a fracture in the conventional stem group (Table 3). No significant differences in Fmax load between the short and conventional stems were found (p = 0.84). All short stems fractured under maximum load according to Vancouver type B3 (stem loose, poor bone stock), whereas 4 out of 6 conventional stems suffered a periprosthetic fracture according to type C (well below the tip of the stem). Figures 3, 4a, b exemplify the fracture pattern of periprosthetic fractures induced in both stem designs under controlled conditions. Discussion The aim of this biomechanical in vitro study was to compare the primary stability and maximum fracture load of a newly developed cemented short stem with a clinically proven cemented conventional stem (twinSys) [26]. Our results show that the shorter curved implant design does not negatively influence primary stability and maximum fracture load. However, we found a clear difference in fracture pattern. The short stems fractured according to Vancouver type B3 whereas the conventional stems, except for two preparations, fractured according to type C. To date, only few studies regarding cemented shorter femoral stems in THA can be found and none, which correspond to the design concept of the present study. Recently, Santori et al. published their 14-year experience with a cemented short stem [31]. However, given that this implant is a derivative of the Exeter straight stem philosophy and has just been shortened, the comparability to the philosophy of new-generation, calcar-guided short stems is limited. The design of the Friendly short stem (LimaCorporate, San Daniele Friuli, Italy) requires the addition of proximal and distal centralizers in the attempt of attaining a 2-mm cement mantle all around the stem. As stated in our recent investigation regarding cementation techniques in contemporary, calcarguided THA, a line-to-line technique best supports the philosophy of the more individualized implantation technique, compared to most cemented conventional femoral implants [25]. In vitro, it was found to be equivalent to the standard cementation technique using an undersized stem. However, the mid-and long-term results presented by Santori et al. 
suggest a high reliability of a short, polished and tapered cemented stem, without any obvious drawbacks compared to conventionally sized implants [31]. Regarding the existing literature involving shorter cemented stems, a second report can be found. Choy et al. presented their experience from the Australian Orthopaedic Association National Joint Replacement Registry (AOANJRR) regarding a 7-year follow-up of Exeter short stems compared to standard-length Exeter stems [32]. No significant difference was found in the cumulative percent revision rate in the short-stem group compared with the standard-length stems, despite its use in a greater proportion of potentially more difficult hip dysplasia cases. Again, the comparability to the stem design used in the present study is severely restricted. The concept of calcar-guided short-stem THA has the potential to preserve bone and soft tissue by best reconstructing the individual anatomy of the patient [33]. Furthermore, given a facilitated and less traumatic implantation technique, intraoperative blood loss can be reduced [18]. Recent studies provided beneficial mid-term clinical results of uncemented short stems compared to conventional stem designs, along with decreased intraoperative complication rates [19,20,34,35]. Especially neck-sparing short stems seem to have better maintenance of bone mineral density compared to conventional implants [35]. Furthermore, there are promising results regarding in vitro micromotion measurements, as well as clinical mid-term results of stem migration patterns and patient-reported outcome measures [24,36,37]. It remains unknown whether those potential advantages may also be transferred to the concept of cemented short-stem THA. The present biomechanical investigation, however, resulted in equivalent results regarding primary stability and maximum fracture load of the newly developed cemented short stem compared to the well-established cemented conventional stem. This is in line with our prior results analyzing the uncemented version of the short stem (optimys) compared to a well-established uncemented conventional stem using the same study protocol [27].

Fig. 3 a, b Fracture pattern of periprosthetic fractures induced in both groups. All short stems showed proximal fractures according to Vancouver type B3 (a); 4 out of 6 straight stems showed Vancouver type C fractures (b)

A less pronounced axial and rotational irreversible migration was found for the uncemented short stem, confirming that the triple-taper design leads to sufficient stability. In isolated specimens of both stem designs, the measurements showed positive values with regard to migration in the axial direction, which in principle would correspond to an implant migration out of the femur. This phenomenon could be explained by a slight tilting of the implant and a consecutive elevation of the implant shoulder, on which the displacement transducer was positioned. In contrast, biomechanical studies of cementless femoral implants showed lower load at failure of shorter stem designs [38,39]. A cadaver model comparing a double-wedged conventional stem with the Nanos short stem (Smith&Nephew, Marl, Germany) found an increased load at failure of up to 20% for the conventional stem design [38]. All specimens of this study suffered a type B2 fracture compromising the medial wall. Gabarre et al. studied the load transmission of the Minihip stem (Corin, Cirencester, United Kingdom) in a finite element model and found a lever effect with high compressive stresses in areas of the stem in contact with bone [40]. This implant follows a neck-sparing concept similar to the uncemented optimys. Lateral loading is also supported by bone mineral density measurements for both implants [41,42]. This could explain the decreased fracture load and the fact that mainly type B2 fractures were observed. However, even small design differences have a significant influence on load transmission [43]. Furthermore, cemented stem fixation significantly influences load transmission of the implant in the proximal femur [44]. A biomechanical investigation by Thomsen et al. compared maximum fracture loads and fracture patterns of cemented and uncemented conventional stems in non-osteoporotic bone [45]. The maximum fracture load was found to be significantly higher for cemented stems. Fracture patterns corresponded to Vancouver type B fractures in uncemented stems and Vancouver type C fractures in cemented stems. In the present study, the Vancouver type C fracture pattern can be confirmed for cemented conventional stems. For the cemented short stem, however, the Vancouver type B3 pattern has to be acknowledged in all cases.

Fig. 4 a, b Anteroposterior radiograph of the periprosthetic femoral fracture with consecutive stem loosening of a cemented short stem (a). Anteroposterior radiograph of the fracture of the femur with a cemented conventional stem (b)

A different fracture pattern of cemented femoral stems was observed in a biomechanical sawbone model [46]. Measurements showed a significantly lower torque to failure of a shortened Exeter stem compared to the conventional stem length. The authors conclude that both stems are safe to use, as the torque to failure was 7-10 times higher than seen in activities of daily life. Furthermore, the authors observed only Vancouver type B2 fractures in both stem models. However, the test model included a single torsional torque which was applied by a material-testing machine until fracture occurred. A similar test setup with lateral load published by Klasan et al. compared a cemented and a cementless double-tapered stem of conventional length in a biomechanical cadaver model [47]. The authors found an increased load-to-failure force of 25% for the cemented version. Similar to the above-mentioned study, they only observed fractures at the stem level for both implants, with consecutive stem loosening. Our test setup included combined axial load and torsional torque, produced by tilting the preparations in the frontal (8°) and sagittal (6°) planes, which corresponds to the conditions of a single-leg stance [48]. Furthermore, only specimens with reduced bone quality were used. This could explain the differences between those results and our observations. Some limitations have to be acknowledged. The simulation of the first 6 weeks of loading only allows conclusions to be drawn about the early stage following implantation. Mid-term and long-term characteristics of cemented short stems most likely can only be obtained in a clinical setup in vivo. Furthermore, in vitro models always simplify in vivo conditions. Muscle forces on the hip joint could not be taken into consideration, resulting only in a "worst case" scenario for proximal loading, however, featuring the advantage of high reproducibility.
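As an aside, the paired non-parametric comparison described in the statistical analysis section can be reproduced with standard open-source tooling. The sketch below uses SciPy as a stand-in for the SAS 9.4 software actually employed in the study, and the per-specimen fracture loads are placeholder values, not the measured data.

```python
# Minimal sketch of the paired Wilcoxon signed-rank comparison of fracture loads
# (SciPy stand-in for SAS 9.4); the numbers are placeholders, not the study data.
from scipy.stats import wilcoxon

# Hypothetical paired fracture loads [N] for the six cadaver femur pairs
short_stem = [3100, 2900, 3300, 2800, 3400, 2870]
conventional_stem = [3600, 2500, 3200, 3100, 3900, 2660]

stat, p_value = wilcoxon(short_stem, conventional_stem)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.2f}")
if p_value > 0.05:
    print("No significant difference at the 5% level (as reported for Fmax in the study).")
```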
Conclusions The present in vitro study demonstrates that the concept of a cemented calcar-guided short stem can be further pursued. When comparing the cemented short-stem concept to a well-established cemented conventional stem in the present test setting, no significant differences were found regarding primary stability and fracture load. However, a clear difference in the fracture pattern has to be acknowledged. Further investigations should include a clinical observational study, to confirm the present results under clinical conditions in vivo.
Research and Method of Roughness Prediction of a Curvilinear Surface after Titanium Alloy Turning This paper deals with the optimization of process parameters (such as cutting speed and feed rate) to minimize surface roughness in the turning of a titanium alloy (Ti-6Al-4V) workpiece with spherical shape. In the first part of the article, based on the results analysis, a mathematical model is developed. It is shown that cutting speed has little effect on the surface roughness. The second part of the paper presents the application of the developed method to optimize cutting data such as feed rate in order to obtain the surface roughness parameters Ra and Rz of the curvilinear surface of the titanium alloy workpiece at acceptable and aligned, values regardless of the surface shape and its tilted angle. A case study verifies the correctness of the proposed method. The machining time was substantially shortened in comparison to the non-optimized cutting process. Introduction Nowadays, titanium and its alloys are widely used in various areas, such as the aerospace, medical and automotive industries, due to their excellent properties (e.g., high strength-to-weight ratio and good corrosion resistance, relatively low density, high-temperature properties, excellent creep, biocompatibility) [1]. Vanadium, molybdenum, manganese and aluminum are often alloying elements and provide high strength [2]. However, titanium alloys are classified as difficult-to-cut materials. The poor machinability of the materials is caused by their properties [3]. A rapid tool wear rate due to the low thermal conductivity and high chemical reactivity causes high cutting temperature at the cutting zone [4]. Difficult-to-cut materials generally make it challenging to obtain the required surface integrity, high performance and economic of machining [5]. The indicators depend mainly on the kind of machined workpiece and tool life [6]. Various methods and technologies have been developed to improve the quality of machined surface and to increase performance of machining [7]. Additionally, the decrease of manufacturing costs plays an important role. Necessary optimization should simultaneously provide a short machining time and obtain the required quality of surface roughness. To minimize costs and increase performance machining, application of different values of cutting data can be used. Nowadays, to optimize machining process parameters, various evolutionary or meta-heuristic methods can be used, such as GA, PSO, ACO and ABC. The application of the techniques in optimizing machining process parameters has been proven in the literature [8]. Although these methods are applied in many practical cases, they characterize limitations related to their inherent search mechanism. The solutions of the techniques depend generally on the type of objective and constraint functions (linear, non-linear, etc.) and the type of variables used in the problem modeling (integer, binary, continuous, etc.). To predict the performance of machining processes, regression models based on experimental tests have been developed. These regression models can be solved by using traditional optimization methods which are sensitive to the initial assumption. Excellent solutions in the case of several input parameters is difficult [9]. 
In manufacturing technology, surface roughness is an important indicator that can affect the performance of mechanical parts (product wear, fatigue strength, tribological properties, corrosion resistance [10] and manufacturing cost [4]. Therefore, evaluating surface roughness parameters is significant. Hence, optimization is widely used to achieve the required surface quality and process performance. One of the numerous methods for surface roughness prediction and obtaining optimal cutting parameters is the Response Surface Methodology (RSM) [11]. In the turning process (one of the most used in manufacturing technology), the values of machining parameters (feed rate, cutting speed, and cutting depth), nose radius, cutting time, cutting fluid, and cutting forces are subjected to optimization [2,12]. The cutting data directly affects the surface roughness, dimensional accuracy, tool wear rate, machining performance and manufacturing costs. The selection of appropriate parameters in order to improve surface finish is difficult [13][14][15]. In the case of turning of titanium alloy, many researchers analyze the impact of machining parameters on surface roughness and optimize their values to provide the required surface quality. The authors [2] have developed mathematical models to predict the surface roughness after the turning process of titanium alloy. The impact of the cutting parameters and the kind of tool materials on the surface roughness were examined. The optimum conditions obtained for uncoated tools are cutting speed v c = 80 m/min, feed f = 0.05 mm/rev, and depth of cut a p = 0.25 mm. The authors proved that optimization techniques and mathematical models reduced the cost of machining. The proposed optimum process parameters resulted in an increase of surface finish. In [4], cutting parameters such as feed rate, cutting speed, and depth of cut were used to predict surface roughness in turning of aerospace titanium alloy (gr5). The proposed model studied the effect on surface roughness when varying the turning parameters using the surface plots. The analysis of results showed the feed rate to be the most influential parameter on the surface roughness. Yang at al. [11] present the prediction model to predict the surface roughness and cutting parameters in the turning process of TC11 titanium alloy. It also indicates that the feed rate is the most important parameter influencing surface roughness, followed by cutting speed. The cutting depth has minimal effect. In the literature, different approaches of prediction surface roughness in machining have been presented. Asiltürk et al. [16] used the Taguchi experimental test to design optimized turning parameters and to obtain the lowest degree of surface roughness parameters (Ra and Rz). The results of the study showed that the most influential factors included the feed rate and the interaction between feed rate and cutting speed over the surface roughness. Makadia et al. [17] proposed developing the surface roughness prediction model of AISI 410 steel with the aid of a statistical method under various cutting conditions, such as cutting speed, feed rate, depth of cut and tool nose radius. The results analysis showed that the feed rate is the main influencing factor on the roughness, followed by the tool nose radius and the cutting speed. Depth of cut proved to be an insignificant parameter on the surface roughness. Yamane et al. 
[18] presented a method for quantitatively estimating the cutting stability and the machining system stability in the turning process. This method used the machined surface roughness profile to evaluate the position of the cutter in order to obtain the cutting edge transferability and the stability in the feed direction and in the depth-of-cut direction. Three samples with different roughness were estimated using the proposed evaluation method. The results showed that it is possible to quantitatively evaluate cutting instability based on adhesion or built-up edge, as well as the system instability resulting from vibration during machining. In [19], the authors presented a method of predicting roughness profile by further developing a methodology linking the theoretically developed models to the real machining process conditions, which are different for different processing systems (machines). This is significant, because researchers usually focus on finding prediction models for the Ra and Rz (Rt) parameters. In this approach, the statistical equality of sampling lengths in surface roughness measurement proves to be a major parameter that provides information about the condition of the process with respect to surface roughness formation. The authors demonstrated that the indicator of the roughness profile condition can be successfully implemented in roughness profile prediction. Özel and Karpat [20] focused on the development of models based on feedforward neural networks in accurately predicting both surface roughness and tool flank wear in finish dry hard turning. The neural network models were trained based on the experimental data of measured surface roughness and tool flank wear. The authors assumed that the neural network models provided better prediction capabilities because they were able to model more complex nonlinearities and interactions in comparison to linear and exponential regression models. In [21], a model was proposed based on ANN (Artificial Neural Network) to predict surface roughness (Ra) in terms of cutting parameters (such as feed rate, cutting speed, depth of cut) during hard turning of AISI H13 tool steel with minimal cutting fluid application. The authors showed that the ANN model could be applied successfully in fixing the cutting parameters to achieve desired surface finish and to maintain the surface finish within the tolerance limits during automated hard turning of AISI H13 steel with minimal fluid application. Nowadays, machined parts have more and more complicated and nonlinear shapes. However, there is a lack of prediction models of curvilinear surface roughness in the turning process. There is a need to provide aligned values of roughness parameters on the whole surface after the turning process. The task is especially difficult when difficult-to-cut materials are machined. The prediction of surface roughness parameters values is a challenge that can result from the variable impact of cutting forces and change in the surface tilted angle. In this study, a new optimized method is presented that involves the prediction of the curvilinear surface roughness. The proposed method was formulated based on the experimental research results. The created model also results in a short machining time and low manufacturing cost. In the first part of the study, the results of experimental tests of turning six curvilinear surfaces made of Ti6Al4V alloy are presented. 
Selection of the cutting parameters plays an important role in achieving high cutting performance and the required surface roughness. The experimental research was focused on the impact of cutting speed and feed rate on the quality of the spherical surface (values of the roughness parameters Ra and Rz) for various surface tilted angles. The second part of the paper presents the optimization procedure for obtaining aligned surface roughness parameters for the example spherical profile of the part.

Characterization of the Analysed Problem

Machining a curvilinear surface is performed using CNC machines that apply a cutting insert with an R shape and interpolated axes. Cutting speed is maintained at a constant value by modifying the rotational spindle speed, which depends on the diameter of the machined surface fragment. During the tool movement along the curvilinear surface, the angles between the cutting insert flank face and the work surface change. During the turning, the shape of the undeformed chip area At changes (Figure 1). The consequences of these phenomena are: machining with various parts of the cutting edge, change of the chip flow direction, and change of the machined surface roughness. Curvilinear surfaces with a radius of curvature R = 14 mm were subjected to turning with a cutting edge radius rε = 1.588 mm. The selected parts of the undeformed chip areas At for the different tilted angles of surface δ and feed rates f are shown in Figure 2.

In further research, a simplification was applied. This was based on an approximation of a curved contour section by means of elementary straight lines (Figure 3). The simplification enabled the assumption that the machined part was a tilted flat surface instead of a curvilinear surface. This simplification is often used for the analysis of the machining process of curved surfaces [22][23][24][25][26]. The assumption enables the reduction of the number of variables by excluding the impact of the surface curvature radius.

A literature analysis reveals a lack of studies and mathematical models describing this phenomenon. A similar phenomenon is observed during the milling of curvilinear surfaces using spherical cutters, which has been widely analyzed in the currently available scientific literature. The geometrical analysis confirms the insignificant effect of the simplification on the analyzed contour shape (e.g., for an arc with radius R = 14 mm, its bend deflection is 1.3 µm for 0.12 mm length) and on the shape of the undeformed chip area. During the milling process, similar simplifications are applied for the analysis of the curvilinear surface.

Application of the Model for Surface Roughness Prediction

In the experimental study, the titanium alloy Ti6Al4V was used as a workpiece. The material was annealed at 750 °C to achieve an optimum combination of ductility, machinability, dimensional stability and structural stability. The chemical composition and mechanical properties are shown in Tables 1 and 2, respectively. During the experimental test, six spherical surfaces with a radius of curvature R = 14 mm and length L = 10 mm were subjected to turning (Figure 4a) with a cutting edge radius rε = 1.588 mm. Different initial parameters, such as feed rate f and cutting speed vc, were applied (Table 3). The tests were carried out based on a complete plan for 3 levels of feed rate and 9 levels of tilted angle. Additional experiments for 3 levels of cutting speed variation were made to determine its effect on the surface roughness. During the tests, the tool holder RF123F10-2525B and the sintered carbide cutting insert N123F1-0318-R0S05F were used. Turning of the curvilinear surfaces was performed using a CNC machine that applied a cutting insert with an R shape and interpolated axes. Cutting speed was maintained at a constant value by modifying the rotational spindle speed, which depends on the diameter of the machined surface fragment. During the tool movement along the curvilinear surface, the angles between the cutting insert flank face and the work surface change.
During the tool movement along the curvilinear surface, the modifications of the angles between the cutting insert flank face and work surface occur. During the experimental test, six spherical surfaces with radius of curvature R = 14 mm and length L = 10 mm were subjected to turning ( Figure 4a) with a cutting edge radius r = 1.588 mm. Different initial parameters, such as feed rate f and cutting speed vc, were applied ( Table 3). The tests were carried out based on a complete plan for 3 levels of feed rate and 9 levels of tilted angle. Additional experiments for 3 levels of cutting speed variation were made to determine its effect on the surface roughness. During the tests the tool holder with symbol RF123F10-2525B and sintered carbide cutting insert N123F1-0318-R0S05F were used. Turning of a curvilinear surfaces was performed using CNC machines that applied a cutting insert with an R shape and using interpolated axes. Cutting speed was maintained at a constant value by modifying the rotational spindle speed value, which depends on the diameter of the machined surface fragment. During the tool movement along the curvilinear surface, the modifications of the angles between the cutting insert flank face and work surface occur. The surface topography was measured by using the Talysurf Intra 50 profilometer produced by the Taylor Hobson company (Leicester, UK) (Figure 4b). The fragments of the inclined surfaces at angles δ such as ±17°, ±14°, ±8°, ±4°, 0° were examined and used to measure surface roughness parameters such as Ra and Rz. To perform the surface roughness measurements, a measuring tip with a rounding radius of 2 μm was used. The measurements were made in the transverse direction to the machining marks (parallel to the measuring axis of the sample). A measurement speed of 1 mm/s was used. For measurements in the 2D system, the resolution in the X axis was equal to 1 m, The surface topography was measured by using the Talysurf Intra 50 profilometer produced by the Taylor Hobson company (Leicester, UK) (Figure 4b). The fragments of the inclined surfaces at angles δ such as ±17 • , ±14 • , ±8 • , ±4 • , 0 • were examined and used to measure surface roughness parameters such as Ra and Rz. To perform the surface roughness measurements, a measuring tip with a rounding radius of 2 µm was used. The measurements were made in the transverse direction to the machining marks (parallel to the measuring axis of the sample). A measurement speed of 1 mm/s was used. For measurements in the 2D system, the resolution in the X axis was equal to 1 µm, and five elementary sections 0.8 mm in length were applied. For measurements in the 3D system, an area of 0.8 × 0.8 mm was analyzed. The resolution in the X axis was 1 µm and in the Y axis was 10 µm. The research was carried out based on the ISO 4287 (for 2D measurements) and ISO25178 (for 3D measurements) standards with filter values such as λ c = 0.8 mm and λ c = 2.5 µm. The impact of different tilted angles of the surface (turning with different values of the feed rate f and cutting speed v c ) on the surface roughness is presented in Figures 5 and 6, respectively. On the basis of the obtained measurement results, mathematical models with regression equations for the surface roughness parameters Ra and Rz were developed. The ANOVA analysis was used to perform the results analysis. Tables 4 and 5 present the results of ANOVA variance analysis. The impact of the cutting data (f and vc) on the surface roughness parameters is shown in Figures 7 and 8. 
On the basis of the obtained measurement results, mathematical models with regression equations for the surface roughness parameters Ra and Rz were developed. The ANOVA analysis was used to analyze the results, and Tables 4 and 5 present the results of the ANOVA variance analysis. The impact of the cutting data (f and vc) on the surface roughness parameters is shown in Figures 7 and 8. The ANOVA results indicate the lack of effect of the cutting speed on the surface roughness parameters (p = 0.531 for Ra and p = 0.355 for Rz). As a result, the regression equations can be expressed as functions of the feed rate f and the tilted angle δ only. For the above reasons, the mathematical model and prediction method focus on the impact of the feed rate and the tilted angle of the surface on the values of surface roughness (Ra and Rz). In Figure 9, the calculated and measured surface roughness parameters Ra and Rz for different tilted angles of the surface and feed rate values are presented. The contour maps were created based on the measured spherical surface roughness (Ra and Rz) for selected tilted surface angles δ. The area of possible solutions determined by using the mathematical models is depicted in Figure 10, which presents the relation between the tilted angle δ of the machined surface and the feed rate f for the surface roughness parameters Ra and Rz. The zones with red color include the IT7 class of the surface roughness (Ra = 1.25 µm, Rz = 6.3 µm), while the zones with green color include the IT8 class (Ra = 0.63 µm, Rz = 3.2 µm).
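The regression equations themselves (Equations (1)-(4)) are not reproduced in this excerpt. Purely as an illustration of how such a response-surface model for Ra (or Rz) as a function of the feed rate f and tilted angle δ could be fitted by least squares, a minimal sketch is given below; the quadratic model form, the helper names and the numerical data are assumptions, not the coefficients or measurements reported in Tables 4 and 5.

import numpy as np

def fit_quadratic_surface(f, delta, y):
    # Least-squares fit of y ~ b0 + b1*f + b2*d + b3*f*d + b4*f^2 + b5*d^2.
    X = np.column_stack([np.ones_like(f), f, delta, f * delta, f**2, delta**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict(coeffs, f, delta):
    X = np.column_stack([np.ones_like(f), f, delta, f * delta, f**2, delta**2])
    return X @ coeffs

# Illustrative data: feed rate in mm/rev, tilt angle in degrees, "measured" Ra in micrometres.
f     = np.array([0.05, 0.05, 0.085, 0.085, 0.085, 0.14, 0.14, 0.14])
delta = np.array([-17.0, 17.0, -8.0, 0.0, 8.0, -14.0, 0.0, 14.0])
ra    = np.array([0.55, 0.60, 0.48, 0.45, 0.52, 1.20, 1.05, 1.30])   # made-up values

b = fit_quadratic_surface(f, delta, ra)
print("coefficients:", np.round(b, 4))
print("predicted Ra at f = 0.085 mm/rev, delta = 4 deg:",
      predict(b, np.array([0.085]), np.array([4.0]))[0])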
Designed Feed Rate Selection Method This section presents a method of selecting a feed rate that enables the machining of a curved surface characterized by acceptable and aligned values of the surface roughness parameters (Ra, Rz), regardless of the tilted surface angle. In the first step, a mathematical model is constructed describing the influence of the cutting speed and feed rate on the values of the roughness parameters of the surface fragments inclined at different angles δ. Initial parameters, such as Ramax, Rzmax, fmin, and fmax, are also determined and transferred to an optimization module. In the second step, the NC machining code is analyzed. On the basis of the results of this analysis, the machined surface contour and the values of the surface tilted angles δi (i = 1 ... n) are defined. In the optimization module, the values of the feed rates fRa_OPTi and fRz_OPTi are calculated based on the mathematical model equations (Equations (1)-(4)) and the tilted angle values δi. In the next step of the optimization procedure, the feed rate fOPTi is defined as the lower of the feed rates fRa_OPTi and fRz_OPTi. Then, the inequalities fOPTi ≥ fmin and fOPTi ≤ fmax are checked. If both inequalities are true, the value of the feed rate fOPTi is applied to the NC code NCi(fOPTi). If one of them is not true, the value of the feed rate fOPTi is replaced by fmin or fmax (depending on the unfulfilled criterion) and applied to the NC code NCi(fOPTi). Then, the correctness of the NC code is checked by verifying whether all iterations have been performed (i = n). If this equality is not true, the next iteration is performed (i = i + 1). If the criterion is fulfilled, the NC code is generated as the optimized NC code and applied to create the prototype of the part. At the end of the optimization procedure, surface roughness measurements (Ra and Rz) are performed to verify the correctness of the obtained optimization results. The developed method is presented in Figure 11.
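A compact sketch of the selection loop described above is given below. The functions ra_model and rz_model stand in for Equations (1)-(4), which are not reproduced here, and inverting them by bisection is an implementation choice of this sketch rather than necessarily the authors' procedure; the toy roughness model at the end is illustrative only.

def optimize_feed_rates(tilt_angles, ra_model, rz_model,
                        ra_max, rz_max, f_min, f_max, tol=1e-4):
    # For each contour fragment (tilt angle), find the largest feed rate whose
    # predicted Ra and Rz stay below their limits, clamped to [f_min, f_max].

    def largest_feasible(limit, model, delta):
        # Bisection on f: model(f, delta) is assumed monotonically increasing in f.
        lo, hi = f_min, f_max
        if model(hi, delta) <= limit:
            return hi
        if model(lo, delta) > limit:
            return lo  # even the minimum feed violates the limit; clamp to f_min
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if model(mid, delta) <= limit:
                lo = mid
            else:
                hi = mid
        return lo

    feeds = []
    for delta in tilt_angles:                      # one fragment per NC code line
        f_ra = largest_feasible(ra_max, ra_model, delta)
        f_rz = largest_feasible(rz_max, rz_model, delta)
        f_opt = min(f_ra, f_rz)                    # the lower of the two candidates
        f_opt = min(max(f_opt, f_min), f_max)      # enforce f_min <= f_opt <= f_max
        feeds.append(round(f_opt, 4))
    return feeds

# Example with a toy roughness model (placeholder for Equations (1)-(4)), in micrometres:
ra_demo = lambda f, d: (6.0 + 0.2 * abs(d)) * f
rz_demo = lambda f, d: (25.0 + 0.5 * abs(d)) * f
print(optimize_feed_rates([-17, -8, 0, 8, 17], ra_demo, rz_demo,
                          ra_max=0.63, rz_max=3.2, f_min=0.03, f_max=0.14))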
Case Study-Method Verification The verification of the proposed method was performed based on an experimental test. During the experiment, the surface of a Ti-6Al-4V alloy part, marked by the red line in Figure 12, was machined. The selected surface was machined using the cutting insert with symbol N123F1-0318-R0S05F. In the first part of the research, the following cutting data with constant values was applied: depth of cut ap = 0.7 mm, feed rate f = 0.085 mm/rev and cutting speed vc = 80 m/min. The surface was characterized by the required surface roughness class IT8 (Ra = 0.63 µm, Rz = 3.2 µm). On the machined surface, changes in the surface roughness values were observed. The changes were analogous to those described in Sections 2.1 and 2.2. The second part of the research consisted of generating and optimizing the NC code according to the algorithm presented in Figure 11. Because only one feed rate can be stored in one code line (one tool movement), the analyzed contour of the part was divided into smaller fragments with a length of 0.5 mm. Next, the tilted angle δi of each contour fragment and the initial parameters (fmin = 0.03 mm/rev, fmax = 0.14 mm/rev, Ramax = 0.63 µm and Rzmax = 3.2 µm) were determined. In the next step, feed rate optimization was performed on the basis of the mathematical model presented in Section 2.2 (Equations (3) and (4)).
The calculated feed rate values (fRa_OPTi, fRz_OPTi and fOPTi) were compared with the limit values fmin and fmax. In Figure 13, the optimized feed rate values fOPT and the marked contour of the machined surface are presented. As a result of the feed rate optimization, the desired goal was achieved, meaning that the values of the surface roughness parameters were below their upper limit values (Ra ≤ 0.63 µm and Rz ≤ 3.2 µm) regardless of the surface shape and its tilted angle. The average values with standard deviations of the surface roughness parameters Ra and Rz, measured at points A-I (according to Figure 13), are shown in Figure 14. During the experimental tests, measurements of 3D surface roughness were carried out. Examples of the topographies and isometric views at points B, D, E, and G (according to Figure 13) are shown in Figure 15. Discussion The results analysis of the experiment showed the significant effect of the feed rate f and the tilted angle of the surface δ on the surface roughness. The results analysis indicated the insignificant impact of cutting speed on the surface roughness parameters (Ra and Rz). Similar results have been presented in the scientific literature. The optimization procedure focuses on optimizing values of the feed rate in order to obtain sufficient surface roughness, regardless of the tilted angle of the surface. The proposed method was examined during the case study verification. As a result of the feed rate optimization, the desired aim was achieved. Mean values of the surface roughness parameters were below their upper limit values (Ra ≤ 0.63 µm and Rz ≤ 3.2 µm) and the values of the feed rate were set in a range of 0.03-0.115 mm/rev. The optimization method proposed in this paper relies on feed rate value modification, which affects the processing time tm. The cutting time with the optimized feed rate fOPT (generated NC code) is 160 s. The machining time when applying a constant feed rate f = 0.085 mm/rev is 120 s.
However, this value of the feed rate results in acceptable surface roughness being obtained for the curvilinear surface with the tilted angle only in the range −1° < δ < 12° and for a flat surface with the tilted angle δ = 0°. The applied constant value of the feed rate f = 0.05 mm/rev results in a machining time of tm = 204 s and an acceptable surface roughness for the tilted angle in the ranges −17° < δ < −12° and −4° < δ < 17°. The acceptable surface roughness obtained for the full range of the tilted angle δ is available only for the constant feed rate value fmin = 0.03 mm/rev, but in that case, the machining time is longer, at tm = 304 s. Thus, it causes a more than twofold increase in the cutting time in relation to the application of the optimized feed rate. Conclusions The research analysis presented in this paper concerns the significant problem of the locally variable roughness of curvilinear surfaces occurring after turning. This problem appears to be important from the point of view of the quality of the surface of manufactured machine parts. The authors of the paper have developed a mathematical model for predicting the values of the curvilinear surface roughness parameters Ra and Rz. The method for optimization of the cutting data (cutting speed and feed rate) for machining curvilinear surfaces was proposed by taking into account the alignment of the surface roughness parameters. The research indicated the insignificant effect of the cutting speed on the surface roughness parameters. This means that the cutting speed value has been correctly selected in the investigations, meaning that in this case, the regression equation can be simplified to optimize only the feed rate and tilted angle. The case study presented in the second part of the paper verified the correctness of the developed machining strategy. According to the proposed optimization method, the machining time of the example curvilinear surface of the titanium alloy workpiece was shortened by almost half in comparison to the non-optimized cutting process for the full range of the tilted angle δ. Surface roughness parameters were below their upper limit values (Ra ≤ 0.63 µm and Rz ≤ 3.2 µm) regardless of the surface shape and its tilted angle. In the literature, curvilinear surfaces are more often machined using a milling process, so the proposed optimization method can increase the applicability of the turning process to create curvilinear surfaces with acceptable and aligned surface roughness parameters. In the future, the authors plan a modification of the proposed method to apply a new calculation method based on a neural network. This approach should make it possible to provide a more accurate prediction of surface roughness parameters and the impact of additional cutting data such as cutting depth, radius of surface curvature, tool wear and total cutting force components.
8,701
2019-02-01T00:00:00.000
[ "Materials Science" ]
Entrepreneurial Risks in the Realities of the Digital Economy The topic of the development of the digital economy has become one of the priorities at the international level and was on the agenda of the G20 Summit held on July 7-8, 2017 in Hamburg. In the communiqué of the leaders on the results of the summit, within the framework of the "digital block", the heads of state of the G20 stressed the importance of developing digital literacy. Russia has initiated a discussion on consumer protection in the G20 format. The particular relevance of these issues for the global community is noted in the article by the President of the Russian Federation V.V. Putin dedicated to cooperation in the framework of the "twenty". The advantages and opportunities of the digital economy are undoubted. However, the risks and challenges that consumers of the digital economy face daily threaten the harmonious development of new models of this sector of the economy. In this article, the authors consider business risks in the realities of the digital economy. People and their level of confidence in new technologies and market models are not only key elements, but also the most important indicators of the successful development of the digital economy. Introduction In this context, digital consumer literacy is of particular importance. The formation of "confident users" of the digital economy is the basis for increasing the potential of consumers themselves to protect their rights in the context of e-commerce. The commitment of the G20 leaders to the development of this topic reflects global trends. Rospotrebnadzor conducts systematic work aimed at improving consumer protection in the new realities of the digital world, both at the national level and at international venues, including the G20. This includes the development of a draft law regarding the regulation of platforms that aggregate information about goods or services, as well as the formation of common approaches in the field of consumer protection in the context of electronic commerce in the Eurasian space and on the CIS market. [1] At present, with the direct participation of Rospotrebnadzor at various international platforms, including the UN Conference on Trade and Development and the OECD (Organization for Economic Cooperation and Development), the issues of forming common approaches in the field of digital economy regulation are being discussed. Russia has considerable experience and practice in improving consumer protection systems, both at the national and at the regional level. To meet the challenges outlined in the statement of the G20 leaders, Rospotrebnadzor intends to progressively develop cooperation with the G20 countries and relevant international organizations to increase consumer protection in the digital economy era. Today, the agency has developed specific initiatives of international cooperation on the development of digital literacy and consumer protection in the context of e-commerce, which will be presented to G20 partners for consideration and further implementation as pilot G20 projects. Methodology The growth rate of the digitalization of society and the introduction of progressive IT technologies have left very few people indifferent to this process. And like any element of the system, this process is accompanied by certain risks. [2][3][4] If we analyze this phenomenon using a somewhat different typology of risks than is usually put forward in discussions, i.e.
setting aside the threat of the country turning into a digital colony of the leading IT countries (in particular, the dependence of software platforms and interfaces on Windows, MS Office, Oracle, SAP, Facebook, Google, etc., on which there are already enough professional points of view) and focusing instead on the realities of entrepreneurial business activity, then in the foreseeable future we risk finding ourselves in the realities of practically non-overlapping types of business. [5][6][7] Less than a year ago, when conducting a regular analytical study on the current understanding of the use of analytics in business, Dun & Bradstreet, jointly with Forbes Insights, obtained the following data: about a third of top managers from leading companies in North America, the United Kingdom and Ireland, working in various sectors of the economy, hold that in their companies there is a so-called digital divide: a gap between real data use skills and the demands that the market puts forward. [8] The results of the study of the World Economic Forum, presented in the Global Information Technologies report on assessing the readiness of countries for the digital economy, confirm this picture. According to the study, the Russian Federation ranks 41st in readiness for the digital economy, with a significant gap behind the ten leading countries, such as Singapore, Finland, Sweden, Norway, the United States of America, the Netherlands, Switzerland, the United Kingdom, Luxembourg and Japan. From the point of view of the economic and innovative results of the use of digital technologies, the Russian Federation is in 38th place, with a large lag behind leading countries such as Finland, Switzerland, Sweden, Israel, Singapore, the Netherlands, the United States of America, Norway, Luxembourg and Germany. [9, 10] The government-approved Digital Economy Program of the Russian Federation is represented by the following three levels, which in their close interaction affect the lives of citizens and society as a whole:
• markets and sectors of the economy (areas of activity) where specific subjects interact (suppliers and consumers of goods, works and services);
• platforms and technologies where competencies are formed for the development of markets and sectors of the economy (fields of activity);
• an environment that creates the conditions for the development of platforms and technologies and for the effective interaction of market entities and sectors of the economy (spheres of activity), and which covers regulations, information infrastructure, personnel and information security. [11]
Results and Discussion In modern realities, the levels above can be segmented almost indefinitely, because as soon as we take the degree of digitization of the company's business processes as a point of reference, even regardless of the specific platforms and markets, it automatically determines the vector of business principles. The digital economy is created by business models, and technology plays the role of a tool. What is important is not the digital technologies themselves, but the business effect they give. In today's realities, a common trend for all businesses is the formation of complex digital platforms and business chains that unite a multitude of participants, allowing them to access a huge pool of resources, customers, or opportunities. In practice, this means that an organization must build its business processes and IT systems in such a way as to gradually integrate its customers, intermediaries, suppliers, and so on into the processes.
And in fact, the types of business that fall within the vector of the digital economy can be divided into the following areas: [12]
• traditional enterprises that have a business and assets in the "offline" world, but actively use modern technologies as their infrastructure, in particular equipment, communication systems, and software products of a wide range: from user software to ERP and CRM systems;
• enterprises selling products exclusively through virtual channels;
• enterprises that can be considered virtual: they are not tied to any physical asset; the number of business models of such companies is very large and is constantly supplemented by innovative start-ups.
At the same time, it would be wrong to assume that only companies belonging to the typology of small businesses are gathered in this last block. All these types of companies, in turn, are further segmented in the framework of B2B and B2C. And if with B2B the technology of interaction, in principle, does not fluctuate much, the B2C segment becomes extremely interesting from the point of view of reach: according to official data from Rosstat, a year ago the share of residents of the Russian Federation using broadband Internet access was only 18.77%. [13] That is, taking into account regional realities, the level of income of the population, migration policy, and the budgets of the subjects of the Russian Federation, more than three quarters of the population are, in principle, directly outside of any "digital economy". In other words, only those 19% who can physically be reached in the B2C segment will be fully involved in the process of monetization of the business, and the remaining target group will be the subject of close analysis by companies operating offline. And the task of this type of companies will be to optimize their business processes, not within the framework of attracting online customers or testing the target audience with modern and fashionable online resources, but as part of reducing the cost component of the business mechanism itself; here it is not about the digital economy, but about an elementary IT tool that most closely meets the needs of a particular business. And the corresponding competitive segment here will also not be focused on the methods of the digital economy. It is also necessary to take into account and apply complex models of active control systems in modern developing enterprises. [14] As for companies that sell products exclusively through virtual channels, the consultants of The Boston Consulting Group, based on their experience, believe that digital provides an opportunity to increase profitability by 20% and reduce costs by 30%, while reducing CIR (the ratio of operating costs to operating income) by 12%. At the same time, the "transition to digital" can be long and difficult, since it includes many entry points in the value chain, and here we should expect an increase in process efficiency and a reduction in the risks themselves. And it is here that "technologies" come into play: close cooperation with solution providers to support regulatory requirements (RegTech). Often, large companies have difficulty with the rapid change of strategy or the introduction of innovative solutions. The main reason for this is the bureaucratic component within large organizations. To solve this problem, they establish partnerships with fintech companies and start-ups, or simply buy them. Virtual companies range from providers of virtual goods and services to almost completely remotely controlled businesses.
Currently, there are many digital platforms that provide markets for goods, services and information, delivered in both physical and digital form [15] (The development of the digital economy in Russia: Program until 2035). With this approach, we get almost polar business technologies and mechanisms for attracting customers, and what is of value in a market economy, namely the formation of a competitive field and the degree of difficulty of entering the market from the position of flexible risk management [16], is being transformed into monopolistic competition: new business models, new large companies, new mass services and information services, and the risk of absorption of new markets by transnational companies. This is accompanied by labor productivity growth, efficiency growth, and the introduction of artificial intelligence (AI), automation and robotization. That is, a monopoly on the use of the digital economy is obvious, no matter how the system resists it. And it would be appropriate here to recall the study by Swiss scientists from the University of Zurich showing that fewer than 1% of companies control about 40% of world capital, while their share of control over global operating profits is about 60%. [17] Analysts are also considering the digital transformation of companies in the context of reassessing their business processes. The study showed that the real advantages of digital transformation are felt only in those companies where management was able to grasp the relationship between people, processes and technologies. In other words, significant changes in business require synchronization and mutual penetration of these three components. Transformation is much more concerned with changes in the company's culture and competent management of enterprise resources than with investment in new technologies. [18] And the relevance of the application of Big Data here becomes indisputable. According to IDC, revenues generated by working with big data will increase from $130 billion (worldwide, recorded in 2016) to $204 billion by 2020. Only those companies that have the appropriate IT infrastructure will be able to gain commercial benefits from this. [19] By introducing systems for working with Big Data, companies gain competitive advantages, chief among them:
• operational search for solutions to problem situations: the system processes data files and establishes patterns and cause-and-effect relationships, which makes it possible to identify problems at an early stage (sometimes even at the threat stage) and eliminate them;
• more competent management decisions;
• optimization, meaning the rational use of resources and their centralization to achieve a specific goal;
• forecasting, including macroeconomic forecasting: the analysis of current data is necessary for building models and predicting future scenarios, rather than forecasting purely through extrapolation of data.
An important advantage of such systems is the handling of unstructured data, which is almost impossible to group, combine according to a single attribute, or present in the form of tables and interrelated patterns. By implementing SAP HANA systems to work with Big Data, companies begin to work effectively with complex analytical queries; the system provides storage of large amounts of information and high transaction processing speed. According to a study by the IBM Institute, customer-oriented companies most often work with such systems (53% of cases).
By analyzing the data, one can create a portrait of the "ideal buyer", build a model of behavior and determine the best distribution channels. This is how a personal offer is formed at the right time. Big Data is used to assess the effectiveness of the company in 43% of cases, and for risk analysis in 7% of cases. [19] The value of customer orientation lies in the capabilities of the system:
• a single view of the client (the ability to get an overview of all the data about the client in one place);
• targeted marketing with micro-segmentation (using analytics to generate unique marketing offers for a specific client);
• ensuring multi-channel communication with the client, and its acceleration, consolidation and automation.
Thus, the risks penetrate ever more deeply into the company's operations, being connected not only with the format of interaction with contractors, but also with the cost of attracting and retaining the client.
3,231.2
2018-01-01T00:00:00.000
[ "Economics" ]
Synthesis and Characterization of Gallium Oxide/Tin Oxide Nanostructures via Horizontal Vapor Phase Growth Technique for Potential Power Electronics Application The monoclinic β-gallium oxide (Ga2O3) was viewed as a potential candidate for power electronics due to its excellent material properties. However, its undoped form makes it highly resistive. The Ga2O3/SnO2 nanostructures were synthesized effectively via the horizontal vapor phase growth (HVPG) technique without the use of a magnetic field. Different concentrations of Ga2O3 and SnO2 were varied to analyze and describe the surface morphology and elemental composition of the samples using scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) spectroscopy, respectively. Meanwhile, the polytype of the Ga2O3 was confirmed through Fourier transform infrared (FTIR) spectroscopy. The current-voltage (I-V) characteristics were established using a Keithley 2450 source meter. The resistivity was determined using the van der Pauw technique. The mobility and carrier concentration were determined through Hall effect measurements at room temperature using a 0.30-Tesla magnet. It was observed that there was an increase in the size of the nanostructures, and more globules appeared after the concentration of SnO2 was increased. It was proven that the drop in the resistivity of Ga2O3 was due to the presence of SnO2. The data gathered were supported by the Raman peak located at 662 cm⁻¹, attributed to the high conductivity of β-Ga2O3. However, the ε-polytype was verified to appear as a result of adding SnO2. All the samples were considered as n-type semiconductors. High mobility, low power loss, and low specific on-resistance were attained by the highest concentration of SnO2. Hence, it was clinched as the optimal n-type Ga2O3/SnO2 concentration and recommended to be a potential substrate for power electronics application. Introduction Silicon-based technology has been the mainstream in power electronics [1]. However, its power devices are approaching their physical limitation [2] when operating at extreme voltage, current, power, and temperature environments, allowing other semiconductor materials to dominate large market sectors untouched by Si-based devices [3]. Consequently, research and development on wide bandgap materials [4] have been carried out in the past years so that the volume and weight of power electronic devices can be improved for more extensive applications [5]. However, the undoped β-Ga2O3 is highly resistive because of its wide bandgap. Electrical property measurements of Ga2O3 nanowires and nanoribbons have revealed n-type semiconductor behavior [13], which has been attributed to oxygen vacancies (Vo) or Ga interstitials [3,14]. As reported by Varley et al. [15], Vo acts as a deep donor and does not contribute to its conductivity. Hence, doping with elements acting as shallow donors is necessary to enhance its electrical conductivity [16]. The mixing of Ga with Sn inevitably causes the replacement of Ga3+ ions with Sn4+ ions [20]. The tetravalent Sn ion is most often chosen as donor dopant [18] since it is also an n-type dopant that enhances the natural conductivity of β-Ga2O3 [21,22] and their ionic radii are close to each other [18,21]. In the past years, a large amount of work was devoted to the growth of undoped semiconductor nanowires by several approaches. Up to this date, a cost-efficient synthesis technique for manufacturing nanostructures is still a grand challenge [1].
The fabrication of one-dimensional structures gained interest due to their importance in understanding the dependence of properties on the size and dimensionality of materials, and their potential applications as functional building blocks for electrical, optical, and magnetic devices [14]. Although there are several synthesis methodologies, thermal evaporation using a metal catalyst is a successful route to fabricate semiconducting oxide nanostructures, from single nanowires or nanorods to hierarchical nanostructures [23]. Nevertheless, few papers have been reported on the conductivity control of β-Ga2O3 by doping [24]. Experimentally determined results for free charge carrier concentrations and mobility parameters are currently scarce for β-Ga2O3 [25]. The horizontal vapor phase growth (HVPG) is a home-developed [26] and low-cost synthesis technique which has proved to produce various one-dimensional nanostructures using different starting materials such as SnO2 [27,28], Fe2O3 [26], and In2O3 [29]. Recently, undoped nanowires were successfully fabricated in the presence of a magnetic field via HVPG for high-concentration ethanol vapor detection [30]. In this regard, SnO2 was chosen as dopant since there were studies already conducted using HVPG, and a small amount of it was known to increase the electrical conductivity of Ga2O3 [21]. This pursuit was an initial investigation of the synthesis of Ga2O3/SnO2 via the horizontal vapor phase growth (HVPG) technique without the application of a magnetic field. The concentration of SnO2 was varied to determine its effect on the surface morphology and electrical attributes of Ga2O3 for potential power electronic applications. The characterizations were performed using SEM and EDX, while the polytypes were affirmed through the known Raman peaks. The I-V curves were uncovered using a two-point probe test, and the van der Pauw technique was used for the electrical resistivity. The Hall effect measurements revealed the carrier concentration and mobility. Additionally, the specific on-resistance and power loss were studied for potential power electronics applications. Synthesis of Ga2O3/SnO2 Nanostructures. This study employed the horizontal vapor phase growth (HVPG) technique patented by Santos et al. [28] for the synthesis of Ga2O3/SnO2 nanostructures. The HVPG is a deposition method that follows a spontaneous growth or vapor-solid (VS) process, which employs the evaporation-condensation process at a very low pressure of 10⁻⁶ Torr. The annealing process requires the metal oxide material to evaporate at a very high temperature. Subsequently, the vapor nucleates into particles and is transported to the substrate. The source material then condenses and deposits on the substrate's surface because of the temperature difference along the silica quartz tube, resulting in the formation of distinct nanomaterials. Fifty milligrams of Ga2O3 powder was mixed with or without SnO2 powder purchased from Sigma-Aldrich. The variation of Ga2O3 and SnO2 was based on the weight percent (wt.%) ratio of 98 : 2 used in relation to [31]. Consequently, the mass loadings were named as sample A (100 : 0 wt.%) or the as-grown sample, sample B (99 : 2 wt.%), sample C (98 : 2 wt.%), and sample D (90 : 10 wt.%). The samples were poured into fused-silica quartz tubes and sealed under a high-vacuum system, while the pressure was maintained at 10⁻⁶ Torr.
Afterward, the sealed tubes were inserted midway into a Thermolyne horizontal tube furnace and then annealed at 1,200°C with a ramp time of 40 minutes for 8 hours, as shown in Figure 1(a). Regions of interest were assigned according to their position in the furnace during the annealing process. Figure 1(b) shows that zone 1 contained the powder, which was positioned inside the furnace. Meanwhile, the middle portion of the tube was designated as zone 2. The last part of the tube, which was completely outside the furnace, was designated as zone 3 [27]. As affirmed by [27], the temperature in zone 1 was 1200°C, 353°C to 800°C in zone 2, and 63°C to 352°C in zone 3. After the tubes cooled down, they were removed from the furnace and then ruptured to collect the nanomaterials for characterization. The Ga2O3/SnO2 samples were compared to the initially prepared as-grown specimen. Characterization of Ga2O3/SnO2 Nanostructures. A JEOL JFC-1200 fine coater was used to gold (Au) sputter every sample at 50 mA for 90 seconds to make it conductive. Then, the Au-coated samples were subjected to a Phenom XL Scanning Electron Microscope equipped with energy-dispersive X-ray spectroscopy for the surface morphology and elemental composition of the nanostructures. Point analyses were performed to identify and to verify the elemental contents of the nanostructures and globules. Traditional manual image analysis using ImageJ was utilized to determine the size of the nanostructures. The zone where the nanostructures were found was subjected to polytype analysis. Fourier transform infrared (FTIR) spectroscopy confirmed the occurrence of β-Ga2O3, which was reported by Higashiwaki et al. [1] to be suitable for power electronics applications. Furthermore, FTIR was utilized to identify the other Ga2O3 phases present in the specimens based on their known peaks [32]. A two-point probe using a Keithley 2450 source meter was operated to establish the current-voltage (I-V) characteristics of the samples. The voltage sweep was set to 200 points, while the voltage was varied from −5.0 V to +5.0 V. The resulting current was recorded automatically and plotted by the source meter. On the other hand, Hall effect and electrical resistivity measurements were carried out at room temperature (RT) using the van der Pauw method. The sense terminals of the Keithley 2450 Source Meter were used to measure the voltage of the sample under test (SUT), while the force terminals sourced current to the SUT. For the Hall effect measurement, the method used by Matsumura and Sato [33] was followed, where a magnetic field of 0.30 T was applied in the direction of the sample's thickness, and the change in voltage between the point contacts placed at diagonally opposite corners was measured. Consequently, mobility and carrier concentrations were calculated based on the resistivity and Hall coefficients of each sample. Both power loss and specific on-resistance were likewise computed based on the applied current and the established value of the electrical resistivity for potential power electronics applications. Surface Morphology and Elemental Composition Analyses. The nanostructures presented in Figure 2(a) were excellently viewed at 6,500x magnification, showing a pile of bulk and rigid nanostructures. Their size ranged from 4.46 µm to 11.357 µm with an average of 3.177 µm. Figure 2(b) shows that Ga and O had 32.30% and 31.20% compositions, respectively, leading to a ratio of 1 : 1.
It can be seen that Au appeared in the EDX since all the zones were sputtered to make them conductive for surface morphology and elemental analysis. It was noticed that the samples charged up when sputtering was done for less than 90 seconds. However, the concentration of Au became more apparent than those of Ga and O. Similar observations were noticed across all the zones and prepared specimens. No other impurities appeared besides Au. It was perceived in Figure 3(a) that combinations of straight, crossing, and twisted nanowires were in good agreement with the literature [34]. A manual assessment revealed that their diameter ranged from 52.287 nm to 167.401 nm with a mean diameter of 90.976 nm. Neither nanobelts nor nanorods were observed in this zone. Thicker yields were noticed compared to [34][35][36]. On the other hand, thinner nanowires were seen compared to [30], which used the same deposition technique with an applied magnetic field. Figure 3(b) shows a Ga to O ratio of approximately 2 : 3. The deposits found in Figure 2(a) were very similar to those in Figure 4(a) but much smaller, considering the fact that they were viewed at the same magnification. Their size ranged from 0.156 µm to 2.971 µm with a mean size of 0.837 µm. The ratio of Ga to O was not proportionate with Ga2O3 due to the few Ga atoms. No other impurities appeared in the analysis, as seen in Figure 4(b), besides Au. Figure 5(a) shows the same SEM image taken in relation to [30]. The assessed size of the nanostructures ranged from 233.836 nm to 998.948 nm with a mean size of 575.231 nm. A similar ratio was assessed with zone 3 of sample A based on Figure 5(b). The nanostructures had a smoother and straighter morphology but were much thicker compared to the as-grown sample. The appearance of globules was evident in all SEM images in Figure 6, which were not seen in the as-grown sample. They were not only attached to the tips of the nanostructures but likewise to their lateral surfaces [32]. It was likewise examined that the globules fit the nanostructures and none of them fell off onto the substrate due to their sufficient amount. Few globules were seen in the image due to the very small concentration of SnO2. More globules attached to the nanostructures were considered as salient and novel information of this study. The size of the nanostructures ranged from 30.864 nm to 277.055 nm with an average of 98.781 nm, and they were considered as 1D nanomaterials, specifically nanowires. On the other hand, the size of the globules ranged from 0.575 µm to 6.086 µm with an average of 2.336 µm. Based on Figure 7(a), the ratio of Ga to O of the nanostructure was approximately 2 : 7, which was due to the presence of the dopant, while that of the globule was 1 : 6. According to Jessen et al. [37], the chemical composition of the droplet was purely Sn through the EDS and EDX analyses. A similar image is noticed in Figure 8(a) and the as-grown sample, which matches the bulk Ga2O3 powder, with size ranging from 190 nm to 1,937 nm and a mean size of 662.265 nm. Zone 3 was found to be Ga deficient due to a very small concentration, as seen in Figure 8(b). Small and large pieces of crystalline nanoblocks are spotted in Figure 9(a), signifying that the Ga2O3 powder was not melted totally during the deposition. The size of the nanostructures ranged from 248 nm to 2,364 nm with an average size of 804.615 nm. As assessed in Figure 9(b), the Ga to O ratio was verified to be approximately 2 : 3.
It was observed that more globules of different sizes and twisted nanostructures were present in this concentration compared to sample B. Furthermore, smaller globules were also present; the assessed diameter of the globules ranged from 0.169 µm to 9.782 µm with a mean diameter of 2.042 µm. The quantity of the globules depends on the increase in the concentration of SnO2 relative to Ga2O3. Figure 11(a) reveals that the nanostructures were composed of combined O, Sn, and Ga atoms with 34.40%, 6.73%, and 6.34%, respectively. The Ga to O ratio was found to be 2 : 11, as contributed by SnO2. Compared with sample B, the nanostructures' main content was Ga; however, Sn was also seen in this concentration, as shown in Figure 11(b). Consistently, the main component of the globule was Sn with 14.44 atm%, as shown in Figure 11(c). Since few nanoparticles were found, Figures 12(a) and 12(b) prove that Ga has the least share of 1.54% compared with Au, O, and Si with 62.31%, 24.13%, and 12.02%, respectively. Consequently, the Ga to O ratio was found to be 2 : 12. Figure 13(a) shows bulk and small nanomaterials with a rigid structure. However, Figure 13(b) proves that this zone was made up of 7.50% Ga and 37.98% O, leading to a Ga to O ratio of 2 : 8. No other impurities were found in this zone besides 47.18% Au and 7.34% Si. A similar image is observed in Figure 14(a) and supported by a Ga to O ratio of 5 : 2 based on the EDX analysis in Figure 14(b). More globules, but larger in diameter, were produced in this concentration compared with samples B and C. Figure 15(a) shows an excellent view of a melting nanostructure with a globule attached on its end, while a globule was attached permanently to two nanostructures in Figure 15(b). Meanwhile, it was observed that the nanostructures were not sufficient to hold all the globules permanently in Figure 15(c). It was further noticed that there were many globules compared to nanostructures, causing them to fall onto the substrate. Only a small amount of Sn has been shown to incorporate successfully without segregation [22]. The size of the nanostructures ranged from 65.788 nm to 526.316 nm with an average diameter of 247.341 nm. On the contrary, the globules' size ranged from 0.521 µm to 18.872 µm with a mean diameter of 2.725 µm. The presence of nanorods was significant since the globules were attached to their surface. A huge globule cannot be supported by a single nanostructure. Thus, if the nanostructures were neither aggregated nor thicker, the huge globules just fell off onto the substrate. For this reason, the concentration was not further increased. The SEM image shown in Figure 16(a), supported by Figure 16(b), proved that the nanostructures' main composition was Ga with 13.06 atm% and a lesser concentration of Sn of 1.74%. O had a 21.10% share, which led to a Ga to O ratio of 2 : 3. Conversely, the major component of the globules was Sn with 24.89%, as supported by its higher EDX peak in Figure 16(c) compared with Ga and O. The Ga and O compositions had 4.79% and 22.11% shares, respectively, leading to a Ga to O ratio of 2 : 9. Polytype Analysis. According to [17], the 142 cm⁻¹ and 179.69 cm⁻¹ low modes were attributed to the vibration and translation of doubly connected straight chains of GaO6 octahedra. The Raman modes at 449.87 cm⁻¹ and 477.68 cm⁻¹, according to [22,38], respectively, were related to the deformation of GaO6 octahedra. The group of Raman modes located at 632.31 cm⁻¹ and 650.67 cm⁻¹ is shown in Figure 17.
As noticed in Figure 18, only 1 low peak of the β phase was observed, particularly at 160.73 cm⁻¹, in agreement with [39]. However, the peaks of the deformation of GaO6 octahedra dominated this sample, located at 349.02 cm⁻¹, 419.52 cm⁻¹, and 470.89 cm⁻¹. On the other hand, 2 high peaks were located at 627.86 cm⁻¹ and 661.81 cm⁻¹, resulting from the distortion of Ga2O3 caused by the presence of Sn. Nonetheless, 1 peak of ε-Ga2O3 was observed at 714.93 cm⁻¹ as a result of adding SnO2 to Ga2O3. The peak of SnO2 was not observed in this sample due to its small concentration [22]. Figure 19 reveals 3 mid peaks at 302.04 cm⁻¹, 317.60 cm⁻¹, and 348.62 cm⁻¹ and 2 high peaks at 662.61 cm⁻¹ and 759.87 cm⁻¹. According to [22], the Raman peak located at 662.61 cm⁻¹ indicates that the majority of the nanostructures grew in the [010] direction, which exhibits the highest electrical conductivity for β-Ga2O3 nanowires. Similar to sample B, 1 peak of ε-Ga2O3 was noticed at 211.25 cm⁻¹, as reported in the previous study [40]. Conversely, 2 peaks of rutile SnO2 were analyzed, including one at 473.25 cm⁻¹. Sample D showed 1 peak of β-Ga2O3 for each of the low and mid modes, particularly at 159.60 cm⁻¹ and 415.88 cm⁻¹, respectively, as spotted in Figure 20. The Raman peak located at 662.62 cm⁻¹ was likewise observed. No other peaks of the Ga2O3 phases were detected in this concentration. On the other hand, 2 peaks of SnO2 were seen, specifically at 493.84 cm⁻¹, which appeared as the consequence of disorder activation in its rutile structure, and at 638.73 cm⁻¹, which corresponds to the classical vibration modes, as shown in [41]. Electrical Properties for Potential Power Electronics Application. The I-V curves shown in Figures 21(a) and 21(b) were almost straight and similar to the graph of a resistor and the as-grown Ga2O3, which is in good agreement with the previous research [42]. On the other hand, Figure 21(c) shows a resemblance to the I-V curve of a diode and of a single Sn-doped Ga2O3 in relation to [42]. The shifting of the I-V graph of specimen C implied that there was an enormous increase in its conductivity. This was consistent with other works on Ga2O3 nanowires in which a strong enhancement of the electrical conductivity was observed due to the incorporation of Sn [42]. Nevertheless, the measured currents for sample D, shown in Figure 21, did not show the greatest correlation among the other specimens. This sample was expected to have the greatest increase in electrical conductivity since it has the greatest concentration of Sn. However, the length of the contacts attached to it during the 2-point probe test might affect the result. As can be gleaned from Table 1, the resistivity of sample A was higher compared to the reported value of 1.43 × 10⁻¹ Ω·cm in [24]. Conversely, the resistivity of sample B was found to be 2.01757 × 10⁻¹ Ω·cm. There was a slight decrease in the resistivity since a very small amount of SnO2 was added to Ga2O3. An increase of 1.15328 S/cm was still observed, which proved that the incorporation of SnO2 increases the conductivity of the resistive Ga2O3. A smaller resistivity of 1.67619 Ω·cm was observed in sample C compared to that of A and B. The conductivity increased by approximately 2.162714 S/cm and 1.0094 S/cm relative to samples A and B, respectively.
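For context, the quantities discussed in this section are linked by standard relations: the van der Pauw equation yields the sheet resistance and hence the resistivity, the Hall voltage yields the carrier concentration, and the mobility follows from both. The sketch below applies these textbook formulas; all numerical inputs are placeholders, not the measured values of samples A-D.

import math

Q_E = 1.602176634e-19  # elementary charge, C

def van_der_pauw_resistivity(r_a_ohm, r_b_ohm, thickness_cm):
    # Solve exp(-pi*Ra/Rs) + exp(-pi*Rb/Rs) = 1 for the sheet resistance Rs by bisection,
    # then return the resistivity rho = Rs * t (ohm*cm).
    def f(rs):
        return math.exp(-math.pi * r_a_ohm / rs) + math.exp(-math.pi * r_b_ohm / rs) - 1.0
    lo, hi = 1e-6, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi) * thickness_cm

def hall_carrier_concentration(v_hall_V, current_A, b_tesla, thickness_cm):
    # Sheet carrier density n_s = I*B/(q*|V_H|); bulk concentration n = n_s / t (cm^-3).
    n_sheet = current_A * b_tesla / (Q_E * abs(v_hall_V)) * 1e-4  # per m^2 -> per cm^2
    return n_sheet / thickness_cm

def hall_mobility(resistivity_ohm_cm, n_cm3):
    # mu = 1 / (q * n * rho), in cm^2/(V*s).
    return 1.0 / (Q_E * n_cm3 * resistivity_ohm_cm)

# Placeholder inputs (illustrative only): two van der Pauw resistances and the 0.30 T field.
rho = van_der_pauw_resistivity(r_a_ohm=120.0, r_b_ohm=95.0, thickness_cm=5e-5)
n   = hall_carrier_concentration(v_hall_V=2.5e-4, current_A=1e-3, b_tesla=0.30, thickness_cm=5e-5)
mu  = hall_mobility(rho, n)
print(f"rho = {rho:.3e} ohm*cm, n = {n:.3e} cm^-3, mu = {mu:.1f} cm^2/Vs")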
As discussed in the previous research, when Sn4+ substitutes Ga3+ on the octahedral site, it donates an electron to the Ga2O3 lattice, which increases the carrier concentration and thus the conductivity [43]. The data obtained for sample C were clear evidence that the chosen dopant increased the conductivity of Ga2O3. Another sample proved this information, since there was a drastic increase in conductivity in sample D. It was analyzed that the increase amounted to 10.596741 S/cm relative to sample A, 9.443455 S/cm relative to sample B, and 8.434027 S/cm relative to sample C. The mentioned resistivities of the Ga2O3/SnO2 specimens fell within the range of 10⁻³ to 10¹² Ω·cm with changing doping concentration, which was in good agreement with [11]. The conductivity of the specimens might be affected by the Au sputtering of the synthesized Ga2O3/SnO2 specimens. The carrier density increases by up to two orders of magnitude as the Sn concentration increases, implying that the electrical resistivity and the carrier concentration of β-Ga2O3 can be controlled by Sn doping in the range of 10¹⁶ to 10¹⁸ cm⁻³ [24]. As per Table 2, sample D showed the smallest power loss with 4.3403 × 10⁻⁶ W compared with samples B and C with 1.2610 × 10⁻⁵ W and 1.0476 × 10⁻⁵ W, respectively. Sample A had the greatest power loss compared to the three samples with SnO2. Sample D had the least conduction loss due to its low specific on-resistance among the Ga2O3/SnO2 specimens. Conclusions The result of this study highlighted that the HVPG technique was effective in the production of different nanostructures. The as-grown Ga2O3 nanowires were produced with an average diameter of 90.976 nm. When SnO2 was mixed in, the appearance of globules was apparent, indicating the presence of Sn in the samples. There was a direct correspondence of the size of the nanostructures and globules to the concentration of Ga2O3 and SnO2, as observed in the SEM images. When the concentration of SnO2 was increased, the size of the nanostructures likewise increased and the appearance of the globules became more apparent. A large amount of SnO2 produced more globules, which led to an increase in the size of the nanostructures but a decrease in their quantity. Consequently, the globules fell off onto the substrate since the number of nanostructures was not enough to hold them permanently. EDX findings revealed that both globules and nanostructures were composed of mixed Ga, Sn, and O atoms. The major composition of the nanostructures was Ga, while that of the globules was Sn. Nevertheless, the long exposure of the samples to Au sputtering affected the EDX results. All Ga2O3/SnO2 samples were dominated by the Raman peak located at 662 cm⁻¹ and grew in the [010] direction, which exhibits the highest electrical conductivity for β-Ga2O3. Doping with SnO2 was proven to unfold the ε polytype of Ga2O3. The rutile peaks of SnO2 and peaks due to its disorder activation were identified in the Ga2O3/SnO2 samples. The I-V curves of the samples showed similarity to the literature. Furthermore, the incorporation of Sn was proven to lower the resistivity of the undoped sample. The decrease in resistivity was attributed to the presence of Sn4+. When some of the Ga3+ ions in the lattice were replaced by Sn4+, or some of the Sn ions were located in the lattice as interstitial atoms, conduction electrons were produced, as supported by the presented Raman data and previous research works.
It was likewise clinched that Ga2O3/SnO2 has the potential to meet the criteria for the selection of a semiconductor substrate suitable for the fabrication of power electronics devices. On the other hand, there was an inverse correspondence between mobility and carrier concentration, since the former increased and the latter decreased when the concentration of SnO2 was augmented. For the application, it was concluded that the highest concentration of SnO2 exhibited the lowest power loss and specific on-resistance. It was ascertained that it has the potential to be used for power electronics applications. Future research shall be geared towards the analysis of the mechanical behavior of the Ga2O3/SnO2 nanostructures. Also, the p-type and ohmic contacts suitable for Ga2O3/SnO2 shall be investigated to realize its full potential for the fabrication of power electronic devices such as the Schottky barrier diode and the metal oxide semiconductor field-effect transistor. Data Availability All data used in this manuscript are cited within the article. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
6,091.4
2020-11-21T00:00:00.000
[ "Materials Science" ]
Near-Field Scanning Millimeter-Wave Microscope Operating Inside a Scanning Electron Microscope: Towards Quantitative Electrical Nanocharacterization : The main objectives of this work are the development of fundamental extensions to existing scanning microwave microscopy (SMM) technology to achieve quantitative complex impedance measurements at the nanoscale. We developed a SMM operating up to 67 GHz inside a scanning electron microscope, providing unique advantages to tackle issues commonly found in open-air SMMs. Operating in the millimeter-wave frequency range induces high collimation of the evanescent electrical fields in the vicinity of the probe apex, resulting in high spatial resolution and enhanced sensitivity. Operating in a vacuum allows for eliminating the water meniscus on the tip apex, which remains a critical issue to address modeling and quantitative analysis at the nanoscale. In addition, a microstrip probing structure was developed to ensure a transverse electromagnetic mode as close as possible to the tip apex, drastically reducing radiation effects and parasitic apex-to-ground capacitances with available SMM probes. As a demonstration, we describe a standard operating procedure for instrumentation configuration, measurements and data analysis. Measurement performance is exemplarily shown on a staircase microcapacitor sample at 30 GHz. Introduction Microwave characterization methods and related instrumentations have been widely described in the literature. In its essence, a vector network analyzer (VNA) is connected to a microwave sensor to measure the electrical and electromagnetic properties of the device or material under investigation. Microwave characterization is commonly classified into two categories. On the one hand, we find broadband techniques, including free-space [1][2][3], guided (including on-wafer) [4] and open-ended coaxial probing [5][6][7] methods, which have the ability to characterize materials with medium to high loss on a broad frequency range. On the other hand, we find narrowband techniques mostly based on resonant structures to achieve accurate measurements of the dielectric properties of low-loss materials [8]. All of these techniques require a sample volume at least in the order of the fraction of the free-space wavelength of excitation. To address the issue of the microwave characterization of nanomaterials and nanodevices, near-field scanning microwave microscopy (NSMM) tools have been introduced [9]. SMM is a measurement technique that interfaces an atomic force microscope (AFM) with a VNA to simultaneously measure surface topography and microwave impedance with a submicrometer resolution [10][11][12][13]. To that end, a subwavelength probe interacts closely or in contact mode with the sample under test. The spatial resolution is therefore mainly governed by geometry. SMM has received a growing interest from the research community to address a wide range of applications, including semiconductor materials such as 1D and 2D materials [14][15][16][17], biology [18][19][20][21][22][23][24][25], quantum physics [26][27][28][29][30] or energy materials [31][32][33]. There is an urgent need to develop SMM traceability to yield quantitative and calibrated data. In this effort, we developed a SMM operating inside a scanning electron microscope (SEM) using a microstrip probe structure operating up to 67 GHz [34][35][36][37][38]. 
Our previous works were completed by first presenting quantitative data performed at 30 GHz on microsized metal oxide semiconductor (MOS) capacitors. The modeling, measurement configuration, experimental part, data analysis and discussion proposed in this manuscript demonstrate the ability of the new instrument to simultaneously provide electronic, topography and calibrated complex impedance images.

Description of the Scanning Microwave Microscope Built Inside a Scanning Electron Microscope
The instrumentation developed incorporates 3 imaging modes (topography, radiofrequency and electronic) that can be run individually or simultaneously. Preliminary developments have been presented in [34][35][36][37][38]. Consequently, this section provides an overview of the modes of implementation to help the reader. The atomic force microscope (AFM) works in contact mode using optical beam detection for monitoring the probe deflection. To that end, a fiber-coupled Fabry-Perot 635 nm laser source from Thorlabs® delivering up to 2.5 mW is used to generate the optical signal. A fixed focus collimation package (FC/PC) F280FC-B connector from Thorlabs® (max beam diameter = 3.4 mm, focal length = 18.24 mm) is used to collimate the signal from the output of the single-mode fiber (Figure 1). A quadrant photodiode referenced QD50-0-SD from OSI Optoelectronics® with associated circuitry is used to provide two different signals and a sum signal.
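As an aside on how a quadrant photodiode's two deflection signals and sum signal are typically combined in optical-beam-deflection AFM, a generic sketch is given below; it is not the instrument's actual signal-conditioning chain, and the quadrant values are arbitrary.

```python
def obd_signals(a: float, b: float, c: float, d: float):
    """Combine quadrant photodiode readings (a, b: top quadrants; c, d: bottom
    quadrants) into normalized deflection signals, as commonly done in
    optical-beam-deflection AFM. Generic formulation, for illustration only."""
    total = a + b + c + d                       # sum signal
    vertical = ((a + b) - (c + d)) / total      # cantilever bending (normal force)
    lateral = ((a + c) - (b + d)) / total       # cantilever torsion (lateral force)
    return vertical, lateral, total

# Arbitrary photocurrents (arbitrary units) purely for demonstration
v, l, s = obd_signals(1.02, 0.98, 0.95, 1.05)
print(f"vertical = {v:+.3f}, lateral = {l:+.3f}, sum = {s:.2f}")
```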
The radiofrequency scanning microscopy, augmented up to 67 GHz, uses a SMM cantilever consisting of a modified 25Pt300B microwave probe from Rocky Mountain Nanotechnology® (RMN) with a sub-100 nm apex radius (Figure 1). The probe has been redesigned to support a transverse electromagnetic mode (TEM) through a propagating microstrip structure. This new NSMM cantilever is fed by a coaxial cable connected to the microwave measurement system, i.e., the VNA. Consequently, a dedicated coaxial-to-microstrip transition built up of two parts was developed. In particular, the cantilever is embedded into a PCB waveguide structure that can be exchanged in the case of destroyed tips by using a solder-less PCB-mount 1.85 mm connector from Rosenberger Corp. with a clamping and screwing mechanism. Building hybrid scanning probe tools from scratch requires design considerations different from those conventionally found in a single AFM. In contrast with conventional SEM used to image the sample surface, the objective here is to visualize the apex tip in contact with the sample. Consequently, the sample scanner is mounted vertically and parallel with respect to the electron beam of the SEM. The electron column occupies most of the space in the chamber and drastically limits the height of the AFM system. The system was designed to be as compact as possible to allow SEM operation during AFM/SMM measurements in the best conditions possible, although observation at the highest resolution is not possible. It has to be noticed that the AFM/SMM stage is mounted on the chamber of the SEM, in contrast with conventional stages fixed on the SEM door.

Traceability in the SMM Mode
In contrast to microwave-guided measurements, including metallic waveguide, coaxial and on-wafer propagating structures for which traceability has been established for decades, the normalization of SMM technology, including the experimental set-up, the measurement configuration and calibration standards, is still an issue. Whereas SMM technology has been identified as a unique solution to provide microwave and millimeter-wave characterization at the submicron scale, there is an urgent need to harmonize best practices at the international level. In particular, we identified the main bottlenecks to be tackled for offering quantitative and traceable SMM measurements. Firstly, whereas AFM can operate in air to provide a topography image of the sample under test, the water meniscus in the vicinity of the apex tip of the probe contributes to the overall complex impedance at the apex tip, especially because water has a high dielectric constant and loss tangent in the microwave range. Moreover, the shape of the water meniscus is usually unknown; therefore, only approximations of the water meniscus can usually be derived by 3D electromagnetic modeling, and these are not easily discriminated from other parasitic capacitances involved in the measurement. Operating in a vacuum presents the advantage of allowing the elimination of the water meniscus by heating the sample, simplifying the electrical modeling.
Secondly, another advantage of operating in an SEM is the possibility to directly image the probe in contact with the sample, even during the scanning operation. Indeed, probe microscopy tools, especially in the contact mode, are methods that may damage the sample or the tip apex, with impacts on the electrical measurement, especially in the case of RF electrical measurements using a sub-100 nm platinum/iridium wire as the sensing element. Finally, it is well accepted in the SMM community that spatial resolution is mainly governed by the apex tip geometry [36]. In particular, to surpass the diffraction limit imposed by the half-wavelength of radiation, waveguide structures with dimensions far below the wavelength of excitation exhibit evanescent electrical fields in the vicinity of the apex tip. Nevertheless, the collimation of the electrical fields is frequency dependent. Therefore, operating at a higher frequency improves the distribution of the electrical fields and the lateral resolution and, incidentally, the signal-to-noise ratio (SNR). To verify this assumption, electromagnetic simulations using a high-frequency structure simulator (HFSS) were performed at three test frequencies (1, 10 and 30 GHz) by modeling an RMN probe and plotting the distribution of the electrical fields (the magnetic field here being negligible), as shown in Figure 2. As expected, the lateral resolution, i.e., the footprint of the electrical field distribution at the probe apex, decreases for higher frequencies. It has to be mentioned that the depth resolution is of course lower. These results are in favor of operating in the millimeter-wave regime. Nevertheless, as the transmission losses increase with frequency, especially in the RF cables and transitions from the input of the VNA to the AFM/SMM tip, there is a compromise between the lateral resolution and SNR. As an illustration, we present in Figure 3 the return loss of the probe measured up to 50 GHz. For frequencies greater than 35 GHz, the standing wave ratio is more pronounced, leading to a mixing of the amplitude and phase-shift of the complex reflection coefficient at the probe apex. It has to be mentioned that the measured response can be enhanced by optimizing the mounting and soldering of the AFM/SMM probe on the PCB substrate, both done manually. In the following, we consider measurements performed at the test frequency of 30 GHz.
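To relate the return loss discussed above to the standing wave ratio, the standard conversion is |Γ| = 10^(−RL/20) and VSWR = (1 + |Γ|)/(1 − |Γ|); the sketch below applies it to a few illustrative values, which are not the measured probe data of Figure 3.

```python
def vswr_from_return_loss(rl_db: float):
    """Convert a return loss magnitude (dB, positive) into the reflection
    coefficient magnitude |Gamma| and the voltage standing wave ratio."""
    gamma = 10 ** (-rl_db / 20.0)
    vswr = (1 + gamma) / (1 - gamma)
    return gamma, vswr

# Illustrative return loss values (dB), not the measured probe response
for rl in (3.0, 6.0, 10.0, 20.0):
    g, s = vswr_from_return_loss(rl)
    print(f"RL = {rl:4.1f} dB -> |Gamma| = {g:.3f}, VSWR = {s:.2f}")
```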
Reference Staircase Microcapacitor Sample
The electrical devices considered in this work are microsized metal oxide semiconductor (MOS) capacitors that have been widely studied by the metrology and research communities [38][39][40][41]. The MOS capacitors are composed of circular gold electrodes evaporated on silicon dioxide (SiO2) deposited on a highly doped P-type silicon substrate of resistivity 0.01 Ω·cm. The SEM image of the reference kit is depicted in Figure 4. In order to vary the capacitance values, the diameter of the upper gold pad varies from 1 to 4 µm, and the SiO2 thickness ranges from 50 to 300 nm, in steps of about 80 nm.
The reference kit is used for both calibration and verification. Prior to the measurements, the analytical derivation of the theoretical capacitances is considered. The impedance of the MOS structures measured at the tip apex of the probe is modeled by a series model consisting of an oxide capacitance C_ox and a depletion capacitance C_depl. Both capacitances can be described by the parallel plate capacitor formalism. The resulting capacitance C_TOT is given by

C_TOT = C_ox · C_depl / (C_ox + C_depl)    (1)

The capacitance C_ox is calculated from the areas A of the gold pads and the SiO2 thicknesses d_ox,

C_ox = ε_0 ε_rSiO2 A / d_ox    (2)

The silicon dioxide is assumed to have a relative dielectric constant of ε_rSiO2 = 3.9. The charge stored on the capacitor is distributed across a certain depth, which adds the depletion capacitance C_depl in series to C_ox. This capacitance is proportional to the area A of the metallic electrode and inversely proportional to the depleted zone depth d_depl according to

C_depl = ε_0 ε_rSi A / d_depl, with d_depl = √(2 ε_0 ε_rSi Ψ / (q N_A))    (3)

where ε_rSi = 12 is the relative permittivity of the silicon bulk substrate, Ψ represents the interface band bending at the Si/SiO2 interface and is set to 300 mV, q is the charge of the electron (1.6 × 10⁻¹⁹ C) and N_A is the doping level of the silicon bulk, around 5 × 10¹⁸ cm⁻³. Although the interface band bending is relatively unknown, due to the high doping the depletion capacitance is much higher than the oxide capacitance. Therefore, since the two capacitances are in series, an uncertainty on Ψ has a negligible effect on the total capacitance. The calibration procedure consists of determining a two-port error box to convert the complex reflection coefficient Γ_M measured by the VNA into the complex reflection coefficient Γ at the apex tip. Then, the calibration model established can be used to determine the other capacitance values. The one-port vector calibration model used to make the link between the reflection coefficient Γ_M measured by the VNA and the reflection coefficient Γ is given by

Γ_M = e_00 + e_01e_10 Γ / (1 − e_11 Γ)    (4)

The complex terms e_00, e_01e_10 and e_11 correspond, respectively, to the directivity, reflection tracking and source match errors. These calibration parameters depend on the microwave path between the apex tip of the probe and the VNA receivers. System (4) is resolved by a derived SOL calibration method that makes use of the measurements of the reflection coefficients Γ_M1, Γ_M2 and Γ_M3 of three assumed reference loads, called Z_REF1, Z_REF2 and Z_REF3, with theoretical reflection coefficients Γ_1, Γ_2 and Γ_3. Capacitors with well-spaced capacitance values are ideally chosen across the desired range of capacitances to be measured.
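A minimal numerical sketch of the series model of Equations (1)-(3), using the parameter values quoted above; the pad diameter and oxide thickness are one example combination from the reference kit, and no fringing-field correction is included.

```python
import math

eps0 = 8.854e-12            # vacuum permittivity (F/m)
eps_ox, eps_si = 3.9, 12.0  # relative permittivities of SiO2 and Si (from the text)
psi = 0.300                 # interface band bending (V)
q = 1.602e-19               # elementary charge (C)
N_A = 5e18 * 1e6            # doping level, converted from cm^-3 to m^-3

def mos_capacitance(diameter_m: float, d_ox_m: float) -> float:
    area = math.pi * (diameter_m / 2) ** 2
    c_ox = eps0 * eps_ox * area / d_ox_m                      # Equation (2)
    d_depl = math.sqrt(2 * eps0 * eps_si * psi / (q * N_A))   # depletion depth
    c_depl = eps0 * eps_si * area / d_depl                    # Equation (3)
    return c_ox * c_depl / (c_ox + c_depl)                    # Equation (1)

# Example: 4 um pad on the thinnest (87.5 nm) oxide step
print(f"C_TOT = {mos_capacitance(4e-6, 87.5e-9) * 1e15:.2f} fF")
```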
Measurement Configuration and Verification
The standard operating procedure (SOP) described here follows the material preparation presented in the previous section. First, we ensure a stable lab climate in a controlled environment (temperature, humidity) to enable the stable operation of the SMM with minimum mechanical and electrical drift. The measurements are performed at 30 GHz using a modified 25PT300A AFM tip from Rocky Mountain Nanotechnology®. A PNA Keysight® E8364B VNA with the RF power source set to 2 dBm and the intermediate frequency bandwidth (IFBW) set to 50 Hz is used. Highly stable coaxial cables and feedthrough coaxial transitions are used to connect the VNA to the probe. Nanonis® Signal Conditioning (SC5) and Real-Time Controller (RC5) modules are used to drive the AFM measurements. The cantilever deflection voltage is set to 90 mV, the approach-retract factor is about 6 nm/mV and the resulting force is estimated to be 9.7 µN. The images were scanned over 40 × 40 µm² with 256 pixels, with a scanning time (forward and backward) of 5119 s. Prior to the measurement, SEM imaging of the apex tip is performed to check the tip shape (Figure 5a). Topography, together with both real and imaginary parts of the measured complex reflection coefficient Γ_M, is acquired simultaneously (Figure 5b,c). We keep raw data (Nanonis® *.sxm format) and postprocessed data separate and make sure not to overwrite raw data during the analysis treatment presented in the following subsection.

Data Analysis
The raw data are transferred to Gwyddion® software for analysis. We cross-check the images from Figure 5 together. Then, each capacitor on the atomic force microscopy image in Figure 5b is referenced according to Figure 6. From the topography image given in Figure 6, we can calculate the theoretical capacitances according to Relations (2) and (3). The four circular areas of the metallic electrodes have targeted diameters of 1, 2, 3 and 4 µm, respectively. The three oxide layers determined from the topography image by considering the 1D profile (indicated as Line 1 in Figure 5b) are, respectively, 87.5, 137.1 and 198.3 nm. The corresponding oxide capacitances have values in the range of 0.14-5.09 fF. It has to be mentioned that MC2 Technologies® has developed two reference calibration kits considering doped P-type silicon with substrate resistivities of 1 and 0.01 Ω·cm, respectively. In contrast with our previous studies based on the first type of reference capacitance kit, due to the high doping of the silicon substrate, the contribution of the depletion capacitance is negligible (fifth row of Table 1). Consequently, it is highly recommended to consider highly doped materials for the fabrication of the MOS capacitance kit. Another possibility, investigated in [21], is to consider indium tin oxide (ITO) as the metal substrate. From the theoretical capacitance values, the theoretical reflection coefficient Γ of the capacitors is calculated. The capacitors, considered lossless (as demonstrated in Section 2), have phase-shifts of Γ in the range of −0.15 to −5.31 degrees.
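The conversion from a theoretical capacitance to the reflection coefficient at the tip reference plane can be sketched as follows, assuming a 50 Ω reference impedance and a purely capacitive (lossless) load; the two capacitance values span the range quoted above, and the resulting phase-shifts come out close to the stated −0.15 to −5.31 degree range.

```python
import cmath
import math

f = 30e9      # test frequency (Hz)
Z0 = 50.0     # assumed reference impedance (Ohm)

def gamma_of_capacitor(c_farad: float) -> complex:
    """Reflection coefficient of a lossless capacitor: Z = 1/(j*2*pi*f*C)."""
    z = 1 / (1j * 2 * math.pi * f * c_farad)
    return (z - Z0) / (z + Z0)

for c in (0.14e-15, 5.09e-15):     # smallest and largest oxide capacitances
    g = gamma_of_capacitor(c)
    print(f"C = {c * 1e15:.2f} fF -> |Gamma| = {abs(g):.4f}, "
          f"phase = {math.degrees(cmath.phase(g)):.2f} deg")
```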
The calibration process developed in Section 2.3 is applied by considering three oxide capacitance values as the reference loads. From Table 1, we chose H2V2, H2V4 and H3V1, with capacitances of 0.56, 2.24 and 5.09 fF, to cover a wide range of capacitance values. As the calibrated measurements are very sensitive to the knowledge of the reference loads, we did not consider the smallest capacitances as references. Using Equation (4), the complex error terms e_00, e_01e_10 and e_11 are determined (Table 2). Whereas in conventional guided measurements the directivity corresponds to a small incident signal that leaks through the forward path of the coupler and into the receiver of the VNA, the directivity of around −6.68 dB here corresponds mainly to mismatch effects in the path of the microstrip probe without reflecting off the device under test (DUT). Given the nature of the probe structure, for which the end platinum wire of the cantilever is not supported by a TEM propagating mode, around 75% of the incident power is transmitted to the DUT. The reflection tracking of around −32 dB indicates transmission losses from the apex tip to the VNA receiver of around 16 dB. From Figure 3, 3 dB of transmission losses are attributed to the microwave probe, including the coaxial-to-microstrip transition. Consequently, transmission losses of around 13 dB are attributed to the coaxial cables and transitions (vacuum coaxial transition at the air/SEM interface). The input power and the IFBW, set to 0 dBm and 50 Hz, respectively, are therefore appropriate for accurate measurements. Ideally, in reflection measurements, all of the signal that is reflected off the DUT is measured at the VNA receiver. Due to the high impedance of the probe apex in contrast with the 50 Ω impedance of the microwave instrumentation (including the microstrip part of the probe, coaxial-to-microstrip transition, coaxial cables and feedthrough, and VNA), a large part of the signal reflects off the DUT, and multiple internal reflections occur between the probe apex and the DUT. In particular, the source match value of −0.87 dB indicates that 80% of the microwave power is reflected off the DUT. All of these systematic errors are taken into account by the calibration procedure. By inverting Equation (4), the calibrated complex reflection coefficient Γ in the reference plane of the apex tip can be determined. We developed a MATLAB® program called "SPAR2Y" for the determination of the inverse problem, i.e., determination of the quantitative data from the measured complex reflection coefficient (Figure 7). The input variables of the program consist of the real and imaginary parts of Γ_M,ij, the error terms e_00, e_11, e_10e_01 and the test frequency, used to derive the data of interest in a text file format.
Figure 7. Functional diagram of the MATLAB® script developed for quantitative data determination. The input data file (*.txt) consists of the frequency of operation, the measured real and imaginary parts of the complex reflection coefficient Γ_M,ij and the complex calibration error terms. The code SPAR2Y determines the calibrated complex reflection coefficient Γ_ij at the probe tip and the related complex impedance Z_ij and admittance Y_ij (including the capacitance C_ij). An output *.txt file is generated.
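The post-processing chain described for the SPAR2Y script can be sketched in a few lines: invert the one-port error model of Equation (4) to recover Γ at the tip, then convert it to an admittance and an equivalent capacitance. The error terms and the measured reflection coefficient below are placeholders chosen only to make the sketch runnable, not the calibrated values of Table 2.

```python
import math

f = 30e9      # test frequency (Hz)
Z0 = 50.0     # reference impedance (Ohm)

def calibrated_gamma(gamma_m: complex, e00: complex, e11: complex,
                     e01e10: complex) -> complex:
    """Invert Gamma_M = e00 + e01e10*Gamma / (1 - e11*Gamma)  (Equation (4))."""
    x = gamma_m - e00
    return x / (e01e10 + e11 * x)

def capacitance_from_gamma(gamma: complex) -> float:
    y = (1 / Z0) * (1 - gamma) / (1 + gamma)   # tip admittance
    return y.imag / (2 * math.pi * f)          # lossless-capacitor equivalent

# Placeholder inputs purely for illustration
gamma_m = 0.46 - 0.05j
e00, e11, e01e10 = 0.40 + 0.02j, 0.85 - 0.05j, 0.02 - 0.01j
gamma = calibrated_gamma(gamma_m, e00, e11, e01e10)
print(f"Gamma = {gamma:.4f}, C = {capacitance_from_gamma(gamma) * 1e15:.2f} fF")
```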
After running the program "SPAR2Y", we present the images of the real part and imaginary part of the admittance Y obtained after calibration (Figure 8). In addition, we plot a 1D profile along the x-axis to appreciate the fluctuations. The image of the microwave conductance indicates a value close to 0, demonstrating that the DUT is only reactive. The extracted 1D profile given in Figure 8c indicates the insensitivity of the real part of the admittance along the x-distance. Figure 8b shows the imaginary part of the admittance, which is a direct signature of the capacitance image. Along the 1D profile, the signal fluctuations are very low. Nevertheless, most of the DUTs show heterogeneity in the middle of their respective areas. Investigations were made to identify the origin. In particular, a fine analysis of the topography, microwave and SEM images leads to the conclusion that contamination effects, mainly in the middle of the gold patch areas, induce a reduction or loss of electrical contact between the apex tip and the gold patch. From the imaginary part image of Y, the capacitance image at 30 GHz is plotted in a 3D format in Figure 9. From Figure 9, the microwave capacitances are extracted. To quantify the error between theoretical and microwave capacitances, we present in Figure 10 the relative error between the two types of data after removing the reference capacitors used for calibration and erroneous microwave data (probe tip not contacting the DUT). From Figure 10, we demonstrate that the smallest MOS capacitance values present errors reaching 100%. The main reason is that the measurement accuracy depends on the reference devices used for the calibration. The smallest capacitances have not been considered in this study, so as to focus mainly on capacitors whose measurements present a good signal-to-noise ratio (SNR). Indeed, in contrast with capacitance values greater than 300 aF, those capacitors exhibit large relative fluctuations in the observed microwave signals.
Further complicating this, the drift of the microwave signal observed in the Y scanning direction (see Figure 5d) has more impact on the determination of the smallest capacitances, as they are physically positioned at the beginning and end of the scanning area. Further discussion on this point is proposed in the following section. For all other capacitors (>300 aF), the relative error reaches a maximum value of 20%, with a median value of 8.7%. In the following section, we analyze the capacitance fluctuation considering a straight 1D profile as presented in Figure 11a (non-calibrated topography). In particular, we focus on the smallest capacitors, related to the probe tip directly in contact with the silicon oxide layer. Figure 11b shows a zoomed-in version of Figure 11a in the x-range of 17-25 µm.

Conclusions
Calibrated capacitance values in the millimeter-wave regime, considering a frequency of operation of 30 GHz (free-space wavelength of 1 cm), have been presented. The scanning probe instrumentation proposed, built entirely from scratch, is based on a combined AFM/SMM integrated inside an SEM. The stability of the microwave path is ensured by keeping the microwave probe and cables fixed during the scanning operation. Only the sample under test is moved under the probe.
A high signal-to-noise ratio is obtained by choosing appropriate electrical parameters for the VNA, i.e., the RF power set to 0 dBm and the IFBW to 50 Hz. No external electrical matching network, commonly found in SMM set-ups, is used. Instead, we increased the frequency of operation up to 30 GHz to obtain a good compromise between the collimation of the electrical fields in the vicinity of the tip apex and moderate losses in the microwave path between the microwave source and the apex tip. We focused mainly on MOS capacitors whose theoretical values are in the range of 0.5-5 fF. A dedicated calibration is developed to extract the capacitance values with a median error of 10%. To that end, a dedicated standard operating procedure, including the measurement protocol and data analysis, is used to derive the quantitative data of interest. The measurement performance is demonstrated with capacitance fluctuations on the order of ±5 aF.

Discussion
These results are very instructive and lay the groundwork for future microwave nanometrology. Indeed, SMM techniques have become a mature technology in both academic and industrial laboratories. National metrology institutes have considerably contributed to enhancing SMM technology. In this effort, we analyzed the experimental results presented in this work, and we draw conclusions for future works in the following.
− The theoretical capacitances do not consider fringing field effects. Future works will include analytical and electromagnetic simulations to derive more realistic capacitance values. In addition, the coupling effect from neighboring capacitors will be studied to optimize the disposition of the capacitors.
− Electrical drift of the microwave instrumentation is inevitable. Although its impact is reduced for capacitances greater than 300 aF, there is an urgent need to introduce solutions to achieve stable measurements for capacitances as low as 1 aF. To tackle this issue, the microwave path must be reduced to the minimum, and research should be directed towards implementing the microwave instrumentation as close as possible to the probe. In particular, we omitted capacitances below 300 aF as the longitudinal drift (along the y-axis) is not compatible with accurate extraction of the electrical parameters. To complicate matters, the reference kit used places the smallest capacitors at the top and bottom of the scanning area. It is highly recommended to design reference kits by taking the electrical drift into consideration. In other words, the scanning area, and in particular the scanning time, must be reduced as necessary to yield consistent quantitative data.
− The calibration reference kit in this study is based on microsized capacitors. Decreasing the size of the microcapacitors is still an issue to be addressed for nanoscale characterization. Indeed, the apex tip must be reduced to accommodate smaller footprints. In this effort, we considered a relatively small apex radius of 70 nm. The size can be further reduced, as apex sizes below 20 nm are commercially available. Operating inside a SEM is beneficial to limit the scanning area when fragile sub-20 nm apex sizes are considered. Ongoing work related to a free-space calibration procedure that exploits the stand-off between the tip apex and the material surface is also a possible alternative. The instrumentation proposed offers the unique possibility to image the apex geometry and to simultaneously record the microwave signals.
Indeed, the apex geometry is the main factor that governs the theoretical derivation of the coupling capacitance.
8,412.6
2021-03-20T00:00:00.000
[ "Physics", "Engineering" ]
Synthesis and Characterization of Self-Assembled Nanogels Made of Pullulan Self-assembled nanogels made of hydrophobized pullulan were obtained using a versatile, simple, reproducible and low-cost method. In a first reaction pullulan was modified with hydroxyethyl methacrylate or vinyl methacrylate, further modified in the second step with hydrophobic 1-hexadecanethiol, resulting as an amphiphilic material, which self-assembles in water via the hydrophobic interaction among alkyl chains. Structural features, size, shape, surface charge and stability of the nanogels were studied using hydrogen nuclear magnetic resonance, fluorescence spectroscopy, cryo-field emission scanning electron microscopy and dynamic light scattering. Above the critical aggregation concentration spherical polydisperse macromolecular micelles revealed long-term colloidal stability in aqueous medium, with a nearly neutral negative surface charge and mean hydrodynamic diameter in the range 100–400 nm, depending on the polymer degree of substitution. Good size stability was observed when nanogels were exposed to potential destabilizing pH conditions. While the size stability of the nanogel made of pullulan with vinyl methacrylate and more hydrophobic chains grafted was affected by the ionic strength and urea, nanogel made of pullulan with hydroxyethyl methacrylate and fewer hydrophobic chains grafted remained stable. Introduction Pullulan is a water soluble, linear, neutral extracellular biodegradable homopolysaccharide of glucose produced by the fungus Aureobasidium pullulans (Pullularia pullulans) [1][2][3][4]. Pullulan consists of maltotriosyl units connected by α-D-1,6-glycoside linkages [3,5]. Pullulan is extensively used in food, cosmetic and pharmaceutical industries because it is easily modifiable chemically, non-toxic, non-immunogenic, non-mutagenic, and non-carcinogenic [5,6]. Furthermore, pullulan has good mechanical properties and attractive functional properties, such as adhesiveness, film formability, and enzymatically-mediated degradability [7]. In the form of self-assembled nanogels, it has been shown to exhibit chaperon like activity, thus being a promising technique for protein refolding [8]. It has been studied as a blood-plasma expander and substitute [9]. Pullulan arose as a promising polymer for various biomedical applications [10], such as surface modification of polymeric materials to improve blood compatibility (bioinert surfaces) [11,12], for gene [13,14] and drug delivery [5,[15][16][17][18][19], as a carrier for quantum dots for intracellular labeling to be used as a fluorescent probe for diagnostic bioimaging [20] and tissue engineering [21]. Self-assembled biotinylated pullulan acetate nanoparticles loading adriamycin were described as targeted anti-cancer drug delivery systems, internalized by HepG2 cells. The drug loading and release rate were accessed with a dialysis method [18]. Adriamycin loaded pullulan acetate/sulfonamide conjugate nanoparticles responding to tumor pH revealed pH-dependent cell interaction, internalization and cytotoxicity in in vitro studies using a breast tumor cell line (MCF-7). The drug loading profile was evaluated using a dialysis method [19]. Non-toxicity, efficient internalization and transfection in vitro of hydrogel pullulan nanoparticles encapsulating pBUDLacZ plasmid showed this system to be an efficient gene delivery carrier [14]. 
Pullulan potentially targets and accumulates in the liver because it is recognized by the asialoglycoprotein receptor expressed on the sinusoidal surface of the hepatocytes [22]. The asialoglycoprotein receptor was reported to be involved in pullulan receptor-mediated endocytosis [23]. The production of hydrophobically modified pullulan nanogels, using an approach similar to the one presented in this work, was achieved by other authors using cholesteryl group-bearing pullulan. The resulting nanogels were monodisperse, with a diameter of 20-30 nm, and stable in water. Their size and density were controlled by the pullulan degree of substitution with cholesterol and the molecular weight of the parent pullulan [24]. This nanogel was utilized in molecular complexation with bovine serum albumin (BSA) [25], insulin [26], lipase [27], human epidermal growth factor receptor 2 (HER-2) [28][29][30] and interleukin-12 (IL-12) [31,32], among other therapeutic molecules, proving this system to be useful as a therapeutic delivery system. Self-assembled hydrogel nanoparticles of cholesterol-bearing pullulan spontaneously released insulin from the complex, and thermal denaturation/aggregation was effectively suppressed upon complexation [26]. Cholesteryl group-bearing pullulan complexed with the truncated HER-2 protein delivered a HER-2 oncoprotein containing an epitope peptide to the major histocompatibility complex class I pathway, was able to induce CD8+ cytotoxic T lymphocytes against HER-2+ tumors and caused complete rejection of tumors. The results suggested that this hydrophobized polysaccharide may help soluble proteins to induce cellular immunity, with potential benefit in cancer prevention and cancer therapy [30]. The subcutaneous injection of cholesterol-bearing pullulan complexed with recombinant murine IL-12 led to a prolonged elevation of the IL-12 concentration in the serum. Repetitive administrations of the complex induced drastic growth retardation of re-established subcutaneous fibrosarcoma, without causing toxicity [31]. A raspberry-like assembly of nanogels encapsulated IL-12 efficiently (96%), kept it stable in the presence of BSA (50 mg/mL) and showed high potential to maintain a high IL-12 level in plasma after subcutaneous injection in mice [32]. A cationic derivative, obtained by ethylenediamine group functionalization of cholesteryl group-bearing pullulan, was developed as an effective intracellular protein delivery system [33]. The same research group designed hybrid hydrogels with self-assembled nanogels as cross-linkers to achieve interaction with proteins and chaperone-like activity [32,34,35]. Nanogel formulations, described as potential drug and vaccine delivery systems, have the potential to modify the profile of a drug, gene, protein, peptide, oligosaccharide or immunogen, as well as its ability to cross biological barriers, its biodistribution and its pharmacokinetics, improving efficacy and safety as well as patient compliance [36].

Results and Discussion
In the present work, hydrophobized pullulan was obtained with a two-step synthesis. The resultant self-assembled nanogels were characterized in terms of structure, size, shape, surface charge and stability by hydrogen nuclear magnetic resonance (1H NMR), fluorescence spectroscopy, cryo-field emission scanning electron microscopy (cryo-FESEM) and dynamic light scattering (DLS).
According to the literature, and in the same way as other reported methacrylates, hydroxyethyl methacrylate (HEMA) and vinyl methacrylate (VMA) should be grafted on the C6 of the glucose residues [37]. Then, via the Michael addition mechanism, the thiol from 1-hexadecanethiol (C16), acting as a nucleophile, reacts with the grafted methacrylate (Scheme I). The success of the synthesis, the purity, the chemical structure and the polymer degree of substitution of the reaction products were controlled using 1H NMR spectra in deuterium oxide (D2O) (Figure 1 and Table 1). Different independent batches of hydrophobized pullulan (pullulan-C16) with various degrees of substitution with the methacrylated groups and hydrophobic alkyl chains (DS_HEMA or DS_VMA and DS_C16, defined as the percentage of grafted HEMA, VMA or C16 moieties relative to the glucose residues, respectively) were synthesized by varying the molar ratios of methacrylate groups to glucose residues and the molar ratios of C16 to methacrylated groups. The synthetic procedure adopted proved to be versatile, simple and reproducible (Table 1).

Self-assembly of Pullulan-C16
The self-assembly of amphiphilic pullulan-C16 in water was studied using 1H NMR and fluorescence spectroscopy. Analyzing the 1H NMR spectra of pullulan-C16 (Figure 1), it can be observed that, while the mobility of the polysaccharide skeleton was maintained in environments of different polarity, the shape and width of the proton signals of the methyl (0.8 ppm) and methylene (1.1 ppm) groups of C16 depended on the polarity of the solvent used. In dimethyl sulfoxide-d6 (DMSO-d6), pullulan-C16 was soluble, and the C16 signals were sharp, as all hydrophobic chains were exposed to the solvent, having the same mobility (Figure 1b and 1f) [41]. As the percentage of D2O in DMSO-d6 increased, the base of those signals broadened (Figure 1c and 1g). In pure D2O, a large broadening was obvious, which represents the superposition of peaks of chemically identical species possessing various degrees of mobility (Figure 1d and 1h) [42]. These results give evidence that pullulan-C16 dispersed in water has part of the alkyl chains engaged in hydrophobic domains, while others might be exposed to the hydrophilic solvent. Differences in the environment and/or mobility of the molecules thus explain the broad peak observed for the aliphatic protons. Therefore, pullulan-C16 nanogels are obtained upon self-assembly in water through the association of the hydrophobic alkyl chains in hydrophobic domains. The critical aggregation concentration (cac) or critical micelle concentration (cmc) of pullulan-C16 was studied by fluorescence spectroscopy using hydrophobic dyes, pyrene (Py) [43,44] and Nile red (NR) [45], whose solubility and fluorescence are weak in water but high in hydrophobic environments. The intensity of Py increased with increasing concentrations of pullulan-C16, and a red shift occurred in the excitation spectra (Figure 2a,b). Above the cac, some bands in the 450 nm region of the emission spectra (Figure 2a,b), associated with the presence of Py dimers, are detected in pullulan-C16, suggesting high water penetration into the nanogel, which is in agreement with the 1H NMR measurements. The intensity ratio of the third and first vibrational bands, I3/I1, increased rapidly above the cac, which was 0.06 mg/mL both for PHC16-5.6-1.3 and for PVC16-10-7.
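The cac values quoted here are obtained, as detailed in the Methods, by fitting straight lines to the pre- and post-transition branches of the probe signal versus log(concentration) and taking their interception. The sketch below illustrates that procedure on made-up data points; the split into the two branches is an assumption of the example, not the experimental data of Figure 2.

```python
import numpy as np

# Made-up I3/I1 data purely to illustrate the two-trend-line interception method
conc = np.array([0.005, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64])   # mg/mL
ratio = np.array([0.62, 0.63, 0.62, 0.64, 0.75, 0.88, 1.00, 1.12])   # I3/I1

x = np.log10(conc)
low, high = slice(0, 4), slice(4, None)       # assumed pre-/post-transition split

m1, b1 = np.polyfit(x[low], ratio[low], 1)    # flat baseline below the cac
m2, b2 = np.polyfit(x[high], ratio[high], 1)  # rising branch above the cac

x_cross = (b2 - b1) / (m1 - m2)               # interception of the two lines
print(f"estimated cac ~ {10 ** x_cross:.3f} mg/mL")
```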
This intensity transition reflected the transfer of Py to a less polar, hydrophobic domain and was coincident with the onset of the supramolecular formation of pullulan-C16 nanogels (Figure 2c). The lower I3/I1 ratio obtained for PHC16-5.6-1.3 indicates the location of Py in a more hydrophilic environment, while the higher I3/I1 ratio for PVC16-10-7 indicates the location of Py in a more hydrophobic environment (Figure 2c) [43]. This is confirmed by the better defined vibronic structure of the Py emission in the case of PVC16-10-7. Surprisingly, the resulting cac is the same for both nanogels despite their different DS_C16 relative to methacrylated groups (70% for PVC16-10-7 and 23% for PHC16-5.6-1.3). The area-normalized fluorescence emission intensity of NR was constant, without any shift in the maximum emission wavelength, for lower concentrations of pullulan-C16, because individual molecules exist as premicelles in the aqueous environment (Figure 3; zone A). In contrast, for concentrations greater than the cac, the fluorescence intensity increased and the maximum emission wavelength was blue-shifted due to the transfer of NR to the hydrophobic domains of the nanogels. The resultant cac was 0.04 mg/mL and 0.01 mg/mL for PHC16-5.6-1.3 and PVC16-10-7, respectively (Figure 3). This variation is consistent with the C16 loading of the studied pullulan nanogels, as higher hydrophobicity results in a lower cac. The PVC16-10-7 hydrophobic domains are dissimilar to those present in a typical surfactant system and have two types of hydration levels (Figure 3b; zones B and C), while in PHC16-5.6-1.3 only one type of hydrophobic domain is observed (Figure 3a; zone C). This observation shows a slight dependence of the formed hydrophobic domains on the type of linker used (HEMA or VMA). In the case of PHC16-5.6-1.3, the determined cac values are similar for both fluorescent probes, but that is not the case for PVC16-10-7. This is explained by the fact that, since Py molecules already start in a weakly hydrated pre-micellar environment, they are unable to detect the micellar domains of type B, which have higher hydration levels than the domains of type C. For the latter, there is a sufficient variation in hydration level to be detected by the Py I3/I1 ratio, resulting in a cac value above the real one. We thus conclude that NR is a more sensitive fluorescence probe, as it was able to follow all the variations in hydration level that occurred in the self-aggregation process of PVC16-10-7. For PHC16-5.6-1.3, the absence of type B micellar domains and the higher hydration of the premicellar environment, also seen in the NR emission in zone A, allowed compatible determinations of the cac for both probes. As the pullulan-C16 concentration increases above the cac, more hydrophobic domains are formed, solubilizing more Py and NR, which consequently increases the detected fluorescence, so the typical second plateau does not occur (Figures 2c and 3c). The highest concentration of pullulan-C16 used was insufficient to enclose all of the hydrophobic dyes; this might be caused by the continued redistribution of Py and NR molecules to the less hydrated hydrophobic domains and by the formation of Py dimers in the hydrophobic domains with a greater hydration level.

Size and Shape
The hydrophobic forces that sequester the hydrophobic chains in the core and the excluded volume repulsion between the chains mostly establish the micellar size [46].
The pullulan-C16 nanogels appeared spherical in cryo-FESEM micrographs, with a large size distribution in the range of 100-700 nm for PHC16-5.6-1.3.

Storage
The mean hydrodynamic diameter obtained using DLS for pullulan-C16 nanogels dispersed in ultrapure water oscillated between 162 nm and 335 nm for PHC16-5.6-1.3 and between 115 nm and 369 nm for PVC16-10-7 over a six-month storage period at room temperature (25 °C). Both materials exhibited fairly high polydispersity, with an average pdI of 0.59 ± 0.11 for PHC16-5.6-1.3 and 0.43 ± 0.23 for PVC16-10-7, which means that there may be macromolecular micelles with a distribution of sizes and shapes, as also revealed by the cryo-FESEM micrographs (Figure 5).

Effect of the Concentration of Pullulan-C16
The mean hydrodynamic diameter tended to be much larger for lower concentrations of pullulan-C16, especially when closer to the cac. It appears that, for higher concentrations of the polymer, the remaining solvent is gradually released from the hydrophobic core, resulting in a decrease in size. In contrast, occasionally exposed hydrophobic domains within a less mobile shell formed by hydrophilic chains may give rise to secondary aggregation, enlarging the resultant macromolecular micelles [46]. The zeta-potential values were always negative and close to zero, never lower than −20 mV. When the zeta potential approaches zero, electrostatic repulsion becomes small compared with the ever-present Van der Waals attraction. Under these conditions, instability may eventually arise, causing aggregation followed by sedimentation and phase separation. However, the pullulan-C16 nanogels preserved their nanosize, with the exception of PVC16-10-7 at 0.5 mg/mL, which formed aggregates beyond the nanoscale (Figure 6).

Effect of Urea
Urea is known for its ability to break intramolecular hydrogen bonds and to destabilize hydrophobic domains [47,48]. Urea and its derivatives are very efficient modifiers of aqueous solution properties, participating at the level of the micellar solvation layer, because they enhance the polarity and the hydrophilic character of water. An increased accessibility from the aqueous phase at higher urea concentrations could result in a stronger solvation of the polar groups in micellar aggregates by the urea-water mixture than by water alone. Urea is related to the enhancement of the solubility of hydrocarbon tails, favoring their solvation, and to the weakening of the hydrophobic interactions responsible for the formation and maintenance of the micellar assembly in aqueous solution. The action of urea on micellization depends on the way in which solvation occurs in a specific micellar system [49]. The results obtained show that urea did not affect the nanogel size of PHC16-5.6-1.3. In contrast, urea caused concentration-dependent destabilization of PVC16-10-7, affecting the self-assembly of this amphiphilic system in water and leading to the formation of larger aggregates beyond the nanoscale (Figure 7). The destabilization of PVC16-10-7, resulting in a higher particle size, may be tentatively assigned to improved solvation of the hydrophobic domains. This possibility is supported by the fact that PVC16 has a higher substitution degree than PHC16.

Effect of Ionic Strength
Colloidal stability might be compromised in the absence of an electrostatic barrier. The addition of a sufficient quantity of salt neutralizes the surface charge of the micelles in dispersion and compresses the surface double layer, facilitating colloidal aggregation.
Without the repulsive forces that keep macromolecular micelles separate, coagulation might occur due to attractive Van der Waals forces. Compared with the salt-free pullulan-C16 colloidal dispersion, PHC16-5.6-1.3 remained stable, whereas the PVC16-10-7 nanogel became larger as the ionic strength increased with increasing concentrations of NaCl (Figure 8).

Effect of pH
Size distributions and zeta potentials of pullulan-C16 as a function of pH, using phosphate-citrate buffer (pH 2.2-8.0), were compared to the values obtained in water and PBS. The mean hydrodynamic diameter values obtained either for PHC16-5.6-1.3 or PVC16-10-7 were similar over the range of pH studied. The size stability over the range of pH studied demonstrates that the organization of the hydrophobic alkyl chains in hydrophobic domains with low water content protects the amphiphilic molecules from the hydrolysis of the carbonate ester at alkaline pH and from the hydrolysis of the methacrylate ester at low pH [50]. For both materials, small negative values of zeta potential were obtained, indicating that there is little repulsion between macromolecular micelles to prevent aggregation. However, even with a zeta potential close to zero, the particles showed only slight instability while remaining in the nanoscale (Figure 9). The nearly neutral charge is valuable for in vivo use, since large positively charged materials cause non-specific cell sticking, while large negatively charged materials are efficiently taken up by scavenger endothelial cells or 'professional pinocytes' found in the liver, which results in a rapid clearance from the blood [51].
Figure 9. Influence of pH on the size and zeta potential of pullulan-C16 nanogels measured at 37 °C by DLS (mean ± S.D., n = 3).
The pullulan-based nanogels synthesized and characterized in this work have a high water content, tunable size, an interior network for the possible incorporation of therapeutics, and a large surface area for potential multivalent bioconjugation with cell-targeting ligands such as proteins, peptides and antibodies. With these characteristics, the described nanogels might be useful as polymeric carriers for therapeutic targeted delivery. In our laboratory several nanogels are being developed, using different polysaccharides: dextrin, mannan, hyaluronic acid and glycol chitosan. The use of different polysaccharides allows the production of nanogels bearing different surface properties, namely size, charge and bioactivity. Among the applications envisaged for these materials, 1) the delivery of therapeutic proteins and of poorly water-soluble pharmaceuticals, 2) vaccination, and 3) the delivery of nucleic acid therapeutics are being developed. The comprehensive characterization of several nanogels provides a platform for the development of more sophisticated materials with the ability to perform as delivery systems. Recent results in our laboratory demonstrate the potential of dextrin nanogels for the delivery of cytokines, namely IL-10 [52]; the association of the nanogels with injectable hydrogels is also a promising field of application of the self-assembled nanogels, allowing the incorporation of hydrophobic molecules in the highly hydrated environment of hydrogels. Ongoing work addresses the study of the biodistribution and drainage of nanogels to the lymphatic nodes. Preliminary results using radioactively labeled nanogels and immunohistochemical analysis of the lymphatic nodes confirm the ability of the nanogels to reach the nodes, internalized in phagocytic cells.
The use of mannan opens interesting possibilities concerning the use of the nanogels for vaccination purposes, acting as a delivery system and as an adjuvant. Self-assembled nanogels are thus very promising materials that bring together the essential requisites of biocompatibility and performance. Synthesis of Amphiphilic Pullulan-C 16 Hydroxyethyl methacrylate-derivatized pullulan (pullulan-HEMA) was prepared as described by Van Dijk-Wolthuis et al. [38]. Briefly, pullulan was dissolved in dry DMSO in a nitrogen atmosphere with different calculated amounts of HEMA-CI, resulting in 0.20, 0.25 and 0.4 molar ratios of HEMA-CI to glucose residues. The reaction catalyzed by DMAP (2 mol equiv to HEMA-CI) was allowed to proceed and the mixture was stirred at room temperature for 4 days. The reaction was terminated with concentrated HCl (2% v/v), which neutralized DMAP and imidazole. The mixture was then dialyzed against frequently changed distilled water at 4 °C for 3 days. After being lyophilized, pullulan-HEMA resulted as a white fluffy product, which was stored at −20 °C. Vinyl methacrylated pullulan (pullulan-VMA) was synthesized by transesterification of pullulan with VMA, overall as described by Ferreira et al. [39] but without enzymes [53]. Briefly, pullulan was dissolved in dry DMSO, with calculated amounts of VMA resulting in 0.25 and 0.5 molar ratios of VMA to glucose residues. After stirring at 50 º C for 2 days, the resulting mixture was dialyzed for 3 days against frequently changed distilled water, at room temperature (~25 º C). Each sample of modified pullulan after being lyophilized resulted as a white fluffy product that was stored at room temperature. Finally, the amphiphilic molecules pullulan-HEMA-C 16 (PHC 16 ) and pullulan-VMA-C 16 (PVC 16 ) were produced as described elsewhere [41]. In brief, Pullulan-HEMA or Pullulan-VMA reacted in dry DMSO (equivalent HEMA or VMA = 0.03 M) with C 16 . The reaction was catalyzed by TEA in a 2 molar ratio of TEA to HEMA or VMA. After stirring for 3.5 days at 50 º C, the resulting mixture was dialyzed, lyophilized and stored as described above. 1 H NMR Spectroscopy Lyophilized reaction products were dispersed in D 2 O (5 mg/mL). The pullulan-C 16 was also dispersed in DMSO-d 6 and in 10% D 2 O in DMSO-d 6 (5 mg/mL). Samples were stirred overnight at 50 º C to obtain a clear dispersion, which was transferred to 5 mm NMR tubes. One-dimensional 1 H NMR measurements were performed in a Varian Unity Plus 300 spectrometer operating at 299.94 MHz. One-dimensional 1 H NMR spectra were recorded at 298 K with 256 scans, a spectral width of 5000 Hz, a relaxation delay of 1 s between scans, and an acquisition time of 2.8 s. Fluorescence Spectroscopy The cac of the pullulan-C 16 was fluorometrically investigated using hydrophobic guest molecules, such as Py and NR. The fluorescence intensity change of these guest molecules was calculated as a function of the pullulan-C 16 concentration. Briefly, lyophilized pullulan-C 16 was dispersed in ultrapure water (1 mg/mL) with stirring for 3-5 days at 50 °C. Consecutive dilutions of 1mL of each sample were prepared in ultrapure water. In the case of Py, a volume of 5 μL of a 1.2 × 10 −4 M Py stock solution in ethanol was added, giving a constant concentration of 6 × 10 −7 M in 0.5 % ethanol/water for all Py fluorescence measurements. 
In case of NR, a volume of 5μL of a 4 × 10 −5 M NR stock solution in ethanol was then added, giving a constant concentration of 2 × 10 −7 M in 0.5 % ethanol/water for all NR fluorescence measurements. The samples were stirred overnight. Fluorescence measurements were performed with a Spex Fluorolog 3 spectrofluorimeter, at room temperature. The slit width was set at 5 nm for excitation and 5 nm for emission. All spectra were corrected for the instrumental response of the system. The signal obtained for each sample was subtracted with the signal obtained with negative control, which corresponded to pullulan derivatives at exactly the same experimental conditions but without the guest NR or Py molecules. The cac was calculated using both the Py fluorescence intensity ratio of the third (384-385 nm) and first vibrational bands (372-374 nm) (I 3 /I 1 ) of the emission spectra (λ ex = 339 nm) and the maximum emission intensity of NR (λ ex = 570 nm) in the pullulan-C 16 /water system as a function of pullulan-C 16 concentration; in both cases, the cac was estimated as the interception of two trend lines. Cryo-FESEM Each colloidal dispersion of pullulan-C 16 was prepared with stirring of the lyophilized pullulan-C 16 in ultrapure water for 3-5 days at 50 °C (1 mg/mL) followed by filtration (pore size 0.45 μm), with insignificant material lost, as confirmed with the phenol-sulfuric acid method, using glucose as standard [54]. The colloidal dispersions were concentrated by ultrafiltration (Amicon Ultra-4 Centrifugal Filter Units, cutoff molecular weight 1 × 10 5 ) and negatively stained with phosphotungstic acid (0.01% w/v). Samples were placed into brass rivets, plunged frozen into slush nitrogen at −200 º C and transferred to the cryo stage (Gatan, Alto 2500, U.K.) of an electronic microscope (SEM/EDS: FESEM JEOL JSM6301F/Oxford Inca Energy 350). Each sample was fractured on the cryo stage with a knife. Once in the microscope, sublimation of ice was carried out in the cryo chamber for 10 min at −95 º C, allowing the exposure of the nanogel particles. The samples were sputter coated with gold and palladium at −140 º C, using an accelerating voltage of 10 kV. The antipollutant of copper covers and protects the sample. The samples were observed at −140 º C at 15 kV. The solvent used in the preparation of the samples (water and phosphotungstic acid) was also observed as a negative control. DLS The size distribution and zeta potential measurements for each colloidal dispersion, prepared as described above for cryo-FESEM, were performed in a Malvern Zetasizer NANO ZS (Malvern Instruments Limited, U.K.) using a He-Ne laser wavelength of 633 nm, a detector angle of 173° and a refractive index of 1.33. Size. For each sample (1 mL), the polydispersity index (pdI) and z-average diameter, which corresponds to the mean hydrodynamic diameter, were evaluated in 10 repeated measurements performed periodically during 6 months of storage in a polystyrene cell at 25 °C. The size distribution of each sample dispersed in ultrapure water (0.05-2 mg/mL), phosphate-buffered saline (PBS 1x, pH 7.4), phosphate-citrate buffer (pH 2.2-8.0), NaCl (0-0.6 M) or in urea (0-7 M) was executed at 37 °C in three independent experiments, three repeated measurements being performed in each one. Zeta Potential. Each sample dispersed in ultrapure water (0.05-2 mg/mL), phosphate buffered saline (PBS 1x, pH 7.4) or in phosphate-citrate buffer (pH 2.2-8.0) was analyzed at 37 °C in a folded capillary cell. 
The zeta potential values reported were calculated using the Smoluchowski equation from three independent experiments, with three repeated measurements performed in each one.

Conclusions

Hydrophobized pullulan nanogels were designed with a versatile, simple, reproducible and low-cost method. Above the cac, upon self-assembly in water, spherical polydisperse macromolecular micelles revealed long-term size stability in aqueous medium, with a nearly neutral negative surface charge and a mean hydrodynamic diameter in the range of 162-335 nm for PHC 16 -5.6-1.3 and 115-369 nm for PVC 16 -10-7. Size and zeta potential stability of pullulan-C 16 nanogels was maintained when exposed to potentially destabilizing conditions of pH. While the size stability of the nanogel made of VMA with C 16 grafted, PVC 16 -10-7, was affected by ionic strength and urea, the nanogel made of pullulan with HEMA and fewer C 16 grafted, PHC 16 -5.6-1.3, remained more stable. Pullulan-based nanogels have tunable size, high water content, an interior network for possible incorporation of therapeutics, and a large surface area for potential multivalent bioconjugation with cell-targeting ligands. With these characteristics, the described nanogels might be useful as polymeric carriers for therapeutic targeted delivery. Further work is required to study molecular complexation, functionality and biocompatibility of these novel promising nanogels as drug and vaccine delivery systems.
6,189.4
2011-03-25T00:00:00.000
[ "Materials Science" ]
Graph-Based Audio Classification Using Pre-Trained Models and Graph Neural Networks Sound classification plays a crucial role in enhancing the interpretation, analysis, and use of acoustic data, leading to a wide range of practical applications, of which environmental sound analysis is one of the most important. In this paper, we explore the representation of audio data as graphs in the context of sound classification. We propose a methodology that leverages pre-trained audio models to extract deep features from audio files, which are then employed as node information to build graphs. Subsequently, we train various graph neural networks (GNNs), specifically graph convolutional networks (GCNs), GraphSAGE, and graph attention networks (GATs), to solve multi-class audio classification problems. Our findings underscore the effectiveness of employing graphs to represent audio data. Moreover, they highlight the competitive performance of GNNs in sound classification endeavors, with the GAT model emerging as the top performer, achieving a mean accuracy of 83% in classifying environmental sounds and 91% in identifying the land cover of a site based on its audio recording. In conclusion, this study provides novel insights into the potential of graph representation learning techniques for analyzing audio data. Introduction Graphs are powerful mathematical structures that have been extensively employed to model and analyze complex relationships and interactions across various domains [1].In passive acoustic monitoring applications, which help to create conservation plans, ecoacoustics has recently gained great importance as a cost-effective tool to analyze species conservation and ecosystem alteration.In this field, it is necessary to analyze a large amount of acoustic data to assess variations in the ecosystem.Moreover, in recent years, the field of graph representation learning has grown due to the increased interest in using these graph structures for learning and inference tasks [2].To learn from graphs, it is crucial to develop algorithms and models that can efficiently capture and make use of the detailed structural information present in graph data.These approaches have found applications in diverse fields, including bioinformatics, computer vision, recommendation systems, and social network analysis [3][4][5][6]. Graph neural networks (GNNs) have emerged as a prominent class of models for learning on graphs, offering distinct advantages over traditional artificial intelligence techniques [7].Unlike traditional methods that operate on independent data points, GNNs use the inherent connectivity and dependencies within the graph structure to learn and propagate information across nodes.By recursively aggregating and transforming node features based on their local neighborhood, GNNs can capture both local and global patterns, enabling them to model complex relationships in graph data effectively.Notably, significant advancements in tasks such as node classification, link prediction, and graph generation have been made by leveraging their ability to capture structural dependencies [8]. 
Automatic audio classification tasks have attracted attention in recent years, specifically the classification of environmental sounds [9], enabling applications ranging from speech recognition [10,11] to soundscape ecology [12,13].Traditional classification techniques such as k-nearest neighbors, support vector machines, and neural network classifiers have been used [14][15][16][17].However, its performance mostly relies on hand-crafted features from representations as temporal, spectral, or spectro-temporal domains.Moreover, deep learning techniques using 1D (raw waveform) [18][19][20][21] or 2D (spectrograms) [22][23][24][25] convolutional neural networks (CNN) have shown significant improvements over hand-crafted methods.Nevertheless, these networks do not consider the relationships that may exist between different environmental sounds.Recurrent Neural Networks were initially proposed to capture feature dependencies from audio data [26][27][28].More recently, Transformer models have emerged to model longer feature dependencies and leverage parallel processing [29][30][31][32].Transformer models can handle variable input lengths and utilize attention mechanisms, making them aware of the global context and allowing their application on audio classification tasks. Although graphs have been widely employed to represent and analyze visual and textual data, their potential to represent audio data has received relatively less attention [33][34][35].Nonetheless, audio data, ranging from speech signals to music recordings, inherently exhibit temporal dependencies and complex patterns that can be effectively captured and modeled using graph-based representations.Working with graphs presents several challenges in their construction and subsequent processing.Determining how to generate feature information for each node and establishing connections between nodes in the network remain open problems.In this study, we propose utilizing pre-trained audio models to extract informative features from audio files, enabling the building of graphs that capture the inherent relationships and temporal dependencies present in the audio data. Specifically, this study aims to address the problem of audio classification as a node classification task over graphs.To achieve this, we propose the following approach: (i) characterizing each audio with pre-trained networks to leverage transfer learning from models trained on large amounts of similar data, (ii) constructing graphs with each set of generated features, and (iii) utilizing the constructed graphs to classify nodes into predefined categories, taking advantage of their relationship.To accomplish this, we will use two datasets, a public one and one acquired in a passive acoustic monitoring study.We will evaluate the performance of three state-of-the-art GNNs: convolutional graph networks (GCN), graph attention networks (GAT), and GraphSAGE.These models leverage the rich structural information encoded in audio graphs in a transductive manner to learn discriminative representations capable of efficiently distinguishing between different audio classes.By comparing the performance of these models, we attempt to evaluate which of the graph models performs better on audio classification tasks. 
In conclusion, this study contributes to the emerging field of graph representation learning by exploring the application of GNNs for audio classification.In particular, we demonstrate the effectiveness of pre-trained audio models to generate node information for graph representations and compare the performance of three GNN architectures.The results not only advance the state-of-the-art in audio classification but also emphasize the potential of graph-based approaches for modeling and analyzing complex audio data. Graph Neural Networks A graph is a widely used data structure, denoted as G = (V, E ), consisting of nodes and edges E = {e ij } representing a link between node i (v i ) and node j v j .A useful way to represent a graph is through an adjacency matrix , where the presence of an edge is encoded as an entry with A ij = 1 if there is an edge between (v i ) and (v j ) and A ij = 0 otherwise.Additionally, each node i has associated feature information or embeddings denoted as h (0) i .GNNs are machine learning methods that receive data in the form of graphs and use neural message passing to generate embeddings for graphs and subgraphs.In [2], the author provides an overview of neural message passing, which can be expressed as follows: In this equation, h u is the current embedding of node u where the embeddings (h v ) of neighboring nodes will be sent; N(u), the neighborhood of node u; and update (k) and agg (k) , permutation-invariant functions. There exist various GNN models that differ in their approach to the aggregation or update function expressed in Equation ( 1) and in their ability to perform prediction tasks at node, edge, or network level [36].The theory of the three GNN models used in this study is presented below. Graph Convolutional Networks (GCNs) The goal of GCNs is to generalize the convolution operation to graph data by aggregating both self-features and neighbors' features [37].Following the update rule given by Equation ( 2), GCNs enforce self-connections by making à = A + I and stack multiple convolutional layers followed by nonlinear activation functions. In this equation, H is the feature matrix containing the embeddings of the nodes as rows, and D denotes the degree matrix of the graph, which is computed as Dij = ∑ j Ãij .Moreover, σ(•) is an activation function, and W is a trainable weight matrix. Graph SAmple and aggreGatE (GraphSAGE) GraphSAGE, a framework built on top of the original GCN model [38], updates each node's embedding information by sampling the number of neighbors at different hop values and aggregating their respective embedding information.This iterative process allows nodes to increasingly gain information from different parts of the graph. The main difference between the GCN model and GraphSAGE lies in the aggregation function.Where GCNs use an average aggregator, GraphSAGE employs a generalized aggregation function.Also, in GraphSAGE, self-features are not aggregated at each layer.Instead, the aggregated neighborhood features are concatenated with self-features, as shown in Equation (3). In this equation, B is a trainable weight matrix, and agg denotes a generalized aggregation function, such as mean, pooling, or LSTM. 
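Because Equation (2) is not reproduced in this text, the following minimal sketch illustrates the GCN propagation rule described above (self-connections via Ã = A + I, degree-based normalization, a trainable weight matrix W, and a nonlinear activation σ, taken here as ReLU). The symmetric normalization used below is the commonly used form for GCNs and is an assumption; the tensor names and toy sizes are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = sigma(D~^-1/2 (A + I) D~^-1/2 H W).

    A: (N, N) dense adjacency matrix, H: (N, F_in) node features,
    W: (F_in, F_out) trainable weights. Symmetric normalization assumed.
    """
    A_tilde = A + torch.eye(A.size(0))      # enforce self-connections
    d = A_tilde.sum(dim=1)                  # degrees: D~_ii = sum_j A~_ij
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return F.relu(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)

# toy usage: 4 nodes, 8-dimensional embeddings, 3 output classes
A = torch.tensor([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=torch.float)
H = torch.randn(4, 8)
W = torch.randn(8, 3)
out = gcn_layer(A, H, W)   # (4, 3) updated node embeddings
```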
Graph Attention Networks (GATs) In GCNs (Equation ( 2)), graph node features are averaged at each layer, with weights determined by coefficients obtained from the degree matrix ( D).This implies that the outcomes of GCNs are highly dependent on the graph structure.GATs [39], for their part, seek to reduce this dependency by implicitly calculating these coefficients, taking into account the importance assigned to each node's features using the attention mechanism [40].The purpose of this is to increase the model's representational capacity. The expression for GATs is presented in Equation (4). In this equation, α uv represents the attention coefficients of the neighbors of node u, v ∈ N(u), regarding the aggregation feature aggregation at this node.These coefficients are computed as with a denoting a trainable attention vector [41]. UrbanSound8K UrbanSound8K is an audio dataset [42] that contains 8732 labeled audio files in WAV format and lasts four seconds or less.Each audio file belongs to one of the following ten classes: air conditioner, car horn, children playing, dog bark, drilling, engine idling, gun shot, jackhammer, siren, and street music. The audio files are originally pre-distributed across ten folds, as depicted in Figure 1.To avoid errors that could invalidate the results and enable fair comparisons with existing literature, it is advised to perform cross-validation using the ten predefined folds. Rey Zamuro Reserve This dataset arises from a passive acoustic monitoring study conducted at Rey Zamuro and Matarredonda Private Reserves (3°31 ′ 02.5 ′′ N, 73°23 ′ 43.8 ′′ W), located in the municipality of San Martín in the Department of Meta, Colombia.The reserve covers an expanse of 6000 hectares, predominantly characterized by natural savanna constituting around 60%, interspersed with introduced pasture areas.The remaining 40% is covered by forests.This region falls within the tropical humid biome of the Meta foothills, showcasing an average temperature of 25.6 °C. Data were acoustically recorded in September of 2022.A 13 × 8 grid was installed with 94 AudioMoth automatic acoustic devices placed 400 m from each other; of these recorders, one was not used due to deteriorated audio.The recording was made every fourteen minutes for seven consecutive days.The recordings were captured in mono format at a sampling rate of 192,000 Hz.The study encompassed various habitats, such as forest interiors, edges, and adjacent areas, each with distinct characteristics, including undergrowth.The recording heights were standardized at 1.5 m above the ground. Depending on the kind of land cover, each acoustic recording of Rey Zamuro soundscapes was classified as forest, savanna, or pasture.These labels were given based on the placement of each automated recording unit.A total of 71,497 recordings were obtained, of which 14,546 correspond to forest class, 14,994 to savanna class, and 41,957 to pasture class.In all, 80% of the dataset is used as the training set, and the remaining 20% as the test set. Pre-Trained Models for Audio Feature Extraction We use pre-trained deep learning audio models to extract deep features from each audio file, which will be used as node information in the constructed graphs, i.e., as the values h (0) i .Specifically, we employed the following three models: VGGish, YAMNet, and PANNs. 
VGGish VGGish is a pre-trained neural network architecture particularly designed to generate compact and informative representations, or deep embeddings, for audio signals [43].It is inspired by the Visual Geometry Group (VGG) network architecture originally developed for image classification [44].The deep embeddings generated by VGGish effectively capture relevant acoustic features and serve as a foundation for various audio processing tasks, such as audio classification, content-based retrieval, and acoustic scene understanding [45,46].VGGish was trained on AudioSet [47], a publicly available and widely used large-scale audio dataset comprising millions of annotated audio clips and 527 classes, including animal sounds, musical instruments, human activities, environmental sounds, and more. The architecture of VGGish consists of several layers, including convolutional, maxpooling, and fully connected layers.In this model, the processed audio is segmented into 0.96-second clips, and a log-Mel spectrogram is calculated for each clip, serving as the input to the neural network.Then, the convolutional layers apply a set of learnable filters to the input audio spectrogram, aiming to detect local patterns and extract low-level features.Following each convolutional layer, max-pooling layers are employed to reduce the spatial dimensions of the obtained feature maps while retaining the most important information.This process helps capture and preserve relevant patterns at different scales and further abstract the representations.Lastly, the final layers of VGGish, i.e., the fully connected layers, take the flattened output of the preceding convolutional and max-pooling layers and map it to a 128-dimensional representation.This mapping aims to capture global and high-level dependencies, resulting in deep embeddings that encode meaningful information about the audio signal and can serve as input for subsequent shallow or deep learning methods. PANNs Large-scale Pretrained Audio Neural Networks (PANNs) are pre-trained models specifically developed for audio pattern recognition [48].Their architecture is built upon CNNs, which are well-suited for analyzing audio mel-spectrograms.PANNs have multiple layers, including convolutional, pooling, and fully connected layers.These layers work together to learn hierarchical representations of audio patterns at various levels of abstraction. The training process of PANNs involves pre-training the model on the large-scale AudioSet dataset.By being trained on this dataset, PANNs learn to capture a wide range of audio patterns, making them strong audio feature extractors.These audio patterns are then mapped to a 2048-dimensional output space. YAMNet Yet another Audio Mobilenet Network (YAMNet) is a pre-trained neural network architecture that utilizes the power of deep CNNs and transfer learning to perform accurate and efficient audio analysis [49]. YAMNet is a mobilenet-based architecture consisting of a stack of convolutional layers, followed by global average pooling and a final fully connected layer with softmax activation.The convolutional layers extract local features by convolving small filters over the input audio spectrogram, thereby capturing different levels of temporal and spectral patterns.Then, the global average pooling operation condenses the extracted features into a fixed-length representation.Finally, the fully connected layer produces the classification probabilities for each sound class. 
YAMNet's primary objective is to accurately classify audio signals into a wide range of sound categories.However, the embeddings obtained after the global average pooling operation can also be useful. To process audio, YAMNet divides the audio into segments of 0.96 s with a hop length of 0.48 s.For each segment, a feature output comprising 1024 dimensions is generated. Graph Construction A popular way to determine the edges of a graph is to define whether two points are neighbors through the k-nearest neighbors (k-NN) algorithm.According to this method, the neighbors of node v i are those k-nearest neighbors in the feature space [50].Thus, the k-NN algorithm assigns edges between v i and its neighbors. Experimental Framework The proposed methodology of this study to assess the effectiveness of using graphs to represent audio data by leveraging pre-trained audio models to generate node information is depicted in Figure 2, and involves the following stages: (i) VGGish, YAMNet, and PANNs pre-trained audio models are used to extract features from both datasets, (ii) those deep features are used independently to construct graphs where each node represents an audio file, and edges are determined based on the k-NN algorithm, and (iii) the constructed graphs are used to train and optimize certain hyperparameters on GCN, GraphSAGE, and GAT models to perform node classification.As a first step, we employed the VGGish, PANNs, and YAMNet pre-trained models to extract features from the audio files in both datasets to be used as node embedding vectors.In the UrbanSound8K dataset, fold information was preserved for the extracted features, as shown in Figure 3. VGGish model generates a 128-dimensional deep feature vector for every 0.96 s of an audio clip, and YAMNet produces a 1024-dimensional deep feature vector for every 0.48 s.Since the audio files have a maximum duration of four seconds for UrbanSound8K and 60 s for Rey Zamuro, to obtain node embeddings of the same length, we averaged those 128-dimensional VGGish-based and 1024-dimensional YAMNet-based deep features.Subsequently, for each dataset characterized using the pre-trained models, we constructed a graph where the nodes represented the audio embeddings, and the edges were defined by applying the k-NN algorithm, where each node is connected with its k nearest neighbors.The value k was optimized for each architecture using Optuna [51].Then, we implemented the GCN, GraphSAGE, and GAT architectures using PyTorch Geometric [52].For the GCN and GraphSAGE models, we employed a two-layer architecture with a hidden dimension optimized by Optuna and an output dimension equal to the number of classes, i.e., three for the Rey Zamuro dataset and ten for UrbanSound8K.For the GAT model, we used a two-layer architecture, with the first layer having a value for hidden dimension optimized by Optuna and 10 heads, followed by a second layer with an output dimension corresponding to the number of classes and one head. To compute the attention coefficients, we employed a slope of 0.2 on the LeakyReLU activation function in Equation (5).For all trained GNNs, we used the ReLU activation function and a dropout with a probability of 0.5.All models were trained to minimize crossentropy loss using the Adam optimizer (with a learning rate of 0.001 and weight decay of 5 × 10 −4 ) for 300 and 1300 epochs for UrbanSound8K and Rey Zamuro dataset, respectively. 
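As a concrete illustration of the pipeline just described, the hedged sketch below builds a k-NN graph from precomputed audio embeddings and trains a two-layer GAT with PyTorch Geometric, using the stated settings (dropout 0.5, ReLU, Adam with a learning rate of 0.001 and weight decay of 5 × 10⁻⁴, 10 heads in the first GAT layer). The embeddings, labels, masks, hidden dimension (tuned with Optuna in the paper), and epoch count are placeholders; the authors' actual implementation may differ.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.neighbors import kneighbors_graph
from torch_geometric.data import Data
from torch_geometric.nn import GATConv

def build_knn_graph(x, y, train_mask, k):
    """Connect each audio embedding to its k nearest neighbors in feature space."""
    adj = kneighbors_graph(x.numpy(), n_neighbors=k, mode="connectivity")
    row, col = adj.nonzero()
    edge_index = torch.from_numpy(np.vstack((row, col))).long()
    data = Data(x=x, edge_index=edge_index, y=y)
    data.train_mask = train_mask
    return data

class GAT(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, heads=10):
        super().__init__()
        # first layer: `heads` attention heads (concatenated); second layer: one head
        # GATConv's default LeakyReLU slope for attention is 0.2, as in the text
        self.conv1 = GATConv(in_dim, hidden_dim, heads=heads)
        self.conv2 = GATConv(hidden_dim * heads, num_classes, heads=1)

    def forward(self, x, edge_index):
        x = F.dropout(x, p=0.5, training=self.training)
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

def train(data, num_classes, hidden_dim=64, epochs=300):
    """Transductive node classification: the loss is computed on training nodes only."""
    model = GAT(data.num_node_features, hidden_dim, num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=5e-4)
    for _ in range(epochs):
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
    return model
```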
Finally, for UrbanSound8K, we evaluated the performance of the models in terms of accuracy using ten-fold cross-validation, i.e., following the dataset's distribution across the ten predefined folds.Alternatively, due to the large amount of data and the associated computational cost for training use, the performance of the models for the Rey Zamuro dataset was evaluated with the test set. Results and Discussion Tables 1 and 2 present the accuracy results of the three GNN models (GCN, Graph-SAGE, and GAT) trained for audio file classification, with nodes representing the audio data in a graph.These nodes are characterized by three distinct feature sets derived from pre-trained models (VGGish, PANNs, and YAMNet) applied to UrbanSound8K and Rey Zamuro datasets.Additionally, the tables display the optimal hyperparameters determined by Optuna for each GNN model and node characterization combination.For the Urban-Sound8K dataset, where fold distribution is predefined, accuracy results are presented as mean values accompanied by their corresponding standard deviations.Conversely, accuracy results for the Rey Zamuro dataset focus solely on the test set.The results reveal the consistent superiority of PANNs across both datasets and all three trained GNN models.In particular, on the Rey Zamuro dataset, PANNs show a significant improvement of up to 18% in accuracy.The higher performance can be attributed to the larger dimensional feature space produced by PANNs, with 2048 dimensions, compared to VGGish and YAMNet, which have dimensions of 128 and 1024, respectively.This larger feature space of PANNs is more suitable for capturing detailed information from audio data.Furthermore, among the compared GNN models, GAT emerges as the top performer, demonstrating sustained superiority across both datasets.This underscores the effectiveness of the attention mechanism in exploiting graph information and optimizing aggregation strategies.Tables 3 and 4 present the computational costs of the experiments conducted, measured in terms of time and the number of trainable parameters of the networks for the UrbanSound8K and Rey Zamuro datasets, respectively.It is important to note that each model possesses a different number of neurons in the hidden layer due to the optimization performed with Optuna.The GAT model has the highest number of parameters for both datasets and the feature sets generated with the pre-trained models.Specifically, the largest GAT model for the UrbanSound8K dataset has 8M parameters when using PANNs' deep features.Regarding training time, the GAT model for this dataset can take up to 35 times longer than training GCN and GraphSAGE models.Concerning the Rey Zamuro dataset, we also calculate the time for each model under test.Once again, the GAT model demonstrates the largest number of parameters, as well as longer training and testing times.However, during testing, the times are closer to those of the other two models.Although training time can indeed be long, it is worth considering that a trained network can be scalable regardless of the amount of data.However, it is crucial to consider the computational requirements for building and storing the graph. 
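For readers who want to reproduce the kind of figures reported in the computational-cost tables, the trainable parameter count of a model and a rough estimate of the k-NN graph storage can be obtained as sketched below; the storage formula assumes int64 edge indices and float32 node features and is only an approximation.

```python
def count_parameters(model):
    """Number of trainable parameters in a PyTorch model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def knn_graph_storage_bytes(num_nodes, k, feat_dim, feat_bytes=4, index_bytes=8):
    """Approximate memory for a directed k-NN graph plus its node feature matrix."""
    num_edges = num_nodes * k                     # one directed edge per neighbor
    edges = num_edges * 2 * index_bytes           # edge_index stores two indices per edge
    features = num_nodes * feat_dim * feat_bytes  # e.g., 2048-dim PANNs embeddings
    return edges + features
```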
Our results show that representing audio datasets through graphs and using deep features extracted from pre-trained models as node features enables sound classification.However, it is important to acknowledge an ongoing research challenge in the graphbuilding step, particularly in setting its node feature information and edges.To the best of our knowledge, only one study has employed GNNs for sound classification on Ur-banSound8K dataset [34].In one such study, the overall classification accuracy obtained using GNNs was 63.5%, which improved to 73% when GNNs were used in combination with features learned from a CNN.However, our results surpass this, even in the case of GraphSAGE, whose lowest accuracy is 76% for VGGish features.Moreover, our findings are comparable to those reported in other studies employing 1D CNN models.For example, in [18], RawNet CNN was presented, which worked with the raw waveform and achieved an accuracy of 87.7 ± 0.2.Additionally, in [19], a CNN called EnvNet-v2 obtained an accuracy of 78.3%, in [20] with very deep 1D convolutional networks a maximum accuracy of 71.68%only for the 10th fold used as the test set, while in [21], a proposed end-to-end 1D CNN achieved 89% accuracy.In addition, 2D CNN models have also been used on the UrbanSound8K dataset, reaching 79% [22], 70% [23], 83.7% [24], and 97% [25].It should be noted that although other studies used the UrbanSound8K dataset to train 1D or 2D CNNs, they often employ unofficial random splits of the dataset, conducting their own crossvalidations or training-test splits.This causes them to use different training and validation data than published papers that follow the official distribution, making comparison unfair. Conclusions In this paper, we explored using graphs as a suitable representation of acoustic data for sound classification tasks, focusing on the UrbanSound8K dataset and a passive acoustic monitoring study.Particularly, this study offers novel insights into the potential of graph representation learning methods for analyzing audio data. First, we utilized pre-trained audio models, namely VGGish, PANNs, and YAMNet, to compute node embeddings and extract informative features.Then, we trained GCNs, GraphSAGE, and GATs and evaluated their performance.For the UrbanSound8K dataset, we employed a ten-fold cross-validation approach with the dataset's predefined folds for performance evaluation.Additionally, we partitioned the Rey Zamuro Dataset into train and test sets to validate its results.Moreover, during the training stage, we conducted hyperparameter optimization to attain the best possible model for the built graphs. Our findings demonstrate the effectiveness of using graphs to represent audio data.In addition, they show that GNNs can achieve a competitive performance in sound classification tasks.Most notably, it is shown that it is possible to identify ecosystem states through audio and GNNs.Notably, the best results were obtained when employing PANNs-based deep features with the three GNN models.Among the GNN models, the GAT model outperforms the others.This advantage stems from its attention-based operation, enabling it to aggregate node information by assigning weights to its neighbors based on relevance. 
To further our research, we plan to explore the feasibility of using temporal GNNs for sound classification tasks to leverage graphs constructed using deep features based on temporal segments of the audio signal, such as those obtained with VGGish and YAMNet. Additionally, the proposed methodology will be applied to the area of soundscape ecology, seeking to generate acoustic heterogeneity maps from the treatment of large volumes of data with GNN techniques that allow exploiting the acoustic relationships between different recording sites.

Figure 1. Distribution of the ten classes across the predefined folds.

Figure 2. The workflow diagram proposed in this study illustrates that for each audio of a dataset (a), deep features are extracted with pre-trained audio models (b), then graphs are constructed by including those features as node information and setting edges with k-NN (c). For test data, the nodes present information but no labels (in the diagram, the unfilled nodes are the test nodes). Subsequently, some GNN models are trained and optimized (d). Finally, trained models allow discriminating test nodes between classes (red or blue in the diagram) through transductive learning (e).

Figure 3. Feature extraction scheme. The audio files from each fold of the UrbanSound8K dataset were characterized using pre-trained models.

Table 3. Computational cost for UrbanSound8K dataset tests.

Table 4. Computational cost for Rey Zamuro dataset tests.
5,713.4
2024-03-26T00:00:00.000
[ "Computer Science", "Environmental Science", "Engineering" ]
Precipitate Characterization in Model Al-Zn-Mg-(Cu) Alloys Using Small-Angle X-ray Scattering Model 7000 series alloys with and without copper were fabricated into sheets to study precipitation hardening behavior under isothermal aging conditions. Samples of each alloy were subjected to 3 h annealing treatments at various temperatures to produce a range of precipitate size distributions. Hardness, electrical conductivity, and small-angle X-ray scattering (SAXS) were used to characterize the aging behavior of the two alloys. Precipitate size distributions were modeled from the scattering curves for each annealing condition using a maximum entropy method (MEM) and compared to select transmission electron microscopy (TEM) results. The measured average precipitate diameters from TEM were in good agreement with the average precipitate diameters determined from the scattering curves. Introduction The 7000 series alloys based on the Al-Zn-Mg-(Cu) system are used for high-strength structural components in aerospace, automotive, and military applications. These precipitation-hardenable alloys exhibit tensile yield strengths approaching 600 MPa due to densely distributed nano-sized precipitates formed during artificial aging [1]. The precipitation sequence in 7000 series alloys-like other precipitation-strengthened alloys-is influenced by alloy chemistry, thermo-mechanical processing, and final age hardening heat treatments. The Zn:Mg ratio, Cu content, homogenization and rolling practice, and aging practice collectively determine the final volume fraction and spatial distribution of precipitates in Al-Zn-Mg-(Cu) alloys [2]. Precipitates typically observed in artificially aged 7000 series alloys without Cu (Al-Zn-Mg) include the equilibrium η (MgZn 2 ) phase and its precursor η' phase. In artificially aged 7000 series alloys with Cu (Al-Zn-Mg-Cu), observed precipitates are the equilibrium η phase expressed as Mg(Zn,Cu,Al) 2 and its precursor η' phase [3]. In general, the precipitation sequence in Al-Zn-Mg-(Cu) alloys begins with the decomposition of a supersaturated solid solution (SSSS) into nano-sized (~3 nm) clusters of Mg and Zn atoms called Guinier-Preston (GP) zones. Two types of GP zones can form depending on quenching and aging conditions: spherical GPI zones or plate-like GPII zones [4]; both types impede dislocation movement and thus increase strength. GP zones evolve into the metastable ellipsoidal η' strengthening phase that is semi-coherent with the aluminum matrix [2,5,6]. Upon further aging, the η' evolves into the equilibrium η phase [6]. Precipitation in metallic alloys causes local composition fluctuations as precipitates nucleate and grow. Small-angle X-ray scattering (SAXS) signals are sensitive to local changes in electron density or atomic number, and thus SAXS is sensitive to local changes in composition and can be used to study precipitate evolution. High-energy synchrotron X-ray sources enable SAXS experiments on metallic alloys. For sheet materials, SAXS experiments can be run in transmission mode, where the incident X-ray beam passes through the sample. The scattered X-ray signal can be analyzed to determine the precipitate size and volume fraction [7,8]. The scattered signal or intensity I is measured as a function of the scattering vector q. q = 4πSin(θ)/λ (1) For synchrotron experiments, the X-ray wavelength λ is often fixed. In SAXS experiments, the scattering angle 2θ can range from 0.1 to 6 • . 
The scattering angle is determined by the distance between the sample and the detector as well as the detector size and the beam stop position. In transmission mode, the scattered intensity is recorded as a two-dimensional image. The two-dimensional scattering image is radially averaged, producing a one-dimensional scattering curve of intensity I vs. scattering vector q. The measured scattering signal I(q) is proportional to the squared difference between the scattering length densities of the scatterer ρ e,scatterer and matrix ρ e,matrix. Scattering length density is related to atomic number. SAXS is most effective when the difference between the atomic number Z of a scatterer and the atomic number of the matrix is large. The study of precipitation in 7000 series alloys is well suited for SAXS because there is a high electron density contrast between zinc-bearing precipitates (Z Zn = 30) and the aluminum (Z Al = 13) matrix. The average precipitate size and volume fraction can be extracted from the scattering curves. The precipitate size and volume fraction information can be used to predict the strength contributed by precipitation hardening.

Material Processing

Two model 7000 series alloys were cast and rolled into sheet material at Michigan Technological University's pilot-scale casting and thermo-mechanical processing facility. The alloys were cast into 18-mm-thick plates using a chemically bonded sand mold with integrated Cu chill, as schematically shown in Figure 1. The chemical compositions for the non-copper and copper-containing model 7000 series alloys were determined by inductively coupled plasma optical emission spectroscopy (ICP-OES) (Table 1).

Both alloys were homogenized at 450 °C for 24 h. The homogenized plates were hot-rolled at 400 °C to 4 mm thickness with 1 mm thickness reduction per roll pass; plates were re-heated to 400 °C prior to each roll pass. The hot-rolled material was cold-rolled to a final thickness of 3 mm. The cold-rolled material was then solution heat treated at 470 °C for 1 h, followed immediately by a water quench. Coupons from the Al-Zn-Mg and Al-Zn-Mg-Cu alloys were naturally aged at room temperature for 24 h, then artificially aged for 3 h at 100, 120, 140, 160, 180, or 200 °C. Rockwell (B-scale) hardness and eddy current electrical conductivity were measured after each isothermal heat treatment to characterize precipitation as a function of isothermal aging temperature.

Small-Angle X-ray Scattering Experiment and Analysis Method

A synchrotron-based small-angle X-ray scattering (SAXS) experiment was performed at beamline 1-ID at Argonne National Lab (ANL)-Advanced Photon Source (APS). Ex-situ SAXS samples were prepared from each isothermal aging condition from the copper and non-copper alloys. Discs, 7.5 mm in diameter, were electro-discharge machined (EDM'd) from the 3 mm sheet and loaded into a wheel sample fixture (Figure 2). A beam energy of 71.676 keV and a beam size of 150 µm × 150 µm was used to probe the aluminum samples in the wheel fixture. A Pixirad 2 detector equipped with PIXI III ASICs (Pixirad Imaging Counters S.R.L., Verona, Italy) was used to measure scattering intensity over a q range of approximately 0.01-0.35 Å −1, allowing for the detection of microstructural features such as GP zones ~10 Å in diameter and coarse precipitates ~600 Å in diameter. Raw two-dimensional intensity data was radially averaged and corrected to give absolute intensity units (cm −1) vs. q (Å −1) for each artificial aging condition. Absolute intensity calibration was verified using a glassy carbon standard [9]. Figure 3 shows absolute intensity vs. q for the glassy carbon sample measured in this SAXS experiment (red curve) compared to the glassy carbon sample (green curve) previously measured with ultra-small-angle X-ray scattering (USAXS) at APS [9].

Figure 3. Lineouts showing absolute intensity vs. q measurements from the glassy carbon sample from this SAXS experiment (red) and absolute intensity vs. q measurements from the glassy carbon sample measured using USAXS; the blue curve is air, shown for reference.
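As a quick check of how Equation (1) connects the beam energy and scattering angle to the reported q-range, the sketch below converts the 71.676 keV beam energy to a wavelength and evaluates q; the numbers are illustrative only.

```python
import numpy as np

def q_from_angle(two_theta_deg, energy_keV):
    """Scattering vector q = 4*pi*sin(theta)/lambda (Equation (1)).

    The X-ray wavelength follows from the beam energy via lambda[A] = 12.398 / E[keV];
    at 71.676 keV this gives lambda of roughly 0.173 A.
    """
    wavelength = 12.398 / energy_keV                 # Angstrom
    theta = np.radians(two_theta_deg) / 2.0
    return 4.0 * np.pi * np.sin(theta) / wavelength  # 1/Angstrom

# a scattering angle of 0.1 degrees at 71.676 keV corresponds to q of about 0.06 1/A,
# consistent with the measured q range of roughly 0.01-0.35 1/A
print(q_from_angle(0.1, 71.676))
```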
The measured intensity I is related to the spatial distribution of scatterers and the characteristic shape of the scatterer. If the general shape of the scatterer (in this case, the precipitate) is known, then information about precipitate size can be extracted by solving for the unknown particle size distribution x T (D), where G(q, D) represents the scattering function at the scattering vector q of a single scatterer with a characteristic shape [10]. For this work, precipitates (i.e., scatterers) were modeled as spheroids with an aspect ratio of 1.7; this morphology and aspect ratio were verified with TEM precipitate size measurements. The unknown particle size distribution x T (D) was determined from each scattering curve (I vs. q) using the size distribution model in GSAS-II software (revision 3957, UChicago Argonne LLC, Chicago, IL, USA) [11,12]. The model uses a maximum entropy routine that compares measured intensities on a scattering curve to corresponding intensities calculated for a range of user-defined particle sizes [10,13,14]. The unknown particle size distribution for each heat treatment condition was reported as the volume distribution of particle sizes. The average precipitate diameter was computed from the generated volume distribution plots. The integrated intensity or scattering invariant Q 0 from the scattering curves is proportional to the precipitate volume fraction. The scattering invariant Q 0 is computed from the measured absolute intensity I and the scattering vector q. The integrated intensity (scattering invariant) can be used to compute the precipitate volume fraction f V in Al-Zn-Mg-(Cu) alloys using the equation from Deschamps et al. [15], where Q 0 is the scattering invariant, f V is the precipitate volume fraction, Z p and Z m are the average atomic numbers of the precipitate and matrix, and Ω is the atomic volume of the precipitate (approximated as 16.5 Å 3) [16]. The average atomic numbers of the precipitate and matrix were computed as a function of the precipitate composition and matrix composition, respectively. The precipitate composition and matrix composition were taken from the Deschamps et al. tomographic atom probe (TAP) measurements in [15]. The precipitate volume fraction and average precipitate size determined from SAXS were used to estimate the precipitate strengthening contribution in both alloys after each heat treatment condition. Strength was calculated for the cases of pure shear and pure by-pass using equations from [16].
For the pure shear case, the following equation was used: and the for the pure by-pass case, the following equation was used: where M is the Taylor factor, β and µ are phenomenological parameters, k is an adjustable fitting parameter, f V is the precipitate volume fraction, and R is the precipitate radius. In the pure shear case, moving dislocations cut through precipitates, whereas in the pure by-pass case, moving dislocations maneuver around or by-pass precipitates. TEM Experimental Setup and Analysis Method TEM samples were prepared from the Al-Zn-Mg and Al-Zn-Mg-Cu alloys at select artificial aging conditions. The samples were ground and polished in the rolling plane to approximately 100-µm-thick foils. Discs, 3 mm in diameter, were punched from the foils, dimpled to approximately 30-50 µm thickness, and ion milled until perforation. The as-prepared TEM specimens were examined with a Talos F200X TEM (Thermo Fisher Scientific, Hillsboro, OR, USA) operated at 200 kV. Selected area diffraction pattern (SADP) images, as well as bright field and high-resolution transmission electron microscopy (HRTEM) images, were taken in the <110> zone axis. ImageJ (1.51, National Institutes of Health, Bethesda, MD, USA) [17] was used to measure the average precipitate diameter from the bright field and HRTEM images. Three to five images from different locations on each TEM specimen were used to measure the diameter of at least 100 precipitates-diameter was taken as the length across the minor axis of the precipitate. Both major and minor axis lengths were measured to inform the 1.7 aspect ratio used in the GSAS-II size distribution model. Results and Discussion Hardness and electrical conductivity vs. heat treatment conditions are displayed in Figure 4 for the Al-Zn-Mg and Al-Zn-Mg-Cu alloys. Both alloys reached a peak hardness after 3 h at 140 • C (Figure 4a). The copper-bearing alloy had a greater peak hardness after 140 • C/3 h compared to the non-Cu-bearing alloy (90 vs. 85 HRb). For the Cu-bearing alloy only, the hardness appeared to plateau with very little change between the 120, 140, and 160 • C isothermal heat treatments. As-quenched hardness (13 HRb) for the Al-Zn-Mg alloy (Figure 4a) was lower than the as-quenched hardness (57 HRb) for the Al-Zn-Mg-Cu alloy (Figure 4a). After 24 h of natural aging, the Al-Zn-Mg alloy gained considerable strength, as indicated by the sharp increase in hardness from 13 to 52 HRb (Figure 4a). Chinh et al. [18] concluded that Cu-bearing vacancy-rich clusters (VRCs) can form immediately after quenching, offering a significantly greater strengthening effect than VRCs formed in ternary Al-Zn-Mg alloys. These Cu-bearing VRCs may explain the large difference in the observed as-quenched hardness since more VRCs would result in more GP zones and higher strength. Hardness and electrical conductivity vs. heat treatment conditions are displayed in Figure 4 for the Al-Zn-Mg and Al-Zn-Mg-Cu alloys. Both alloys reached a peak hardness after 3 h at 140 °C (Figure 4a). The copper-bearing alloy had a greater peak hardness after 140 °C/3 h compared to the non-Cu-bearing alloy (90 vs. 85 HRb). For the Cu-bearing alloy only, the hardness appeared to plateau with very little change between the 120, 140, and 160 °C isothermal heat treatments. In both alloys, regardless of copper content, natural aging after quench resulted in increased hardness and decreased electrical conductivity (Figure 4b). 
This confirmed GP zone formation, as GP zones impede dislocation movement and are thought to impair lattice periodicity, resulting in more restrictive electron movement and thus reduced conductivity [19]. In contrast, conductivity increased with increasing heat treatment temperature for both Cu- and non-Cu-containing alloys. The increase in conductivity was due to the decomposition of the solid solution into precipitates. Solute in solid solution tends to restrict electron movement. As solutes leave solid solution and form precipitates, electrons tend to move more freely throughout the aluminum matrix, resulting in increased electrical conductivity. Figure 5 shows TEM images of the two alloys after the 120 °C and 160 °C 3 h isothermal heat treatments. After the 160 °C/3 h treatment, η'/η phases were observed in the aluminum matrix for both the Al-Zn-Mg (Figure 5a) and Al-Zn-Mg-Cu (Figure 5b) alloys. Precipitates in the Al-Zn-Mg alloy after the 120 °C/3 h treatment (Figure 5c) had little contrast, making observation difficult. However, the HRTEM inset image in Figure 5c shows evidence of coherent GP zones as dark agglomerates with a lattice structure similar to the surrounding light-gray aluminum matrix. Precipitates can be clearly observed in the TEM image of the Al-Zn-Mg-Cu alloy after the 120 °C/3 h treatment (Figure 5d). The HRTEM inset shows that these precipitates were coherent GP zones, indicated by the similarity in lattice structure between the dark contrast areas and the light-gray aluminum matrix (Figure 5d). In summary, TEM observations indicated that GP zones were present after the 120 °C/3 h heat treatment for both the non-Cu- and Cu-containing alloys. After the 160 °C/3 h treatment, η'/η precipitates were observed in both alloys. The scattering curves for each 3 h isothermal heat treatment are plotted for the Al-Zn-Mg and Al-Zn-Mg-Cu alloys in Figure 6. The scattering curves from the Al-Zn-Mg-Cu alloy are plotted in Figure 6a,b for the 100, 120, and 140 °C isothermal heat treatments (Figure 6a) and the 160, 180, and 200 °C heat treatments (Figure 6b). The scattering curves for the non-Cu-containing Al-Zn-Mg alloy are plotted in Figure 6c,d for the 100, 120, and 140 °C isothermal heat treatments (Figure 6c) and the 160, 180, and 200 °C heat treatments (Figure 6d). The scattered intensity at the high q-range is due to small precipitates such as GP zones and small, early-stage η' precipitates. Scattered intensity at the low q-range is due to larger precipitates such as η' and η phases. As the isothermal heat treatment temperature increased, the curves shifted to lower q-range values and higher intensities as the precipitate size and volume fraction increased. For the scattering curves at lower temperatures (e.g., 100 °C, 120 °C, 140 °C), the curves begin with a sharp decline in intensity at low q, then rise to a maximum, followed by a gradual decline in intensity. This initial dip in intensity at low q is caused by a destructive interference effect due to high precipitate number densities [8].
This initial intensity dip was filtered out prior to modeling the precipitate size distribution using the GSAS-II maximum entropy method. The red dashed lines overlaid on the scattering curves in Figure 6 represent the portion of the scattering curve that was modeled using the maximum entropy method. The particle size distributions were calculated from these best-fit functions. The average precipitate diameter determined from the SAXS-MEM is plotted for each artificial aging condition for the non-Cu- and Cu-containing alloys in Figure 7. The solid points plotted in Figure 7 are the average precipitate diameters measured from the TEM images, which are in good agreement with the SAXS measurements. The calculated precipitate volume fraction is shown in Figure 8. The volume fraction for the Cu-containing alloy was higher than that of the non-Cu alloy at low heat treatment temperatures (i.e., 100 °C, 120 °C). After the 140 °C and 160 °C heat treatments, both alloys had nearly the same volume fraction. The volume fractions in both alloys plateaued after the 180 °C and 200 °C heat treatments, with the Cu-containing alloy about 20% higher than the non-Cu alloy.
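A minimal sketch of the kind of low-q filtering described above is given below, assuming the scattering curve is available as plain q and I arrays: points up to the local minimum of the interference dip are dropped before the size-distribution fit. The cutoff heuristic and the placeholder curve are illustrative only and are not the GSAS-II implementation.

```python
import numpy as np

def mask_low_q_dip(q, intensity, search_max=0.08):
    """Drop the initial low-q interference dip: keep data from the local
    minimum found below `search_max` (1/Angstrom) onward.
    Assumes q is sorted ascending; the cutoff heuristic is illustrative."""
    window = q < search_max
    if not window.any():
        return q, intensity
    i_min = int(np.argmin(intensity[window]))   # dip index (window is a prefix of q)
    return q[i_min:], intensity[i_min:]

# Placeholder curve with a sharp low-q decline, a dip, and a hump -- not measured data.
q = np.linspace(0.005, 0.5, 300)
I = 2e-3 * np.exp(-q / 0.01) + 1e-3 * np.exp(-((q - 0.08) / 0.04) ** 2)
q_fit, I_fit = mask_low_q_dip(q, I)
print(f"kept {len(q_fit)} of {len(q)} points for the maximum entropy fit")
```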
Figure 9 shows the calculated strength increase due to precipitation hardening for both pure shear and pure by-pass mechanisms. The strength increase was calculated for each case using Equations (6) and (7); the precipitate size and volume fraction measurements presented in Figures 7 and 8 were used as inputs. The strength of a precipitation-hardenable alloy is governed by the interaction of dislocations with the precipitates. Dislocations interact with precipitates by two mechanisms: (1) shearing or (2) by-pass. In the under-aged condition, the shearing mechanism is dominant, where the strength increase due to precipitation hardening Δσ is proportional to the precipitate volume fraction f_V and average precipitate radius R:

Δσ ∝ (f_V R)^(1/2)   (8)

Here, precipitate size is proportional to strength. Precipitates are shearable in the under-aged condition up to a critical radius. When the precipitate size grows beyond the critical radius, the Orowan strengthening mechanism becomes dominant. Instead of shearing, the dislocations bow around and by-pass the precipitates. This is called Orowan strengthening, where Δσ depends on f_V and R as

Δσ ∝ f_V^(1/2) / R

Precipitates in the over-aged condition are non-shearable, and strength is controlled by the Orowan mechanism, in which material strength is inversely proportional to precipitate size. The transition from the shear to the by-pass mechanism occurs around the 140 °C heat treatment temperature for both the Al-Zn-Mg and Al-Zn-Mg-Cu alloys. Precipitate shearing is the dominant strengthening mechanism after the 100 and 120 °C heat treatments, whereas the by-pass mechanism becomes dominant after the 160, 180, and 200 °C heat treatments. Peak strength in precipitation-hardenable alloys occurs at the transition from shearing to by-pass. Peak hardness (strength) was observed after the 140 °C/3 h heat treatment for both the Cu-bearing and non-Cu-bearing alloys in Figure 4a. The point at which peak hardness is observed in Figure 4a agrees well with the calculations in Figure 9, which show that the transition from the shearing to the by-pass mechanism occurs around 140 °C.
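The proportionalities above can be turned into a toy estimate of the shear-to-by-pass crossover, as sketched below. The prefactors are arbitrary scaling constants chosen only to make the two regimes comparable; they are not the fitted parameters of Equations (6) and (7), and the volume fraction and radius range are illustrative. Taking the operative strength as the weaker of the two mechanisms reproduces the qualitative picture of a strength peak at the critical radius.

```python
import numpy as np

def delta_sigma_shear(f_v, R, k_shear=1.0):
    """Shearing regime: strength increase scales as (f_V * R)^(1/2)."""
    return k_shear * np.sqrt(f_v * R)

def delta_sigma_bypass(f_v, R, k_bypass=1.0):
    """Orowan (by-pass) regime: strength increase scales as f_V^(1/2) / R."""
    return k_bypass * np.sqrt(f_v) / R

# Sweep precipitate radius at a fixed volume fraction (illustrative values).
f_v = 0.04
R = np.linspace(5.0, 60.0, 200)                  # Angstrom
shear = delta_sigma_shear(f_v, R, k_shear=1.0)
bypass = delta_sigma_bypass(f_v, R, k_bypass=50.0)

# The operative strength is the weaker (rate-limiting) of the two mechanisms;
# the radius where the curves cross approximates the critical radius at peak strength.
operative = np.minimum(shear, bypass)
R_crit = R[np.argmax(operative)]
print(f"crossover (critical) radius ~ {R_crit:.1f} Angstrom")
```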
The average precipitate diameter measured from the SAXS data for the 140 °C/3 h heat treatment was 44 ± 4 Å for the non-Cu alloy and 52 ± 5 Å for the Cu-containing alloy. Assuming these measurements are representative of the critical precipitate size, the transition from shearing to the Orowan-type strengthening mechanism occurred at larger precipitate sizes in the Cu-containing Al-Zn-Mg-Cu alloy. Hardness began to decrease after the 160 °C aging treatment for both alloys, followed by a further decline after the 180 °C and 200 °C heat treatments (Figure 4a). Similarly, the calculated strength increase in Figure 9 decreases after the 160 °C heat treatment. As the hardness decreases, precipitates continue to coarsen, indicated by an increase in average precipitate size for both alloys in Figure 7.

Conclusions
Two model Al-Zn-Mg and Al-Zn-Mg-Cu alloys were cast, fabricated into sheet material, and given 3 h isothermal heat treatments ranging in temperature from 100 to 200 °C. The average precipitate diameter and precipitate volume fraction were characterized for each heat treatment condition using synchrotron-based small-angle X-ray scattering.
A maximum entropy method (MEM) was used to determine the average precipitate diameter from the SAXS intensity vs. q curves for each heat treatment condition. SAXS precipitate diameter measurements were verified with TEM measurements for two conditions: 120 °C/3 h and 160 °C/3 h. The TEM precipitate diameters were in good agreement with the SAXS values. The precipitate volume fraction was computed from the integrated intensity. The measured precipitate size and volume fraction were used to calculate the change in strength due to precipitates. The transition from the shearing mechanism to the by-pass mechanism predicted by these calculations agreed well with the observed peak hardness measurements for both alloys.
Analysis of the ways to provide ecological safety for the products of nanotechnologies throughout their life cycle

Recommendations for conducting an ecological evaluation of nanomaterials are prepared. It is necessary to exercise control in order to establish the effect of nanoproducts on the environment and human health and to ensure the safe and productive use of nanotechnology. A general procedure for the system of nanosafety and the certification of nanoindustry products should be based on creating standardizing, legislative and methodological support for the safety system in the process of production, handling and disposal of nanomaterials. It was found that, in order to perform an assessment, nanoproducts should be examined at all stages of the life cycle. A scheme of the life cycle of nanomaterials was developed, which should be considered as a multi-stage process from the preparation of the source material to reclamation. According to the methods proposed and the recommendations developed, an ecological assessment of porous indium phosphide and of the device based on it, indium nitride, was performed. The nanostructures were investigated using scanning electron microscopy, chemical analysis, the mean projected diameter method, the gravimetric method, etc. It was found that porous indium phosphide may be hazardous to health. Porous indium phosphide is formed by electrochemical etching in acid solutions; such methods of synthesis of nanostructures pose an ecological threat. Understanding these threats will make it possible to optimize the processes of formation and operation of nanomaterials for ecological safety and will highlight the key points of safe usage and disposal of the products of nanotechnology.

Introduction
Over the past decades, nanotechnologies have become a strategic industrial direction. In many areas of science and technology and sectors of industry, there is great interest in the products of nanotechnology, which is associated with the real possibility of practical implementation of their unique properties. More than 50 countries conduct research and development in the field of nanotechnology, and no fewer than 30 countries have their own national programs in this area [1]. According to official data from the StatNano website, in 2016 the Office for Patents and Trademarks of the United States granted 8484 patents in the field of nanoindustry [2]. Nanomaterials are widely used as a basic material for photovoltaic converters [3,4], lasers and LEDs [5,6], buffer layers for making heterostructures [7], etc. Nanotechnologies traditionally include designs in which materials and systems are used that meet three criteria:
- at least one of their spatial dimensions does not exceed 100 nm;
- processes based on fundamental control over the physical and chemical properties of molecular structures are used for their manufacturing;
- they can be incorporated into larger structures.
The penetration of nanoparticles into the biosphere can lead to many consequences, which are currently impossible to predict due to the lack of information. Researchers note that the toxicity of nanomaterials is largely associated with impurities existing in them, rather than with the materials themselves [8]. However, information about the consequences of the uncontrolled emission of nanoparticles into the environment remains quite scarce. The American Society for Testing and Materials (ASTM) has developed standards.
They relate to terms in the field of nanotechnologies, methods of measurement and characteristics of nanoparticles, as well as the specification of nanomaterials [9]. Within the framework of ISO/TC 229, country-curators of individual areas of metrology, standardization and certification were defined: metrology, measurement and testing techniques are assigned to Japan, terms and definitions to Canada, and problems of health, safety and the environment to the USA. In the field of nanotechnology in Japan, the share of funding for work that studies the risk of negative impact on health and the environment reaches 30 % [10]. All of the above indicates the relevance of, and the need to search for, ways to provide ecological safety of nanotechnology products throughout their life cycle for their further improvement.

Literature review and problem statement
The widespread implementation of nanotechnologies in industry is predetermined by a number of factors that include:
- depletion of natural resources and the possibility of replacing rare materials with metamaterials [11,12];
- miniaturization of electronics products [13,14];
- the advent of new industries [15,16].
The nanoindustry is developing rapidly and, because of this, the attraction of investment from government and business to this sector is growing around the world. At the same time, more and more researchers acknowledge that the use of nanomaterials may pose a danger to human health and the environment [17,18]. The authors of paper [19] stress the need to take the life-cycle approach to nanoproducts into account when evaluating their possible impact on the environment and human health. A method for the assessment of nanomaterials was proposed, which includes the "Nano LCRA" risk assessment and a comprehensive ecological assessment. However, the authors indicate that this technique has a general character and requires further detailed specification. Studies have shown that the very qualities of nanomaterials that make them popular may pose a potential ecological threat. Today it is important to make a decision: either to use potentially dangerous materials or to reject them in favour of ecologically friendly and sufficiently studied alternatives. With this in mind, paper [20] proposes to conduct the analysis of nanotechnologies with regard to four principles:
- before applying nanotechnology products, to make a comparative analysis of all alternative solutions to the set task after obtaining complete information about a possible threat to the biological components of the environment;
- to determine quantitatively the nature of the trade-offs associated with the existing choice of alternatives;
- nanomaterials and structural elements based on them should be considered as a unified system;
- the analysis of the risks and benefits of using nanotechnology products should be comprehensible to consumers.
However, the authors do not provide a clear mechanism for the quantitative detection of risks or methods for a general analysis of the products of nanotechnology. It should also be taken into account that it is not always possible to receive full information about the possible danger of a nanomaterial. In paper [21], the authors demonstrate that standard toxicological methods cannot be applied to determine the hazards of nanomaterials. This is explained by the fact that the properties of the latter are caused not by the concentration in the volume of the material, but rather by its quantum-dimensional properties. Thus, one can argue that many scientists point to the potential dangers of nanotechnology products for the environment and human health.
However, in this case, there is no systematic approach to determining the extent of the danger of nanomaterials throughout their life cycle. Methods for detecting this danger at different stages of the synthesis and use of nanoindustry products have not been determined up to now, nor have the problems of the ecological safety of nanotechnology application been explored.

The aim and tasks of the study
The conducted studies were aimed at searching for ways of providing ecological safety for nanotechnology products throughout their life cycle. To accomplish the set aim, the following tasks were to be solved:
- to establish the main stages of the life cycle of nanomaterials that require research and control of their safety;
- to identify the main purpose of the ecological evaluation of nanotechnological products;
- to develop recommendations for the provision of ecological safety for nanotechnology products throughout their life cycle.

1. Development of the scheme of the nanomaterials life cycle
Comprehensive research into the risks of using nanomaterials and controlling their impact on the environment and the human body is a long-lasting and scientifically complicated process. In addition, there are not sufficient data on the toxicity of a large number of nanomaterials, and labeling and passports have not been developed for most of them. That is why we will focus only on general types of nanoindustry influence on the ecosystem and humans. To do this, one must clearly understand that nanomaterials may pose a danger not only in the course of their usage, but at all stages of their life cycle, the simplified schematic of which is shown in Fig. 1. Thus, in the process of the ecological assessment of nanomaterials, the specific features of each stage of the life cycle should be taken into account. Hence, at the first stage, "Extraction and production of raw materials for nanomaterials", one should consider the substances of which the nanoproduct is made. The second stage, "Production of nanomaterials", is directly related to the methods of synthesis of nanomaterials, which may conditionally be divided into physical, chemical, and chemical-physical. At this stage, the largest threat is posed by the substances involved in nanomaterial production (electrolytes, ions, powders, gases, etc.) and the methods of synthesis. At the third stage, "Storage and packaging", it is necessary to take into consideration the specific features of the materials: their volatility, solubility, interaction with air and water, etc. As a rule, nanomaterials are made with a view to their further integration into products or industrial produce. Thus, the fourth stage, "Production of nanomaterial products", is an essential element in examining the life cycle of nanoproduce. At this stage, testing and identification of the quality and suitability of the nano raw materials for later use is carried out. That is why a significant percentage of nanomaterials is rejected and requires disposal or recycling. "Usage of nanomaterials", the fifth stage of the life cycle of a nanoproduct, concerns products containing nano raw materials. Therefore, research should be comprehensive, taking into account not only the physical and chemical characteristics of the substances, but also the behavior of the whole product and its components during the operation period.
When analyzing the last stage of the life cycle, "Reclamation and wastes", it should be taken into consideration that the nanomaterial exists as a component of a product, so its separation is impossible in many cases; the reclamation of the entire product is then necessary.

2. Procedure of making up guidelines to control nanomaterials
In general, sanitary-epidemiological examination is carried out in order to detect:
- products which pose a danger to human life and health;
- products whose manufacturing, circulation and consumption (use) may cause harm to human health.
When using nanoindustry products, it is necessary to assess the compliance or non-compliance of the products, and the terms of their manufacturing and usage, with current legislation and international standards. Most studies assessing the risks related to nanomaterials refer to particular, homogeneous nanomaterials that are characterized by a high degree of purity. However, heterostructures that contain nanofilms of different composition are very often used. In addition, it is necessary to focus attention on such indicators as:
- the total amount of resources used for the creation of the nanoproduce;
- the ageing of nanomaterials;
- the methods of treatment and incorporation of nanomaterials into a commercial product;
- the basic characteristics of the original material that was used to create the nanostructures, etc.;
- the change in the properties of nanomaterials throughout the life cycle.
Considering the aforementioned, there is a need for creating a system of nanosafety and certification of nanoindustry produce. The general schematic of this approach should include a number of measures (Fig. 2). A manufacturer must provide full information on the nanomaterial according to the procedure shown in Fig. 3.
Fig. 3. Recommended procedure to control nanomaterials
It should be noted that the specific properties of nanomaterials may vary in each individual case, even with the same chemical formula and method of obtaining them. This fact complicates the classification and labelling of nanomaterials.

Results of conducting control of nanomaterials on the example of porous indium phosphide
As an experimental nanomaterial, we selected porous indium phosphide (por-InP), which was obtained on a substrate of monocrystalline indium phosphide by the method of electrochemical etching in a solution of hydrochloric acid.

1. Analysis of stages in the life cycle of porous indium phosphide and the product based on it
Given the general scheme of the nanomaterial life cycle, it is expedient to compose the LCA of por-InP and of the product based on it. This should take into account the intermediate stages of testing and sorting the samples (Fig. 4). We will take indium nitride (InN/por-InP), which is widely used in optoelectronics, photoelectric and photovoltaic devices, as a product based on porous indium phosphide [22]. An important point is to understand that porous indium phosphide is a specific form of monocrystalline indium phosphide, so the general properties of both materials will be the same, while the specific properties may vary.

Stage 1 "Extraction and production of raw material bulk-InP"
Porous indium phosphide is made at the surface of monocrystalline indium phosphide (mono-InP or bulk-InP). In its turn, monocrystalline indium phosphide is made by the Czochralski method (Fig. 5) with liquid encapsulation of the melt (LEC) and by the method of vertical directional crystallization (VGF).
The peculiarity of the technology of InP growth lies in the fact that both methods are implemented at a high pressure of inert gas or phosphorus in the chamber. The obtained ingots of indium phosphide are cut into plates and polished (Fig. 5, b). The general and physico-chemical properties of indium phosphide are shown in Table 1. There are some data on the carcinogenicity of indium phosphide: according to the website of the U.S. National Library of Medicine, indium phosphide is classified as a substance probably carcinogenic to humans (Group 2A) [23]. The studies were carried out on mice and rats. Very important is the fact that an increase in cases of neoplasms occurred in rats and mice subjected to exposure to extremely low concentrations of indium phosphide (0.03-0.3 mg/m³), and, more importantly, the number of cases increased in mice and rats that were exposed to this influence for only 22 weeks. In view of the foregoing, the plates of indium phosphide should be accompanied by the danger pictogram "Health hazard" (Fig. 5, d). However, it should be taken into account that indium phosphide is usually presented in the form of crystalline plates that are thermodynamically and electrically stable in air, so we can assume that the plates themselves do not pose a threat to life and health. For the experiment, we selected 10 monocrystalline plates of n-type indium phosphide with surface orientation (111), doped with sulfur to a charge carrier concentration of 2.3×10¹⁸ cm⁻³. The porous surface was formed in an electrochemical cell with a 5 % water solution of hydrochloric acid (Fig. 6). Fig. 7 shows the plate of indium phosphide after electrochemical treatment. The current density during the treatment was selected in the range of 70-150 mA/cm², with an etching time of 5-15 min. After etching, the samples were treated in a flow of liquid nitrogen. In addition to hydrochloric acid, solutions of hydrofluoric, nitric, hydrobromic and other acids often serve as solvents for indium phosphide. Given the fact that acid solutions are used for the formation of the porous layers, it can be argued that this technology is not safe for human health. Moreover, during the experiment, electrolyte heating is often used to accelerate the penetration of ions into the pore channels. That is why this experiment should be carried out with the use of means of collective and individual protection. The used electrolyte must be disposed of in accordance with applicable legislation. To study the properties of por-InP, we used scanning electron microscopy and the EDAX method. As a result, a porous layer with tightly packed pores was formed at the surface (Fig. 8). The porous structure is a nanomaterial consisting of deep cylindrical holes (pores) and the walls between them (quantum wires). These wires are nanostructures (Fig. 9). The equivalent diameter of the particles was determined by the mean projected diameter method, i.e., the diameter of a circle whose area is equal to the area of the particle projection image, formulas (1), (2). Since the projection area of a spherical particle is equal to S = πd²/4 (1), the mean projected diameter is calculated as d = 2·(S/π)^(1/2) (2). From the results of scanning electron microscopy, it is possible to establish that the dimensions of the pores reach 40 nm on average, which indicates that this structure is mesoporous. The dimensions of the walls between the pores are within 5-10 nm (Fig. 10).
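A small sketch of the mean projected diameter calculation in formulas (1) and (2): projection areas measured from the SEM images are converted to equivalent-circle diameters and averaged. The area values below are placeholders, not measured data.

```python
import numpy as np

def mean_projected_diameter(areas_nm2):
    """d = 2 * sqrt(S / pi): diameter of a circle with the same area as the
    particle projection; the mean is taken over all measured particles."""
    d = 2.0 * np.sqrt(np.asarray(areas_nm2, dtype=float) / np.pi)
    return d.mean()

# Hypothetical projection areas of pores from SEM image analysis (nm^2).
areas = [1150.0, 1320.0, 980.0, 1410.0, 1270.0]
print(f"mean projected diameter ~ {mean_projected_diameter(areas):.1f} nm")
```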
The porosity of the sample was determined by the gravimetric method (weighing) at three stages: weighing the monocrystalline plate; etching the porous layer on it and weighing; and removing the porous layer and weighing again. Next, the porosity was determined by formula (3), P = (1 − ρ_por/ρ_InP)·100 %, where ρ_por and ρ_InP are the densities of the porous and monocrystalline materials. Thus, the porosity of the obtained layers varies from 40 to 70 %. The fluctuation of the surface porosity is caused by the non-uniform distribution of the impurity in the volume of the ingot, which occurs during crystal growth. The chemical composition of the porous samples was assessed using the EDAX method (Fig. 11). Based on these data, it may be concluded that no oxide film was formed at the surface of por-InP, nor were elements of the etchant observed. However, the crystal stoichiometry was broken in the process of etching: indium was present in a larger concentration than phosphorus.
Fig. 11. Chemical composition of elements at the surface of por-InP
Obviously, porous layers of indium phosphide are very fragile. The top layer may shear off even on contact with the hands, forming a nanodispersed powder, which is a real threat to human health: such nanoparticles easily get into the respiratory tract and penetrate the skin. The indium excess creates an additional threat, as indium in its pure form is a toxic substance.

Stage 2A "Testing and sorting"
Sample testing is conducted in order to identify the samples that are suitable for further use. Depending on the requirements for the quality of the nanomaterials, different methods are used, such as visual inspection, electron microscopy, photoluminescence, and X-ray diffractometry. In our case, the method of scanning electron microscopy was applied. As a result, 2 layers out of 10 were rejected due to excessively severe etching conditions (the current density for them amounted to 150 mA/cm²); the porous layer separated from the monocrystalline substrate.

Stage 3 "Storage and packaging of por-InP"
A specific feature of por-InP is its "ageing" in the open air. Under normal storage conditions, the surface of porous indium phosphide layers becomes covered by an oxide layer. Chemical analysis of the surface of porous InP (spectra were taken at 4 points, Fig. 12) revealed violations of the stoichiometry of the original crystal. Oxygen atoms and a small fraction of fluorine atoms emerged at the surface of the sample (Table 2), which indicates the formation of the corresponding oxides of InP. The overgrowth of the porous nanomaterial with an oxide layer occurs for definite reasons. The porous surface is characterized by a high density of surface states in the band gap, which leads to pinning of the Fermi level, whose position at the surface practically does not depend on the nature of the adsorbed atoms [27]. This circumstance negatively affects the operation of many micro- and optoelectronic devices, preventing these semiconductors from fully revealing their high potential. To eliminate the undesirable surface influence on the properties of devices, the technique called "passivation" is actively being developed, within which a variety of surface treatment methods related to applying coatings are designed [28]. In chemical passivation, the oxide layer is removed from the surface of the semiconductor and a thin crystalline film of chemically inert material is formed instead.
This film can perform the functions of a superfine buffer layer and protect the surface of the semiconductor from contact with aggressive components of the environment. The layers of porous indium phosphide were kept in a Na₂S solution for 10 min. During chalcogenide passivation of por-InP, the oxide layer is removed and a thin crystalline film of chemically and electrically inert material is formed instead. These nanomaterials may be stored under normal conditions in a special container, avoiding contact with aggressive substances. Porous indium phosphide does not dissolve in water; acids and alkalis may serve as solvents.

Stage 4 "Production of nanomaterial products InN/por-InP"
Thin films of indium nitride on the substrate of porous indium phosphide were obtained by the method of ray-radical epitaxy (Fig. 13). The main difference of this method from traditional epitaxy is that one component comes from the gas phase (atomic nitrogen), while the other (indium) is obtained from the volume of the crystal [29]. As the nitrogen source, especially pure ammonia is used; it passes through a high-frequency discharge, producing chemically active atomic nitrogen. A stream of atomic nitrogen reaches the indium phosphide crystal (the sample temperature is 400 °C, the experiment time is 1.5 hours). This results in the conversion of the surface layers, and thin InN films emerge at the surface of the porous InP (Fig. 14). Table 3 shows the basic properties of the resulting structure. The indium nitride film is formed with a violation of stoichiometry toward indium (Table 4). Indium nitride may cause irritation of the skin and eyes, pain in the joints and bones, tooth decay, nervous and gastrointestinal disorders, pain in the heart and overall weakness [30]. The acute and chronic toxicity of this substance are not sufficiently known. Given the high thermodynamic, electrical and chemical stability of indium nitride [31], it can be argued that its crystals may be considered conditionally safe under normal conditions.

Stage 4A "Testing and sorting of InN/por-InP structures"
The main problem of obtaining InN films on foreign substrates is the mismatch between the lattice periods of the nitride film and the substrate used [32]. This leads to a considerable number of defects emerging at the boundary between the InN film and the substrate and, as a consequence, to the poor quality of the produced InN films. A porous layer of indium phosphide serves as a buffer that is able to accommodate the elastic deformations which arise in the process of film formation and further cooling, and to provide a sink for misfit dislocations [33,34].

Stage 5 "Usage of nanomaterials"
Structures based on group III nitrides have a predicted operating life of about 5 years [35,36]; in this context, we imply the retention of all electro-physical parameters at their initial level. This is followed by a slow degradation of the structure surface. InN/InP is used as a raw material for solar cells, whose operating life is 20 years.

Stage 6 "Reclamation and wastes"
As was noted above, the original nanomaterial is used as a raw material for products and devices, the reclamation of which is recommended to be conducted under the "hazardous wastes" label [37]. Currently, there is a limited number of studies devoted to the recycling of nanomaterials, and, until sufficient data are collected, such materials should be treated as hazardous.
Discussion of results of the study of conducting nanomaterial control
The main purpose of the certification of nanomaterials is to confirm the possibility of recreating the conditions of synthesis within permissible deviations, to establish the suitability of using the nanomaterial in accordance with its purpose, and to detect the potential dangers of its usage. The analysis of the quality control of porous indium phosphide at all stages of the life cycle presented above allows a control card of por-InP to be drawn up (Table 5) according to the procedure presented in Fig. 3. To identify the possible danger of a nanoproduct, it is necessary to evaluate its indicators from the design stage to the reclamation stage. This approach might be applied in the analysis of other nanomaterials, taking into account their specific features. A method of measurement of the parameters and properties of nanostructures is a fixed set of operations and regulations, compliance with which provides measurement results with guaranteed accuracy according to the adopted method. One may say that the method of measurement is the technology of the measurement process. However, most methods are still at the stage of development and do not allow full control of the quality and safety of nanomaterials. The main reasons for this are:
- the lack of clear-cut requirements and standards for the quality of nanomaterials;
- the lack of standard samples of most nanomaterials;
- the insufficient number of certified methods of measurement, calibration and validation, etc.
This direction needs further development and government-level support.
1. A scheme of the life cycle of nanomaterials was developed, which should be considered as a multi-stage process from the preparation of the source material to reclamation. In this case, it is necessary to take into account the additional stages of testing and sorting of samples.
2. It was established that the main purpose of assessing nanotechnological products is the safe and productive use of nanotechnology for the provision of ecological safety. The search for and development of methods of studying nanomaterials is required. The ecological estimation of nanotechnology products needs government regulation.
3. The methods for controlling the quality and safety of nanomaterials and products based on them were presented. It is necessary to exercise control at every stage of the life cycle using appropriate techniques and methods of research. According to the proposed methods, an analysis of the samples of porous indium phosphide and a device based on it, indium nitride, was performed. It was found that porous indium phosphide is a material hazardous to health.

Acknowledgement
The present study was conducted within the framework of the scientific state-funded research "Nanostructured semiconductors for power efficient ecologically friendly technologies that increase power efficiency and ecological safety of the urbosystem" (State registration number 0116U006961).

Introduction
The Cat Ba islands, consisting of 367 islands, are the third largest island group, behind the Phu Quoc and Cai Bau islands. However, the Cat Ba Islands are the biggest limestone islands in tropical Southeast Asia and the largest islands in the Halong Bay area, with high potential for scientific study. In recent years, tropical karst landscapes have been strongly affected by the intrusion and impact of global climate change.
Therefore, understanding the processes of weathering and erosion and the effects of climatic factors and natural conditions on the limestone weathering process is essential as a basis for proposing efficient conservation measures for the sustainable natural heritage of our world (Fig. 1) [1].
EROSION STUDY OF LIMESTONE ON THE CAT BA ISLANDS IN NORTH EAST VIETNAM BY TRANSVERSE MICRO-EROSION METER
Nguyen Trung Minh, PhD, Associate Professor, Director*, Email: <EMAIL_ADDRESS>
Doan Dinh Hung, Master of Science, Researcher*, Email: <EMAIL_ADDRESS>
Nguyen Thi Dung, Master of Science, Researcher*, Email: <EMAIL_ADDRESS>
Tran Minh Duc, Bachelor of Science, Researcher*, Email: <EMAIL_ADDRESS>
Nguyen Ba Hung, Bachelor of Science, Researcher*, Email: hungdc53@gmail.com
User Evaluation of UbiQuitous Access Learning (UQAL) Portal: Measuring User Experience

The goal of user experience (UX) research in human-computer interaction is to understand how humans interact with technology. This paper aimed to evaluate the interface and user experience of the UbiQuitous Access Learning Portal (UQAL) and make recommendations for the system interface. The UQAL Portal is an e-learning web portal that teaches a targeted group of users how to start a business or an online business using an e-learning portal. The portal will be used to search for business-related information, among other things. The User Experience Questionnaire (UEQ) is used to evaluate user experience. The interface is evaluated using a heuristic evaluation technique based on Nielsen's ten heuristics. According to the UEQ results, the average score for each aspect across 30 UQAL users is: Attractiveness: 1.77; Perspicuity: 2.20; Efficiency: 2.30; Dependability: 1.73; Stimulation: 0.63; and Novelty: 1.27. A comparison of these average scores against the product dataset of the UEQ Data Analysis Tool revealed that the Perspicuity, Efficiency, and Dependability aspects of UQAL belonged to the Excellent category. The Attractiveness and Novelty aspects could be categorized as Good, and the Stimulation aspect as Below Average. Four evaluators participated in the heuristic evaluation, which tested all user categories in UQAL. The findings of this study can be used as a suggestion and reference for UQAL Portal improvement.

I. INTRODUCTION
Because of the rapid evolution of digital technologies, new forms of human interaction and experiences are becoming possible. To achieve a positive user experience with technology, service providers must ensure a high user experience quality. Nowadays, users' demand for products is no longer limited to functional satisfaction but also includes psychological needs [1], which involve emotional, intellectual, and sensual aspects [2]. To date, user experience (UX) research has attempted to comprehend how humans interact with technologies such as computers, mobile phones, telecommunications networks, and other digital systems [3]. Similarly, user experience (UX) is a critical factor in the commercial success of digital products. It appears that the new UX movement is gaining traction among academics and industry practitioners who are looking for innovative approaches to improve the experiential qualities of technology use. As a result, this paper aims to understand user experience better when interacting with technologies by measuring user experience while interacting with the UQAL Portal. UQAL is an abbreviation for UbiQuitous Access Learning. The UQAL Portal will bring a Digital Transformation for learners to access business-related information from the e-learning portal and for educators to supply business-related information into the e-learning portal. The B40 group in Malaysia is the target audience for the UQAL Portal. The B40 group represents the bottom 40% of income earners. The goal is to assist the B40 group in learning how to start a business or online business using the UQAL Portal. Furthermore, the portal will be used as a platform for the B40 group to search for business-related information, among other things. UQAL is evaluated based on its user interface and user experience, and the interface is evaluated using a heuristic evaluation technique. A User Experience Questionnaire (UEQ) assesses UQAL's user experience.
The evaluation of the user experience can provide feedback about the product or service and facilitate product improvements and acceptance among the targeted users. The rest of the paper is structured as follows: Section II identifies the User Experience Evaluation Methods (UXEMs) used to evaluate and measure user experience in previous papers. In Section III, the paper discusses UX evaluation methods on the UQAL Portal. Section IV discusses the findings, followed by the conclusion, which provides insight for the improvement and future direction of the UQAL Portal.

A. User Experience (UX)
The International Organization for Standardization (ISO) 9241-110:2010 defines user experience as a person's perceptions and responses resulting from the use and anticipated use of products, systems, or services. Several studies have been conducted to explain the meaning and concept of user experiences with technology. User experience is used to stimulate HCI (Human-Computer Interaction) research by focusing on aspects that go beyond usability and its task-oriented instrumental values [4]. According to Vermeeren et al. [5], user experience examines how an individual feels about using a product, i.e., the experiential, affective, essential, and beneficial aspects. According to Melançon et al. [6], when interacting with a product or service, the user experience was described as a fleeting, primarily evaluative feeling (good-bad), and it was about having a positive experience through a system. Lipp [7] emphasizes that user experience is subjective because it is about an individual's performance, satisfaction, feelings, and thoughts about a product or service. Despite the lack of a clear definition, the concept of user experience has emerged as an important design consideration for interactive systems [8]. According to Allam, Razak and Hussin [9], user experience is dynamic and involves multiple research areas, including HCI, product design and development, and psychology. As a result, user experience can be viewed as a phenomenon, field of study, or practice. Some work on measuring user experience and usability was carried out by [10], [11], [12], [13]. These studies assess user interaction and product usage, including satisfaction. The user experience is dynamic because it changes over time as conditions change. As a result, user experience should be considered not only after interacting with an object, but also before and during the interaction. While evaluating short-term experiences is important, given the dynamic changes in user goals and needs resulting from contextual factors, it is also critical to understand how (and why) experiences evolve [5]. A product's effect on a user is called the user experience. In addition, Türkyilmaz, Kantar, Bulak and Uysal [14] stated that user experience is an emotional interaction that begins with usage as a feeling. It is about how we feel and remember after using the product. The term "user experience" refers to using a device to create an experience rather than just creating a fancy interface. Although there is no agreement in the literature on defining user experience, everyone agrees that it is a complex concept and should not be confused with usability or the user interface [15]. Hellweger and Wang [15] conducted a thorough examination of the user experience concept and proposed a user experience conceptual framework.
There are numerous perspectives on user experience; it is understood in various ways by various disciplines and can be viewed from various perspectives [16]. User experience can be academically defined as any aspect of a user's interaction with a product, service, or company [17]. Nonetheless, user experience is regarded as desirable. However, what it exactly means is still up for debate, and it is a highly interdisciplinary topic [18]. A large and growing body of literature has been devoted to understanding user experience (UX) better. Due to the variety of concepts and the flexibility of adding and removing them when stating a definition, it is not easy to have a unique and general definition of user experience. User experience, in our opinion, is primarily associated with the overall design and presentation of online software solutions such as websites or apps. To date, the analysis appears to have focused on user experience in specific domains and fields: for instance, user experience evaluations in games and interactive entertainment [8], [19], [20], [21], culture [22], [23], [24], robotics [25], safety-critical domains [26], and business and management [18], [27]. User experience evaluations in games, and more broadly in interactive entertainment systems, have been performed over the last ten years [19]. HCI user experience evaluation methods are used during game development to improve user experience. To better understand the concept of user experience, HCI borrowed and explored aspects of the gaming experience such as immersion, fun, and flow [19]. Nagalingam and Ibrahim [21] conducted additional research on the user experience elements for the evaluation and design of educational games (EG). It is critical to identify the appropriate elements to model the right user experience framework for EG to assist the designer in producing an effective educational game [21]. Several studies have been conducted to investigate user experience with social robots. In 2017, Alenljung, Andreasson, Billing, Lindblom and Lowe [25] demonstrated how the user interacted with the humanoid robot Nao while conveying emotions to the robot through touch. The research objective was to gain a better scientific understanding of affective tactile interaction and see whether theories and findings from emotional touch in user experience could be applied to future robotic technologies [25]. This was a preliminary step toward conducting additional user experience studies in the Human-Robot Interaction research area. Grundgeiger, Hurtienne, and Happel [26] recently emphasized the importance of the personal experience of consumers who engage with technology in safety-critical domains such as healthcare. They summarized "interaction" concepts based on modern theories of HCI, which include personal user experience as an essential construct. They concluded that improving user experience could improve technology design, employee well-being, and modern safety management [26]. Luther, Tiberius and Brem [18] recently conducted a bibliometric analysis to identify the evolution of scientific research on user experience between 1983 and 2019. However, despite its importance for competitiveness, customer satisfaction, customer retention, and, ultimately, firm performance, the topic has so far been discussed in the HCI field rather than in business and management. As a result, businesses must adopt a successful user experience approach [18].
This is consistent with Erdos's [27] research, which found that user experience is one of the most important determining factors in the case of business software products and services. They recommended that future research concentrate on business and management-related topics.

III. MATERIALS AND METHODS
Evaluation methods for user experience are another path for undertaking user experience studies. The primary goal of evaluating user experience is to support and aid in selecting the best design, ensure that development is on track, or measure and clarify whether the final product meets and exceeds the initial user experience targets [9]. There is an increasing number of methods for assessing user experience available at all stages of the development process. Several studies have attempted to conduct a comprehensive review of user experience evaluation methods to understand the available methods better. Surveys of these contributions are already available [5], [28], [29]. A study by Vermeeren et al. [5] identified 96 user experience evaluation methods from both academia and industry. They also identified a need for the development of UX evaluation methods, such as early-stage methods, methods for social and collaborative UX evaluation, and the establishment of practicability and scientific quality. Bargas-Avila and Hornbæk [28] conducted an integrated review of user experience, looking for similarities across products, experience dimensions, and methodologies (time frame restricted to 2005-2009). According to the study's findings, questionnaires (self-developed questionnaires) were the most commonly used method of assessing user experience. In addition, qualitative methods included semi-structured interviews, focus groups, open interviews, user observation, video recording analysis, and diary analysis. However, psychophysiology is rarely used to improve user experience [28]. Table I summarizes the data collection methods used by Bargas-Avila and Hornbæk [28]. Maia and Furtado [29] conducted a systematic review on user experience evaluation (time frame restricted to 2010-2015). According to Maia and Furtado [29], most of the studies used questionnaires to assess the user experience rather than other tools and techniques such as interviews, observation, reports, video recording, eye-tracking, etc. They reported that psychophysiological analysis was not yet used in user experience evaluation models because most studies evaluated the user experience manually. According to the literature reviews, many different types of user experience evaluation methods are available in industry and academia. However, methodological improvements are required in evaluating user experiences that focus on product use and its specific needs, such as the development phase, the type of experience addressed, the target users, and the evaluation objective.

A. Respondents
Respondents were found through a WhatsApp Group announcement. Users who wished to participate in this survey received an invitation to do so. All respondents had been informed about the survey's objectives and methods. The invitation contained a link to our survey, which was created using the online survey tool Google Forms.
The data was then analyzed to determine the user experience level of UQAL. The system's user experience is graded on six scales: Stimulation, Perspicuity, Efficiency, Dependability, Attraction, and Novelty. The level of user experience for each scale is calculated by processing the statistical data with the UEQ Data Analysis Tool. After obtaining the score for each scale, the data is displayed using a benchmark graph to determine the quality of UQAL in comparison to the other products in the data set of the UEQ Data Analysis Tool.
C. UQAL Portal
Interface evaluation is the stage at which the UQAL Portal's effectiveness and efficiency are assessed. The effect of the interface on the user is measured, which concerns how easily the portal can be learned, its usability and user experience, and which problems may occur on the portal. UQAL is evaluated based on its user interface and user experience. This evaluation aims to measure the user experience and user interface when interacting with the portal. A heuristic evaluation technique is used to evaluate UQAL's user interface. According to Nielsen [30], a heuristic evaluation is carried out by a group of evaluators who are given an interface and asked to judge whether each element adheres to a set of established heuristic principles. UQAL Portal is an e-learning web portal that teaches a targeted group of users how to start a business or an online business using an e-learning portal. The UQAL Portal can be found at https://yutp-uqal.com/. The B40 group in Malaysia is the target audience for the UQAL Portal; the B40 group represents the bottom 40% of income earners. The UQAL Portal aims to bring a digital transformation that allows learners to obtain business-related information from the e-learning portal and educators to provide business-related information to it. The user interacts with the e-learning portal through GUI elements such as menus, buttons, checkboxes, search fields, pagination, and notifications. Fig. 1-3 depict the UQAL interface's main menu. [Fragments of Table I: Probes — participants were given a probe kit with a brief personal explanation and instructions; Body movements — the choreography of interaction with apps was evaluated by analyzing the movements.]
1) User Experience Questionnaire (UEQ): This method was chosen for this study. The questionnaire was divided into four sections. The first section asked a few questions about the user's demographic information (i.e., age, gender, race, occupation, working experience). In the second section, users rate the usability of the portal, including the portal interface, ease of use, and learnability. These sections used a five-point Likert scale: 1 (Strongly Disagree), 2 (Disagree), 3 (Neutral), 4 (Agree), and 5 (Strongly Agree). The UEQ in the third section is used to assess the user experience of the UQAL e-learning portal. The UEQ can be accessed for free and is available at https://www.ueq-online.org/. The UEQ has seven scales and 13 items in total (as shown in Fig. 4). This study employed only the 13 UEQ items related to user experience in order to cover the user's psychological aspects, such as feelings of pleasure, disappointment, and stimulation when using the portal interface. Table II shows each of these scales in detail. This section allows users to express their own experiences and opinions while interacting with the portal. Finally, in the open-ended questions, we ask the user to provide any comments or suggestions for the portal's improvement.
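To make the scale-level aggregation concrete, the following minimal Python sketch mirrors what the UEQ Data Analysis Tool does with the raw item answers. The item-to-scale mapping, the demo responses, and the omission of reverse-coded items are illustrative assumptions; the authoritative procedure is the one shipped with the official tool.

```python
# Minimal sketch of the per-scale aggregation performed by the UEQ Data
# Analysis Tool, assuming standard 7-point UEQ item responses.  The
# item-to-scale mapping and the demo answers are illustrative only; the real
# mapping (and the handling of reverse-coded items, ignored here) is defined
# in the official tool at https://www.ueq-online.org/.
from statistics import mean

# Hypothetical mapping: scale name -> indices of the items belonging to it.
SCALES = {
    "Attraction":    [0, 1],
    "Perspicuity":   [2, 3],
    "Efficiency":    [4, 5],
    "Dependability": [6, 7],
    "Stimulation":   [8, 9],
    "Novelty":       [10, 11],
}

def scale_means(responses):
    """responses: one list of item answers (each in 1..7) per participant."""
    # The UEQ tool rescales raw answers from 1..7 to -3..+3.
    rescaled = [[answer - 4 for answer in person] for person in responses]
    means = {}
    for scale, items in SCALES.items():
        per_person = [mean(person[i] for i in items) for person in rescaled]
        means[scale] = mean(per_person)      # mean over all participants
    return means

# Two hypothetical participants answering 12 items each.
demo = [[6, 7, 6, 6, 5, 6, 6, 7, 4, 3, 4, 5],
        [5, 6, 7, 6, 6, 5, 7, 6, 3, 4, 5, 4]]
print(scale_means(demo))
```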
2) Heuristic Evaluation
a) Nielsen's ten heuristic principles are described below:
1) Visibility of system status: The system should always keep users informed about what is going on by providing appropriate feedback in a timely manner.
2) Match between system and the real world: The system should speak the user's language, using words, phrases, and concepts the user is familiar with, and follow real-world conventions rather than system-oriented terms.
3) User control and freedom: Users frequently select system functions by accident, necessitating a clearly marked "emergency exit" to leave the undesired state without going through an extended dialogue.
4) Consistency and standards: Users should not have to guess whether various words, situations, or actions mean the same thing. Follow platform conventions.
5) Error prevention: A careful design that prevents a problem from occurring in the first place is even better than good error messages. Either eliminate error-prone conditions or check for them and provide users with a confirmation option before they commit to the action.
6) Recognition rather than recall: Make objects, actions, and options visible to reduce the user's memory load. The user should not have to recall information from one part of the dialogue to the next. When appropriate, system instructions should be visible or easily accessible.
7) Flexibility and efficiency of use: Accelerators, unseen by the novice user, may frequently speed up the interaction for the expert user, allowing the system to cater to both inexperienced and experienced users. Allow users to personalize frequently performed actions.
8) Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every additional unit of information in a dialogue competes with the relevant units of information, reducing their relative visibility.
9) Help users recognize, diagnose, and recover from errors: Error messages should be written in plain language (no codes), accurately describe the problem, and constructively suggest a solution.
10) Help and documentation: Even though it is preferable if the system can be used without documentation, help and documentation may be required. Any such information should be easy to find, focused on the user's task, list concrete steps to be taken, and not be too large.
b) Data Collection Procedures: The following are the data collection steps in the heuristic evaluation:
- Step 1: Establish an appropriate list of heuristics. This survey used the model based on Nielsen's 10 heuristics.
- Step 2: Identify 3 to 4 evaluators (experts), ensuring their knowledge of the relevant industry. Experts were defined in this survey as people with several years of job experience in the software and information technology fields.
- Step 3: Brief the evaluators/experts. The evaluators are informed about what they are expected to do and cover during their evaluation. Each evaluator is briefed on the scope and objective of the portal inspection and the characteristics of the portal's users.
- Step 4: Evaluation phase. Evaluators must have free access to the portal to identify elements to analyze. Individual elements are examined by the evaluators using the heuristics. They also investigate how these elements fit into the overall design, meticulously documenting all issues encountered.
- Step 5: Report issues/problems. Evaluators complete the questionnaire given and report any issues and problems they discover.
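As a rough illustration of Step 5, the sketch below shows one way the evaluators' reports could be captured as structured records and grouped by violated principle, which is essentially how a problem table such as Table VIII is assembled. The severity scale, the example findings, and all names are illustrative assumptions, not data from the study.

```python
# Minimal sketch of recording and aggregating heuristic-evaluation findings.
# The 0-4 severity scale and the example entries are illustrative only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    evaluator: str       # which expert reported the issue
    heuristic: str       # violated Nielsen principle
    issue: str           # short description of the problem
    severity: int        # e.g. 0 (not a problem) .. 4 (usability catastrophe)
    recommendation: str  # suggested fix

findings = [
    Finding("E1", "Aesthetic and minimalist design",
            "Portal theme uses dull colors", 2, "Use a richer color scheme"),
    Finding("E3", "Flexibility and efficiency of use",
            "Search has no sort or filter", 3, "Add sorting and filtering"),
]

# Group issues by heuristic so the most problematic principles stand out.
by_heuristic = defaultdict(list)
for f in findings:
    by_heuristic[f.heuristic].append(f)

for heuristic, items in by_heuristic.items():
    worst = max(item.severity for item in items)
    print(f"{heuristic}: {len(items)} issue(s), max severity {worst}")
```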
The evaluator's task at this stage is to assess the list of 10 Usability Heuristics for User Interface Design [30] in Table III. The data obtained from this technique is a list of interface problems based on the evaluators' heuristic principles. The evaluation results are then compiled into a table that provides a detailed breakdown of the issues and recommendations.
A. Demography
Thirty users participated in the user experience survey (19 females, 11 males). Most respondents were between the ages of 18 and 35 (n = 23), followed by those between the ages of 36 and 55 (n = 6), with the remainder over the age of 55 (n = 1). Malay (97%) and Chinese (3%) were the ethnic groups represented. The majority had a bachelor's degree or were enrolled in a bachelor's degree program (70%); 7% had high school diplomas, 10% had college diplomas, and 13% had graduate degrees. Furthermore, the approximate monthly household income of the respondents shows that 37% earn more than 4500 Malaysian Ringgit, 20% earn 2500-3500 Malaysian Ringgit, 10% earn 1500-2500 Malaysian Ringgit or less, and 7% earn 3500-4500 Malaysian Ringgit. The remaining respondents (17%) preferred not to answer.
B. Usability Evaluation
This survey evaluates the portal's usability with a few items covering general interface design and layout, ease of use, and learnability. Overall, participants gave positive feedback on the usability aspects, as shown in Table V. 79.9% thought the portal interface was pleasant and easy to use (n = 24), while 83.3% thought the sequence of screens, the organization of the information presented, and the graphical presentations were simple to understand (n = 25). In addition, 89.9% agreed that the portal was simple to use (n = 27), 86.6% agreed that it was easy to find needed information (n = 26), and 90% of respondents understood the menu (n = 26). Overall, most of the participants (86.6%) were satisfied with the ease of use of the portal (n = 26). In terms of learnability, most respondents (96.6%) said that it was easy to learn how to use the portal, and 89.9% said it helped them become more productive quickly. Another 93.3% found the information in the portal effective and helpful.
C. User Experience
Overall, the score indicates that the UQAL Portal received a positive evaluation from users; the UEQ results show that the overall score is in the positive range. The Likert-scale data were entered into the UEQ Data Analysis Tool (an Excel sheet) to calculate the scale means and compare the product against the benchmark data set. The measured scale means are interpreted by comparing them to existing values from a benchmark data set (https://www.ueq-online.org/). Comparing the results for the evaluated product with the data in the benchmark allows conclusions about the quality of the evaluated product relative to other products. Table VI shows the score of each user experience aspect. The benchmark results from the UEQ Data Analysis Tool revealed that the perspicuity, efficiency, and dependability aspects belong to the Excellent category, indicating that UQAL lies in the range of the 10% best results in the benchmark data set. However, the stimulation aspect of UQAL is classified as Below Average, which means that 50% of the products in the dataset are better than UQAL while 25% are worse. The overall score is in the positive range, according to the evaluation of the UEQ results.
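For readers unfamiliar with the benchmark step, the sketch below shows how a scale mean might be mapped to a benchmark category. The threshold values are placeholders; the real, per-scale thresholds ship with the UEQ Data Analysis Tool and differ slightly from scale to scale.

```python
# Minimal sketch of classifying a UEQ scale mean against benchmark categories.
# The thresholds below are illustrative placeholders only; the actual values
# are provided per scale by the UEQ Data Analysis Tool (ueq-online.org).
ILLUSTRATIVE_THRESHOLDS = [
    ("Excellent",     1.75),   # scale mean >= threshold -> category
    ("Good",          1.40),
    ("Above Average", 1.00),
    ("Below Average", 0.60),
]

def benchmark_category(scale_mean):
    for label, threshold in ILLUSTRATIVE_THRESHOLDS:
        if scale_mean >= threshold:
            return label
    return "Bad"

# Hypothetical scale means, roughly mirroring the pattern reported for UQAL.
for scale, value in {"Perspicuity": 2.1, "Efficiency": 1.9,
                     "Dependability": 1.8, "Stimulation": 0.7}.items():
    print(scale, "->", benchmark_category(value))
```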
Minor issues on UQAL have not been shown to impact user experience significantly. Fig. 5 depicts the benchmark graph. Another thing this survey wanted to look into was whether UQAL should have any additional essential features. Table VII includes comments on all the missing features mentioned. We also looked at which aspects/features the UQAL Portal users like the least and the most. According to the survey results, some respondents like how simple the portal is to use and understand. When asked about the portal's features, some respondents said they were straightforward. For example: "I like the portal structure; it is straightforward." Others commented, "This portal is simple and easy to understand." Some of the respondents commented: "User-friendly. This portal helps me find any business courses. I can easily organize and manage courses from the beginning to the end." Others commented that it is "so easy for people to understand the flow of the system because each page has different information". Some respondents stated that the user interface design is their least favorite aspect. They felt the portal's interface was not interesting enough to draw their attention. Some of them stated: "The theme of the portal does not seem very interesting. Color combinations could be used to make the portal look better." "The thing I like the least is the inconsistent type of fonts used and the size of the fonts. I found certain words or sentences do not start with a capital letter, which does not represent the professional side." "The color of the portal. This e-learning web portal is for Malaysians who want to start a business online. The color of a website plays a vital role in attracting more B40 groups." "The team can research which fonts are compatible for each part, especially for the Business Opportunities interface and UQAL course interface. I found certain fonts used are 'awkward', and the layout and the color of the fonts should be consistent."
The missing features mentioned by respondents (Table VII) fall into the following categories:
1. Course/Event details: "Course introduction, course timeline, and instructors' information." "State the date when this portal updates the information of the course and event. So, the user will know the information was updated." "Can display multiple categories of data like all courses, my courses, external courses in a single screen." "Add more information of the courses." "Have a calendar to show/list next course register." "Add description for the courses." "Have a list of courses by category."
2. Customer Service Chat/Online Chat: "Chatbot or online helper to assist users when they face any issues when using the portal." "Chat feature to allow peer engagement and learner-instructor interaction." "Any online learning platform should have chat features to enable peer engagement as well as learner-instructor interaction."
3. Information/Content: "Give information about another interesting portal." "Can add detail grant for SME, provided from the government." "Introduced more local corporate and business company starter." "Maybe can add 'dashboard' that include information such as a graph to prove how UQAL Portal help the B40 group start the business using this portal." "Information on business events and business opportunities." "Company and corporate sector involved mostly big known." "Should have 'about' section which could explain to people what the portal is about."
4. User Interface Design: "Perhaps the portal should be more organized with a dropdown menu…" "No attractive colors or graphics." "Greyish button. Hope more eye-catchy."
"Add more pictures or graphics to make this portal interesting." "More interesting interface maybe can add animation, the welcoming or introduction video." "It would look nicer with better images resolution." 5. Advanced Search "Have sort and filter searching" "Allow multiple searching criteria in one screen." 6. Bilingual (BM/BI) "Make it friendlier for example, bilingual feature for easy to understand." www.ijacsa.thesai.org D. Heuristic Evaluation Analysis Four evaluators evaluate with backgrounds in software and information technology. Two of them have more than seven years of experience as software developers. One has over ten years of experience as an information technology administrator, and the other is a research graduate in usability. Based on the severity rating, the evaluators discussed some issues and made recommendations for improvement. Match between systems and the real world The portal has no elements of positive encour-agement (rewards, praise, personalization, etc.) to boost users' motivation. This type of element is essential in online learning since it requires users to learn independently. The system should speak the user"s language with words, phrases, and concepts familiar and follow real world conventions rather than systemoriented terms. There are no features that allow learners to interact with one another and with instructors/ lecturers/ trainers. Any online learning platform should have chat features to enable peer engagement and learner-instructor interaction. User control and freedom When the user makes an error on a certain field, the system removes all the information that the user has filled in, even though that information is supposedly correct. Support undo and redo. Consistency Some buttons are not suitable. For example, there is a button "Report" I thought it was for adding a report, but it was to generate Report. Use the standard color of buttons. Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions. Prevent errors The function is not working well for evaluation. When I try to add a new course as an educator, the system makes it compulsory to add the image of the banner. When I didn't add the image, it showed an error message. However, the page redirects me to the front page. So, I need to re-key the form again. Properly test the portal to make sure all functionality is working. The system must be functioning well to be successful. Prevents a problem from occurring in the first place. Present users with a confirmation option before they commit to the action. Recognition rather than recall There is no error warning message when the user makes a mistake. The system should prevent users from making mistakes. Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Flexibility and The efficiency of use. Advance search feature: Users do not get the benefit of a search menu there. The search feature should add sorting and a filtering function. Its multi-platform, but it can't be used for the mobile version well. Create a mobile-friendly web portal for users to access because not everyone has a laptop or tablet to access the portal. It would be better if you could add the calendar management for learners and educators to view the courses and events they join/conduct. 
For example, if I am a learner and I click to join an event/course, then the event should be added to my calendar. Make the calendar viewable monthly/weekly so that it is easier to check which events/courses I have joined or will join. I am not sure how the courses will be conducted, so learners should always be able to come back to this website to review the provided material. When a course is already marked as complete, there is nowhere for me to view again what the course was all about. It would be better if an educator could upload the teaching material (e.g., PowerPoint slides or other material).
Aesthetic and minimalist design: The interface for "Course Management" (learner view) is not convenient to use; all the courses are displayed in one listing. It would be better to display the completed courses in one tab and the courses not yet completed in another tab, or to add a filter that allows users to filter the listing. The portal theme appears uninteresting and does not employ vivid colors; color combinations could be used to improve the portal's aesthetic value. The image sizes used do not fit or match the boxes provided; all the pictures could be made the same size and clear. The dashboard for learners should be appealing and dynamic; choose dashboard UI elements carefully, otherwise learners will become discouraged. The sidebar's use of repetitive icons appears to be confusing. The button positioning should be consistent and clear (e.g., the Join button).
Help users recognize, diagnose, and recover from errors: When the user makes an error in a certain field, the system removes all the information that the user has filled in, even though that information is supposedly correct. Help the user recover from errors. Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and support: There is a lack of help to guide users. Provide a guideline to help the user and create a help menu to make it easier for users to use the portal. There is no sitemap of the portal; create a site map to make it easier for users to navigate the portal.
The majority of problems were found under the Aesthetic and Minimalist Design principle. Two of the evaluators noticed issues with the aesthetics of UQAL's interface; the principal issues include a colorless interface, image sizes and resolution, and icons and buttons that, according to the evaluators, should have more aesthetic value. Based on flexibility and efficiency of use, the Search and Course function menus should be improved. Some errors occur while performing certain tasks. Table VIII displays the outcome of the heuristic evaluation. Although the heuristic evaluation revealed some significant flaws in UQAL's user interface design, it had no direct impact on the UEQ's overall score, which was positive. Previous research on heuristic evaluation has shown that it identifies more minor usability issues in an interface than other methods [31]. Regardless, the UEQ results show that the overall score is positive. Minor issues discovered during the heuristic evaluation do not appear to significantly impact the user experience of UQAL.
V. CONCLUSION
Based on the heuristic evaluation results, the evaluators discovered some issues with Nielsen's heuristic principles in UQAL's user interface design, most commonly under the Aesthetic and Minimalist Design principle as well as Flexibility and Efficiency of Use. The outcome of the heuristic evaluation is a set of recommendations on the issues and problems that must be addressed. Nonetheless, according to the UEQ results, the user experience of the UQAL Portal is adequate, as demonstrated by the sufficient average score of each aspect. The results of this experiment can be used as a reference for the developer to improve the UQAL Portal in the future.
7,627.2
2022-01-01T00:00:00.000
[ "Computer Science", "Education" ]
Contemporary crustal kinematics in the Guangdong-Hong Kong-Macao Greater Bay Area, SE China: Implications for the geothermal resource exploration
Fault kinematics plays an important role in estimating the stress state and permeability of faults, which are controlling factors in the formation of a geothermal system. However, there are very few studies on the kinematic characteristics of the major faults in the Guangdong-Hong Kong-Macao Greater Bay Area (GBA), SE China. To obtain a better understanding of the fault kinematics, we establish a comprehensive 3D geomechanical model for the GBA. Our results show that the NE-trending faults in the west of the Pearl River Estuary (PRE) usually slip faster than the faults striking in the ENE-WSW direction to the east of the PRE. The NW-trending faults have the lowest modeled fault slip rate. Slip rates of the faults are generally low, with a maximum value of 0.12 mm/a occurring on the northeastern segment of the Wuchuan-Sihui fault. The NE-trending faults display sinistral motion, while the ENE-trending faults are dextral. The opposite slip senses on these two fault groups are inferred to be caused by the lateral variations in the crustal stress. Based on the analysis of contemporary kinematics and the heat flow in the GBA, we suggest that the fault segments with relatively high slip rates, such as the northeastern segment of the Wuchuan-Sihui fault, the Kaiping fault, the Enping fault, and the middle segments of the Wuhua-Shenzhen, Zijin-Boluo, and Heyuan faults, have a high prospect for geothermal resources. The intersections of the NW-trending extensional faults and the NE-/ENE-trending faults could also be potential areas of interest for future geothermal exploration.
Introduction
The South China Block, located at the southeast margin of the Eurasian Plate (Fig. 1), has attracted widespread interest in the field of geodynamics due to its extensive magmatism and super-giant ore deposits (Dong et al., 2020; Li et al., 2020; Li, 2000; Zhou et al., 2006). Geochronological studies suggest that this block was initially formed by the Neoproterozoic collision of the Yangtze and Cathaysia blocks and then experienced three tectonic-magmatic events, which occurred sequentially in Silurian, Early to Middle Triassic, and Late Mesozoic periods (Shu et al., 2021). Among these events, the Late Mesozoic tectonism-magmatism is considered to be caused by the northwestward subduction of the Paleo-Pacific Plate with a slab roll-back (Dong et al., 2020; Zhou et al., 2006; Zhou and Li, 2000). The back-arc extensional setting and the asthenosphere upwelling in response to the roll-back of the subducting Paleo-Pacific plate jointly promoted the magma emplacement and formed the famous Yanshanian (Middle Jurassic-Middle Cretaceous, 180-80 Ma) granitoids in the South China Block (Dong et al., 2020; Zhou et al., 2006). The widely distributed granitoids provide stable heat sources that make the South China Block a region rich in geothermal resources, especially in its southeastern part (i.e. the Cathaysia Block). The latest terrestrial heat flow map of China shows that the Cathaysia Block has a high average heat flow of 69.4 mW/m² with local anomalies over 100 mW/m² (Jiang et al., 2019), which is generally consistent with the distribution of the Yanshanian granitoids in SE China (Zhang et al., 2018a).
As an important economic center in SE China, Guangdong Province is characterized by high terrestrial heat flow values and abundant geothermal resources, and it has the first Chinese geothermal power station, built in 1970 (Huang, 2012). Previous geothermal surveys have identified over 300 hot springs within the province, with low to moderate temperatures mainly ranging from 25 to 118 °C (Wang, 2018; Wei et al., 2021). In 2019, an administrative region called the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) (Fig. 1), including Hong Kong, Macao, and nine cities in Guangdong, was established by the Chinese government to accelerate the economic development of this region. This has led to a strong demand for energy in the area. However, the abundant geothermal energy resources in the GBA have not been fully exploited yet, and one of the barriers to the exploration of this resource is that the kinematic characteristics of faults in this region are still not clear.
Fault kinematics plays an important role in estimating the fault stress state, which further governs the fault permeability and eventually affects the migration of fluids through the subsurface (Jolie et al., 2021; Siler et al., 2018). For example, for a right-stepping fault system, a dextral slip will produce extensional structures such as normal faults and extension fractures in the step-over (e.g. Fig. S1a), while a sinistral slip produces contractional structures (Fossen, 2010). Fluids in an extensional regime can migrate more easily from deeper to shallower depths to form a geothermal field than in a contractional regime. In such a setting, compression would reduce the permeability of faults and tend to close flow channels to impede fluid migration (Tanikawa et al., 2010; Evans et al., 1997). In addition, fault permeability is also affected by the fault activity (Tanikawa et al., 2010). High fault slip rates would be favorable for the generation of new fractures or make pre-existing ones remain open. Since the geothermal resources in the GBA depend mainly on deep hydrothermal circulation systems controlled by faults (e.g. Lu and Liu, 2015; Wang et al., 2018; Wei et al., 2023; Wei et al., 2021), a good understanding of its fault kinematics is essential to provide a solid foundation for better geothermal resource assessment in the GBA.
Fault kinematics in the GBA
Active faults are widely developed in the GBA and can be roughly divided into three groups based on their orientations: the NE-SW group located in the western part of the study area, the ENE-WSW group in the eastern and southern parts, and the NW-SE group in the central part, specifically the Pearl River Estuary area (Fig. 1).
Fig. 1. Map of active faults, earthquakes and heat flow measurements of the Guangdong-Hong Kong-Macao Greater Bay Area and its adjacent area. Fault traces are from the Seismic Active Fault Survey Data Center (activefault-datacenter.cn); thick and thin lines denote lithospheric-scale and crustal-scale faults, respectively. Earthquake data are from the National Earthquake Data Center (data.earthquake.cn); heat flow measurements are from the International Heat Flow Commission (IHFC, ihfc-iugg.org) and Tang et al.
(2014). The inset shows the tectonic setting of the study area, in which the black box represents the location of Fig. 1 and the spatial extent of the model region.
Several previous qualitative studies suggest that the activity of these faults is relatively low. For example, Chen et al. (2002) measured soil gas emission data along the Guangzhou-Conghua fault (F25), Luofushan fault (F12), Xijiang fault (F24), and Baini-Shawan fault (F23) and pointed out that these faults have slight activities. This understanding is further strengthened by comprehensive geological surveys and soil gas measurements carried out specifically along the Baini-Shawan fault (F23) (Dong et al., 2016). The low fault activity presents a challenge in acquiring precise fault displacements in the field, thereby impeding the quantitative determination of fault slip rates. Different from these aforementioned studies, Song et al. (2003) used differences in elevation within the same stratum on both sides of the faults in the GBA, revealed through borehole analysis, and preliminarily calculated vertical fault slip rates ranging from 0.14 to 0.47 mm/a. Yu et al. (2016) also compiled stratigraphic profiles from boreholes and seismic reflections and suggested that the ENE- and NW-trending faults are the primary active faults. Although a few studies on fault activities have been conducted quantitatively in the region (e.g. Lei et al., 2018; Sun et al., 2007; Wang et al., 2011), they have focused only on two faults, namely the Wuhua-Shenzhen fault (F10) and the Guangzhou-Conghua fault (F25) (see Fig. 1). The detailed kinematic characteristics of the various other faults remain unclear.
Numerical modeling provides a powerful method to study the large-scale kinematics of complex fault systems. It can integrate observational data from different geoscience fields such as geology, geophysics, and geodesy to derive physics-based fault slip rates and crustal deformation, considering the acting forces and the mechanical properties of faults and rocks. Thus, the modeling approach can provide detailed and consistent estimates of the kinematics even of a complex fault system. Several modeling studies on fault slip rate have been conducted in the Marmara Sea region, northwestern Turkey (Hergert and Heidbach, 2010; Hergert et al., 2011) and the eastern Tibetan Plateau (Li et al., 2022; Li et al., 2021). The modeled rates in these studies have been shown to be in good agreement with observations. However, in the GBA, to our knowledge, no similar work has been conducted on the kinematics of the faults except a couple of two-dimensional (2D) (Chen et al., 2014; Wen et al., 2001) or three-dimensional (3D) models with simple fault geometries (Dong et al., 2021) focusing on crustal stress.
In this paper, we present a 3D geomechanical model for the GBA incorporating a complex 3D fault system and inhomogeneous rock properties as well as topography and gravity. Detailed contemporary kinematics of the faults and the crust in the study area are derived from the numerical simulation and validated by comparison with model-independent data. Finally, based on the model, a comprehensive analysis of the implications for geothermal exploration in the study area is given.
Model concept and input
The modeling process in this study is composed of five steps: model design (including a 3D fault system and model geometry), model discretization, computation, results calibration, and analysis of results (Fig. 2).
In the beginning, we construct a 3D geometry of the study area with a complex fault system using the CAD software Rhinoceros® v7. The detailed geometrical parameters used to constrain the model in this step are extensively compiled from published geological and geophysical literature and reports. The 3D model volume is discretized into ~4 million linear tetrahedral elements with a resolution of 1-2 km on the faults and ~5 km near the model boundaries using the meshing software Hypermesh® v2019. After assigning inhomogeneous material properties to the finite element model and applying values of initial stress and gravity to the model volume, the numerical simulation is carried out with the finite element analysis software Abaqus® v2019 using displacement boundary conditions. The modeled displacements at the surface and on the faults are then compared with model-independent data, such as GPS observations and geological or geodetic fault slip rates. Finally, we analyze the modeled kinematic results and discuss their implications for geothermal resource exploration in the GBA.
Fig. 2. Model setup and workflow of the geomechanical modeling in this study (modified from Hergert et al., 2015). The partial differential equations of the equilibrium of forces in the 3D model are solved using the finite element method (σij: stress tensor, xj: Cartesian coordinates, ρ: density, ρXi: body forces).
Model geometry
The model geometry has a cuboid shape with a spatial extent of 572 km in the E-W direction (110.5°E-116°E) and 490 km in the S-N direction (20.5°N-25°N). The model thickness is ~38 km (Fig. 3). From top to bottom, the model contains four units, namely the upper, middle, and lower crust, and the top layer of the upper mantle. These units are bounded by five surfaces: the topography, the three interfaces of the upper, middle, and lower crust, and the bottom of the model at 38 km depth. The relief of the topography and the crustal interfaces are taken from the GTOPO30 global digital elevation model (USGS) and the CRUST1.0 model (Laske et al., 2013), respectively.
For the 3D fault system embedded in the model, the upper terminations of the faults are constrained by the fault traces from the Seismic Active Fault Survey Data Center (https://www.activefault-datacenter.cn), which incorporates the new findings of active fault investigations in recent years and forms the latest active fault database in China. The fault geometries at depth are determined from deep seismic sounding profiles, focal mechanism solutions, seismic tomography, earthquake relocations, etc. For an individual fault, if none of the references mentioned above is available to constrain its subsurface geometry, we use the shallow fault dip, generally obtained from surface fault surveys, to extend the fault plane to greater depth. Based on geological and geophysical data in the study area, the faults implemented in the model can be divided into three types according to their depth extent. One is the lithospheric-scale faults that penetrate the entire model down to a depth of ~38 km. The other two are both crustal-scale faults but terminate differently, at the bottom of the upper crust (~10 km) and the middle crust (~20 km), respectively. The detailed geometrical parameters of the faults and the corresponding references are summarized in Table 1.
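To illustrate the simplest case mentioned above, where a fault plane is extended to depth using only the shallow dip, the following Python sketch projects a mapped surface trace downward as a planar surface. The coordinates, dip value, and the planar-fault assumption are illustrative; the actual model uses richer constraints for each fault.

```python
# Minimal sketch of projecting a mapped surface fault trace to depth using a
# constant shallow dip, as done when no deep geophysical constraint is
# available.  All numbers below are hypothetical.
import math

def project_trace_to_depth(trace_xy, strike_deg, dip_deg, bottom_depth_km):
    """Return the (x, y, z) vertices of the fault's bottom edge.

    trace_xy        : list of (x, y) points of the surface trace, in km
    strike_deg      : fault strike, degrees clockwise from north
    dip_deg         : fault dip measured from horizontal
    bottom_depth_km : depth of the fault's lower termination (positive down)
    """
    # Horizontal offset of the bottom edge in the down-dip direction.
    offset = bottom_depth_km / math.tan(math.radians(dip_deg))
    # Dip direction is 90 degrees clockwise from strike (right-hand rule).
    dip_dir = math.radians(strike_deg + 90.0)
    dx, dy = offset * math.sin(dip_dir), offset * math.cos(dip_dir)
    return [(x + dx, y + dy, -bottom_depth_km) for x, y in trace_xy]

# Example: a short NE-striking trace dipping 70 degrees, cut off at 10 km,
# i.e. a crustal-scale fault terminating at the base of the upper crust.
bottom = project_trace_to_depth([(0, 0), (5, 5), (10, 10)],
                                strike_deg=45, dip_deg=70, bottom_depth_km=10)
print(bottom)
```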
Rock properties and friction coefficient
In this study, the mechanical behavior of the model units is set to be linear and of static elasticity, an approach frequently used in geomechanical modeling (Ahlers et al., 2021; Hergert et al., 2011; Li et al., 2021; Reiter and Heidbach, 2014). The corresponding rock properties (i.e. Young's modulus E, Poisson's ratio ν, and density ρ) are taken from the CRUST1.0 model (Laske et al., 2013), in which the Young's moduli are further converted to static values by empirical relations (Brotons et al., 2016). According to the seismic velocity structures obtained by Cao et al. (2014), the model domain can be divided into the normal continental crust of the Cathaysia Block and the thinned continental crust of the South China Sea, separated by the Littoral fault (F1, Fig. 1). Table 2 lists all the material parameters used in the model.
The faults implemented in the model are defined by pairs of contact surfaces that can slide relative to each other and are constrained by the Coulomb friction law. In each contact pair, two surfaces are in direct contact without any additional thickness between them, and the elements on both sides of the contact pair are discontinuous. After testing a series of effective friction coefficients ranging from 0 to 1, both 0.01 and 0.02 are found to be optimum values that minimize the misfit between the measured velocities at the surface and the modeled velocity field (Fig. 4), resulting in nearly identical simulated fault slip rates. The value of 0.02 is used in the following analysis. Although the friction coefficient of crustal rocks is usually within the range of 0.6-0.85 (Byerlee, 1978), observations at large faults (e.g. Li et al., 2015; Zoback et al., 1987), as well as many numerical simulations (He and Lu, 2007; Hergert and Heidbach, 2010; Li et al., 2021), suggest that the effective friction coefficient for large active faults in nature is generally much lower. The low friction coefficient is likely caused by specific materials distributed along faults. For instance, Carpenter et al. (2015) reported a static friction coefficient as low as 0.1 on the San Andreas fault and suggested that the abundant magnesium-rich clay localized within the fault leads to the low frictional strength. Additionally, the effect of pore pressure can also contribute to the reduction of the effective friction coefficient.
Gravity and initial stress state
Gravity is applied to the whole model with an acceleration of 9.81 m/s². An appropriate initial stress state is also accounted for in the modeling approach, which is important to simultaneously model both the kinematics and the crustal absolute stresses. The initial stress state is calibrated by the semi-empirical horizontal-to-vertical stress ratio k proposed by Sheorey (1994), k = 0.25 + 7E(0.001 + 1/z) (1), where E is Young's modulus (GPa) and z is the burial depth (m). Vertical k-z profiles at three test sites (see Fig. 1) are extracted for the calibration. These selected sites are far away from the faults and model boundaries to reduce potential stress perturbations and boundary effects. The good consistency between the modeled initial k-value and the theoretical initial k-value predicted by Eq. (1) is shown in Fig. 5.
Kinematic boundary conditions
For the kinematic boundary conditions, we use the hitherto most comprehensive Global Positioning System (GPS) data compiled by Wang and Shen (2020) to drive the horizontal movement of the model and simulate the tectonic loading (Fig. 6).
Since all the GPS stations in this dataset are in continental China, they cannot constrain the motions of the southern and southeastern model boundaries, which are located in the South China Sea. Although a previous geodetic study has reported a motion rate (10.7 mm/a, relative to the stable Eurasian Plate) for the Dongsha Islands, located near the southeastern corner of the model (Yu et al., 1999), the rate is abnormally high and would drag the model southeastward dramatically and produce a NE-SW-trending maximum horizontal stress (SH) in the central part of the study area (Fig. S2a), which would contradict the NW-SE-trending SH revealed by focal mechanism solutions (Kang et al., 2008) and in situ stress measurements (Heidbach et al., 2016) (Fig. S2a).
In this study, we assign a relatively low displacement rate (6.9 mm/a) to the southeastern corner, which is inferred from the variation trend of the continental GPS data to ensure a suitable stress orientation. This inferred rate, together with the other GPS measurements near the model boundary, generates a stress field with a predominant NW-SE-trending SH, which is consistent with the model-independent stress observations (Fig. S2b). To check the effect of changing this inferred displacement rate on the modeled kinematic results, two tests with ±10% perturbation to the 6.9 mm/a have been conducted. The results indicate that the slip rates and sense of slip on the onshore faults in both tests are almost the same (Fig. S2c and d) and are also similar to the modeled results when the value of 6.9 mm/a is used (see Fig. 7). Therefore, the modeled kinematic results on the onshore faults are only marginally affected by the perturbation of the displacement rate at the southeastern corner, and the displacement rate of 6.9 mm/a adopted in this study can be considered reasonable.
The inferred motion rate and the GPS measurements are then jointly used to interpolate the velocities at nodes located on the boundary of the model. A time span is selected to convert the velocities at the boundary nodes to displacement boundary conditions that drive the lateral motion of the model (see the black arrows in Fig. 6). The time span is set to 600 ka to allow the accumulated displacements at the boundaries to propagate into the model, thus generating a proper contemporary state of stress and deformation. In addition, good correlations between the polarization directions of fast shear waves and the GPS measurements in the South China Block (Wu et al., 2007) suggest that the crustal strain in the study area is consistent over the whole depth range of the crust, which provides a reasonable basis for using the GPS measurements at the surface to constrain the model at all depths. The model bottom is fixed only with respect to movements in the vertical direction while allowed to move horizontally (see Fig. 3). The model surface is unconstrained.
Model results
In this study, the GPS data used to validate the model mostly represent the interseismic strain accumulation (Wang and Shen, 2020). Therefore, in order to compare the modeled velocities with the GPS measurements, we first lock the faults in the model by assigning to them an infinitely large effective coefficient of friction. The modeled velocities at the exact positions of the GPS stations are extracted from the model and plotted together with the GPS data in Fig. 6.
The comparison shows that the modeled crustal velocities in most places are in good agreement with the GPS data. Slight deviations are mostly within the uncertainty ranges of the GPS observations. After a good fit between the modeled velocities and GPS measurements is achieved (Fig. 6), the faults are changed to be unlocked with a low effective coefficient of friction (0.02), and the same boundary conditions as in the previous locked-fault model are imposed to obtain the long-term kinematics of the study area. Fig. 7 shows the modeled horizontal surface velocities and fault slip rates. Overall, the crust in the GBA moves ESE-ward at an average surface velocity of ~7.2 mm/a (Figs. 6 and 7). The northern part of the study area exhibits a higher velocity of ~7.5 mm/a, gradually decreasing to ~7.0 mm/a towards the south, indicating a low velocity gradient, which implies relatively weak crustal deformation within this region.
The faults in the study area can be divided into three groups based on their orientations: NE-SW, ENE-WSW, and NW-SE, located in the western part, the eastern and southern part, and the Pearl River Estuary, respectively. These faults exhibit generally low slip rates (Fig. 7), which is consistent with the previously mentioned weak crustal deformation. Among these three fault groups, the NE-trending faults have the highest slip rates. For example, a prominent high slip rate of approximately 0.12 mm/a is observed on the northeastern segment of the Wuchuan-Sihui fault (F32), gradually decreasing towards the southwest. To the east, the Kaiping and Enping faults (F26, F28) have slip rates ranging from 0.05 to 0.06 mm/a. To the west, the Dashiding fault (F33) slips at a rate of approximately 0.07 mm/a. Furthermore, the discontinuous velocity contour lines intersected by the NE-oriented faults (Fig. 7) indicate that these faults have a significant influence on adjusting crustal deformation due to their relatively high slip rates. In the east of the Pearl River Estuary, the faults are distributed nearly parallel in the ENE-WSW direction with slip rates of < 0.04 mm/a, which are lower than the slip rates inferred for the NE-trending faults. The Littoral fault (F1) and its subfaults located in the South China Sea also exhibit very low slip rates (< 0.03 mm/a). The lowest modeled fault slip rate (< 0.02 mm/a) occurs on the NW-trending faults that are mainly located in the Pearl River Delta region. Compared with the northwestern segments of these faults, the slip rates on the southeastern segments are extremely low to almost no slip (Fig. 7).
For the fault slip sense, our modeled results show that faults with different orientations exhibit different slip senses. As shown in Fig. 7, the NE-trending faults located in the western part of the study area display sinistral motion, while the ENE-trending faults in the eastern part are dextral. The opposite slip senses on these two fault groups are speculated to be caused by the changes in the crustal stress field, which will be discussed in detail in section 5.2. Unlike the previous two fault groups, the slip sense of the NW-trending faults in the central part of the study area is variable. For example, the Baini-Shawan fault (F23) is dextral, while the Xijiang fault (F24) to its west changes to sinistral. The extremely low fault slip rates are most likely responsible for the different fault slip senses.
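As a concrete illustration of how a slip rate and a slip sense can be read off from the modeled motion of the two fault blocks, the sketch below resolves the relative block velocity onto the fault strike. The velocity values, the function name, and the sign convention are hypothetical choices for illustration; the paper's quantities come directly from the finite element model.

```python
# Minimal sketch of deriving a strike-slip rate and sense from the relative
# velocity of the two fault blocks.  All numbers are hypothetical; the sign
# convention used here is one of several possible choices.
import math

def strike_slip(strike_deg, v_right_block, v_left_block):
    """Return (rate_mm_per_a, sense) of the along-strike relative motion.

    v_right_block / v_left_block: (east, north) velocities in mm/a of the
    blocks on the right and left side when looking along the strike direction.
    """
    # Unit vector along strike (azimuth measured clockwise from north).
    sx = math.sin(math.radians(strike_deg))
    sy = math.cos(math.radians(strike_deg))
    dvx = v_right_block[0] - v_left_block[0]
    dvy = v_right_block[1] - v_left_block[1]
    along_strike = dvx * sx + dvy * sy
    # Right block moving in the +strike direction relative to the left block
    # corresponds to left-lateral (sinistral) slip under this convention.
    sense = "sinistral" if along_strike > 0 else "dextral"
    return abs(along_strike), sense

# Hypothetical NE-striking fault (strike 45 deg): the SE (right) block moves
# slightly faster toward the ENE than the NW (left) block.
rate, sense = strike_slip(45, v_right_block=(7.10, -0.45),
                          v_left_block=(7.00, -0.50))
print(f"{rate:.3f} mm/a, {sense}")
```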
Discussion
Faults and fractures play an important role in the circulation of geothermal fluids in the crust (Egger et al., 2014). They can not only channel fluids from hot deep depths of the crust to shallow depths to form extractable geothermal resources, but also facilitate the downward flux of meteoric fluids to the deeper crust and the sustaining of broad convective geothermal systems (Jolie et al., 2021). However, such fluid movements are strongly affected by fault permeability, which varies according to the current stress field, structural setting, fault slip rate, etc. (Cox, 2010; Egger et al., 2014; Tanikawa et al., 2010). As mentioned in the Introduction, the stress state of a given structural setting is closely related to fault kinematics. Estimation of fault kinematics is therefore essential to understand the stress regime and, ultimately, the fault permeability, which helps to locate hidden geothermal systems and provides insight into their genesis. In the following sections, we first compare the results with model-independent observations, such as reported fault slip rates and fault slip senses, to validate the model, and then discuss in detail several research findings revealed by the kinematic characteristics of the study area.
Comparison with reported fault slip rates, earthquakes and slip senses
In order to validate our modeled results, published model-independent observational data in the study area (such as reported fault slip rates) have been extensively collected and compared with our modeling results. In recent years, there have been a few investigations of fault activities conducted in the GBA. For example, Lei et al. (2018) used a dynamic deformation instrument to monitor the ENE-trending Wuhua-Shenzhen fault (F10) for four years and found that the fault slip rate is 0.01-0.06 mm/a. Sun et al. (2007) analyzed the geological characteristics of the Wuhua-Shenzhen fault (F10) and suggested that the fault's activity is very low. Based on shallow seismic explorations combined with trench profiles, Wang et al. (2011) argued that the NE-trending Guangzhou-Conghua fault (F25) has been almost inactive since Late Quaternary times. In summary, it is widely accepted that the slip rates on the faults in the GBA are generally very low, which is consistent with our modeled results.
To explore the relation between fault slip rate and the earthquake locations in the study area, we also plot earthquakes with M ≥ 3 together with fault slip rates, as shown in Fig. S3. As can be seen from the figure, earthquakes in the study area are generally sparsely distributed and there are no earthquake clusters in fault segments with relatively high slip rates, even when the magnitude threshold is decreased to a much lower value (e.g., Xia et al., 2022). In fact, most of the earthquakes in the study area are located in the intersection zones of faults with NW and NE orientations (Sun et al., 2012). For example, the 1969 M6.4 Yangjiang earthquake occurred in the intersection area of the NE-trending Pinggang fault (F29) and the NW-trending Yangbianhai fault (F30) (Fig. S3b); the 1962 M6.1 Xinfengjiang earthquake also occurred in the intersection zone of a NNW-trending fault (namely the Shijiao-Xingang fault) and the ENE-trending Heyuan fault (F13) (Fig. S3c).
Detailed studies on the seismogenic structures of small earthquakes located in the Pearl River Estuary also indicate that these earthquakes are in the intersection area of NW-trending and NE-trending faults (e.g. Chen et al., 2021; Xia et al., 2018). We consider this is because our study area is in an intraplate setting of weak deformation rates, within which the fault intersection is a typical stress concentrator, where tectonic movement can cause a localized build-up of stresses and, ultimately, earthquakes (Gangopadhyay and Talwani, 2003).
For the sense of fault slip, Fan et al. (2022) used remote sensing interpretation as well as field investigation to estimate the kinematic characteristics of a branch of the Zijin-Boluo fault (F11) and found that this ENE-trending fault has an obvious dextral slip component (Fig. S1a); Huang and Zheng (2001) analyzed a series of microstructures in the NE-trending Wuchuan-Sihui fault zone (F32) and suggested that this fault is sinistral. These senses of fault slip revealed by field investigations match well with our model results (Fig. 7).
Although not all the faults implemented in the model can be validated against model-independent data, due to the limited number of published fault activity data in the study area, the good consistency of fault slip rates and fault slip senses between the model-independent measurements and the modeled results on the typical faults indicates that our modeled kinematic results are generally reliable.
Mechanism of the opposite slip sense on ENE- and NE-trending faults
Two groups of faults oriented in NE-SW and ENE-WSW directions are developed in the west and east of the Pearl River Estuary, respectively (Fig. 1). Our modeling results show that the sense of slip on these two groups of faults is opposite to each other: the NE-trending faults are sinistral, while the ENE-trending faults show dextral movements (Fig. 7). Yu et al. (2016) first pointed out the difference in activity between these two groups of faults but gave no further explanation.
In order to gain a detailed understanding of the relationship between fault slip sense and fault strike, we design a series of sub-models with generic faults oriented in different directions. A submodeling technique is applied in the computation, during which the boundary nodes of the sub-models are driven by the modeled displacement of the GBA model. The generic faults in the sub-models are simplified to be vertical planes, consistent with the large dip angles of the faults in the study area (Table 1). The fault orientations range from 0° to 170° in steps of 10°. Since the strike of a vertical fault is bidirectional, the chosen strike range can represent the full range of 0-360°. The sub-models are cylindrical in shape with a diameter of 100 km and a height of 10 km, representing the thickness of the upper crust. Since the focus of this test is on the fault slip sense, the friction coefficient of the scenario faults in the sub-models is set to 0 to make the faults slide more easily. We then use our kinematic model to drive this series of sub-models and obtain the slip senses for the faults in different orientations. In view of the different dominant orientations of the faults on both sides of the Pearl River Estuary, this test is performed on the east and west sides of the Pearl River Estuary separately (as outlined by the two dashed circles in Fig. 7). The results are summarized in Fig. 8.
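The sub-model result can be anticipated with a much simpler argument. The sketch below is a 2D back-of-the-envelope analogue, not the finite element procedure itself: for a vertical fault, the expected strike-slip sense follows from the sign of sin(2Δ), where Δ is the angle from the horizontal compression axis to the fault strike. The compression azimuths are approximate values taken from the text (roughly N-S west of the Pearl River Estuary, NW-SE to the east).

```python
# Simple Andersonian sketch of slip sense versus fault strike; not the
# finite element sub-model procedure.  Compression azimuths are approximate.
import math

def slip_sense(strike_deg, s1_azimuth_deg):
    """Expected strike-slip sense on a vertical fault under horizontal
    compression along s1_azimuth_deg."""
    delta = math.radians(strike_deg - s1_azimuth_deg)
    s = math.sin(2.0 * delta)
    if abs(s) < 1e-9:
        return "none"          # strike parallel or perpendicular to S1
    return "sinistral" if s > 0 else "dextral"

# Sweep fault strikes from 0 to 170 degrees in 10-degree steps for a roughly
# N-S compression west of the Pearl River Estuary and a NW-SE compression to
# its east (cf. Wen et al., 2001).
for label, s1_az in [("west, S1 ~ N-S", 0), ("east, S1 ~ NW-SE", 135)]:
    senses = {strike: slip_sense(strike, s1_az) for strike in range(0, 180, 10)}
    print(label, senses)
```

Under these assumed compression directions, the sketch reproduces the broad pattern reported in the next paragraph: sinistral slip on NE strikes in the west, dextral slip on ENE strikes in the east, and no resolved strike-slip on faults parallel to the compression axis.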
In the west of the Pearl River Estuary, faults with strikes between 110° and 170° exhibit dextral slip, while faults with strikes between 10° and 80° exhibit sinistral slip (Fig. 8a). Compared with the results in Fig. 8a, the pattern of fault slip senses in the east of the Pearl River Estuary (Fig. 8b) shows an anticlockwise rotation, since the strikes of dextral and sinistral faults concentrate in the ranges of 50°-120° and 140°-210°, respectively (Fig. 8b). This anticlockwise rotation of the fault slip sense pattern reflects the lateral variation of the crustal stress field. Wen et al. (2001) calculated the modern crustal stress field in the southern part of the South China Block by the finite element method and found that the compressional stress axis in the west of the Pearl River Estuary trends nearly N-S and rotates to NW-SE in the east, which is consistent with the results of our sub-model simulations. Therefore, we suggest that the opposite slip senses on ENE- and NE-trending faults are caused by the lateral variation of the crustal stress field. The changing compressional stress axis is oblique to the ENE- and NE-trending faults and promotes the ENE-trending faults to slip in the dextral direction while the NE-trending faults show sinistral movements, as illustrated in Fig. 8c.
It should be noted that the angle between the compressional stress axis and the ENE- and NE-trending faults is usually larger than 45° (Fig. 8c). This means that these faults are not oriented in the preferred direction, which is typically ±20°-30° (Fossen, 2010) relative to the maximum principal stress axis σ1. The NE- and ENE-trending faults are pre-existing faults inferred to have been initiated in Triassic times, which have experienced a complex history of deformation, including alternating episodes of shortening and extension, during the northwestward subduction of the Paleo-Pacific Plate (Li et al., 2020; Qiu et al., 1991). Since the Late Cenozoic, the stress regime of the study area has changed into NW-SE-trending compression and converted the NE- and ENE-trending faults into transpressional structures (Xie et al., 2016; Zhang et al., 1999).
The shortening component of the transpressional movements will reduce the fault permeability and make it difficult to circulate geothermal fluids. On the contrary, the NW-trending faults, especially those located in the east of the Pearl River Estuary, are parallel to σ1 (see Fig. S2b), i.e. they are extensional and can be most conducive to channeling geothermal fluids (e.g. Jolie et al., 2021). Therefore, we suggest that the NW-trending faults as well as their intersection zones with other faults are favorable sites for the exploitation of geothermal systems. In addition, more attention should also be paid to local structures, such as step-overs or fault bends along the NE- and ENE-trending faults, where pull-apart basins can be formed due to the strike-slip component of the deformation (e.g. Fan et al., 2022). Normal faulting stress regimes in these local areas would facilitate the flow of fluids and host a geothermal system.
Spatial correlation of hot springs, heat flow, and fault slip rate
The extensive magmatism and widespread granitoids (Zhou et al., 2006), the gradually thinning crust towards the sea (Huang et al., 2015), and the uplift of the Moho interface (Zhang et al., 2018b) make the GBA and its vicinity an area of high heat flow values (Jiang et al., 2019) and abundant hot springs (Wang, 2018).
Fig. 9 shows the spatial distribution of heat flow, hot springs, and fault slip rates in the GBA region. There is a significant high heat flow anomaly covering most of the western part of the study area, with heat flow values reaching up to 95 mW/m². Another high heat flow anomaly zone is located in the eastern part of the study area and extends in the NE-SW direction, nearly parallel to the dominant strike of the faults in this region. In addition, there is also a local high thermal anomaly (with heat flow as large as 80 mW/m²) situated at the southwestern end of this thermal anomaly zone, around the Shenzhen-Hong Kong area.
Hot springs are often taken as indicators of geothermal anomalies. When underground fluids are heated by deep heat sources (such as recently intruded dikes and plutons), the fluid density decreases, and the buoyant forces promote the upward movement of fluids along faults to the surface, thus forming hot springs. Therefore, the presence of hot springs indicates the presence of heat sources below, as well as the existence of fluid pathways and high permeability of the faults. There are at least 106 hot springs in the study area (Wang, 2018), primarily distributed in the southwestern coastal region and the eastern part of the study area, where there are also high heat flow anomaly zones (Fig. 9). However, the formation of hot springs is also influenced by specific hydrogeological conditions. For example, the presence of a lower water table in the geothermal system may prevent the formation of hot springs (Jolie et al., 2015). Therefore, areas without hot springs can also be strong geothermal anomaly zones (Zhong and Zhan, 1991).
In general, faults can increase permeability (e.g. Evans et al., 1997) and promote the generation of new fractures or keep pre-existing ones open, which is favorable for the formation of hydrothermal fields (Curewitz and Karson, 1997). As aforementioned, granitoids are widely distributed in the study area. Laboratory experiments suggest that the permeability of granite increases significantly at high slip rates, due to the microcracks and mesoscale fractures forming near the slip surface (Tanikawa et al., 2010). Jolie et al. (2019) measured soil gas emission data in a geothermal field in the East African Rift System and found that the increased values of soil gas efflux are mainly located in the area with the maximum displacement of faults, indicating that, at the regional scale in the field, higher fault slip rates can enhance the permeability of the fault and facilitate the migration of geothermal fluids. Furthermore, hydrothermal alteration processes often seal the fault with minerals precipitated from hot fluids in the crust, such as the giant quartz reef 40 km long and > 75 m wide formed in the core of the Heyuan fault (F13) (Tannock et al., 2020). The activity of the fault would break the solidified fault cores and allow the fluid to flow.
Geothermal projects installed worldwide are located in various geological settings and commonly require highly permeable faults (Moeck, 2014), not only because these faults can facilitate the development of a circulation system
by providing pathways for hot fluids to ascend from greater depth), but also because they can provide high flow rates for commercial usage. In Germany, for example, fault zones are the primary targets of all geothermal projects, no matter whether the geothermal reservoir is of aquifer type in the foreland basin (Munich area in the Molasse) (Cacace et al., 2013) or of crystalline basement type in the Upper Rhine graben (Frey et al., 2022). Our modeled results show that the highest slip rate of approximately 0.12 mm/a is observed along the northeastern segment of the Wuchuan-Sihui fault (F32). The Kaiping and Enping faults (F26, F28) as well as the middle segments of the Wuhua-Shenzhen fault (F10), Zijin-Boluo fault (F11), and Heyuan fault (F13) in the eastern study area also have relatively high slip rates ranging from 0.03 to 0.06 mm/a. These active fault segments are believed to have higher fracture densities due to their higher slip rates, and pronounced normal faulting stress regimes can also form in releasing fault bends or step-overs along them (e.g. Fan et al., 2022, Fig. S1). Therefore, the permeability of these segments would be high. Considering the high background heat flow values, we infer that these segments could be interesting prospects for future geothermal exploration. Additionally, in the Shenzhen-Hong Kong area, the NW-SE compression of the background stress regime will increase the dilation tendency of the NW-trending fault (F16) (see Fig. 8c), which extensionally intersects the ENE-trending fault (F10) and further increases the permeability there. The expected elevated permeability and high background heat flow values in the intersection area suggest that geothermal exploration in that area could also be prospective. Comparison with existing geothermal systems Detailed geothermal investigations found that many hot springs or hot water wells are located along the middle segment of the Heyuan fault (F13) (Tannock et al., 2020), where the fault slip rate is relatively high. The Huangshadong geothermal field, where a 3000 m borehole produces geothermal water of 118 °C at a rate of 137 m³/h (Fan et al., 2022), is also located near the middle segment of the Zijin-Boluo fault (F11), which has a relatively high slip rate. In the Tangkeng geothermal field, where the first Chinese geothermal power station was established, hot springs and geothermal wells are mainly distributed along the NW-trending faults that cross-cut the NE-trending faults (Luo et al., 2022), which means that the intersections of the faults are favorable for hosting promising geothermal resources. Fault stress can also provide valuable clues to potential geothermal resources. We extracted the stress from the 3D geomechanical model and calculated the slip and dilation tendency on the faults (Fig. S4); their distribution pattern is similar to that of the slip rate, indicating that the potential geothermal resources delineated from the stress analysis are consistent with those based on the slip rate analysis.
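The slip and dilation tendency values referred to above can be computed directly from a stress tensor and a fault-plane orientation: slip tendency is the ratio of resolved shear to normal stress on the plane, and dilation tendency measures how close the normal stress is to the minimum principal stress. Below is a minimal NumPy sketch with purely illustrative stress magnitudes and fault orientations; none of these numbers are taken from the paper's geomechanical model.

```python
import numpy as np

def fault_normal(strike_deg, dip_deg):
    """Unit normal of a fault plane (x = east, y = north, z = up)."""
    s, d = np.radians(strike_deg), np.radians(dip_deg)
    return np.array([np.cos(s) * np.sin(d), -np.sin(s) * np.sin(d), np.cos(d)])

def slip_dilation_tendency(stress, strike_deg, dip_deg):
    """Slip tendency Ts = tau / sigma_n and dilation tendency Td = (s1 - sigma_n) / (s1 - s3)."""
    n = fault_normal(strike_deg, dip_deg)
    t = stress @ n                                 # traction vector on the plane
    sigma_n = float(n @ t)                         # normal stress (compression positive)
    tau = float(np.linalg.norm(t - sigma_n * n))   # shear stress magnitude
    evals = np.linalg.eigvalsh(stress)
    s1, s3 = evals[-1], evals[0]
    return tau / sigma_n, (s1 - sigma_n) / (s1 - s3)

# Hypothetical strike-slip regime with NW-SE compression (sigma1 azimuth ~135 deg).
az = np.radians(135.0)
e1 = np.array([np.sin(az), np.cos(az), 0.0])       # sigma1 direction (horizontal)
e3 = np.array([np.cos(az), -np.sin(az), 0.0])      # sigma3 direction (horizontal)
ez = np.array([0.0, 0.0, 1.0])                     # vertical (sigma2)
s1, s2, s3 = 60.0, 40.0, 25.0                      # illustrative magnitudes in MPa
stress = s1 * np.outer(e1, e1) + s2 * np.outer(ez, ez) + s3 * np.outer(e3, e3)

for strike in (20, 60, 120, 160):
    ts, td = slip_dilation_tendency(stress, strike, 80.0)
    print(f"strike {strike:3d} deg: slip tendency {ts:.2f}, dilation tendency {td:.2f}")
```

Faults whose dilation tendency approaches 1 are closest to the orientation of the minimum normal stress and, all else being equal, are the most likely to stay open to fluids, which is the reasoning applied to the NW-trending fault F16 above.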
Some limitations of the model primarily stem from uncertainties in the model-input parameters due to sparse subsurface or surface information. For example, the fault geometries employed in this study are highly simplified and some fault branches are omitted in the model. These simplifications are appropriate to obtain the first-order characteristics of slip rates on these faults, but more realistic and complex fault geometries and fault distributions would certainly improve the applicability and reliability of the model. Moreover, the absence of GPS measurements at the southern and southeastern model boundaries could potentially affect the reliability of the fault kinematics within the South China Sea. More high-quality records of the movement rate of the South China Sea are necessary to increase the reliability of the modeled kinematics of the faults in the sea area. Conclusions In this study, we establish a 3D geomechanical model of the Guangdong-Hong Kong-Macao Greater Bay Area. The model considers a complex 3D fault system, inhomogeneous rock properties, topography, initial stress as well as gravity, and provides the spatially continuous contemporary kinematics of the faults and the crust for the study area. The average velocity of the ESE-directed movement of the crust in the GBA is ~7.2 mm/a. The northern part of the study area has a higher velocity of ~7.5 mm/a, gradually decreasing to ~7.0 mm/a towards the south, indicating a low velocity gradient. Slip rates on the faults are generally low. The NE-trending faults slip at the highest rates of 0.05-0.12 mm/a in the study area, generally higher than the rates on the ENE-trending faults (<0.04 mm/a). The NW-trending faults have the lowest modeled fault slip rates (<0.02 mm/a). Regarding the fault slip sense, our modeling results show that the NE-trending faults located in the western part of the study area display sinistral motion, while the ENE-trending faults in the eastern part always undergo dextral slip. It is inferred that the opposite slip senses on these two fault groups are caused by changes in the orientation of the regional compressional stress axis. From the perspective of geothermal resource exploration, our results indicate that the fault segments that have relatively high slip rates, e.g., the northeastern segment of the Wuchuan-Sihui fault (F32), the Kaiping and Enping faults (F26, F28) as well as the middle segments of the Wuhua-Shenzhen fault (F10), Zijin-Boluo fault (F11) and Heyuan fault (F13), are in areas with relatively high background heat flow. The apparent strike-slip component of the deformation will promote local structures, such as releasing step-overs or fault bends along the NE- and ENE-trending faults, to form pull-apart basins. Normal faulting stress regimes and high fracture densities inferred to develop in these local areas would facilitate the flow of fluids and host a geothermal system. Additionally, the intersections of the NW-trending extensional faults and the NE-/ENE-trending faults could also be potential areas of interest for future geothermal exploration. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Fig. 3. Geometry of the 3D geomechanical numerical model and the implemented 3D fault system. The white arrows and circles indicate the displacement boundary condition applied to the model.
Fig. 4. Variability of the misfit between the modeled crustal velocities and GPS measurements with the effective friction coefficient. For details of the method used to calculate the misfit, see Hergert and Heidbach (2010). The value of 0.02 is selected as the best friction coefficient. Note that GPS measurements within ~5 km of faults are excluded from the misfit calculation due to the strong perturbations in the proximity of faults. Fig. 6. Displacement boundary conditions applied to the model and comparison between the GPS observations and modeled horizontal velocities. The black arrows represent the horizontal boundary movements. The red and blue arrows represent the modeled velocities at the surface and the GPS observations (relative to stable Eurasia), respectively. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.) Fig. 7. Modeled horizontal fault slip rates (color-coded) and fault slip senses (black arrows). The background contours and the corresponding labels denote the magnitude of the crustal surface velocities (in mm/a) relative to stable Eurasia. The dashed circles represent the locations shown in Fig. 8. Fig. 8. The preferred slip senses for different orientations of scenario faults in (a) the western and (b) the eastern part of the study area (detailed locations are outlined in Fig. 7). (c) Scheme interpreting the opposite slip senses on the NE- and ENE-trending faults, which are inferred to be caused by the lateral variations of the crustal stress. Fig. 9. Spatial distribution of heat flow, hot springs and the modeled fault slip rates in the study area. Background color contours denote the heat flow, which is interpolated from the discrete data of the IHFC and Tang et al. (2014) (see Fig. 1). White dashed ellipses represent the sites of prospective geothermal systems inferred by this study. Table 1. Geometry of the faults implemented in the model. Table 2. Elastic parameters and densities in the 3D geomechanical model.
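The friction-coefficient calibration sketched in the Fig. 4 caption amounts to an RMS comparison between modeled and observed horizontal velocities, skipping GPS sites within about 5 km of a model fault. The sketch below uses synthetic placeholder arrays in place of the actual model output and observations, so the numbers are meaningless; it only illustrates the form of the calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
v_gps = rng.normal([7.2, -1.0], 0.3, size=(40, 2))   # synthetic observed E/N velocities, mm/a
dist_km = rng.uniform(0.0, 50.0, size=40)            # synthetic distance of each site to a fault

def velocity_misfit(v_model, v_obs, dist_to_fault_km, exclusion_km=5.0):
    """RMS misfit of horizontal velocities, skipping sites within `exclusion_km` of a fault."""
    keep = dist_to_fault_km > exclusion_km
    residuals = np.linalg.norm(v_model[keep] - v_obs[keep], axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# In the real workflow every candidate friction coefficient requires a new model run;
# here the "modeled" velocities are just perturbed copies of the synthetic observations.
for mu in (0.0, 0.02, 0.1, 0.4):
    v_model = v_gps + rng.normal(0.0, 0.1 + mu, size=v_gps.shape)
    print(f"mu = {mu:4.2f}: misfit = {velocity_misfit(v_model, v_gps, dist_km):.3f} mm/a")
```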
9,972.2
2024-02-01T00:00:00.000
[ "Environmental Science", "Geology" ]
Smartphone-based Recognition of Human Activities using Shallow Machine Learning Human action recognition (HAR) attempts to classify the activities of individuals and their environment from a collection of observations. HAR research is focused on many applications, such as video surveillance, healthcare and human-computer interaction. Several problems can deteriorate the performance of human recognition systems. The first is the development of a lightweight and reliable smartphone system that can classify human activities and reduce labelling effort and time; the second is that the derived features must generalise over multiple variations to address the challenges of action detection, including individual appearances, viewpoints and histories. In addition, those features should guarantee a relevant classification. In this paper, a model is proposed to reliably detect the type of physical activity conducted by the user using the phone's sensors. This includes a review of the existing research solutions, how they can be strengthened, and a new approach to solve the problem. Stochastic Gradient Descent (SGD) reduces the computational burden, achieving faster iterations at the cost of a lower convergence rate. SGD leads to a J48 performance enhancement. Furthermore, a human activity recognition dataset based on smartphone sensors is used to validate the proposed solution. The findings showed that the proposed model was superior. Keywords—Data preprocessing; data mining; classification; genetic programming; Naïve Bayes; decision tree I. INTRODUCTION The aim of human action recognition (HAR) is to recognize activities extracted from a number of observations concerning the behavior and environmental conditions of subjects. HAR research covers many applications, including video monitoring, healthcare and human-computer interaction. HAR uses sensors influenced by human movement to classify the activity of an individual. Smartphone sensors are increasingly available because users carry their smartphones with them. HAR seeks to identify activities arising from a variety of observations concerning the behavior and environmental conditions of subjects. If sensors help patients record and track their condition at all times and automatically report when abnormal behavior is detected, a huge quantity of resources can be saved. The research also benefits other applications, including human surveys and position prediction. Many experiments have successfully established wearable sensors with a low error rate, but most of this work is conducted in labs with very controlled settings. Readings from many body sensors can achieve a low error rate, but this cannot be achieved in realistic, complex environments [1]. The performance of a human action recognition system can be degraded by several challenges. One is that the extracted features need to generalize over many variations in order to address the challenges of action recognition, including individual appearances, viewpoints and histories. In addition, those features should guarantee a relevant classification. The creation of a lightweight, precise system on smartphones that can detect human activities and reduce labelling time and burden is another challenge. The main purpose of this paper is to reliably detect the type of physical activity that the user conducts using the phone sensors. This involves an analysis of existing solutions, finding ways to strengthen them and finding a new approach to the issue.
Furthermore, a human activity recognition dataset based on smartphone sensors is used to validate the proposed solution. Section 2 reviews related work on recent studies of methods and applications for human action detection. Section 3 describes the basic methodologies and principles. Section 4 addresses the proposed shallow-learning method for human behavior recognition. Section 5 presents the experimental findings. Section 6 concludes with the results of the proposed scheme. II. RELATED WORK Anguita et al. [2] introduced a system that uses inertial smartphone sensors to recognise human physical activity (AR). Since the energy and computing power of these mobile phones are limited, they suggest a new hardware-friendly method for the multi-class classification problem. This approach adapts the standard Support Vector Machine (SVM) and uses fixed-point arithmetic to reduce computational costs. Tran and Phan [3] designed and built a smartphone framework for recognizing human activities through the use of integrated sensors. Six activities are selected for recognition: standing, walking upstairs, walking, sitting, walking downstairs, and lying down. This method uses a Support Vector Machine (SVM) to classify and identify the activity. Data obtained from the sensors is analyzed to build the classification model (the model file). The classification models are optimized to generate the best results for the described human activity. For human behavior recognition, Gusain et al. [1] evaluated gradient boosted machines (GBM). The proposed solution uses an ensemble of SVMs to incorporate incremental learning: after the first batch of data has been trained, the model is retrained on each new batch, correctly classified instances are removed, and the misclassified ones are kept for further training. Zdravevski et al. [4] proposed a generic feature engineering approach to pick robust characteristics from a variety of sensors that can be used to generate accurate classification models. A number of time and frequency domain features are extracted from the initially recorded time series and some newly created time series (i.e., fast Fourier transformation series, first derivatives, magnitudes and delta series). Also, the number of generated features is substantially reduced with a two-phase feature selection. Finally, various classification models are trained and tested on a separate test collection. Hassan et al. [5] proposed an inertial smartphone sensor method for the detection of human behavior. First, productive features are extracted from the raw data. The characteristics include autoregressive coefficients, mean, median, etc. Linear Discriminant Analysis (LDA) and kernel principal component analysis (KPCA) further process the features to make them robust. The features are finally used to train a Deep Belief Network (DBN) for effective identification of the behavior. Xu et al. [6] proposed InnoHAR, a deep learning model based on a convolutional neural network and a recurrent neural network. The model takes multi-channel sensor waveform data end-to-end. Multi-dimensional features are extracted in the initial modules using convolution layers with different kernels. In conjunction with a GRU, time-series characteristics are modelled and the data characteristics are fully exploited to complete the classification task.
Inertial smartphone accelerometer architecture design for HAR has been developed by Wan et al. [7]. During traditional everyday activities, the smartphone gathers the sensory data sequence, extracts efficient features from the original data, and uses the three-axis accelerometer to acquire the physical behavioral data of the user. The data are preprocessed to extract useful feature vectors by denoising, normalizing and segmenting. A real-time human behavior classification method based on a convolutional neural network (CNN) is also suggested, using the CNN for the extraction of local features. Table I summarizes these approaches. Further optimization is possible in the structure of the four neural network models used in the experiments, and further comparative studies can be carried out. A. Shallow Learning Machine learning is seen as a form of artificial intelligence (AI) that gives machines the ability to learn without being explicitly programmed, and shallow learning [8] is regarded as classical machine learning. It has evolved from the theory of machine learning and pattern recognition. The two key categories of learning are typically unsupervised and supervised. For supervised learning, the training set comprises samples of input vectors and matching target vectors. No labels are required for the training set in unsupervised learning. The goal of supervised learning is to predict an adequate output vector for each input vector. Classification tasks are those where the target label belongs to a discrete, finite set of classes. The goal of unsupervised learning is more difficult to describe; discovering groups of related samples within the input data, known as clustering, is a primary objective. B. Genetic Programming (GP) Genetic programming (GP) is a technique of evolutionary computing (EC) that solves problems automatically without directly telling the machine how to do so [9]. At the most abstract level, GP is a domain-independent, systematic way to get computers to solve problems automatically starting from a high-level statement of the problem. GP is a special evolutionary algorithm (EA) in which the individuals of the population are computer programs. GP thus transforms populations of programs from generation to generation, as shown in Fig. 1 [10]. Any computer program, with its ordered branches, can be graphically displayed as a rooted labeled tree. Genetic programming is an enhancement of the conventional genetic algorithm in which every individual in the population is a computer program. The genetic programming search space is the space of all possible computer programs composed of functions and terminals appropriate to the problem domain. The functions may include standard arithmetic operations, logical functions, standard programming operations, standard mathematical functions, or domain-specific functions. Five preparatory steps are required [11]: 1) the terminal set, 2) the set of primitive functions, 3) the fitness measure, 4) the run control parameters, and 5) the result designation and the termination criterion of the run. The first preparatory step for genetic programming is to identify the terminal set. The terminals can be seen as the inputs to the computer program being discovered. Terminals are the ingredients from which genetic programming tries to construct a computer program that solves, or approximately solves, the problem.
The second step in preparing to use genetic programming is to identify the set of functions used to produce the mathematical expression that fits the given finite data sample. Any computer program is composed of the functions of the function set F and the terminals of the terminal set T. Each function in the function set should be able to accept, as its arguments, any value and data type that may be returned by any function in the set and any value and data type that may be assumed by any terminal in the set. That is, the selected function set and terminal set should be closed. These first two steps correspond to the step of specifying the representation scheme for the conventional genetic algorithm. The remaining three preparatory steps of genetic programming correspond to the last preparatory steps of typical genetic algorithms. Genetic programming genetically breeds populations of hundreds, thousands or millions of computer programs. This breeding takes place using the Darwinian principle of survival and reproduction of the fittest and the genetic crossover operation applied to computer programs. This combination of Darwinian natural selection and genetic operations frequently results in a computer program that solves a given problem. Genetic programming begins with an initial population of randomly generated computer programs (generation 0) composed of functions and terminals appropriate to the problem domain. The creation of this initial random population is essentially a blind random search of the problem's search space, expressed as computer programs. Each computer program in the population is evaluated in terms of its fitness for the problem domain; the fitness calculation differs from problem to problem [12]. A program's fitness may be calculated from a combination of the number of correctly handled instances (i.e., true negatives and true positives) and the number of incorrectly handled instances (i.e., false positives and false negatives). Correlation can also be used as a fitness measure. On the other hand, the fitness of a particular computer program may be calculated using entropy, the satisfaction of a gap test, the satisfaction of a success test, or a combination of these. For several problems, a multi-objective fitness measure combining factors such as correctness, parsimony (smallness of the program), or efficiency (of execution) may be needed [12]. In general, each computer program in the population is evaluated over numerous different fitness cases, so that its fitness is measured over a number of representative situations, either in total or on average. These fitness cases may be a sampling of different values of the independent variables or a sampling of different initial conditions of a system. The fitness cases can be chosen at random or structured (e.g., over a regular grid or at regular intervals). Initial conditions are also commonly used as fitness cases (as in a control problem). In generation 0 of genetic programming, the computer programs almost always have very poor fitness. However, some individuals in the population will prove fitter than others. These differences in performance are then exploited. A new offspring population of computer programs is generated using the Darwinian principle of reproduction and survival of the fittest and the genetic crossover operation.
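As a toy illustration of the fitness evaluation and fitness-based selection just described (a deliberately simplified sketch: the individuals here are plain Python callables rather than evolved expression trees, and all values are made up), fitness can be measured as the fraction of fitness cases classified correctly and selection performed as a small tournament:

```python
import random

def fitness(program, cases):
    """Fraction of fitness cases (x, label) the program classifies correctly."""
    return sum(program(x) == label for x, label in cases) / len(cases)

def tournament_select(population, cases, k=3):
    """Darwinian selection: the fittest of k randomly drawn individuals survives."""
    contenders = random.sample(population, k)
    return max(contenders, key=lambda p: fitness(p, cases))

# Tiny illustrative run: programs are threshold rules on a single feature.
cases = [((0.2,), 0), ((0.4,), 0), ((0.7,), 1), ((0.9,), 1)]
population = [lambda x, t=t: int(x[0] > t) for t in (0.1, 0.3, 0.5, 0.8)]
parent = tournament_select(population, cases)
print("selected parent accuracy:", fitness(parent, cases))
```

In a full genetic programming run this selection step would feed the reproduction, crossover and mutation operations described next.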
The reproduction operation involves selecting, on the basis of fitness, a computer program from the existing population and copying it into the new population [13]. The crossover operation is used to create new offspring computer programs from two parental programs selected on the basis of fitness. In genetic programming, the parental programs are typically of different sizes and shapes. The offspring programs are composed of sub-expressions (building blocks, sub-programs, subtrees, subroutines) of their parents. These offspring programs, in contrast to their parents, typically have different sizes and shapes. The mutation operation can also be used in genetic programming. The population of offspring (i.e., the new generation) replaces the old population after the genetic operations have been applied to the current population (i.e., the old generation). Each program in the new population is then assessed for fitness, and the process is repeated over many generations. At every point, the state of this highly parallel, locally controlled, decentralized process consists only of the current population of individuals. The force driving this process consists only of the observed fitness of the individuals in the current population. As can be seen from this algorithm, populations of programs are generated that tend to display increasing average fitness in their environment over successive generations. Furthermore, these program populations are able to adapt quickly and efficiently to changes in the environment. The best individual in any run is usually designated as the result of the run of genetic programming [13]. The products of genetic programming are inherently hierarchical. Sometimes, the results of genetic programming are default hierarchies, prioritized hierarchies of tasks, or hierarchies in which one action subsumes or suppresses another. Another fundamental characteristic of genetic programming is the dynamic variability of the computer programs that are developed along the way to a solution [10]. It is often difficult and unnatural to try to specify or restrict in advance the size and shape of the eventual solution. Moreover, specifying or restricting the size and shape of the solution in advance narrows the window through which the system views the world and may preclude finding the solution to the problem at all. Another important aspect of genetic programming is the absence, or relatively minor role, of preprocessing of inputs and postprocessing of outputs. Usually, the inputs, intermediate results and outputs are expressed directly in the natural terminology of the problem domain. The programs evolved by genetic programming consist of functions that are natural to the problem domain. If required, a wrapper (output interface) performs the post-processing of a program's output. Finally, another key feature of genetic programming is that its structures are active [11]. They are not passive encodings of the solution (i.e., chromosomes); rather, the structures of genetic programming are active structures that can be executed in their current form. C. Decision Tree Classification in decision trees [14] is based on classifying a sample through a sequence of decisions. In a decision tree, the current decision helps to make a subsequent decision, creating a sequence that reflects the structure of the tree.
The structure includes two key types of attributes, which are used during the prediction process. The predicted attribute is described as the dependent variable because its value depends on the values of the other attributes. The other attributes, which help to forecast the value of the dependent variable, are known as the independent variables of the dataset. In the case of classification, each leaf node represents one decision or category; the root node, for instance, can itself become a leaf node. Each internal node tests an attribute of the instances, and each branch corresponds to a possible value of that attribute. The decision tree is a model that determines the value of the dependent variable(s) based on the values of various attributes of the data available in a new case. In a decision tree, the inner nodes indicate the various attributes, the branches between the nodes reflect the possible values of these attributes in the observed samples, whereas the terminal nodes represent the classification of the dependent variable or its final value. After the related basic computation, the J48 [15] decision tree classifier is used. In order to classify a new item, J48 first needs to build a decision tree based on the attribute values of the available training data. So, whenever it encounters the training instances, it identifies the attribute that most clearly discriminates the various instances. This attribute, which is able to separate the instances best, provides the most information. Now, if there is a value of this attribute for which the data instances falling into its class all have the same value for the target variable, so that there is no ambiguity, that branch is terminated and assigned the target value that has been obtained. For the other cases, the search for another attribute with the highest information gain begins, and it continues until either a clear decision is reached about which combination of attributes gives a particular target value, or the attributes are exhausted. If the attributes are exhausted, or an unambiguous result cannot be obtained from the available information, the branch is assigned the target value held by the majority of the items under it [5]. See Table II, which compares different algorithms that can be used for building decision trees. D. Naïve Bayes Classifier One of the most popular simple machine learning classifiers is the probabilistic classifier. Because it uses a probability distribution over a set of classes, the classifier can predict a distribution for a sample instance instead of predicting only one class for the sample. Probabilistic classifiers provide a degree of certainty that can be useful when classifiers are combined into ensembles. Naïve Bayes, as its probabilistic designation suggests, is a straightforward algorithm. Naïve Bayes is a statistical classifier that calculates the likelihood that a given tuple belongs to a certain class based on Bayes' theorem [16]. Class-conditional independence characterizes Naïve Bayes, implying that the influence of an attribute value on a given class is independent of the other attributes. Naïve Bayes offers high accuracy, speed and many other advantages. In principle, Bayesian classifiers have the minimum error rate in comparison with all other classifiers [17]. Naïve Bayes works on a simple but very intuitive principle.
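To make that principle concrete, the classifier picks the class c that maximizes P(c) multiplied by the product of the per-feature likelihoods P(x_i | c), under the conditional-independence assumption just described. The following from-scratch sketch uses made-up Gaussian likelihoods and priors purely for illustration; it is not the trained model evaluated later in the paper.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Per-feature Gaussian likelihood P(x_i | c)."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def naive_bayes_predict(x, priors, means, variances):
    """Return the class maximizing P(c) * prod_i P(x_i | c)."""
    scores = {}
    for c in priors:
        scores[c] = priors[c] * np.prod(gaussian_pdf(x, means[c], variances[c]))
    return max(scores, key=scores.get), scores

# Toy two-class example with two hypothetical features
# (e.g. mean and standard deviation of an accelerometer window).
priors = {"WALKING": 0.5, "STANDING": 0.5}
means = {"WALKING": np.array([0.6, 0.9]), "STANDING": np.array([0.1, 0.05])}
variances = {"WALKING": np.array([0.05, 0.10]), "STANDING": np.array([0.02, 0.01])}
label, scores = naive_bayes_predict(np.array([0.55, 0.80]), priors, means, variances)
print(label, scores)
```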
In certain instances, Naïve Bayes beats several comparatively complex algorithms by using the variables in the data sample and treating them separately and independently of each other. Naïve Bayes classification is based on Bayes' conditional probability rule. It starts from the assumption that all attributes in the data are equally relevant, independent of one another, and evaluated individually. It works under the hypothesis that each feature acts independently of the others in the study. When Naïve Bayes (NB) is used to model intrusion attacks, for example, the model offers a response to questions such as "What is the likelihood of a certain type of attack, given certain system events?" The question, in turn, is rephrased in terms of conditional probability. A directed acyclic graph (DAG) charts the structure of an NB model. Each node represents one of the system variables, and each link encodes the effect of one node on another; thus A directly influences B when there is a link from node A to node B [6]. 1) Problem formulation: Human activity recognition (HAR) is a pattern recognition problem. Since preprocessing, feature engineering and label assignment are the key recognition processes, HAR applies this approach in all the sub-processes mentioned. In order to identify an action and assign it a label, it is important to preprocess the input data and evaluate it in order to detect whether there are abnormal values, etc. The data acquired from the smartphone sensors represents the characteristics and features of human activity. The final step before classification, however, includes feature analysis and engineering. Each method is addressed in more detail in the following sections. 2) Shallow human action recognition system: The proposed framework is designed and built in two phases: the first phase consists of pre-processing the acquired data values, feature engineering and analysis, while the second phase applies shallow learning algorithms for classification based on a decision tree, Naïve Bayes, and genetic programming. This paper proposes a comparative shallow-learning approach to human action recognition based on smartphones, taking full advantage of the strengths of these algorithms. Fig. 2 demonstrates the abstract architecture of our proposed recognition model. The stream data is pre-processed into instances, and each instance is fed into the classifier. The shallow learner infers the corresponding action of the input tuple. We have adopted a decision tree architecture to automatically learn efficient and robust action features from the training instances. Stage (1): Pre-processing The data is first prepared so that the class name is the attribute at the last index of the dataset. In order to reduce the difficulty of assessing the performance of the proposed model, the data is divided by this module into two key parts. The dataset had already been prepared and made available online. For ease of study, the activity labels are then changed to the class labels 1, 2, 3, 4, 5 or 6 for WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING or LAYING, respectively. During processing, the datasets were passed to feature analysis using correlation-based feature selection with the best-first search evaluator to analyze the correlated and uncorrelated features. 1) Dataset splitter: This module divides the dataset obtained from the preprocessing module into two parts.
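As a rough illustration of this splitting step and of the kind of classifier comparison reported later, the sketch below uses scikit-learn stand-ins (a CART-style decision tree in place of J48 and Gaussian Naïve Bayes) rather than the WEKA/Java implementations actually used in this paper; the file paths assume the public UCI HAR dataset layout with its pre-extracted feature vectors.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier   # C4.5-like stand-in for J48
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X = np.loadtxt("UCI HAR Dataset/train/X_train.txt")
y = np.loadtxt("UCI HAR Dataset/train/y_train.txt")   # labels 1-6 (WALKING ... LAYING)

# Dataset splitter: hold out 30% of the instances for testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("naive Bayes", GaussianNB())]:
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    cv = cross_val_score(model, X_tr, y_tr, cv=10).mean()   # k-fold check on the training part
    print(f"{name}: holdout accuracy {acc:.3f}, 10-fold CV {cv:.3f}")
```

The k-fold evaluation used in this sketch is the cross-fold validation technique described next.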
The cross-fold validation technique [18] randomly partitions the data into k independent folds; one fold is used as the test set while the remaining folds form the training set. The training set is used to train the proposed system, while checking and validating the accuracy of the trained model is carried out on the test set. 2) Learning phase: In the proposed system, the learning phase starts the learning process of the classifier using the current instances from the training dataset. The results of this base classifier are considered to be the input data for the second stage. The updated classifier maintains its own rule set so that it is independently aware of the behavior of each human activity. Stochastic gradient descent (SGD) [19] is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be seen as a stochastic approximation of gradient descent optimization, because the true gradient (calculated from the whole dataset) is replaced by an estimate (calculated from a randomly selected subset of the data). This reduces the computational burden, achieving faster iterations in exchange for a lower convergence rate, particularly in high-dimensional optimization problems. Stochastic gradient descent is used here to optimize the parameters of J48. Stage (3): Action recognition stage The proposed framework uses the test dataset produced by the dataset splitter module during this stage. The test dataset is used to assess the model's output. Using the classification rules created during the learning phase, the results of this module feed the complete classifier performance assessment of the proposed model, which is discussed in detail in the experimental section. A. Runtime Environment The proposed recognition system has been designed and implemented using Java 8 on a computer system whose hardware comprises a 64-bit machine with an Intel Core i7 processor at 2.2 GHz, running Windows 10 Professional as the software platform. B. Data Set(s) To evaluate the proposed system, we have used the Human Activity Recognition dataset provided by Jorge L et al. [2]. It has been available online for public use in the UCI repository since 2013. It contains six types of actions: LAYING, SITTING, WALKING, STANDING, WALKING_DOWNSTAIRS and WALKING_UPSTAIRS. All subjects wore a smartphone (Samsung Galaxy S II) on the waist. This dataset version contains all the training and testing examples provided in the original data repository. The data collection was randomly divided into two groups, with 70% of the volunteers selected to produce the training data and 30% the test data. The database is composed of the recordings of 30 people performing activities of daily living (ADL) while carrying a waist-mounted smartphone with embedded inertial sensors. The experiments were performed with 30 volunteers in the age group of 19-48 years. Three-axial linear acceleration and three-axial angular velocity were captured at a constant rate of 50 Hz using the embedded accelerometer and gyroscope. The experiments were video-recorded so that the data could be labelled manually. The sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec with 50% overlap (128 readings per window). The sensor acceleration signal, which has gravitational and body motion components, was separated into body acceleration and gravity using a Butterworth low-pass filter.
The gravitational force is assumed to have only low-frequency components; therefore, a filter with a cutoff frequency of 0.3 Hz was used. From each window, a vector of features was obtained by calculating variables from the time and frequency domains. Table III presents a summary of this dataset. Precision is the most sensitive and interesting measure to be used when comparing the base classifiers and the proposed system in detail. VI. EXPERIMENTAL RESULTS AND DISCUSSION The results of the proposed model are provided in this section. The results provide a comparison of the three main classifiers applied within the proposed model. According to Table V, J48 achieved better accuracy than naïve Bayes and genetic programming. These results are displayed graphically in Fig. 3. J48 has the advantages of requiring no domain knowledge and no parameter setting, handling multidimensional data, and being simple and fast. Moreover, the proposed model was compared to the literature in terms of accuracy. The results confirm the superiority of the proposed model, as seen in Table VI and Fig. 5, in achieving the best recognition rate among the compared models. VII. CONCLUSION If sensors help patients record and monitor their condition at all times and automatically report when any abnormal activity is found, a massive amount of resources may be saved. This paper presented a model to reliably detect the physical activity the user conducts using the phone's sensors. The model utilized three different classifiers to test the recognition rate. J48 with stochastic gradient descent proved its superiority over naïve Bayes and genetic programming. Moreover, the model was compared to the literature and succeeded in achieving the highest accuracy, reaching 96.6%. In future work, the proposed model will be integrated with different optimization algorithms, such as the salp swarm optimization algorithm, the crow search algorithm or the grey wolf optimization algorithm, to improve the recognition rate.
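For reference, the raw-signal preprocessing summarized above (separating gravity from body acceleration with a Butterworth low-pass filter at about 0.3 Hz, and segmenting the 50 Hz signals into 2.56 s windows with 50% overlap) can be sketched with SciPy as follows. This is an illustrative reconstruction with a synthetic demo signal, not the preprocessing code used by the dataset authors.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50.0        # sampling rate (Hz)
WINDOW = 128     # 2.56 s windows
STEP = 64        # 50% overlap

def separate_gravity(acc, cutoff_hz=0.3, order=3):
    """Split a (n_samples, 3) acceleration signal into gravity and body components."""
    b, a = butter(order, cutoff_hz / (FS / 2.0), btype="low")
    gravity = filtfilt(b, a, acc, axis=0)
    return gravity, acc - gravity

def sliding_windows(signal, window=WINDOW, step=STEP):
    """Yield fixed-width, 50%-overlapping windows along the time axis."""
    for start in range(0, len(signal) - window + 1, step):
        yield signal[start:start + window]

# Synthetic demo signal: gravity on the z-axis plus a 2 Hz "walking" oscillation and noise.
t = np.arange(0.0, 30.0, 1.0 / FS)
acc = np.column_stack([np.sin(2 * np.pi * 2.0 * t),
                       np.zeros_like(t),
                       9.81 + 0.1 * np.random.randn(t.size)])
gravity, body = separate_gravity(acc)
windows = list(sliding_windows(body))
print(f"{len(windows)} windows of shape {windows[0].shape}")
```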
6,381.2
2021-01-01T00:00:00.000
[ "Computer Science" ]
The Impact of Persuasive Language on Ideology Perceived in Translated Children's Literature: A Case Study This study was conducted to examine the impact of persuasive language on ideology perceived by children while reading translated children's books. To do so, the author studied the ideological manipulations made in children's literature translation (ChLT) through analyzing the two abridged Persian translations of Mark Twain's The Adventures of Tom Sawyer. The researcher was also looking for major strategies used by the translators to reinforce the ideological attitudes of the recipients. In this regard, a model of critical discourse analysis (CDA), that of Fairclough, was used to analyze the translations in terms of their vocabulary. The model introduces three values of words, namely, experiential, relational, and expressive. Making use of the expressive value, the researcher found that the translators of the novel tried to fit the translated novel into the Iranian culture, using modification, addition, and deletion strategies. Those strategies were determining factors of the ideologies perceived by the young readers. Introduction These days, there is a tendency to translate adult books, especially classics, for children and adolescents. In this respect, many classic books, such as Defoe's Robinson Crusoe, Mark Twain's The Adventures of Tom Sawyer, and Charles Dickens's Oliver Twist, have also been abridged when translated for children. The original books are abridged for a variety of reasons that are related to didactic, ideological, cultural, sociological, psychological, and cognitive aspects of children's literature (ChL). In addition, many classic books that were originally intended for adults are now commonly thought of as works for children. Mark Twain's The Adventures of Tom Sawyer was originally intended for an adult audience. However, today it is widely read as a part of children's school curriculum in the United States. Recently, it has also been translated into Persian for children. Some researchers describe translating ChL as a means of cross-cultural communication involving the cultures of both children and adults. This is mainly because adults communicate with children through literature (Oittinen, 2000). On the other hand, children are introduced to literature read by people of their age in other countries and are exposed to domains of other lives and cultures through which they begin to understand and accept each other as being unique and having different literary and cultural experiences (Vandergrift, 1997). However, translating ChL might cause challenges due to different morals, ideologies and customs of two distinct cultures. As some scholars point out: "Behind every one of the translator's selections, such as what to add, what to leave out, which words to choose and how to place them, there is a voluntary act that reveals his history and the sociopolitical milieu that surrounds him; in other words, his own culture and ideology" (p. 5). Therefore, translation is influenced by translators' own cultural values and ideology, which cause them to manipulate the source text. The matter of ideology in Translation Studies is closely related to translation norms, translation strategies, and the belief that the translators' decision-making is norm-governed. Ideology is a principal issue in the field of ChLT, and is closely related to censorship and manipulation (Xeni, 2007). 
On censorship and manipulation in ChLT, Shavit (1986) argues that "censorship often justifies on pedagogical grounds or resulting from children's assumed incapability of understanding" (p. 112). She explains that because of the peripheral position of ChLT in the literary system, the translator is authorized to "manipulate the text in various ways by changing, enlarging or abridging or by deleting or adding to it" (p. 112). Moreover, according to Stephens (1992), every book has an implicit ideology, which is usually expressed through beliefs and values taken for granted in society. Accordingly, a translated book may also have an implicit ideology in the form of beliefs and values of the target society. In this regard, translators play a crucial role in representing the beliefs and values of the target people by deciding which lexical and grammatical items to use. In this respect, the author decided to examine the ideological manipulations made in two abridged Persian translations of Mark Twain's The Adventures of Tom Sawyer in order to find out if the translators could render the novel in a way preserving, even reinforcing, Iranian children's ideological attitudes. The author also explored the prevalent strategies used by the translators to reinforce recipients' ideological attitudes. Children's literature There are several slightly different definitions of what ChL is. Education Encyclopedia (2008) defines ChL as: Any literature that is enjoyed by children. More specifically, ChL comprises those books written and published for young people who are not yet interested in adult literature or who may not possess the reading skills or developmental understandings necessary for its perusal. Oittinen (2000) defines ChL as "the literature read silently by children and aloud to them" (p. 6). She (2000) adds that ChL can be seen both as literature produced for children and as literature read by children. The broadest definition of children's literature applies to books that are actually selected and read by children. Children select many books, such as comics or literary classics. There are various age categorizations with different numbers of subdivisions for ChL. ChL is an age category opposed to adult literature, but it is subdivided further due to the diverse interests of children age 0-18, including picture books for ages 0-5; early-reader books for children age 5-7; chapter books that are in turn composed of short chapter books for children ages 7-9 and longer chapter books for children ages 9-12; and young-adult fiction for children age 12-18 ("By age category," n.d.). There is not a clear-cut standard for specifying these divisions, as books near a borderline may be classified either in the previous or next subdivision. 2.1.1 The Role and the Position of Children's literature Although ChL plays multiple roles as an educational, social, and ideological instrument, it is generally thought of as a peripheral and uninteresting area of study. Besides providing entertainment and helping to develop children's reading skills, it also provides world knowledge, ideas, values, and accepted behavior to children. Therefore, while translating children's books, various adjustments may be required in order to accomplish the notions of what is good and appropriate for children and also what is regarded as the suitable level of comprehensibility in a given target culture. Writers can help create and maintain beliefs, values and relations of power by the language they use, such as the lexical and syntactic choices.
Syntactic structures, such as nominalization, passivisation, and theme-rheme structures, can reflect a worldview through creating a particular perspective on the events, which the writer does not necessarily intend to do . In this respect, the language of children's literary and nonliterary texts is a very powerful socializing instrument, as Halliday (1978) emphasizes that children receive information about customs, hierarchies and attitudes through language. Therefore, the language of literature can promote and reinforce the adoption of these customs and so forth. Polysystem theory (Even-Zohar, 1990) believes that translated literature is usually in a peripheral position and thus is loyal to norms and models that have been already established in the literary system. As children's literature tends to be peripheral, translations in this area might most adhere to conventions and, consequently, to the target norms. Even-Zohar (1990) investigates how literature and translation act in specific contexts or systems. The polysystem theory covers all major and minor literary systems within a society. Every literary polysystem consists of a number of subsystems that are hierarchically arranged. The closer to periphery a subsystem is, the lower its cultural status within the entire system. A subsystem in the centre represents a significant part of a country's literature, and a subsystem in the periphery represents a less influential part. ChL constitutes a part of the literary system. It usually maintains a peripheral position in the literary polysystem, with low cultural status. Children's literature Translation ChLT is an almost new area within Translation Studies. Scholars became interested in ChLT when the demand for reading books of other parts of the world, especially of neighbors, began (Hunt & Bannister Ray, 2004). The demand led to an approach toward ChLT in which the focus was on the child reader. What has caused the promotion of interest and increase in number of publications in ChLT? In this regard, Tabbert (2002) introduced several factors as follows: · The belief that the books translated for children build bridges between different cultures. · The text-specific challenges posed by children's books to translators, such as the interaction between picture and words in picture books, playful use of language, dialect, register, names, cultural references, and so forth. · The polysystem theory that puts ChL in a peripheral position of the literary system. · The age-specific addressees of ChL, who are children with imperfect linguistic competence and limited world knowledge. ChLT plays some general roles known as didactic or pedagogical, cultural-sociological, psychological, cognitive, and academic aspects in order to perform its mission. The didactic or pedagogical aspect has attracted most attention of researchers and critics. Many people accept that literature, either original or translated, is the best way to stimulate the world and surroundings of children (Wells, 1986). ChLT has cultural-sociological objectives besides the didactic objective. According to Xeni (2007;2006e), through literature, cultural content is transmitted and the world is understood more efficiently. Translating children's books from other languages increases number of literary works for young people and spreads universality of human experience. 
ChLT can be looked at as an international phenomenon because it can go all over the world and cross the linguistic and cultural borders, and it can bring a new life to world literature by establishing universal communications (Bassnett, 1993;O'Sullivan, 2005). In respect to the cognitive aspect of ChLT, children and young adults understand new information easier when it is told through stories (Wells, 1986). When children become interested in reading translated books from other cultures, they try to understand the meaning of those books by activating their cognitive skills like thinking, analyzing, comparing, and so on. Therefore, the cognitive aspect is necessary to reach the cultural-sociological objectives. On the academic aspect, Fernandez Lopez (as cited in Lathey, 2006) believes that ChL and Translation Studies scientific disciplines also have made scholars pay attention to ChLT because new techniques and subjects have emerged from those disciplines. In the light of academic aspect, issues of ideology and translator's behavior came to Translation Studies. Some of the prominent theorists who have tried to theorize ChLT are Itamar Even-Zohar, Zohar Shavit, Hans Vermeer, and Katharina Reiss. The researcher will briefly discuss their theories related to ChLT. Based on the polysystem theory introduced by Even-Zohar, similar to ChL, ChLT has a marginal position in the literary system and has little influence. Following Even-Zohar, Shavit (1981) developed the polysystem theory and applied it to ChLT. Shavit (1981) explains that marginal position of ChL elucidates manipulations that are made on translated texts for children. For instance, changing text genre in order to adapt it to superior models in the target language may disobey the integrity and complexity of the text and ideological and stylistic adaptation. Vermeer and Reiss (as cited in Munday, 2006) are the two functionalists who developed the skopos (purpose) theory, which has significant applications to ChLT. Considering skopos as the most important criterion in any translation that determines the function of the translation in the target culture, these theorists place the young reader in the central position. So, in translating for children, the translator is free from the obligations to recreate the source text in the target culture the translator has a higher status (Thomson-Wohlgemuth, 1998). Ritta Oittinen, Emer O'Sullivan, and Tina Puurtinen are some important researchers in ChLT. These researchers investigate issues associated with translation norms and the function of translated ChL, readability of translations in the sense of translation for reading aloud, comparative ChL, child reader, and the child-text interaction (Fornalczyk, 2007). Ideology and Translation There are so many definitions for the notion of ideology. Mason (1994) suggests that ideology is a "set of beliefs and values which inform an individual's or institution's view of the world and assist their interpretation of events, facts, etc" (p. 25). Hatim (2000) defines ideology as "a body of ideas which reflects the beliefs and interests of an individual, a group of individuals, a societal institution, and so forth, and which ultimately finds expression in language" (p. 218). Therefore, it can be argued that ideology is a set of beliefs, ideas, interests, and attitudes that are accepted by an individual, a group of individuals, institutions, and so forth, and is sometimes realized as culture. 
Van Dijk (2005), who investigates in the fields of discourse analysis and CDA, argues, "Much of our discourse expresses ideologically based opinions" (p. 9). Thus, a comprehensive analysis of the discursive expressions provides fundamental guidelines about the prevalence of ideology in language. According to Hatim (2000), this analysis can be done at different levels including the grammatical level and the lexical level. Therefore, ideology plays an important role in decision-making process undertaken by translators. According to Schaffner (2003), translation is a matter of ideological choices besides the linguistic choices. A source text is selected according to the target readers' interests as well as their ideological, social, and cultural values. In this regard, the lexical and grammatical choices are exploited in order to show the ideological aspects of a group of people. Kress and Hodge (1993) discuss the relationship between language and ideology in a simple way. They argue that language is both an instrument of control and an instrument of communication. Linguistic forms can reflect significance of an issue or, otherwise, distort it. Therefore, readers and hearers can be either manipulated or informed. A political sense of language argues that language is ideological because it includes a systematic distortion in the service of class interest. As understood from Kress and Hodge (1993), they believe that ideological attitudes can be shown through lexical and grammatical choices, that is, linguistic forms. In this respect, translators can give significance to an attitude or belief, or otherwise, underestimate it. Therefore, through translation, the content of a source book may be exposed to several manipulations whose objective is usually to adapt the book to the target recipients' interests, beliefs, and values. Moreover, Kress and Hodge (1993) argue that language in its ideological sense relates to power and makes use of systematic distortions. As ideologies function in language in a form of power relations, translators must aware of these ideologies while translating one language to another language. Therefore, translators who translate texts from a culture that is far distinct from their own culture should be more careful about the ideologies of both the source language and the target language. Ideology and Critical Discourse Analysis Critical discourse analysis (CDA) is a rather new discipline in linguistics. Critical Linguistics and CDA were used interchangeably in the past. However, it seems that the latter has been recently preferred and has been applied to refer to the theory that had been introduced as the Critical Discourse (Fairclough & Wodak, 1997). CDA regards language as a social phenomenon. Individuals and social groups have meanings, values, and concepts that are defined in language in systematic ways. In CDA, texts are considered as the relevant units of language in communications; readers and listeners are not passive recipients in their relation to texts; there are similarities between the language of science and the language of institutions; and so on (Kress, 1989). Nevertheless, a clearer and more general approach to CDA can be found in the work of Fairclough and Wodak (1997). According to them, CDA regards language as a social practice and takes into account the context of language in use. 
Van Dijk, in his paper Multidisciplinary CDA: a plea for diversity (collected by Wodak & Meyer, 2005) argues that CDA "focuses on social problems, and especially on the role of discourse in the production and reproduction of power abuse or domination" (p. 96). As ideology is expressed by means of language, the ways in which ideological concepts are embedded in language are included in the domain of CDA (Sertkan, 2007). In this regard, Puurtinen (2000) explains, "CDA aims at revealing how ideology affects linguistic choices made by a text producer and how language can be used to maintain, reinforce or challenge ideologies" (P. 178). Based on the above quotation, the grammatical and lexical choices made by text producers, such as translators and authors, are not accidental. Moreover, these linguistic choices have an underlying ideology representing the ideological dispositions of the text producer. CDA conveys a framework within which these ideological dispositions and the ways of constructing and enhancing them are identified. In this regard, models of CDA including that of Fairclough help analysts to detect the linguistic expressions of ideology in a text. Calzada-Perez (2003) also believes that the primary aim of CDA is to reveal the ideological forces underlying communicative exchanges such as translating. In this respect, CDA can be used as a methodological tool in analysis of a text in order to uncover the ideological attitudes of the text producer. Objectives and Research Questions As Fairclough puts forth, the expressive value of words tends to persuade others to believe something, mostly an idea. Therefore, expressive values can be demonstrated through persuasive language in various ways, spoken or written. Translators are among the agents who may use the expressive words to enhance an idea and reorient a text toward specific ideologies. In this respect, the author aimed to answer the following questions: 1. Were the translators of The Adventures of Tom Sawyer successful in making use of expressive values of words in order to reorient the translations toward target readers' ideological attitudes, and to what extent? 2. Is there any prevalent strategy used by the translators of the novel in highlighting those ideological attitudes? Materials and Methods In this qualitative empirical study, the two abridged translations of Mark Twain's The Adventures of Tom Sawyer by Soleimani and Modarres Sadeghi, both published in 2009 in Tehran, Iran, were analyzed to find the strategies, if any, taken by the translators in making use of the persuasive language to reinforce the ideological attitudes of Iranian children. To do so, the source text and its two abridged translations were analyzed and compared through the following steps: · The source text and the two translations were read carefully. · Lexical items (words, phrases, and expressions) that sounded different were extracted from the translations and paired with the corresponding ones in the source text. In total, eight extracts were selected purposefully. This method could help identify the alterations to the source text, that is, additions, deletions, and modifications made by the translators. · The alterations that had been made in the translations of the same lexical items were compared together in order to find distinct strategies, if any, used by the translators. · The obtained results were analyzed using descriptive statistics in SPSS software. 
In this study, the author applied CDA as a methodological tool for the identification of the manipulations performed to highlight the ideological attitudes of the target readers and also to deal with probable ideological breakdowns. In this respect, Fairclough's method of CDA will be used. Fairclough (2001) in his book Language and Power devotes a chapter to describe one method of CDA. He states ten questions and their subquestions based on the three categories of vocabulary, grammar, and textual structure, which are important to critically analyzing any discourse. The category of vocabulary consists of four main questions about experiential values, relational values, and expressive values of words; and metaphors. In this analysis, the author focused on the expressive values of words. According to Fairclough (2001), expressive values imply how the creator of the text relates to the reality it is discussing (Fairclough, 2001, p. 93). As Fairclough defines it, expressive value is connected with "subjects and social identities" (2001, p. 93). He also points out that "the expressive value of words has always been a central concern for those interested in persuasive language" (2001, p. 99). Results of the Analysis In the case of the expressive values of words, eight extracts were identified in this study. In all the extracts, except one, the translators have made use of the persuasive language to show the magnitude and glory of God and that God is the only source of power one can believe in. Generally, the translators have highlighted the religious beliefs of the target people using strategies, including addition, modification, and deletion. It must be noted that any of these strategies may determine the ideologies perceived by the readers. For instance, addition of the word 'xodâ' meaning 'God' to a piece of translation highlighted the religious beliefs of the target community. In another case, the translator has deleted a word of the source text in order to persuade the readers that a harmful habit like smoking was an unpleasant action. Moreover, the author found out that each translator had treated the text differently, as the second translator has avoided translating most of the extracts. This has resulted to two texts with different form and length. The following examples extracted from different chapters of the novel show the comparison between the source text and the target texts in terms of expressive values. The abridged translations by Soleimani and Modarres Sadeghi are referred to as target text 1 (TT1) and target text 2 (TT2), respectively. The original extracts are referred to as the source text (ST). Example 1 This extract is from chapter one in which Tom eats jam stealthy. Aunt Polly understands it and wants to punish him. However, Tom escapes before she has time to do so. Aunt Polly utters the following statement. ST: Well-a-well, man that is born of woman is of few days and full of trouble, as the Scripture says, and I reckon it's so. TT2 : None As seen in the ST, the sentence in boldface does not mention a word meaning God although the word 'scripture' in the ST implies the God's words. However, the first translator has rendered the sentence 'as the scripture says' to ' ‫ھﻢ‬ ‫ﺧﺪا‬ ‫ﺧﻮد‬ ‫ﮔﻮﯾﺪ‬ ‫ﻣﯽ‬ ‫ﮐﺘﺎﺑﺶ‬ ‫'ﺗﻮی‬ (/xod-e xodâ ham tuy-e ketâb-aš miguyad/) meaning that 'God also says in his scripture.' The translator has added the word 'God' to emphasize that God is the source of all truths and beliefs. 
In other words, the translator wants to persuade the readers that the only source of power and belief is God. Example 2 This extract is from chapter six. Tom writes a sentence on his slate and hides it from Becky Thatcher. Becky wants to know what Tom has written on his slate. So, she promises not to tell anyone what he has written if he allows her to see the slate. ST: "You'll tell." In this extract, Becky promises Tom that she will never tell anybody what she has seen on Tom's slate. As indicated by the translation, the sentence 'No I won't--deed and deed and double deed won't' has been rendered to ' ‫دھﻢ.‬ ‫ﻣﯽ‬ ‫ﻗﻮل‬ ‫ﮔﻮﯾﻢ.‬ ‫ﻧﻤﯽ‬ ‫ﻧﮫ‬ ‫ﮔﻮﯾﻢ‬ ‫ﻧﻤﯽ‬ ‫ﺧﺪا‬ ‫'ﺑﮫ‬ (/na ne-miguyam/, /qowl midaham/, /be xodâ ne-miguyam/). The Persian sentence means 'I swear to God that I won't tell anybody.' According to the ST, Becky does not mention any word meaning 'God.' It was not necessary to add the sentence 'be xodâ ne-miguyam' to the translation because without this sentence, the content of the ST has been completely transferred into the TT1. However, the first translator has added the word ‫'ﺧﺪا'‬ (/xodâ/) meaning 'God' to show that God is the only power humans can swear to and rely on. In this respect, the translator has indirectly highlighted the strong belief of the target readers in God in all situations. Example 3 The below extract is from chapter nine. Muff Potter and Doctor Robinson grapple with each other. Injun Joe kills the doctor with Potter's knife and inculpates Potter for the murder. Now, Potter blames himself because he thinks that he has really killed the doctor. ST: "Oh, I didn't know what I was a-doing. I wish I may die this minute if I did. As seen in TT1, the first translator has added the word ‫'ﺧﺪا'‬ (/xodâ/) when translating the sentence 'I wish I may die this minute if I did.' The English sentence has been rendered to ‫ﺑﮕﻮﯾﻢ'‬ ‫دروغ‬ ‫ﺑﺨﻮاھﻢ‬ ‫اﮔﺮ‬ ‫ﺑﮑﺸﺪ‬ ‫ﻣﺮا‬ ‫'ﺧﺪا‬ (/xodâ marâ bekošad 'agar bexâham doruq beguyam/) meaning 'God kill me if I lied.' The ST has not mentioned any word in the sense of 'God,' whereas the translator has added it. By using the word 'God,' the translator wants to emphasize that the people of the target community believe in God in all aspects of life. In addition, he wants to establish the religious ideas in children, to make them realize how significant God is in the society they live in. The TT1 shows that God is the one who has the power to let people live or die, and this reminds the readers of the religious ideologies of their community. As shown by TT2, the second translator has not translated this sentence. Example 4 This example has been extracted from chapter thirteen. Tom and his friends Huck and Joe Harper decide to go to an uninhabited island and become pirates. Each boy chooses a nickname for himself. Red-Handed is the nickname of Huck. ST: The Red-Handed made no response, being better employed. He had finished gouging out a cob, and now he fitted a weed stem to it, loaded it with tobacco, and was pressing a coal to the charge and blowing a cloud of fragrant smoke--he was in the full bloom of luxurious contentment. The other pirates envied him this majestic vice, and secretly resolved to acquire it shortly. TT2 : None In this example, the phrase 'majestic vice' refers to a fault that is regarded as majestic. The fault here is smoking tobacco. Huck, one of the three boys on the uninhabited island, smokes tobacco, and the two other boys envy him and decide to try smoking tobacco betimes.
The ST suggests that smoking is a majestic action although it is a fault. However, the first translator has rendered the 'majestic vice' to ‫زﺷﺖ'‬ ‫'ﮐﺎر‬ (/kâr-e zešt/) meaning the 'obscene act.' As seen in the translation, the word 'majestic' has not been translated at all. The reason is that the translator wants to show young readers that smoking is considered an unpleasant act. The translator has deleted a word of the ST in order to persuade the readers that smoking is not a majestic action but, on the contrary, an unpleasant action. Example 5 The below extract is from chapter twenty. Becky Thatcher takes the teacher's book and turns its pages. However, when she wants to close the book, she tears the page she is looking at. Tom sees her as she causes the page to be torn. Therefore, Becky gets nervous and fears that Tom will tell the truth to the teacher. TT2 : None As seen in this extract, the interjection of exclamation 'oh' has been rendered to ‫ﺧﺪاﯾﺎ'‬ ‫'آخ‬ (/'âx xodâ-yâ/) literally meaning 'oh, God.' Obviously, the ST does not have any word in the sense of 'God,' whereas the first translator has added the word 'God' to his translation. In this respect, the translator shows that God is the only power that people can rely on in difficulties. This enhances the religious ideologies of the target community in the minds of young readers. Example 6 The below extract is from chapter twenty-six. Huck and Tom are in an abandoned house. They are watching two men coming into the house to take their hidden money. The men sleep for a while. Each of them is supposed to keep watch while the other sleeps. However, both of them fall asleep until a noise wakes one of them. In this example, the word of exclamation 'Here!' has been rendered to ‫ﷲ!'‬ ‫'ﺑﺎرک‬ (/bârek-allâh/) literally meaning 'God bless you!' Actually, the term 'bârek-allâh!' is an Arabic expression also used by Farsi speakers to praise someone. In this example, both the word 'Here!' in the sense of 'Hey!' and its translation 'bârek-allâh!' have been used ironically to mock the sleeping man for what a good watchman he is. There are some translations for the word 'Here!' including ‫ﺑﺒﯿﻦ!'‬ ‫'اﯾﻨﻮ‬ (/'ino bebin/) meaning 'Look at him!' and ‫'ھﯽ!'‬ (/hey/) meaning 'Hey!' that the translator could have used instead of 'bârek-allâh!' Anyway, the translator has used the word 'bârek-allâh!' although it does not mean 'Here!' The use of 'bârek-allâh!' in this context shows the strong religious beliefs of the target readers because it refers to the fact that God is the sole source of blessing. With respect to TT2, the word 'Here!' has not been translated. Example 7 This example is from chapter thirty-one. Tom and Becky go to a picnic with their classmates. The students go on a hike in a cave and get out of the cave when the picnic time is over. Now, they must return to the village. However, Tom and Becky get lost in the murky aisles of the cave. ST: "Well. But I hope we won't get lost. It would be so awful!" and the girl shuddered at the thought of the dreadful possibilities. /beki 'az fekr-e gom šodan tars bar-aš dâšt va be larze 'oftad/. In this example, the first translator has rendered the sentence 'I hope' to ‫ﮐﻨﺪ'‬ ‫'ﺧﺪا‬ (/xodâ konad/). The sentence 'xodâ konad' means 'May God help us,' which actually has a meaning equal to 'I hope.' The second translator has rendered the sentence 'I hope' to ‫'اﻣﯿﺪوارم'‬ (/'omidvâram/), which is the exact meaning of the English sentence.
Although the sentence ' 'omidvâram,' used by the second translator, implies trusting in a superior power, that is, God, it does not suggest the word 'God' explicitly. Both translators have rendered the sentence 'I hope' to its right sense. However, the first translator has chosen to translate the sentence in a way showing that God is the sole source of hope and reinforcing the religious beliefs in the young readers. Therefore, TT1 is found as an example of the expressive value that highlights the young readers' values and beliefs. Example 8 This example has been extracted from chapter thirty-three. Tom and Huck find a treasure in the cave. They carry it toward the village. However, on their way to the village, they come across an old man named Jones. The old man requests them to go to Widow Douglass's house. TT2 : None As indicated in this extract, the sentence 'Mr. Jones, we haven't been doing nothing' has been translated to ' ‫ﺟﻮﻧﺰ‬ ‫آﻗﺎی‬ ‫ﺑﮫ‬ ‫ﺧﺪا‬ ‫ﻧﮑﺮدﯾﻢ‬ ‫ﺑﺪی‬ ‫ﮐﺎر‬ ‫ﻣﺎ‬ ' (/'âqâ-y-e ǰownz be xodâ mâ kâr-e bad-i na-kardim/) meaning 'Mr. Jones, we swear to God we did not do anything wrong.' In TT1, the translator has added ‫ﺧﺪا'‬ ‫'ﺑﮫ‬ (/be xodâ/) in the sense of 'we swear to God,' which clearly does not exist in the ST. In this example, again, the translator represents God as the only source to which people oath and the only source people can believe in. Although the complete content of the ST can be transferred to the target language without adding the notion of 'swear to God,' the addition highlights the strong belief of the target community in God. Discussion According to the examples, the first translator has oriented the meaning of the source text toward the target culture and ideology in a positive way. The extent to which the first translator and the second translator were successful to do so is 100% and 0%, respectively. In this regard, the first translator has manipulated the source text in order to highlight the target culture and ideologies, especially religious ones. The notions toward which the manipulations have been oriented involve believing in God as the sole source of power, hope, trust, and security. The manipulations were done through additions, modifications, and deletions. In respect to the second question of this study, the most prevalent strategy used by the first translator to manipulate the text was addition that comprised 62.5% of the total strategies. The strategies of modification and deletion comprised 25% and 12.5%, respectively. As discussed earlier, the second translator has translated only two extracts without any orientation toward a specific ideology. The study by Sertkan (2007) found a result similar to the above result. Sertkan (2007) analyzed five versions of Oliver Twist and found that constant use of lexical items with religious connotation in translation preserves the values, ideas, and beliefs that are current in the readers' mind. In the present study, the translators have also manipulated the original text and emphasized the use of words like 'xodâ' meaning 'God' to highlight the target readers' religious ideologies. The study by Zeinab Hussein Taha Khwira (2010) also agrees with the results of the present study. The translators of Robinson Crusoe used different strategies such as deletion, addition, and modification in order to fit the original text into the Arab culture. They also revealed that the strategy of translation is a determining factor in the children's understanding of the ideologies and values embedded in the text. 
As mentioned earlier, the translators of The Adventures of Tom Sawyer have also used deletion, addition, and modification strategies in cases that reflected specific ideologies. The results of Minga's study (2005) on the appropriateness of an approach taking into consideration the ideological issues in translation of Ngugi's children's books for Francophone young readers showed that the translated texts read differently from one translator to another. The present study showed that the two translators have provided translations with different form and length. Conclusion Based on the results, the two translators differed in their approach toward the translation of the source text. In this regard, the first translator felt more freedom to manipulate the text in order to reorient the translation toward specific ideologies, mostly religious ones, of the target readers. Adding a notion to a piece of text is usually done to give more information, that is, to explicate an issue. By contrast, deleting a notion from the text means that the translator has decided to give less information or to make something implicit in the text. It can be concluded that the strategies used by translators can determine the ideology perceived by the readers, and these ideologies can be impressed on readers through the expressive values of words. In this respect, the act of translation is subject to translators' decision-making, which is in turn influenced by their own cultural values and their ideology, which causes them to manipulate the source text by making some additions, deletions, adaptations, and so on. In this regard, Shavit (1981) argues that translators of ChL have great liberty to manipulate the text due to the peripheral position of ChL in the polysystem. Therefore, the translator may change the text to enhance its moral or educational values, or to adapt it to the child's level of understanding. Considering that ChL tends to be peripheral, translations in this area might be most oriented toward conventions and consequently toward the target norms and ideologies. Children get information about attitudes, customs, and, generally, the cultures of other nations through literary and nonliterary texts. ChL provides world knowledge, ideas, and values besides entertaining and developing children's reading skills. In this regard, writers' use of lexical and grammatical choices contributes to the established values, beliefs, and ideologies. The translators were successful in highlighting the religious beliefs of the Iranians. It shows that the translators were fully aware of the Iranian cultural and ideological constraints governing ChL. It also reveals that the translators' decisions about what and how to translate can make a great impression on readers. Through translation, translators can highlight a belief in the target community and make it more influential by applying specific strategies like the strategy of addition, which has been used in the abridged translations of The Adventures of Tom Sawyer. Therefore, translators can not only reduce the negative effect of the ideologies and cultural attitudes of the source community but also reinforce some ideologies of the target community that have not been mentioned in the source text.
8,429.4
2013-10-15T00:00:00.000
[ "Linguistics", "Education" ]
Interfacial Polarization of Thin Alq3, Gaq3, and Erq3 Films on GaN(0001) This report presents results of research on the electronic structure of three interfaces composed of organic layers of Alq3, Gaq3, or Erq3 deposited on a GaN semiconductor. The formation of the interfaces and their characterization have been performed in situ under ultrahigh vacuum conditions. Thin layers have been vapor-deposited onto p-type GaN(0001) surfaces. Ultraviolet photoelectron spectroscopy (UPS) assisted by X-ray photoelectron spectroscopy (XPS) has been employed to construct the band energy diagrams of the substrate and interfaces. The highest occupied molecular orbitals (HOMOs) are found to be at 1.2, 1.7, and 2.2 eV for Alq3, Gaq3, and Erq3 layers, respectively. The Alq3 layer does not change the position of the vacuum level of the substrate, in contrast to the other layers, which lower it by 0.8 eV (Gaq3) and 1.3 eV (Erq3). Interface dipoles at the phase boundaries are found to be −0.2, −0.9, −1.2 eV, respectively, for Alq3, Gaq3, Erq3 layers on GaN(0001) surfaces. Introduction Gallium nitride (GaN) is a very attractive semiconductor for applications in optoelectronics and photovoltaics. It is also used for creating high-power and high-frequency devices [1,2]. GaN is one of the most commonly used materials for fabricating devices in the mentioned electronic areas. This is due to its good physicochemical properties, such as a direct and wide band gap, high thermal conductivity, and thermal stability. The III-nitride semiconductors also have potential for creating three-dimensional hybrid organic/inorganic electronic devices, such as organic light-emitting diodes (LEDs), organic field-effect transistors (OFETs), or biosensors [3,4]. The use of organic semiconductors from the Mq 3 chelate group (M-trivalent metal, q-8-hydroxyquinoline) as an active element of electronic devices has been known for years. It began with Tang's report, which showed the potential of Alq 3 [5]. In recent times, Mq 3 molecules have seemed to be strong candidates for use as light-emitting and/or electron-transporting materials in hybrid technology. This is because of the merging of the high charge carrier mobility and efficient charge injection of inorganic semiconductors with the strong light-matter coupling and large chemical composition diversity of organic semiconductors [6][7][8]. Mq 3 complexes are promising materials for sensor and biosensor applications; the main reason is their ability to interact with a wide range of analytes, such as p-nitroaniline, NO 2 , ethanol, and methanol [9]. Another relevant application is the incorporation of Mq 3 compounds to improve OLED device design [10]. Lately, Erq 3 was used to push the exciton production efficiency of NIR OLEDs beyond the theoretical limit of 100%, which can lead to light sources exceeding the intensity of the OLEDs produced with current technology [11]. The use of Gaq 3 has allowed for increased efficiency of solar cells [12] and OLEDs [13], and Alq 3 has been applied as an acceptor material in a UV photodetector [14]. With the advancement of technology making it possible to produce layers with better properties, there has been renewed interest in organic/inorganic hybrid structures in search of new functionalities in various fields of study. Organic materials are projected to enter the GaN-based hybrid device field [15].
The interfacial polarization of inorganic-organic heterojunction is important because it brings steep shifts in electronic band structure across interfaces and thus effectively tunes charge carrier transport. One of the possible ways to modify charge injection behavior in inorganic-organic heterojunction devices is to make use of interfacial polarization caused by the partial alignment of the permanent dipole moments of polar organic molecules [16,17]. Mq 3 molecules have a large electric dipole moment (~4 D) [18,19]. Mq 3 molecules have been widely studied for their potential applications, inter alia, in organic solar cells, light emission diodes, and data storage and communication devices [20][21][22][23]. Regarding this, the molecule/GaN systems are attractive for both industry and academic research. H. Kim et al. in work [24] proposed using an Alq 3 layer in GaN-based heterostructures. Apart from Alq 3 , the Gaq 3 , and Erq 3 appear to be new candidates for applying to such structures. An important issue in the context of such systems is the electronic structure of the interface, in particular the interfacial polarization or the position of the highest occupied molecular orbital (HOMO) level of molecules relative to the valence band maximum (VBM) of the substrate. So far, this information has been omitted in reports on Mq 3 films on GaN(0001) surface. The interfacial polarization has an impact on the band offset at the interface and this, in turn, has a bearing on the current-voltage characteristics of inorganic-organic devices. The tuning effect of central atom M in Mq 3 molecules on the band offset is of application importance. This report presents a basic study of Alq 3 , Gaq 3 , and Erq 3 layers on GaN(0001) surfaces. The research focused on the electronic properties of the resulting interfaces and was carried out using ultraviolet photoelectron spectroscopy (UPS) assisted by X-ray photoelectron spectroscopy (XPS). The main goal was to check the capability of used Mq 3 to tune the position of HOMO and vacuum levels for the systems formed with p-GaN(0001) surfaces. Materials and Methods In this experiment, gallium nitride p-type, (0001)-oriented, on which Mq 3 films were deposited, was used as a substrate. Mg dopant concentration was ∼1 × 10 18 cm −3 . The GaN(0001) samples around 5 × 10 mm 2 in size were cut from one wafer grown by metalorganic chemical vapor deposition. Initial bare surfaces with a trace of residual oxygen were achieved by degassing GaN samples mounted on Mo plates. The samples were thermally annealed up to 500 • C. A radiation heater in an ultrahigh vacuum (UHV) chamber with a base pressure lower than 1 × 10 −10 Torr was utilized. The temperature was monitored by a pyrometer. The three Mq 3 /GaN(0001) systems were grown in situ by evaporation of molecules from quartz crucibles heated with thermal radiation. The calibration of the sources was done by means of XPS [25][26][27][28]. The 1.5 nm attenuation length of electrons with a kinetic energy of ∼370 eV in organic layers was used to evaluate the film thicknesses and thus growth rates. The parameter was calculated based on NIST Standard Reference Database [29]. The films were deposited step by step up to 15 nm. Adsorbate dosages were established from the evaporation time after the source temperature had stabilized. A surface-analysis system (Specs) was employed for in situ characterization. The main technique used was UPS, and the second was XPS. 
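The thickness calibration mentioned above can be pictured with a simple attenuation estimate. The sketch below assumes Beer-Lambert-type damping of a substrate core-level signal at normal emission, I(d) = I0·exp(−d/λ), with the 1.5 nm attenuation length quoted in the text; the intensity values are illustrative, not measured data.

```r
# Overlayer thickness from the attenuation of a substrate core-level line,
# assuming I(d) = I0 * exp(-d / lambda) at normal emission.
lambda_nm <- 1.5    # attenuation length for ~370 eV electrons in the organic layer
I_bare    <- 1.00   # substrate line intensity before deposition (illustrative)
I_covered <- 0.10   # substrate line intensity after deposition (illustrative)

thickness_nm <- -lambda_nm * log(I_covered / I_bare)
print(thickness_nm)   # about 3.5 nm for these illustrative intensities
```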
The XPS data collected in this experiment suggest the growth mode of the Mq 3 layers on the substrate, although the study has not focused on determining it. The decrease in the intensity of the Ga 2p core-level line with the increase in the Mq 3 thickness shows that the data are closest to the theoretical prediction of the Volmer-Weber growth mode. This indicates a 3D growth mode for all three Mq 3 films. The photoemission experiments were carried out using a hemispherical electron energy analyzer (Phoibos 100) and a UPS source with the He I (21.2 eV) excitation line, and two non-monochromatic X-ray radiation sources, i.e., Mg Kα (1253.6 eV) and Al Kα (1486.6 eV). Photoelectrons were collected in the CAE mode with a pass energy of 2 or 10 eV and a step size of 0.025 or 0.1 eV, respectively, for UPS and XPS measurements. During measurement, the optical axis of the analyzer entrance was normal to the substrate surface. Binding energy values refer to the Fermi level (E F ) of the electron analyzer, the position of which was determined using an argon ion cleaned Ag sample. No charging effect was observed during the photoelectron experiments. CasaXPS software was used to analyze XPS and UPS spectra. Gaussian and Lorentzian line shapes with Shirley-type backgrounds were applied. All measurements were made at room temperature. Results Herein, the results are presented for 7 nm thick organic films, for which the electronic states of the molecules are stabilized and not affected by phase boundary electron transfer effects; therefore, the position of the states does not depend on further increases in thickness. The valence band of the bare GaN(0001) surface and of surfaces covered with Alq 3 , Gaq 3 , Erq 3 is presented in Figure 1. The spectrum of the bare substrate reveals the valence band maximum (VBM) located at 2.6 eV below the E F . Separate deposition of Alq 3 , Gaq 3 , Erq 3 molecules onto the bare GaN(0001) surfaces changes the shape of the valence band. When the surface is completely covered with the molecules, the appearance of an additional peak in the vicinity of the Fermi level is clearly visible. These electron states are recognized as the highest occupied molecular orbitals (HOMOs); their onsets are located at 1.2, 1.7, and 2.2 eV below the E F , respectively, for Alq 3 , Gaq 3 , and Erq 3 layers. In the case of Alq 3 molecules, the HOMO level is located in the same position for various coverages. The Gaq 3 HOMO is clearly visible at the lowest coverage at 1.6 eV and shifts by 0.5 eV towards a higher binding energy with increasing film thickness and remains constant for coverages ≥7 nm, while the HOMO of Erq 3 molecules behaves similarly to that of Alq 3 . The positions were determined from the intersection of an extrapolated line fitted to the leading edge of the spectrum and its background. In the photoelectron energy distribution curves, other characteristic features are also visible. The maxima are recognized as deeper electron states of the molecules, i.e., HOMO-1, HOMO-2. The vacuum level (E VAC ) of the bare substrate was located 4.3 eV above the E F , calculated from the equation E VAC = hv − E cutoff , where hv = 21.2 eV is the photon energy of the He I line and E cutoff is the cut-off of the UPS spectrum. The electron affinity can be calculated from the equation χ = E VAC − E g − E VBM , where E g is the band gap width and E VBM is the position of the VBM. For the substrate, the electron affinity equals 3.5 eV.
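The two relations used above amount to straightforward bookkeeping once the cutoff and VBM positions are read off the spectra. A minimal numerical sketch with the values quoted for the bare substrate is given below; the cutoff position is back-calculated from E VAC = 4.3 eV, and the GaN band gap is assumed to be 3.4 eV (neither number is an additional measurement).

```r
# Vacuum level and electron affinity of the bare GaN(0001) surface from UPS.
hv       <- 21.2              # He I photon energy (eV)
E_vac    <- 4.3               # vacuum level above E_F (eV), as reported
E_cutoff <- hv - E_vac        # implied secondary-electron cutoff: 16.9 eV
E_vbm    <- 2.6               # VBM below E_F (eV), as reported
Eg_GaN   <- 3.4               # GaN band gap (eV), assumed

# Electron affinity, with E_vbm taken as a positive distance below E_F:
chi_GaN <- E_vac + E_vbm - Eg_GaN   # = 3.5 eV
print(chi_GaN)
```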
The UPS data allow constructing a sketch of energy bands for the initial GaN(0001) surfaces used in this experiment, as shown in Figure 2. The band bending of the bare substrates, induced by the Fermi level pinning at surface states, is in evidence. Assuming that the bulk Fermi level of the substrate is located 0.1 eV above the valence band maximum, the band bending is equal to 2.5 eV. Even though the substrates are p-type, the surface Fermi level is closer to the conduction band minimum than to the valence band maximum. This result is in contrast to that in Refs. [30,31], which is most likely due to the fact that the initial surface of the substrates used in this report is depleted of holes. On GaN(0001), close to the conduction band minimum, there is a surface state which derives from Ga dangling bonds [32,33]; thus, in the case of p-type GaN, the Fermi level pinning to this state leads to a strong band bending, which is a common observation [34][35][36][37]. Given that the substrate is Mg-doped, the formation of a depletion region is shown. The magnitude of band bending at the substrate needs to be included when trying to analyze the current-voltage characteristics of a device based on the inorganic-organic interface. As is shown further in the text, the magnitude can change after the phase boundary formation. Different termination of the substrate surface generally leads to a vacuum level change. This is not the case for GaN covered with the Alq 3 layer, where the E VAC does not alter; thus the work function change relative to the bare GaN(0001) equals zero (∆φ = 0). The same vacuum level was observed for various Alq 3 coverages. For Gaq 3 the vacuum level systematically decreases with increasing film thickness. Finally, for coverages ≥7 nm it is located 3.5 eV above the E F , giving the work function change ∆φ = −0.8 eV. The highest change of the E VAC was noted after deposition of Erq 3 , for which the vacuum level was located 3.0 eV above the E F , giving ∆φ = −1.3 eV. The Erq 3 vacuum level decreases with film thickness, similarly to Gaq 3 molecules. However, in order to reproduce the true change of work function ∆φ D at the resulting inorganic-organic phase boundary, i.e., to determine the interface dipole, it is necessary to know whether there is an electron transfer at the Mq 3 /GaN interface or not. When charging of the interface states occurs, the band bending of the substrate changes. The change leads to a shift of the E VAC level for the substrate covered with molecules; the shift magnitude should be the same as the magnitude of the band bending change. To determine whether an electron transfer has occurred at the interface, it must be specified if the VBM position of the substrate has changed. Unfortunately, when the GaN(0001) surface is covered with an Mq 3 layer, an additional density of states resulting from an overlapping of the ad-molecules' orbitals with the GaN valence band prevents direct determination of the VBM from UPS measurements. Nevertheless, this measurement can be done indirectly using XPS. The core level lines for the substrate covered with the molecules are still visible since the mean free path for electrons from them is longer than for the valence band electrons.
The positions of the VBM of the substrate for the three interfaces can be estimated from the displacements of the GaN substrate core level lines, e.g., the Ga 3d or N 1s lines, after the molecule depositions, considering that the positions of the peaks relative to the VBM remain constant after ad-layer deposition. This is because the XPS results do not show indications of meaningful chemical interaction between the substrate and the adsorbed molecules. Figure 3 shows changes in the Ga 3d and N 1s peak positions caused by the presence of Alq 3 , Gaq 3 , Erq 3 layers. One can see that the shifts of the peaks are the same, even in the case of Gaq 3 molecules, where the Ga 3d state is derived from two sources (the substrate and the adsorbate). So the Ga 3d peak for the bare GaN(0001) lies at a binding energy of 20.4 eV below the E F and 17.8 eV below the VBM (see Figure 2). The latter value is constant and in line with other data [38][39][40]. Knowing the changes of the work function ∆φ and band bending ∆φ BB , we can express the interface dipole as their sum, i.e., ∆φ D = ∆φ + ∆φ BB . The values of interfacial polarization for the three Mq 3 /GaN systems are presented in Table 1. The electron affinities for the organic layers are 2.8, 2.4, 2.3 eV, respectively, for the Alq 3 , Gaq 3 , Erq 3 molecules (assuming their band gaps are 2.7, 2.8, 2.9 eV [41][42][43]). The data allow constructing band diagrams for the three Mq 3 /GaN interfaces, as shown in Figure 4. The interfacial polarization ∆φ D has the smallest value for the Alq 3 film and the largest for the Erq 3 film. This means that the higher the number of electron shells of the central metal ion in the organic molecule Mq 3 , the higher ∆φ D (in absolute value). While the Alq 3 and Gaq 3 molecules reduce the band bending of the inorganic substrate, the Erq 3 molecules slightly increase it (relative to the bare substrate surface). The presence of organic molecules on the GaN(0001) nominally enables a work function reduction of up to 1.2 eV. Knowing the electron affinity of the GaN substrate and of the adsorbates, as well as the interface dipole ∆φ D of the systems, the unoccupied band offsets at the organic-inorganic interface can be expressed as ∆E LUMO = χ GaN − χ Mq3 + ∆φ D . To calculate the occupied band offsets, the band gaps of the semiconductors need to be included: ∆E HOMO = E g,GaN − E g,Mq3 + ∆E LUMO . The band offsets between the conduction band and the LUMO of the molecules are 0.5, 0.2, and 0 eV, respectively, for Alq 3 , Gaq 3 , Erq 3 layers on GaN. The data obtained in this research are summarized in Table 2. The band offsets between the valence band and HOMO levels of the molecules Mq 3 are 1.2, 0.8, 0.5 eV, for M = Al, Ga, Er, respectively. This means that the higher the number of electron shells of the central metal ion in the organic molecule Mq 3 , the lower the distance between the occupied bands of the inorganic and organic semiconductors. The above analysis shows that tuning of the vacuum levels, HOMO levels, and band offsets at the interfaces is possible by changing the central M atom of the molecule Mq 3 . Conclusions UPS assisted by XPS was used to investigate the electronic properties of the three Mq 3 /GaN(0001) interfaces. The electron affinity of the clean GaN(0001) surface was found to be 3.5 eV and the VBM position was measured to be 2.6 eV below the E F . HOMO levels were determined to be at 1.2, 1.7, 2.2 eV for the Alq 3 , Gaq 3 , Erq 3 layers. The interface dipoles at the phase boundaries amounted to −0.2, −0.9, and −1.2 eV.
The band offsets between the VBM of GaN(0001) and the HOMO level of the Alq 3 , Gaq 3 , Erq 3 molecules amounted to 1.2, 0.8, and 0.5 eV, respectively.
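To make the arithmetic behind the reported dipoles and offsets explicit, the sketch below recomputes them from the quantities quoted in the text. The band-bending changes are inferred from ∆φ D = ∆φ + ∆φ BB rather than quoted directly, and the GaN band gap is assumed to be 3.4 eV; all other numbers are taken from the article.

```r
# Band-alignment bookkeeping for the three Mq3/GaN(0001) interfaces (energies in eV).
layers   <- c("Alq3", "Gaq3", "Erq3")
d_phi    <- c( 0.0, -0.8, -1.3)      # work function change vs. bare GaN (reported)
d_phi_D  <- c(-0.2, -0.9, -1.2)      # interface dipoles (reported)
d_phi_BB <- d_phi_D - d_phi           # implied band-bending changes: -0.2, -0.1, +0.1

chi_GaN <- 3.5                        # GaN electron affinity (reported)
chi_Mq3 <- c(2.8, 2.4, 2.3)           # organic-layer electron affinities (reported)
Eg_GaN  <- 3.4                        # GaN band gap, assumed
Eg_Mq3  <- c(2.7, 2.8, 2.9)           # molecular gaps assumed in the text

dE_LUMO <- chi_GaN - chi_Mq3 + d_phi_D   # CBM-LUMO offsets: 0.5, 0.2, 0.0
dE_HOMO <- Eg_GaN - Eg_Mq3 + dE_LUMO     # VBM-HOMO offsets: 1.2, 0.8, 0.5

print(data.frame(layers, d_phi, d_phi_BB, d_phi_D, dE_LUMO, dE_HOMO))
```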
4,111.8
2022-02-23T00:00:00.000
[ "Physics", "Materials Science" ]
Live long and prosper: durable benefits of early-life care in banded mongooses Kin selection theory defines the conditions for which altruism or ‘helping’ can be favoured by natural selection. Tests of this theory in cooperatively breeding animals have focused on the short-term benefits to the recipients of help, such as improved growth or survival to adulthood. However, research on early-life effects suggests that there may be more durable, lifelong fitness impacts to the recipients of help, which in theory should strengthen selection for helping. Here, we show in cooperatively breeding banded mongooses (Mungos mungo) that care received in the first 3 months of life has lifelong fitness benefits for both male and female recipients. In this species, adult helpers called ‘escorts’ form exclusive one-to-one caring relationships with specific pups (not their own offspring), allowing us to isolate the effects of being escorted on later reproduction and survival. Pups that were more closely escorted were heavier at sexual maturity, which was associated with higher lifetime reproductive success for both sexes. Moreover, for female offspring, lifetime reproductive success increased with the level of escorting received per se, over and above any effect on body mass. Our results suggest that early-life social care has durable benefits to offspring of both sexes in this species. Given the well-established developmental effects of early-life care in laboratory animals and humans, we suggest that similar effects are likely to be widespread in social animals more generally. We discuss some of the implications of durable fitness benefits for the evolution of intergenerational helping in cooperative animal societies, including humans. This article is part of the theme issue ‘Developing differences: early-life effects and evolutionary medicine’. Introduction Social evolution theory aims to understand and predict how natural selection acts on heritable social traits, that is, traits that affect the fitness of other members of a population. Hamilton's [1,2] inclusive fitness theory defined the condition (rb > c, known as Hamilton's rule) for which selection can favour the evolution of altruism (i.e. a trait that boosts the lifetime fitness b of a recipient, related by coefficient r, at a lifetime fitness cost c to the actor) directed towards genetic relatives. Subsequent theory has emphasized repeated interactions, intergroup competition and group augmentation as promoters of cooperative behaviour [3][4][5]. Inclusive fitness theory in particular has provided a very general framework to understand variation in social traits (both behavioural and life-history traits), and to identify ecological and demographic factors that facilitate cooperation and the formation of animal societies [6,7]. Cooperative animal societies, in which 'helpers' work to rear offspring that are not their own, are a rich testing ground for these theories because they feature conspicuous examples of altruism or 'helping', together with the possibility of measuring the fitness consequences of variation in helping effort and life-history decisions. In addition, research on cooperative vertebrates provides a potentially informative comparator for Homo sapiens, one of the few cooperatively breeding primates [8,9]. There is now considerable evidence that major features of human life history (e.g.
long period of offspring dependency, short inter-birth interval, early reproductive cessation, prolonged post-reproductive lifespan) have been moulded via kin selection operating in the family groups of our Pleistocene ancestors [10][11][12][13][14][15]. Although the costs and benefits in Hamilton's rule are in the currency of lifetime direct fitness, tests of kin selection theory and other proposed mechanisms of cooperation (such as reciprocity and coercion [16,17]) rely on measuring reasonable proxies for lifetime fitness impacts. For example, the fitness benefit conferred by helpers might be tested by comparing the number of surviving offspring produced by reproductives with and without the assistance of helpers [11,18,19]. However, the literature on early-life effects and developmental plasticity shows that there may often be delayed impacts of investment that are manifested long after the initial act. In social insects, for example, variation in provisioning in the larval period triggers developmental switches and leads to permanent behavioural and morphological castes [20,21]. In vertebrates, permanent castes are typically lacking, but research on laboratory rodents and humans shows that postnatal care can have lifelong effects on cognitive function, social behaviour and health [22][23][24]. Thus, the effects of help on a recipient's fitness, particularly when the recipient is an individual offspring, may be manifested long after the helping act itself-even after the helper has died or dispersed. The potential for early-life investment to 'programme' an offspring's subsequent life history could promote or inhibit selection for helping, depending on whether helped offspring are more or less likely to disperse, and more or less likely to produce surviving offspring themselves. These delayed impacts of help represent an 'internal' durable benefit conferred by the helper, similar to the 'external' durable benefits that can arise through niche construction, for example, the construction of a nest or shelter that benefits future generations. Recent theory suggests that the potential for helping to result in benefits that are manifested in the future (in addition to, or instead of, fitness benefits that are manifested contemporaneously with the helping act) has a strong influence on selection for altruism in structured populations [25]. In these 'patch-structured' or 'group-structured' models, helping boosts the fecundity (number of offspring) of the local group of kin, but also increases competition among these local kin. The former inclusive fitness benefit of helping is counteracted by the latter inclusive fitness cost resulting from increased competition. The further into the future the benefits of helping are realized, the lower the relatedness of the actor to the individuals in the patch that suffer the costs of competition, and hence the greater the overall strength of selection for helping [25]. The potential durable benefits of helping in cooperative animal societies have been little explored empirically. One exception is Russell et al.'s [26] study of meerkats (Suricata suricatta), which showed that female offspring that gain most weight during the helping period (and hence are likely to have received more help), and those that are experimentally fed, are more likely to reproduce at some point in their lives and more likely to attain the position of dominant breeder. 
In other cooperatively breeding vertebrates (including humans), measuring delayed or lifelong impacts of help is challenging because it requires following the recipients of care across their entire lifespan, and recipients often die or disperse before attaining reproductive status. Here, we investigate the immediate and lifelong consequences for the recipients of helping in a cooperatively breeding mammal, the banded mongoose (Mungos mungo), using a 17-year dataset. This species exhibits an unusual form of one-to-one early-life offspring care called 'escorting' which provides an opportunity to tease apart genetic, maternal and alloparental effects on development and later life history [27,28]. Multiple females give birth in each breeding attempt, usually on the same day [29], and the communal litter is kept underground for the first month of life. Mothers show no discrimination during suckling, and pups are sometimes observed to move from female to female to suckle [30,31]. From the time that pups emerge from the den until they reach nutritional independence at three months old, pups form exclusive one-to-one caring relationships with adult helpers (their 'escorts') who are no more closely related than a random group member [27]. Escorts provision and groom the pups in their care, and carry them away from danger. However, there is great variation among offspring in the amount of escorting received: some pups spend all day every day with their escort, whereas others have to fend for themselves from an early age [27,28]. The escort system allows us to quantify the amount of postnatal help received by individual offspring in each communal litter. By contrast, in most other cooperatively breeding insects, birds and mammals, helper effort is shared across entire litters or broods [32], so it is more difficult to isolate the fitness impacts of the investment by an individual helper on an individual recipient. In addition, our system is unusual because dispersal away from the study site is rare [33,34], and we can follow individuals across their entire lives, from pup to reproducing adult. In this paper, we capitalize on this system to test whether the care received by pups in the first three months of life has lasting effects on their survival and reproduction as adults, long after the period of care has ended. Material and methods (a) Study species and population Banded mongooses are small (1.5 kg) cooperatively breeding carnivorous mammals common to sub-Saharan Africa. Since 1995, we have continuously studied a habituated population of wild banded mongooses living on and around the Mweya Peninsula in Queen Elizabeth National Park, western Uganda (0°12′ S, 29°54′ E); for details of the field site and the population, see [35] and references therein. At any one time, the population consists of 8-12 mixed sex groups of 10-30 individuals, plus offspring. On average, four females give birth in each breeding attempt, synchronizing birth to the same day in 64% of breeding attempts [29]. The resulting mixed-parentage litter of pups is guarded at the den during the first month of life by one or more babysitters [35]. After emergence care is provided by escorts up to the age of three months [27,36]. Individuals reach sexual maturity at around 1 year old, and life expectancy at this age is around 3 years (males = 42 months; females = 38 months).
There is no reproductive suppression among females in this species: adult females start breeding when they are 1 year old and produce up to four litters per year until they die [33]. Males, by contrast, form an age-based social queue in which the oldest two or three individuals mate-guard and aggressively monopolize access to oestrous females [37,38]. Younger males, though sexually mature, are typically excluded from reproduction until they reach relatively advanced ages (3þ years; [38]). We collected data from individuals from 12 social groups of on average 22 adult individuals (s.d. 7.3, range 7 -37) inhabiting the study area between the years 2000 and 2016. All mongooses in the study population are individually marked using either unique hair-shave patterns or colour-coded collars, and are habituated to close observation from at least 5 m. Additionally, each mongoose is marked with a transponder chip (Wyre Micro Design, UK) or, before the year 2009, with a unique tattoo on the inside of the leg. One or two mongooses in each group are fitted with a radio collar weighing 26 -30 g (Sirtrack Ltd, Havelock North, New Zealand) to allow the groups to be located. (b) Life-history parameters and genotyping Over the 17-year study period, each group was visited for at least 20 min every 1 -3 days to record the presence and absence of individuals in each group. As banded mongooses almost always disperse in groups, either voluntarily or through a process of violent eviction [39 -41], we could distinguish between dispersal and deaths as cause for permanent absence from the group. For the dataset used in the analyses, we included only those individuals whose date of birth and death were both known with at least one week's accuracy. We identified female pregnancy by visual swelling of the abdomen and confirmed this by palpation and ultrasound scans during trapping [42]. Births occur overnight in an underground den, and were identified by the absence of pregnant females the following morning and a subsequent change in their body shape and mass loss [29,43]. Pups were first captured at emergence from the den, at around three to four weeks of age, weighed and sexed, and given a unique ID; see [44] for further details of the trapping procedure. When individuals were first trapped, a 2 mm 2 skin sample was taken for extraction of DNA, which was used to construct a pedigree for assigning parentage. Parentage was assigned using MasterBayes 2.51 [45] and COLONY 2.0.5.7 [46] as described in [47], for a dataset of 2310 individuals born in the study area between the years 2000 and 2016. Lifetime reproductive success was determined as the total number of pups assigned to each individual. For full details of DNA extraction, genotyping, parentage assignment and pedigree construction, see [47,48]. (c) Measuring early-life care Shortly after emergence from the den, pups form one-to-one caring relationships with particular adults known as 'escorts', which feed, carry, groom and protect the pup from predators [36]. The majority of pups have an exclusive relationship with a single escort; where pups have multiple escorts, they spend the great majority of their time with a single 'primary' escort [28]. Escorting starts at around four weeks of age and continues until pups reach nutritional independence, when they are around 90 days old (hereafter defined as the 'escorting period'). 
While pup -escort dyads are forming, pups aggressively defend access to their escort [49], but thereafter both parties (escort and pup) actively seek each other out to maintain the association [50]. Experiments demonstrate that escorts and pups can recognize each other's calls, and that escorts are particularly reactive to the distress calls of the specific pup in its care [50,51]. We observed escorting behaviour in 120 communal litters in 12 social groups that inhabited the study area between 2000 and 2016. Groups were visited an average of 12 times during the escorting period, for a minimum of 20 min (the duration of one pup focal observation session). Only those litters for which we had five or more observation sessions (on different days) were included in the analyses. Pup focals were conducted so that each pup was followed for 20 min, and at each minute interval, individuals within 30 cm of the focal individual were noted (focals were paused if the focal pup went out of sight, and resumed once sighted again). If the pup spent more than half of the 20 min focal within 30 cm of the same individual, that adult was marked as the escort for that focal session [27]. The proportion of the pup focals a pup was seen being escorted was taken as a measure of care it received, termed its 'escorting index'. Consequently, the escorting index varies from 0 (never observed being escorted) to 1 (always observed being escorted). (d) Body mass and ecological data The emergence body mass of pups was recorded when the pups were first trapped at three to four weeks of age; see above. Adult body mass measurements were collected as part of the group visits. Most individuals are trained to step onto portable weighing scales in return for a small milk reward and were weighed weekly in the morning before foraging started. Climate data were collected by Mweya meteorological station, and after 2014 by the Banded Mongoose Research Project. Cumulative rainfall during the month before the litter was born was used as a proxy of resource availability, as previous studies indicate that rainfall in the previous 30 days is positively correlated with adult daily body mass gain and pregnancy rate [52,53]). We used generalized linear mixed models (GLMMs) with a binomial error structure and logit link function, to analyse predictors of survival to nutritional independence at three months, and survival to maturity at 1 year. Predictor variables were escorting index, emergence weight of the pup, cumulative rainfall in the month before birth, and sex of the pup. An interaction between sex and escorting index was included to test for differential effects of escorting between the sexes. Social group ID and communal litter ID into which the pup was born were included as random factors in the analyses. This allows the intercept of the model to vary by litter ID and group, to control for group-level and litter-specific factors. (ii) Body mass We used a linear mixed model (LMM) to look at predictors of body mass at 1 year. The model included predictor and random factors as above. (iii) Age at maturity We used LMMs to investigate the age at which first signs of reproductive activity were observed in females (first oestrus), and males (the first mate guarding or 'pestering' behaviour during group oestrus [33]). 
As the definition for the start of reproduction is different, the sexes were analysed separately, but otherwise both models included predictor and random factors as above (escorting index, emergence weight of the pup, rainfall during month before birth as predictors, and social group and litter as random factors). (v) Lifetime reproductive success In analyses of lifetime reproductive success, the total number of offspring was first fitted as the response variable in a GLMM with a Poisson error structure and a log link function. The sexes were analysed separately to improve model convergence. Emergence weight, escorting index, rainfall and weight at maturity were included as predictors, and litter and group as random factors. We then fitted the same models again, but using the log (total lifespan of the individual) as an offset in the model, to analyse whether the included variables predicted the rate at which individuals produced offspring by accounting for differences in lifespan. In all analyses, weights and rainfall were standardized by subtracting the mean and dividing by standard deviation, to improve model convergence. The correlation of predictor variables in each analysis was checked to confirm that it was not high enough to cause model fitting issues [54]. Non-significant interactions were dropped to allow significance testing of main terms [55], but models were not simplified further [56]. In the analyses that involved fitting models with a normal error structure (body mass, age at maturity and adult lifespan), we visually checked the residuals to ensure they met the model assumptions of normally distributed residuals with homogeneous variance. Where necessary, we log-transformed the response variable (adult lifespan) to meet these assumptions. Statistical analyses were done in R version 3.3.1 [57] and GLMM models fitted using R package lme4 [58]. The significance of predictor variables was determined by performing likelihood ratio tests comparing the full model with a model without the predictor variable, removing non-significant interactions to allow the main effects of variables involved in these interactions to be assessed [59]. We report the χ² statistics and parameter estimates (b ± s.e.) for significant terms, and the full analysis results including non-significant parameter estimates are presented in the electronic supplementary material, tables S1-S3. Results (a) Developmental impacts of early-life care (i) Immediate survival Pups that received more care were more likely to survive until nutritional independence, as were those that were heavier at emergence (binomial GLMM: emergence weight: b = 0.83 ± 0.15, χ²₁ = 36.44, p < 0.00001; electronic supplementary material, table S1). Pup survival was higher in periods of higher rainfall (b = 0.51 ± 0.18, χ²₁ = 8.66, p = 0.003), whereas the sex of the pup had no effect (b = −0.16 ± 0.22). (ii) Effects on lifetime reproductive success Females that received more care as pups had higher lifetime reproductive success (figure 2a; escorting index: b = 1.691 ± 0.506, χ²₁ = 12.39, p = 0.0004), as did those that experienced heavier rainfall during the first month of life (b = 0.51 ± 0.24, χ²₁ = 4.91, p = 0.027) and that were heavier at maturity (weight at 1 year: b = 0.48 ± 0.20, χ²₁ = 4.3, p = 0.038).
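As an illustration of the model structure described in the Material and methods above, a minimal lme4 sketch of the pup-survival GLMM and its likelihood-ratio test is given below. The data frame is simulated purely as a stand-in for the real dataset, and all variable names and values are hypothetical.

```r
library(lme4)

# Simulated stand-in for the real pup dataset (one row per pup).
set.seed(1)
n <- 300
pup_data <- data.frame(
  escort_index = runif(n),                                     # proportion of focals escorted
  emergence_wt = rnorm(n, mean = 120, sd = 15),                # illustrative grams
  rainfall     = rnorm(n, mean = 50, sd = 20),                 # illustrative mm
  sex          = factor(sample(c("F", "M"), n, replace = TRUE)),
  group_id     = factor(sample(1:12, n, replace = TRUE)),
  litter_id    = factor(sample(1:60, n, replace = TRUE))
)
pup_data$surv_3mo <- rbinom(n, 1, plogis(-0.5 + 1.5 * pup_data$escort_index))

# Standardize continuous predictors, as described in the methods.
pup_data$emergence_wt_z <- as.numeric(scale(pup_data$emergence_wt))
pup_data$rainfall_z     <- as.numeric(scale(pup_data$rainfall))

# Binomial GLMM with litter and social group as random intercepts.
full <- glmer(surv_3mo ~ escort_index * sex + emergence_wt_z + rainfall_z +
                (1 | group_id) + (1 | litter_id),
              data = pup_data, family = binomial(link = "logit"))

# Drop the non-significant interaction, then likelihood-ratio test for escorting.
main    <- update(full, . ~ . - escort_index:sex)
reduced <- update(main, . ~ . - escort_index)
anova(main, reduced)   # chi-square test on 1 d.f.
```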
Thus, female pups that received more care in early life had greater lifetime reproductive success because they produced surviving offspring at a higher rate across their lifespan, not because they lived longer. Of the female pups that survived to adulthood, those that had been lighter at emergence had ; see table 2 and electronic supplementary material, table S3). This unexpected finding may reflect selective disappearance during development (e.g. [60]): most lightweight pups die before reaching adulthood, so those lightweight pups for which we have a measure of lifetime reproductive success may represent a special subset of high-quality or high-survivorship individuals, compared with pups for which early-life mortality is less severe (electronic supplementary material, table S1). In males, there was no significant effect of early-life care on lifetime reproductive success (figure 2b; b + s.e. ¼20.39 + 0.56, x 2 1 ¼ 0:49, p ¼ 0.48). The only significant predictor of male lifetime reproductive success was body mass at 1 year, with males that were heaviest at maturity gaining highest lifetime reproductive success (b + s.e. ¼ 0.75 + 0.23, x 2 1 ¼ 11:03, p ¼ 0.0009; all other variables p . 0.3: table 2). Results were similar when using lifespan as an offset, and the only significant predictor of male lifetime reproductive success was mass at 1 year (table 2). Discussion Our results suggest that early-life care directed by escorts to specific offspring has both immediate survival benefits and durable fitness benefits that are manifested across the offspring's subsequent lifespan. The immediate survival benefits are expected because adult escorts and pups stay in close proximity throughout the day, and escorts are quick to alert, defend and carry their pup away from danger. The durable fitness benefits of being escorted are striking and manifested in two ways. First, for both male and female pups, escorting had a durable impact on body mass at maturity, which is positively associated with lifetime reproductive success in both sexes. In addition, independent of any effect on body mass, female pups that received higher levels of escorting were more efficient at producing surviving offspring and had higher lifetime reproductive success compared with females that received little escorting (table 2). The presence of these durable fitness benefits to the recipients of early-life care is consistent with numerous findings from laboratory studies which suggest that the quality of parental care received in early life can have a profound impact on adult physiology, health and behaviour. In a classic laboratory study of Long-Evans hooded rats, offspring that received more licking and grooming from their mothers in the first 10 days of life showed reduced hypothalamicpituitary -adrenal (HPA) endocrinological stress reactivity as adults [61]. Moreover, those (female) offspring were also more likely to express high levels of nurturing behaviour when they became mothers themselves, suggesting that early-life care can produce a chain of behavioural effects and potential benefits to recipients that last generations into the future. The transgenerational inheritance of grooming/ licking behaviour in rats has a well-established epigenetic basis [62]. 
If such mechanisms operate in natural populations, cooperative care directed at offspring could have self-reinforcing or even runaway effects on levels of local helping (in the case where helped offspring are more likely to provide help themselves), or self-limiting effects (if helped offspring are less likely to provide help at a later date). The transgenerational impacts of cooperation are rendered plausible by the detailed mechanistic work on laboratory rodents, and are a fruitful area for both theoretical and empirical research. One of our future aims is to use the unusual escort system to investigate possible transgenerational influences on individual cooperative behaviour in this system. Table 2. Predictors of lifetime reproductive success in individuals that reached maturity (lifespan > 365 days). Results from GLMMs with litter and social group as random factors. To improve model convergence, rainfall, mass at emergence and mass at maturity were standardized by subtracting the mean and dividing by the standard deviation. In our study, only female offspring experienced an additional lifetime fitness benefit of being escorted per se, over and above any effect on body mass. This sex difference may reflect differences in the sensitivity of female and male reproductive systems to the conditions experienced in development, or sex differences in the key physical attributes (e.g. body size versus stress physiology) linked to reproductive success. It may also reflect a unisexual pattern of epigenetic inheritance of maternal-care-like behaviour. In the rat studies, both male and female offspring showed similar impacts of being licked/groomed on HPA reactivity and development, but only mothers provide care in this system, and hence only daughters inherited an elevated propensity to lick/groom their own offspring [63]. A third factor in banded mongooses is that there is a sex difference in the time delay to the realization of any durable benefit: males form a strict dominance hierarchy and must wait much longer to start reproducing compared with females (3+ years versus 1 year for females; [33]), so any durable benefits of being escorted as a pup may become diluted by other factors (environmental and/or social) that impinge on male lifetime reproductive success in the interim. Theoretical analyses of durable impacts of help have focused on external benefits that arise through niche construction or the production of durable physical objects and structures [25]. Our findings suggest that durable benefits can also arise through development, for example, because recipients of help are protected from external insults or stressors during sensitive developmental windows, or are able to carry over extra resources to adulthood [26]. Lehmann's [25] model predicts that where the benefits of help are separated in time from the act of helping, selection for helping is strengthened (other things being equal). Selection for helping is particularly strong where benefits are realized after the actor has died or ceased reproduction, and is therefore unable to experience any negative effects of the increase in local competition resulting from the helping act. Thus, we can predict that where helping results in 'internal' durable benefits, selection for helping should increase with helper age, because older helpers are less likely to suffer direct competition from offspring produced as a result of their help.
In humans, killer whales and elephants, grandmothers have demonstrable positive impacts on the reproductive success of their offspring [10,11,[64][65][66]. However, these and other studies typically assume that any benefits associated with grandmother presence cease upon her death, whereas our study suggests that the impact of care may persist long after a helper has died. Durable benefits might go some way towards explaining why, in humans, many analyses have found that the (immediate) measurable fitness benefits of grandmothering are too small to favour the evolution of menopause ( [67,68]; but see [12]). Both our study and studies of grandmothering are examples where it is natural to assume that the recipients of help are members of a younger generation, such as young offspring or younger breeders. By contrast, most studies of cooperative breeding focus on the impact of help on the reproductive success of breeding adults, rather than their offspring [32]. In principle, Hamilton's rule could be used to determine the direction of selection on genes in parents or in their offspring-what matters in each case is correct consideration of genetic relatedness and recipient reproductive value (e.g. [69]). In banded mongooses, it is natural to view individual offspring as the recipients of help, not their parents, because each offspring is the sole beneficiary of the care provided by escorts, while the other offspring of the parent are cared for by other individuals. In other cooperative breeders, it is more practical to focus on parental fitness because help is provided to multiple offspring at a time, and it is difficult to track the impact of help on the reproductive success of all these younger recipients across their life course. However, our study suggests that an exclusive focus on parental reproductive success (measured as their number of surviving young) does not take account of any durable benefits of help and hence may systematically underestimate the strength of selection for altruism in natural systems. In conclusion, our multigenerational study of a cooperative mammal living in the environment in which it evolved suggests that helping has lifelong fitness impacts on both male and female offspring. These durable fitness benefits may be challenging to detect and measure, particularly in long-lived species. Nevertheless, the extensive literature on early-life effects gives reason to believe that durable impacts may be widespread and can be expected to have major impacts on social evolution and life history. Further theoretical research is needed to investigate when durable benefits will result in positive or negative feedback between care received and helping effort in cooperative societies, and the consequences for social evolution. Further empirical research is needed to test for these effects in wild animal societies, and to investigate whether such early-life effects in natural systems are mediated by epigenetic and neuroendocrinological changes similar to those observed in laboratory mammals. Ethics. All research was carried out under permit from Uganda Wildlife Authority (UWA) and Uganda National Council for Science and Technology (UNCST). All procedures adhered to the Guidelines for the treatment of animals in behavioural research and teaching, published by the Association for the Study of Animal Behaviour, and received prior approval from UWA, UNCST, and the Ethical Review Board of the University of Exeter. Data accessibility. 
The data supporting the analyses are available as part of the electronic supplementary material.
6,701.8
2019-02-25T00:00:00.000
[ "Biology" ]
Cooperative Navigation for Low-Cost UAV Swarm Based on Sigma Point Belief Propagation : As navigation is a key to task execution of micro unmanned aerial vehicle (UAV) swarm, the cooperative navigation (CN) method that integrates relative measurements between UAVs has attracted widespread attention due to its performance advantages. In view of the precision and efficiency of cooperative navigation for low-cost micro UAV swarm, this paper proposes a sigma point belief propagation-based (SPBP) CN method that can integrate self-measurement data and inter-UAV ranging in a distributed manner so as to improve the absolute positioning performance of UAV swarm. The method divides the sigma point filter into two steps: the first is to integrate local measurement data; the second is to approximate the belief of position based on the mean and covariance of the state, and pass message by augmentation, resampling and cooperative measurement update of the state to realize a low-complexity approximation to traditional message passing method. The simulation results and outdoor flight test results show that with similar performance, the proposed CN method has a calculation load more than 20 times less than traditional BP algorithms. Introduction In recent years, UAVs have received extensive attention in military and civil fields such as battlefield reconnaissance, aerial photography, agricultural plant protection, express transportation, disaster rescue, and power inspection, and UAV cluster technology has become a research hotspot due to its high efficiency and damage resistance [1,2]. Navigation information is crucial for reliable flight of UAVs. Especially in an intensive UAV formation flight system, the UAV swarm requires high-precision position, velocity and other information to conduct task planning, precise control, and collision avoidance [3,4]. At present, the Global Navigation Satellite System (GNSS) is the most widely used means of navigation for UAVs; the GNSS based on Real Time Kinematic (RTK) technology can provide UAVs with centimeter-level positioning precision [5]. However, in certain restricted environments such as city and forest, the GNSS receiver may be unavailable to search sufficient satellites for navigation solution, and costs are high if RTK is applied to all UAVs in a swarm [6]. In order to make full use of the advantages of UAV swarm and obtain reliable and better navigation performance, researchers began to study the cooperative navigation method that integrates relative measurements between UAVs. Compared with single UAV navigation method that merely integrates the local measurement data of the UAV, the CN method can correct its navigation result based on the geometric relationship and observation of the swarm and thus greatly improve the absolute navigation performance [7][8][9]. Many researchers focused on the optimization algorithm based cooperative positioning technology, created constraint equation set for the geometric relationship between all users, and established corresponding objective functions to solve the positioning result [10][11][12][13][14]. Due to the existence of noise and observation fault such as Non-line-of-sight (NLOS) of wireless ranging signal [15,16], the cooperative positioning problem is often non-convex and cannot be solved globally and optimally. Thereby, convex optimization technologies such as Semi-definite Programming and Cone Programming are required to relax the cooperative estimation problem [14]. 
The disadvantage of optimization algorithm based cooperative positioning technology is that under worse observation conditions, its performance worsens rapidly, and on-board Inertial Measurement Unit (IMU) and other navigation sensors cannot be fully used for measurement. A factor graph provides an effective framework for dealing with cooperative navigation problems and is used for describing the linkage between the navigation state and observation at each vehicle of the cooperative network. To pass messages between the nodes in the factor graph, many belief propagation (BP) methods have been proposed [17][18][19][20][21]. Ref. [17] proposed a sum-product algorithm for wireless network (SPAWN), which realized high-precision indoor positioning on the basis of factor graph and message passing. Ref. [21] put forward a factor graph and message passing based H-SPAWN algorithm that can integrate GNSS information and peer-to-peer information to realize cooperative positioning. Ref. [22] provided a distributed positioning method based on sequential particle-based sum-product algorithm (SPSPA), which has the ability to perform online inference in factor graph with continuous variables and nonlinear local functions. Even when obtaining a good effect, these methods require a large number of particles to describe cooperative messages and have a large calculation load, making them hard to guarantee the real-time navigation performance of UAV swarm in the time of formation flight. In order to reduce the computation load of BP algorithm, Ref. [23] offered a cooperative positioning framework combining relative position estimation and optimization, which transforms a multi-dimensional BP problem into an one-dimensional problem, reduces the computation load, and can integrate Ultra-wideband (UWB), GNSS and Inertial Navigation System (INS) information to obtain precise navigation results. However, this framework requires that each UAV in the swarm can conduct GNSS calculation. Ref. [24] presents a cooperative positioning method in which the posteriori distribution of marginalized circulation factor graph is approximated by using sigma point, but the movement of the user is described using a motion model, regardless of the observation information beyond the measurement range. Hence, this method is suitable for indoor application instead of UAV flight scene. It is also unavailable to provide continuous and comprehensive navigation parameters. In this paper, a sigma point belief propagation-based (SPBP) new CN method is proposed. In this method, sigma point filter integrates local observations from INS, GNSS, barometers, and so on; then the filter's state vector is augmented and the cooperative measurements are updated iteratively to realize the message passing between UAVs, and further improve the absolute navigation performance of UAV swarm. Compared with the traditional BP algorithm, this paper uses sigma points to replace the particles obtained by Monte Carlo method and constructs the state equation of the cooperative navigation system based on INS, so as to replace the state equation based on motion model. Most importantly, sigma point filter and BP algorithm are combined and expanded to 3D UAV flight urban scene; moreover, IMU, GNSS, and other local multi-source navigation information are incorporated into the CN framework, improving the performance and efficiency of the CN algorithm. 
The structure of this paper is as follows: Section 2 provides the preliminaries of sigma point filter and describes the CN problem based on factor graph and BP; Section 3 proposes an SPBP based CN method; the simulations are described in Section 4; and flight tests are described in Section 5; Section 6 discusses the research implications and limitations of the proposed method; and the final conclusion is presented in Section 7. Preliminary on Sigma Point Filter The basic idea of sigma point filter (SP filter, also known as unscented Kalman filter) is to approximate to a nonlinear density function by using a certain number of sample points and transfer the state of a real system while capturing accurate mean and variance of the state [25]. Therefore, the selection of sigma points is crucial for sigma point filter and is realized by Unscented Transformation [26]. It is assumed that N-dimensional random vector X is transformed into an M-dimensional random vector Y after subjecting to h(·) nonlinear transformation, namely: Given the mean X and variance P XX of X, the ( 2N + 1) sigma points of X can be reproduced according to X: where (N + λ)P XX (i) represents the i-th column of the square root of the lower triangular decomposition of matrix (N + λ)P XX ; λ determines the distance between the sampling point and the mean. The sigma point produced by nonlinear transformation can be calculated as follows: Hence, the mean Y, variance P YY , and the covariance P XY of X and Y can be approximated as: are the corresponding weights, which can be calculated as: where α and β are constants and nonnegative. Considering that the variable X and observation Z satisfy Z = Y + ξ = h(X) + ξ at time k, where ξ is the additive measurement noise with variance of R, the posteriori probability density function of X can be obtained based on Z: The specific steps to realize the posterior probability density estimation of Equation (8) through SP filter are as follows [26]: Step 1: Sampling sigma points as per Equation (2) at time k − 1; Step 2: Calculating the one-step predicted value at time k: where f (·) is the one-step prediction equation of the system, that is, the discrete form of the system state equation; u k−1 is the input of the system, and χ * (i) k|k−1 is the one-step predicted sigma point. Step 3: Computing the observed value Y (i) k|k−1 and predicted value Y k|k−1 as per Equations (3) and (4), and the corresponding covariance matrices P YY k|k−1 and P XY k|k−1 are calculated according to Equations (5) and (6); Step 4: Calculating the posterior probability density estimation: As revealed in the above process of sigma point filter, the number of sigma points depends on the number of dimensions of X, namely the former increases with the increase in the latter. Figure 1 shows a flight scene of low-cost micro UAV swarm. In urban areas, the navigation system of UAV may not receive enough number of satellite signals for positioning. For this reason, all UAVs are equipped with GNSS receiver as well as some conventional onboard navigation devices, such as INS and barometer, and are also mounted with UWB to provide the ranging information between UAVs. To meet the demand for CN, it is also essential to provide a wireless communication network for exchanging CN information. 
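As a concrete reference for the unscented transformation and SP filter steps described in the preliminaries above, the following NumPy sketch shows the standard sigma point generation, weight calculation, transform and measurement update. It is a minimal sketch: the default α, β and κ values are conventional choices, not values taken from this paper.

```python
import numpy as np

def ut_params(N, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaling parameter lambda and the standard UT weights for mean/covariance."""
    lam = alpha**2 * (N + kappa) - N
    wm = np.full(2 * N + 1, 1.0 / (2.0 * (N + lam)))
    wc = wm.copy()
    wm[0] = lam / (N + lam)
    wc[0] = lam / (N + lam) + (1.0 - alpha**2 + beta)
    return lam, wm, wc

def sigma_points(x, Pxx, lam):
    """2N+1 sigma points generated from the mean x and covariance Pxx."""
    N = x.size
    S = np.linalg.cholesky((N + lam) * Pxx)      # lower-triangular square root
    pts = [x] + [x + S[:, i] for i in range(N)] + [x - S[:, i] for i in range(N)]
    return np.array(pts)

def unscented_transform(x, Pxx, h, R=None):
    """Propagate (x, Pxx) through a nonlinear function h and approximate the
    mean, covariance and cross-covariance of Y = h(X) (+ additive noise R)."""
    N = x.size
    lam, wm, wc = ut_params(N)
    chi = sigma_points(x, Pxx, lam)
    Y = np.array([h(c) for c in chi])
    y = wm @ Y
    Pyy = sum(w * np.outer(yi - y, yi - y) for w, yi in zip(wc, Y))
    Pxy = sum(w * np.outer(ci - x, yi - y) for w, ci, yi in zip(wc, chi, Y))
    if R is not None:
        Pyy = Pyy + R
    return y, Pyy, Pxy

def sp_measurement_update(x, Pxx, z, h, R):
    """Posterior mean/covariance given an observation z = h(X) + noise (Step 4)."""
    y, Pyy, Pxy = unscented_transform(x, Pxx, h, R)
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman-style gain
    x_post = x + K @ (z - y)
    P_post = Pxx - K @ Pyy @ K.T
    return x_post, P_post
```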
Belief Propagation for UAV Swarm Provided that all UAVs in the swarm belong to a set U, the navigation state of UAV m at time k is defined as X k m ; the satellite observation and barometer output at time k are defined as Z k,gnss m and Z k,baro m , respectively; all the IMU measurements between k − 1 and k are defined as Z k−1,imu m ; and all UWB observations at time k are defined accordingly, where U m is the set of all UAVs that have UWB ranging links with UAV m. The navigation state set and observation set of all UAVs in the swarm are defined analogously. Based on the above definitions, the joint posteriori probability distribution function of the navigation state of the UAV swarm can be written. According to the flight characteristics and sensor characteristics of the UAV swarm, the following two conditions can be assumed [23]: a. The navigation state of each UAV at time k is only related to time k − 1, and obeys the standard Markov assumption; b. The UWB, GNSS and barometric measurements obtained by a UAV are independent and only related to the state at the present time. Based on these two assumptions, the joint posteriori probability distribution function can be factorized in terms of a priori information and individual process and measurement models. This factorization can be written as a product of these factors, where p X 0 represents the priori factor of the navigation state. This factorization can be described by the factor graph composed of factor nodes and variable nodes as presented in Figure 2.
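In BP on this factor graph, the marginal belief of each variable node is proportional to the product of the messages arriving from its factor nodes. When beliefs and messages are approximated by a mean and covariance, as in SPBP, that product is a product of Gaussians, which is easiest to see in information (inverse-covariance) form. The snippet below is only a schematic of this fusion idea; in the method itself the cooperative messages are incorporated through the augmented sigma point measurement update described later, not by closed-form products.

```python
import numpy as np

def fuse_gaussian_messages(messages):
    """Multiply independent Gaussian messages (mean, cov) about one state:
    information matrices and information vectors simply add."""
    info_mat = np.zeros_like(messages[0][1])
    info_vec = np.zeros_like(messages[0][0])
    for mean, cov in messages:
        J = np.linalg.inv(cov)        # precision (information) matrix
        info_mat += J
        info_vec += J @ mean
    cov_fused = np.linalg.inv(info_mat)
    return cov_fused @ info_vec, cov_fused

# Illustration: fusing an IMU prediction message with a GNSS correction message
# (two-dimensional toy numbers only).
mean, cov = fuse_gaussian_messages([(np.array([10.0, 5.0]), np.diag([4.0, 4.0])),
                                    (np.array([11.0, 4.0]), np.diag([1.0, 1.0]))])
print(mean, cov)
```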
Based on Figure 2 and Equation (19), the approximated marginal posteriori probability density function b(X k m ) ≈ p(X k m | Z k m ) of variable node X k m is called the belief, which can be calculated by iteration of BP message passing on the factor graph. At the τ-th iteration at time k, the belief of X k m can be obtained as the product of the following messages: (1) the prediction message passed from the IMU factor node l k−1,imu m to variable node X k m (21); (2) the GNSS correction message passed from the GNSS factor node to variable node X k m (22); (3) the height correction message passed from the barometer factor to variable node X k m (23); (4) the cooperative correction message passed from the UWB factor l k,uwb m,n to variable node X k m (24), where b (τ) X k n is the belief passed by UAV n to factor node l k,uwb m,n at the τ-th iteration (25), and U n \ {m} represents the set of all UAVs that have ranging links with UAV n, except UAV m. For the SPAWN message passing scheme, the message passed by UAV n can be directly replaced with its belief b (τ) X k n without recalculation [17]. Sigma Point Belief Propagation Section 2.1 shows an SP filter based method for posteriori probability estimation of variables. According to a comparison among Equations (8), (19) and (20), the expression of the belief is similar to Equation (8), and the belief of the navigation state can also be calculated by SP. Using a mean vector and covariance matrix to approximate the belief of each UAV's navigation state is conducive to reducing communication cost and calculation load, and to a low-complexity approximation of the traditional BP algorithm. This section will expound the SPBP method in detail. Navigation State Model and Time Update Taking the Earth-fixed frame as the navigation coordinate frame, the motion of the UAV from k − 1 to k is modeled. Based on the measurements of the IMU, the time update of the motion state of the UAV is obtained as follows [27]: where Q is the attitude quaternion, including the scalar Q 0 and the three-dimensional vector components Q 1 , Q 2 and Q 3 ; • represents the quaternion multiplication, v is the three-dimensional velocity, p is the three-dimensional position, ω̃ is the measurement of the gyroscope, ω ie is the angular velocity of the Earth's rotation, ω en is the rotational angular velocity of the carrier relative to the Earth; ω r is the random walk error of the gyroscope, ω ε is the measurement noise of the gyroscope, C n b is the rotation matrix from body frame to navigation frame, ã is the accelerometer measurement, ∇ r is the random walk error of the accelerometer, × represents the cross multiplication of vectors, G is the gravity vector, and r is the position vector with respect to the Earth Centered Earth Fixed frame. The random walk error update equation of the gyroscope and accelerometer is driven by ω n and ∇ n , the random walk drive noises of the gyroscope and accelerometer, respectively, which are usually supposed to be zero mean white noises.
The covariance of the noise can be obtained by the Allan variance method or other IMU error analysis methods. Since the scalar part of the normalized quaternion can be obtained from the vector part, in order to reduce the dimension of the SP filter, the initial state vector is created on the basis of the vectors Q 1 , Q 2 and Q 3 , as well as the position vector, velocity vector, gyro random walk error ω r and accelerometer random walk error ∇ r . The system noise vector W collects the corresponding gyroscope and accelerometer noise terms. When the sigma point filter is updating the time, the system noise should be augmented to the system state, and the augmented system state vector is X = [x T W T ] T . By combining and discretizing Equations (26) and (27), the one-step prediction equation of the navigation state can be obtained, that is, Equation (9). Then, the attitude, velocity and position represented by each sigma point can be updated in combination with the input of the IMU as per Equation (9). In this paper, two measurements, GNSS information and barometric altitude, are considered in addition to the IMU. Depending on whether the satellites tracked by the GNSS receiver are sufficient for a direct position fix, the GNSS measurement can be divided into position measurement and pseudo-range measurement. The position measurement can be modeled as follows [28]: where ν G m is the GNSS position observation noise of UAV m and can be modeled as a white noise with covariance of R G m . For position measurement, the SP filter does not need to augment the system state. The pseudo range measurement can be modeled as [29]: where ρ S i m is the true distance between UAV m and satellite S i ; δt ρ m is the distance offset corresponding to the equivalent clock error of the receiver; ζ S i m is the error offset caused by the ionosphere and troposphere; ∆ρ S i m is the error caused by multipath or NLOS; and ν S i m is the receiver noise with its corresponding covariance. The errors caused by the ionosphere and troposphere can be eliminated by using the corresponding correction model or the information broadcast by a Satellite-Based Augmentation System (SBAS); the error caused by multipath or NLOS can be detected by some integrity methods [30][31][32]. When the detected error caused by multipath or NLOS is large, the pseudo range measurement can be deleted directly. For pseudo range measurement, the equivalent receiver clock error δt m should be augmented to the system state. The time update equation of δt m is driven by ν t ρ m , the equivalent clock error deviation noise. The barometric height can be modeled with h m , the true height of UAV m, and ν baro , the barometer measurement noise with variance of R baro . After the local measurements are modeled, the system state should be augmented again according to the noise form when there is non-additive noise in the measurement model; the dimension of the augmented state is N. The error terms augmented to the system state vector for the local measurements also require a time update. When a measurement arrives, the measurement update can be completed as per Equations (3)-(6) and (12)-(14). Cooperative Measurement Model and Measurement Update The UWB measurement within the visual range based on Time of Flight (TOF) can be modeled as [33,34]: where d m,n is the UWB ranging value between UAV m and UAV n; · represents the Euclidean distance; and ν uwb is the ranging noise with variance of R uwb . In order to effectively integrate the cooperative measurement of UWB, the position vector of the adjacent UAV should be augmented to the system state.
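The local and cooperative observation functions described above can be written compactly as follows. This is a minimal sketch under stated assumptions: frames, units and variable names are illustrative, atmospheric terms are assumed already corrected, and NLOS/multipath outliers are assumed already screened out by the integrity checks mentioned in the text. These h(·) functions are what the sigma points would be pushed through in the measurement update sketched earlier.

```python
import numpy as np

def h_gnss_position(p):
    """GNSS position fix: directly observes the 3-D position (plus white noise)."""
    return p

def h_pseudorange(p, sat_positions, clock_offset_m):
    """Pseudo-range to each visible satellite: geometric range plus the distance
    offset corresponding to the equivalent receiver clock error."""
    return np.array([np.linalg.norm(p - s) for s in sat_positions]) + clock_offset_m

def h_baro(p):
    """Barometric altimeter: observes the height component of the position."""
    return p[2]

def h_uwb_range(p_m, p_n):
    """TOF UWB ranging between UAV m and neighbour n: Euclidean distance."""
    return np.linalg.norm(p_m - p_n)
```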
After the augmentation, the state vector of the SP filter for UAV m is extended with the position vectors p U m of the adjacent UAVs, and the augmented covariance is assembled with diag(·), which represents the block diagonal matrix whose diagonal blocks are the listed matrices. A sigma point set can be regenerated as per the mean and covariance of Equation (36), followed by the measurement update and completion of the (τ + 1)-th iteration, obtaining the belief b (τ+1) X k m . Noteworthy, the position vector is augmented without time update. Algorithm Description Taking the navigation state of UAV m as an example, the proposed method is described in detail as shown in Algorithm 1 and Figure 3. Step 1 (Algorithm 1: 1) is to initialize the state vector X 0 m of the SP filter and set its initial mean and initial covariance P 0 XX m , which are the bases for the time recursion of the SP filter. Step 2 (Algorithm 1: 3-6) is to sample sigma points as per the mean and covariance of the state at the previous time and complete the time update of the system based on the IMU measurements, obtaining the mean and covariance of the state. Step 3 (Algorithm 1: 7-9) is to complete the local measurement update as per the GNSS measurements, barometric height and other local measurements using Equations (3)-(6) and (12)-(14), obtaining the mean and covariance of the state; this mean and covariance correspond to the belief before the cooperative iterations. Step 4 is to augment the state with the positions of the adjacent UAVs and iterate the cooperative measurement update with the received UWB ranging messages, obtaining the belief b (τ+1) X k m at each iteration. Step 5 (Algorithm 1: 19-21) is to extract and output the navigation results X t m and P t XX m , and then go to Step 2 to start the next epoch. Complexity Analysis of SPBP For the SPBP method proposed in this paper, the main calculation load is to calculate the square root of the covariance matrix when selecting sigma points as per Equation (2). An effective calculation can be realized by Cholesky decomposition [35], with complexity being the cube of the dimension of the state and also the cube of the particle number N spbp . Within an epoch, the computation load of this method is O(N 3 + (N a + 3M link ) 3 I sp ), where M link is the number of adjacent UAVs and I sp is the number of iterations. In contrast, for the traditional NBP method, the computation complexity is linear in terms of M link and quadratic in terms of particle number. Hence, the computation complexity of the traditional NBP method is O(I nbp N nbp 2 M link ) for a state estimation requiring I nbp iterations and N nbp particles. However, the particle number for SPBP is much less than that for NBP in general. Thereby, the SPBP method has far higher calculation efficiency than the NBP method. In terms of communication traffic, the SPBP method only needs to transfer the mean and covariance of the position belief; compared with the NBP method, which requires transferring sub-particles, the SPBP method obviously has a smaller communication load. Simulation Configuration Based on a flight setting in an urban scene, a UAV swarm containing 12 UAVs is simulated to verify the proposed SPBP method. Figures 4 and 5 display the initial position and horizontal trajectories of the UAV swarm, respectively. It is assumed that every UAV can interact with the UAVs within the working distance of the UWB transceiver, 60 m.
The parameters used for sensor configuration and simulation are as listed in Table 1 [36]. Each UAV is equipped with a low-cost IMU, barometer and GNSS receiver, and can conduct inter-UAV ranging via a UWB transceiver. It is set that all UAVs realize the transmission of cooperative information through broadcasting, and UAVs that receive cooperative information and are within the ranging range can use this information for cooperation. Two UAVs in the swarm can obtain the position observation of GNSS directly; the remaining UAVs at a flight height above 30 m can receive pseudo-range observations of three satellites, while those at a flight height below 30 m cannot receive the pseudo-range observations. In order to fully evaluate the performance of this method, all the results shown in this section are based on 200 Monte Carlo simulation experiments where each simulation takes 1000 s. In the simulation, the positioning estimation error is calculated as per Equation (37), where Err E , Err N and Err U are the estimation errors in the east, north and upward directions respectively. Simulation Results for Cooperative Navigation To compare the proposed SPBP with other methods, this section simulates several cooperative positioning algorithms based on the same dataset, including H-SPAWN [21], Cooperative Positioning based Extended Kalman Filter (CP-EKF) [37], Multidimensional Scaling (MDS) [38] and Non-linear Regression (NLR) [39] methods. The positioning results are illustrated in Figures 6 and 7. The detail of each algorithm is presented as follows: 1 H-SPAWN (particle number = 2500): It is short for hybrid sum-product algorithm for wireless network, a typical BP-based positioning algorithm proposed in [21]. Its message passing process is identical with Equations (20)-(24). In case 1, 2500 particles are used. 2 H-SPAWN (particle number = 1000): This algorithm is the same as that stated in 1, but the number of particles used is 1000. 3 CP-EKF: It is a CN method, in which EKF is used for integrating local measurements and other UAV cooperative data.
The error of navigation state is designed as the state of filter. The cooperative measurement obtained by inter-UAV ranging is also incorporated into the measurement model. It should be noted that the covariance of noise in the measurement model considers both the UWB ranging error and the collaborator's position error. 4 NLR: In this method, a nonlinear regression equation set is established on the basis of all distance and position measurements, and the UAV's position is solved in iterative least square method. 5 MDS: This method is based on multidimensional scaling and the maximum likelihood estimation. The approach is composed of three sub-steps: (a) construction of the distance matrix, (b) relative position estimation, (c) coordinates registration. If the node scale is large, the network can be divided into several local maps for relative positioning respectively. 6 SPBP: The proposed method in this paper. Figure 6 presents the simulation results of several cases, where the cumulative distribution functions (CDF) of position errors for all UAVs are used as the performance metrics. Figure 7 provides the example performance of a single UAV without satellite position measurement. NLR is a cooperative positioning method merely depending on the geometric relationship of UAV swarm and its precision is obviously lower than other cooperative methods as shown in the figure. CP-EKF integrates the navigation information of IMU, barometer, GNSS, and adjacent UAVs. However, without iterative update, the indirect cooperative information of non-adjacent UAV is not fully used. Hence, the positioning precision of CP-EKF is higher than that of NLR, but lower than MDS method and other BP-based cooperative methods. The relative positioning process of the MDS filter only utilizes the geometric constraints of the UAV swarm, thus its performance is lower than BP-based methods. Compared with H-SPAWN, the SPBP proposed in this paper is a method for approximating the distribution of state based on finite sampling points (i.e., sigma points). The difference between H-SPAWN and SPBP is similar to the difference between particle filter and UKF. Therefore, when there is no strong nonlinearity in the measurement, such as NLOS, multipath, and other errors, the CN model is basically in line with the linear mixed problem model, and the SPBP can realize similar performance to H-SPAWN as well as lower calculation complexity. As for the SPBP method proposed in this paper, its positioning precision is close to that of H-SPAWN (particle number = 2500) and superior to that of H-SPAWN (particle number = 1500). For instance, the percentiles with positioning error not greater than 2 m are lower than 94.9% for SPBP, close to 92.3% for H-SPAWN (particle number = 2500), and 82.1%, 11.7%, 45.7%, and 33.4% respectively for H-SPAWN (particle number = 1500), NLR, MDS and CP-EKF methods. To further analyze the complexity of the proposed method, the 'tic' and 'toc' functions of MATLAB are used to record the running time of the code, which can roughly represent the computation load. As can be seen from Figure 8, which presents the efficiency comparison among all the above methods, NLR and CP-EKF do not need iterative passing of message, hence the minimum computation load. The computational efficiency of the MDS filter is similar to SPBP but with large fluctuation, mainly because the iterative process of MDS filter is largely dependent on prior information, which is affected by measurement biases. 
For H-SPAWN (particle number = 2500), the processing time at each epoch is more than 30 times that of SPBP. This means that with similar performance, SPBP has much lower calculation complexity than H-SPAWN. System Configuration Figures 9 and 10 show the UAV platform for the test and the configuration and signal flow of the navigation equipment, respectively. Each UAV is equipped with Ublox M8 GPS module, Forsense IMU6132 module, BMP280 atmospheric sensor and LinkTrack UWB module. The sensor data are collected, packed and sent to Raspberry Pi4 by STM32H7 processor. Raspberry Pi4 is for storing and processing data and sending control commands. While ranging between UAVs, UWB as ranging and communication module may also be used by STM32H7 to send its position and other information to be passed to other UAVs. Each UWB module can range and communicate with the other four based on time-division mechanism.
Test Results The effectiveness of the proposed method was verified by an outdoor automatic flight test on a UAV swarm on the drill ground. This swarm is composed of five quadrotor UAVs (Figure 11). To evaluate the navigation performance, each UAV was equipped with on-board RTK equipment to provide the true value of the reference position. Figure 12 displays the flight scene, and the 3D and 2D flight trajectories of the UAV swarm. As shown, after taking off, the UAV swarm flew in an L-shaped loop for 6 cycles before landing. The entire flight process is divided into three stages: take-off, flight in loop (cooperation stage), and landing. It is set as follows: in the first and third stages, the navigations of all UAVs are realized by the combination of INS/GPS; after entering the second stage, UAV 1 and UAV 2 can obtain the satellite position measurements continuously, while other UAVs reject the input of satellite measurements and position cooperatively. To prove the performance of the proposed method, H-SPAWN was selected to compare with the method proposed in this paper based on actual flight.
In the second stage, the horizontal positioning error CDFs of UAV 3, 4 and 5 are presented in Figure 13, and the horizontal positioning error of each UAV is illustrated in Figure 14. In order to compare the positioning performances of the mentioned algorithms more clearly, Table 2 lists the positioning results of the several algorithms for comparison. As shown, the total positioning precision of SPBP is similar to that of H-SPAWN (particle number = 3000), while the positioning precision of SPBP used for each UAV is superior to that of H-SPAWN (particle number = 1000). In terms of the average processing time at each epoch, the computation load of the proposed SPBP algorithm is 4 times and 20 times lower than those of H-SPAWN (particle number = 1000) and H-SPAWN (particle number = 3000) respectively. The flight test results demonstrate that the SPBP algorithm can significantly improve the calculation efficiency while guaranteeing the performance. Discussion This paper proposes a new CN method based on SPBP. In this method, a limited number of sigma points are selected to approximate the posteriori distribution of the navigation state quantity. The calculation load of this method is greatly less than that of traditional nonparametric BP method. Based on onboard multisource navigation sensor, the SPBP method proposed in [24] is expanded to a 3D environment in this paper, the state vectors of the time and measurement updates of sigma point filter are separated, and the state vectors are augmented only when updating the measurement of the cooperative information. Thereby, the messages passed by different UAVs can be received in every time of iteration and the SPBP-based new CN method proposed in this paper is more suitable for the UAV flight scene. Particularly, when the setting of convergence threshold is small, the covariance matrix of the sigma point filter may be overconfident over repeated iterations, resulting in overconcentrated particles in the next sampling. Although the SPBP method can be used to approximate state distribution on the basis of a few sigma points, it is still needed to satisfy the Gaussian assumption of the system. The estimation performance may be poor when the navigation system is not corrected for a long time or the observation is strongly nonlinear. For example, when NLOS exists in UWB measurement and is not removed, the navigation result may be biased; meanwhile, the state covariance cannot be adjusted accordingly so that the next sampling scope cannot cover the correct position. In the future, the SPBP algorithm can combine with some performance evaluation methods, such as posteriori Cramer-Rao bound [40,41], to solve the problem of overconfidence in the covariance matrix. In addition, for the cooperative navigation system, an excellent communication network is indispensable. For the case where the UAVs are closely distributed and the number is small, a wireless communication network can be established in a point-to-point manner, which can be realized through XBEE or UWB, and the cost and complexity are low. If the scale or the distribution range of UAV swarm is large, the situation becomes complex. It is best to realize cooperative information interaction through broadcasting, so that the UAVs that receive cooperative information and are within the UWB working distance can complete cooperative navigation. 
Conclusions Against the CN background of low-cost UAV swarm, this paper proposes a distributed CN method that integrates common onboard navigation sensors and cooperative measurement information. In this method, the filtering of sigma point filter is divided into two steps. First, local measurements are integrated. Then, the mean and covariance of state are used to approximate the belief of the navigation state, and the message passing in traditional BP method is implemented by augmentation, resampling and cooperative measurement update of the state, realizing a low-complexity approximation to the traditional message passing scheme. The simulation results show that in the simulation scene, the positioning performance of the SPBP method proposed in this paper is significantly superior to that of traditional single-step cooperative positioning methods such as CP-EKF and non-Bayesian CN method such as NLR and MDS. With similar performance, the computation load of the proposed SPBP method is much lower than that of the H-SPAWN method. The flight test results also verify the effectiveness of the proposed SPBP method. The future research will continue to study the SPBP and performance evaluation combined CN method, as well as the method for monitoring the integrity of CN information.
9,682.4
2022-04-20T00:00:00.000
[ "Computer Science" ]
Tunable Emission and Color Temperature of Yb3+/Er3+/Tm3+-Tridoped Y2O3-ZnO Ceramic Nano-Phosphors Using Er3+ Concentration and Excitation Pump Power In this study, a series of well-crystallized Yb3+/Er3+/Tm3+-tridoped Y2O3-ZnO ceramic nano-phosphors were prepared using sol–gel synthesis, and the phosphor structures were studied using X-ray diffraction, scanning electron microscopy, and thermogravimetric analysis. The phosphors were well crystallized and exhibited a sharp-edged angular crystal structure and mesoporous structure consisting of 270 nm nano-particles. All phosphors generated blue, green, and red emission bands attributed to Tm: 1G4→3H6, Er: 2H11/2 (4S3/2)→4I15/2, and Er: 4F9/2→4I15/2 radiative transitions, respectively. Increasing in luminescent centers, weakening of lattice symmetry, and releasing of dormant rare earth ions can enhance all emissions. Er3+ can obtain energy from Tm3+ to enhance green and red emission. These colors can be tuned by optimizing the doping concentrations of the Er3+ ion. The color coordinates were adjusted by tuning both the Er3+ concentration and excitation laser pump power to shift the color coordinates and correlated color temperature. The findings of this study will broaden the potential practical applications of phosphors. Introduction Due to 5s 2 5p 6 shell shielding of the 4f electron layer, the trivalent lanthanide ion has abundant 4f N energy levels which can realize radiative transitions of different wavelengths [1]. For example, Tb 3+ , Er 3+ , Ho 3+ , and Tm 3+ are often used as activators to achieve upconversion (UC) luminescence [2]. The Er 3+ ion has high luminescence efficiency and an emission peak located in the green and red light regions, which can be used as a source of green and red light [3]. As for the selection of sensitized ions, the Yb 3+ ion is an efficient sensitizer for many rare earth (RE) elements because its absorption region is approximately 976-980 nm [4]. This ion has a large absorption cross section and can transfer absorbed infrared light to Ho 3+ , Tb 3+ , Pr 3+ , and Tm 3+ through an energy transfer process. In addition, Er 3+ plasma also achieves green and red emissions, which are conducive to wavelength regulation in different wavelength bands. Therefore, doped luminescence materials are widely made into phosphors [5], glass [6,7], ceramics [8], semiconductor crystal materials [9], and thin films [10], and these materials are used in many fields, such as conversion lasers [11], flat panel displays, biological probes [12], and solar cells [13]. Y 2 O 3 has good performance as a matrix material, good chemical stability, high melting point, and desirable mechanical performance that allow this compound to be applied in challenging environments; in addition, the band gap width can accommodate most trivalent RE ion emission levels, and the radius of this compound and other RE ions are similar, leading to easy doping processes; finally, the low phonon energy of Y 2 O 3 reduces the probability of no radiative transition and increases the probability of radiative transition [14]. These properties enable this compound to improve the luminescence efficiency of RE ions. Y 2 O 3 is a type of RE oxide, and other RE elements that act as sensitizers and activators have the same valence state and similar oxide crystal structures that allows these other RE elements to mix easily into the lattice of Y 2 O 3 . These advantages make this compound a suitable substrate material. 
Similarly, ZnO has been widely used as an oxide matrix material in many fields. This compound is a multifunctional semiconductor material with a wide, direct band gap of approximately 3.37 eV at room temperature. ZnO has three crystal structures, hexagonal wurtzite, sphalerite, and tetragonal halite, and the hexagonal wurtzite crystal structure is the most stable of these structures at room temperature. The density, surface work function, and relative molecular weight of hexagonal wurtzite ZnO are, respectively, 5.606 g/cm3, 5.3 eV, and 81.39. The bonding state and geometric structure of ZnO crystals provide stable optical, chemical, and biological properties, as well as excellent thermal stability. ZnO materials have important applications as optical and infrared electric materials, indicative of their excellence as an oxide matrix [15]. Thus, many scholars have studied RE-doped Y 2 O 3 -ZnO composite matrix luminescent materials to enhance and adjust emissions. Mhlongo et al. reported that adding Er 3+ and Tm 3+ to ZnO substantially improves the absorption capacity for ultraviolet light, which enhances the photocatalytic activity [18]. However, the white luminescence process, the optimum doping concentration of RE ions, and the regulation of color temperature and coordinates still need further study. Therefore, a series of Yb 3+ /Er 3+ /Tm 3+ -tridoped Y 2 O 3 -ZnO UC ceramic phosphors were prepared using sol-gel synthesis. Ceramic phosphor white light emission was mainly dominated by blue, green, and red emissions originating from Tm 3+ and Er 3+ transition mechanisms. Efficient color emission was attributed to Yb 3+ /Er 3+ /Tm 3+ energy transfers. Additionally, the color emissions were tuned by changing the excitation laser pump power. Materials and Methods For the synthesis, the Tm 3+ , Er 3+ , and Yb 3+ nitric acid-based solutions were added to the Y 3+ - and Zn 2+ -containing solution, citric acid was added, and the mixed solution was then stirred and heated to obtain a precursor sol, which was aged at 24 °C for 24 h to form a gel. Afterward, the gel was annealed at 1200 °C for 2 h. The product was ground into a ceramic phosphor powder, which was subsequently characterized. All analysis tests were carried out at room temperature. Field-emission scanning electron microscopy (SEM) morphology and energy dispersive spectroscopy (EDS) measurements of the ceramic phosphor samples were carried out with a Hitachi S4800 FE-SEM (Hitachi Inc., Tokyo, Japan). High-temperature microscope photos were taken with an HSML-FLEX-ODLT 1400 high-temperature microscope (TA Instruments Inc., New Castle, DE, USA). Thermo-gravimetric (TG) analysis of the samples was performed from 25 to 700 °C using a TA STA 409PC thermal analyzer (TA Instruments Inc., New Castle, DE, USA). The crystal structure and phase purity were analyzed from 5° to 90° with a Bruker D8 Discover X-ray powder diffractometer (XRD; Bruker Inc., Karlsruhe, Germany) with nickel-filtered Cu-Kα radiation (λ = 1.5406 Å). The grain size was measured by a dynamic laser scattering (DLS) test with a BT-9300Z laser particle size analyzer (Bettersize Inc., Dandong, China). The photoluminescence (PL) spectra were recorded using an FLS 1000 Edinburgh Instruments fluorescence spectrometer (Edinburgh Instruments Inc., Livingston, UK) under an MSI 980 nm laser diode (MSI Inc., Taipei, China). SEM Morphology and EDS Mapping The nano-phosphor material surface morphology was characterized using SEM.
Figure 1a shows a representative SEM image of the Yb 3+ /Er 3+ /Tm 3+ -tridoped Y 2 O 3 -ZnO ceramic nano-phosphor (Er 3+ : 1 mol%), clearly indicating the crystal size variation. As shown in Figure 1a, the samples were well crystallized and exhibited a sharp-edged angular crystal structure and mesoporous structure consisting of smaller nano-particles. Furthermore, the nano-phosphor chemical composition was analyzed using EDS maps, as shown in Figure 1b,c. Clearly, the nano-phosphor contained O, Y, Tm, Er, Yb, and Zn. No impurities were detected. Moreover, the semi-quantitative proportional variation in elements is also obtained by EDS and is shown in Table 1. It can be seen that the proportion of each element is basically consistent with the experimental design. XRD Results Figure 2a shows the XRD patterns of all the samples. The diffraction peaks were sharp, indicating that all the samples exhibited good crystallinity. With increasing Er 3+ concentration, no additional peaks appeared in any of the spectra. According to the Joint Committee on Powder Diffraction Standards (JCPDS), the main diffraction peaks were indexed to the characteristic peaks of a Y 2 O 3 body centered cubic structure (JCPDS#41-1105). In addition, weak ZnO characteristic peaks (JCPDS#36-1451) also appeared in each pattern. The three principal diffraction peaks of ZnO overlapped with a diffraction peak of Y 2 O 3 , thus the diffraction peaks of ZnO were unclear in the spectrograms. Additionally, none of the patterns exhibited any peaks attributed to other phases, indicating that both Y 2 O 3 and ZnO were independent. To further elucidate how the Er 3+ concentration affected the matrix lattice, the main Y 2 O 3 (222) crystal plane diffraction peaks were amplified, as shown in Figure 2b. With increasing Er 3+ concentration, the (222) peak first shifted to a lower angle. Then, as the Er 3+ concentration increased to 0.4 mol%, the (222) peak shifted to a higher angle. Above 0.6 mol%, further increasing the Er 3+ concentration shifted the (222) peak to a lower angle again. According to Bragg's law, 2dsinθ = nλ (where d is the interplanar crystal spacing, θ is the angle between the incident X-ray and the crystal face, n is the diffraction order, and λ is the X-ray wavelength), lattices expand when the diffraction peak shifts to a lower angle, and vice versa. Moreover, Er 2 O 3 and Y 2 O 3 have almost identical lattice structures and the Y 2 O 3 lattice gap lacks the space to accommodate Er 3+ ions, thus Y 3+ ions can only be substituted by Er 3+ ones. Er 3+ (0.89 Å) has a smaller ionic radius than Y 3+ (0.90 Å), thus the host lattice Y 2 O 3 part shrank and the (222) peak shifted to a higher angle when Y 2 O 3 was doped with Er 3+ ions. As shown in Figure 2b, with increasing Er 3+ concentration, the local lattice Y 2 O 3 Er 3+ concentration initially gradually decreased. Then, as the Er 3+ concentration increased from 0.4 to 0.6 mol%, the local lattice Y 2 O 3 Er 3+ concentration considerably increased. Above 0.6 mol%, further increasing the Er 3+ concentration decreased the local lattice Y 2 O 3 Er 3+ concentration again.
Because the host lattice consisted of both Y2O3 and ZnO parts, increasing the Er3+ concentration from 0.2 to 0.4 mol% initially decreased the Er3+ local concentration in the ZnO lattice, but this local concentration increased when the Er3+ concentration was in the range from 0.4 to 0.6 mol%; subsequently, this local concentration increased again when the Er3+ concentration was above 0.6 mol%. The average crystallite size could be calculated with the Scherrer formula, D = kλ/(β cosθ), where D is the crystallite grain size of the nano-crystals, λ is the X-ray wavelength (0.154056 nm), θ is the Bragg angle of the diffraction peak, k is the Scherrer constant, conventionally set to 0.89, and β is the corrected full width at half maximum (FWHM) of the main characteristic diffraction peak of the XRD pattern. Table 2 lists the average crystallite sizes of the samples. The results show that the average crystallite sizes of the samples vary only slightly as the Er3+ concentration increases; the average crystallite sizes of the nano-phosphors are about 270 nm.
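To make the Scherrer estimate concrete, the following sketch evaluates D from a peak's corrected FWHM; the peak position and width used here are hypothetical and chosen only to give a size of the same order as the reported ~270 nm.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.154056, k=0.89):
    """Scherrer equation: D = k*lambda / (beta*cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)            # corrected FWHM of the peak
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical main peak near 2theta = 29.1 deg with a 0.033 deg corrected FWHM.
print(f"D ~ {scherrer_size(0.033, 29.1):.0f} nm")
```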
In order to investigate grain size and agglomeration, a DLS measurement was performed on the sample (Er3+: 0.6 mol%), which was finely ball-milled for 3 h; the spectrum is shown in Figure 3. The results show that grain sizes range from 300 nm to 6000 nm. The DLS results for the sample (Er3+: 0.6 mol%) are listed in Table 3. It can be seen that the median grain size is 872 nm and the average grain size is 1185 nm. Conspicuously, the particles aggregate into large porous grains, which is consistent with the SEM observations. This indicates that most phosphor particles maintain a stable porous structure at the nanoscale, while only a few particles remain independent.

Photoluminescence (PL) Properties

Figure 4a shows the sample PL spectra. Each PL spectrum exhibited blue, green, and red emission bands in the ranges of 460-490, 510-570, and 630-680 nm, respectively. The emission peaks centered at approximately 470, 535 (556), and 660 nm were attributed to the Tm3+ ion 1G4 → 3H6, Er3+ ion 2H11/2 (4S3/2) → 4I15/2, and Er3+ ion 4F9/2 → 4I15/2 energy-level transitions, respectively. Figure 4b shows the blue (460-490 nm), green (510-570 nm), and red (630-680 nm) emission integral intensities plotted as functions of Er3+ concentration. As the Er3+ concentration increased, the blue emission initially intensified dramatically; as the concentration increased past 0.4 mol%, the emission rose to a peak at an Er3+ concentration of 0.6 mol% and then decreased. In contrast, both green and red emissions initially intensified but subsequently weakened over a small range, with the peak appearing at 0.3 mol%. Increasing the Er3+ concentration to 0.4 mol% resulted in both emissions increasing again to another peak at an Er3+ concentration of 0.6 mol%. After another decline in emission from 0.6 mol% to 1.0 mol%, the emissions increased again. Additionally, the blue emission was stronger than the red one at low Er3+ concentrations. However, at Er3+ concentrations above 1.0 mol%, the red emission was stronger than the blue one. The International Commission on Illumination (Commission internationale de l'éclairage, CIE) chromaticity test was performed, and the luminescence photos and corresponding results are shown in Figure 5a. The coordinates of the samples were approximately linear with wide dispersion. As the Er3+ concentration increased, the color of the fluorescence changed from white to blue. Figure 5b shows the ratios of blue emission to green emission (EB/EG) and red emission to green emission (ER/EG). As the Er3+ concentration increased, the EB/EG ratio decreased gradually, whereas the ER/EG ratio increased, resulting in color-tunable emission by adjusting the Er3+ concentrations.
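The EB/EG and ER/EG ratios in Figure 5b are simply ratios of band-integrated intensities. A minimal sketch of that computation is given below; the synthetic spectrum is hypothetical and only mimics the three band positions quoted above.

```python
import numpy as np

# Hypothetical emission spectrum: wavelength (nm) vs. intensity (arb. units).
wl = np.linspace(440, 700, 1301)
spectrum = (np.exp(-((wl - 470) / 8) ** 2)           # blue band
            + 0.6 * np.exp(-((wl - 535) / 10) ** 2)  # green band
            + 0.8 * np.exp(-((wl - 660) / 9) ** 2))  # red band

def band_intensity(lo, hi):
    """Integrated intensity over a wavelength window (trapezoidal rule)."""
    mask = (wl >= lo) & (wl <= hi)
    return np.trapz(spectrum[mask], wl[mask])

E_B = band_intensity(460, 490)   # blue, 460-490 nm
E_G = band_intensity(510, 570)   # green, 510-570 nm
E_R = band_intensity(630, 680)   # red, 630-680 nm
print(f"E_B/E_G = {E_B / E_G:.2f}, E_R/E_G = {E_R / E_G:.2f}")
```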
The reduced blue emission and increased green emission jointly determined how the color coordinates changed to the white region with the increase in Er3+ doping concentration. Green emission had a weak effect on the movement of the color coordinates because of its weak relative intensity.

Figure 4. (a) UC emission spectra generated for Y2O3-ZnO: Yb3+/Er3+/Tm3+-tridoped nano-phosphors excited using a 980 nm laser diode operated at 1.0 W pump power. (b) Blue, green, and red peak intensities of the UC spectra plotted as functions of Er3+ concentration.

Nano-phosphors cannot always be replaced in practice. Therefore, changing the color of the luminescence must be accomplished in other ways, and changing the excitation laser power is a more convenient method of adjusting the color coordinates in practical operation. Therefore, CIE chromaticity coordinates for Y2O3-ZnO: Yb3+/Er3+/Tm3+ nano-phosphors under 980 nm diode laser excitation with different pump powers (0.6, 0.8, 1.0, 1.2, and 1.4 W) were measured, as shown in (ii) of Figure 6. The color coordinates shifted in the blue direction as the laser power increased when the Er3+ doping concentration ranged from 0.2 to 0.6 mol%, as shown in (ii) of Figure 6a-e. When the Er3+ doping concentration was greater than 0.6 mol%, an increase in laser power resulted in the color coordinates shifting toward green, as shown in (ii) of Figure 6f,g. As shown in Figure 5, the position of the color coordinates is related to the emission intensity ratios of blue to green and red to green; these ratios are shown in (i) of Figure 6. It can be seen that, over the Er3+ doping concentration range studied, the EB/EG ratio was larger than the ER/EG ratio. Increasing the laser power widened the difference between the EB/EG and ER/EG ratios.
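The correlated color temperature discussed next is derived from these (x, y) chromaticity coordinates. One common way to estimate it is McCamy's cubic approximation, sketched below with hypothetical coordinates; whether this particular approximation was used for the reported CCT values is not stated.

```python
def mccamy_cct(x, y):
    """Estimate correlated color temperature (K) from CIE 1931 (x, y)
    using McCamy's cubic approximation."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Hypothetical chromaticity coordinates moving from the white region toward blue.
for x, y in [(0.33, 0.34), (0.30, 0.31), (0.27, 0.28)]:
    print(f"(x, y) = ({x:.2f}, {y:.2f}) -> CCT ~ {mccamy_cct(x, y):.0f} K")
```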
The correlated color temperature (CCT) of each sample was calculated according to the color coordinates, and the results are shown in (iii) of Figure 6. As the Er3+ doping concentration increased, the CCT decreased. As shown in (iii) of Figure 6a, due to the intense blue emission, the CCT increased almost exponentially as the laser power increased when the Er3+ doping concentration was low (0.2 mol%). Then, in the range of Er3+ doping concentrations from 0.3 to 0.4 mol%, the CCT increased linearly with increasing laser power, as shown in (iii) of Figure 6b,c. When the Er3+ doping concentration was above 0.4 mol%, the CCT still increased with increasing laser power, but the rate of increase was slower, as shown in (iii) of Figure 6d-g. The luminescence intensity, I_UC, follows the relation I_UC ∝ P_pump^n, where n is the number of photons required to populate the emitting state [19]. Plots of I_UC versus P_pump on a double logarithmic scale for the Y2O3-ZnO: Yb3+/Er3+/Tm3+ nano-phosphors are shown in Figure 7. The values of n for the blue emission are 3.02, 3.31, 3.34, 3.13, 3.10, 3.23, and 3.02, respectively; for the green emission, 1.92, 1.84, 1.87, 1.82, 1.89, 1.92, and 1.88; and for the red emission, 1.74, 1.62, 1.75, 1.74, 1.71, 1.85, and 1.71. The results indicate that the blue emission involves a three-photon process, the green and red emissions involve two-photon processes, and changing the Er3+ doping concentration has no obvious effect on the emission processes.
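The photon numbers n quoted above are obtained as the slopes of log-log plots of intensity versus pump power. A minimal fitting sketch is shown below; the power and intensity values are synthetic and chosen only to mimic a three-photon and a two-photon process.

```python
import numpy as np

# Hypothetical pump powers (W) and integrated UC intensities (arb. units).
P = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
I_blue = 50.0 * P**3.1      # synthetic data mimicking a ~3-photon process
I_green = 80.0 * P**1.9     # synthetic data mimicking a ~2-photon process

# Since I_UC is proportional to P^n, the slope of log(I) vs log(P) gives n.
n_blue = np.polyfit(np.log(P), np.log(I_blue), 1)[0]
n_green = np.polyfit(np.log(P), np.log(I_green), 1)[0]
print(f"n(blue) ~ {n_blue:.2f}, n(green) ~ {n_green:.2f}")
```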
Another energy transfer path is from Yb3+ to Er3+ [24,25]. Therefore, the blue emission can be attributed to the Tm3+: 1G4 → 3H6 transition, the green emission to the Er3+: 2H11/2 (4S3/2) → 4I15/2 transition, and the red emission to the Er3+: 4F9/2 → 4I15/2 transition. As shown in Figure 4b, the green and red emission curves each had two peaks. Around the first peak, the blue emission was sharply reduced, so it is evident that energy is also transferred between Tm3+ and Er3+. According to the research of D. Yan et al., the transition process between Tm3+ and Er3+ can be described by the energy-transfer equations reported in [26-29]. Contrary to Adnan Khan's research, Er3+ not only causes the quenching of Tm3+ [30] but also receives energy from Tm3+ (Er: 4I13/2 + Tm: 3H5 → Er: 4F9/2 + Tm: 3H6), explaining the small increase in the green and red emissions. When the Er3+ doping concentration was 0.6 mol%, all three emissions exhibited a strong peak, possibly because of the increase in luminescent centers, the weakening of lattice symmetry, and the release of dormant RE ions located in the symmetric positions of the Y2O3 lattice [31,32]. Increasing the Er3+ concentration up to 1.4 mol% resulted in emission enhancement, and the enhancement of the green and red emissions should be attributed to the energy received from Tm3+. Meanwhile, it can also be seen from Figure 4b that, compared with the green and red emission intensities of the sample with an Er3+ doping concentration of 0.6 mol%, those of the sample with an Er3+ doping concentration of 1.0 mol% begin to decrease, which can be attributed to the concentration quenching of Er3+.
When the Er3+ doping concentration is large and the distance between the centers is less than the critical distance, cascade energy transfer occurs, i.e., energy passes from one center to the next, and then to the next, until it finally enters a quenching center, resulting in the quenching of luminescence.

Conclusions

A series of Yb3+/Er3+/Tm3+ tri-doped Y2O3-ZnO ceramic nano-phosphors were prepared via a sol-gel method, and the luminescence and structure of the obtained phosphors were investigated. The ceramic nano-phosphors were well crystallized and exhibited a sharp-edged angular crystal structure and a mesoporous structure consisting of smaller particles about 270 nm in size. As described in the results, the blue emission band at 470 nm, green emission band at 535 nm, and red emission band at 660 nm are attributed to the 1G4 to 3H6 energy-level transitions of Tm3+, the 2H11/2 (4S3/2) to 4I15/2 radiative transitions of Er3+, and the 4F9/2 to 4I15/2 radiative transitions of Er3+, respectively. Er3+ can receive energy from Tm3+, which enhances the green and red emissions. Yb3+, Er3+, and Tm3+ did not mediate any obvious change in the crystal structure of either the Y2O3 or the ZnO matrix. The color coordinates were adjusted by changing the Er3+ doping concentration and the laser power, and the emission color was tuned to white light, indicating the potential of the prepared phosphor for practical applications in display devices and lasers. Under different doping concentrations, the CCT was adjusted over different ranges by changing the excitation laser power. Energy transfer from Tm3+ to Er3+, the increase in luminescent centers, and the release of dormant RE ions located at symmetric sites of the Y2O3 lattice are the fundamental reasons for the changes in emission.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
7,044.8
2022-06-01T00:00:00.000
[ "Materials Science" ]
Charge conjugation symmetry in the finite basis approximation of the Dirac equation : 4-component relativistic atomic and molecular calculations are typically performed within the no-pair approximation where negative-energy solutions are discarded, hence the symmetry between electronic and positronic solutions is not considered. These states are however needed in QED calculations, where furthermore charge conjugation symmetry becomes an issue. In this work we shall discuss the realization of charge conjugation symmetry of the Dirac equation in a central field within the finite basis approximation. Three schemes for basis set construction are considered: restricted, inverse and dual kinetic balance. We find that charge conjugation symmetry can be realized within the restricted and inverse kinetic balance prescriptions, but only with a special form of basis functions that does not obey the right boundary conditions of the radial wavefunctions. The dual kinetic balance prescription is on the other hand compatible with charge conjugation symmetry without restricting the form of the radial basis functions. However, since charge conjugation relates solutions of opposite value of the quantum number κ , this requires the use of basis sets chosen according to total angular momentum j rather than orbital angular momentum (cid:96) . As a special case, we consider the free-particle Dirac equation, where the solutions of opposite sign of energy are related by charge conjugation symmetry. We note that there is additional symmetry in those solutions of the same value of κ come in pairs of opposite energy. Introduction Consider an electron of charge q = −e and mass m e , placed in an attractive Coulomb potential φ(r). Upon solving the time-independent Dirac equation, one gets a set of solutions ψ i associated with energy levels E i which forms the spectrum that is shown in a pictorial way in fig.(1.a). The charge conjugation operation [1] (C-operation) relates a particle to its anti-particle. The C-conjugated solution Cψ i , describes the solution of the same equation but with opposite charge (a positron), flipping the spectrum, as shown in fig.(1.c). In the free-particle case, φ(r) = 0, the two spectra, left and right, coalesce into the spectrum in fig.(1.b), that contains no bound solutions, and where the C-operation relates positive-and negative-energy solutions of the same equation, i.e. Cψ ±E i = ψ ∓E i . Note that since the free-particle equation does not require the specification of the charge, it describes equally well electrons and positrons. The Dirac equation is the starting point for 4-component relativistic atomic and molecular calculations. In the former case, the high symmetry of the problem allows the use of finite difference methods, whereas molecular applications generally call for the use of finite basis expansions. Early calculations using finite bases were flawed since the coupling of the large and small components was not respected. Spurious solutions appeared, and the calculations were converging to energy levels lower than it should be. It was observed by Schwarz and Wallmeier [2] as well as Grant [3] that in such calculations the matrix representation of the kinetic energy operator obtained in the non-relativistic limit of the Dirac equation did not match the Schrödinger one. 
It was realized that if the small components basis functions are generated from the large component ones by ϕ S i ∝ σ · pϕ L i , where σ are the Pauli spin matrices, then the non-relativistic limit of the kinetic energy operator goes directly to the Schrödinger one, and the spurious states disappear. This was further analyzed and formalized under the name of kinetic balance by Stanton and Havriliak [4] (see also Ref. 5). Calculations using this prescription were first done by Lee and McLean [6] (using unrestricted kinetic balance, see Section 2.3.3), and Ishikawa et al. [7]. Present-day 4-component relativistic atomic and molecular calculations are typically carried out within the no-pair approximation [8,9], where the electronic Hamiltonian is embedded by operators projecting out negative-energy orbitals, hence treating them as an orthogonal complement. However, going beyond the no-pair approximation and considering effects of quantum electrodynamics (QED), notably vacuum polarization and the self-energy of the electron, the negative-energy solutions take on physical reality and require a proper description. Charge conjugation symmetry also becomes an issue [10,11] and has to be considered when designing basis sets. In the present work, we investigate the realization of charge conjugation symmetry, in short C-symmetry, of the one-electron Dirac equation within the finite basis approximation. Since basis functions are typically located at nuclear positions, we limit attention to the central field (spherically symmetric) problem. We shall consider three schemes for basis set construction: restricted kinetic balance [4,12], inverse kinetic balance [13], and dual kinetic balance [14]. As such our work bears some resemblance to the study of Sun et al. [13], but our focus will be on whether these schemes allow the realization of charge conjugation symmetry. The Dirac equation and C-symmetry The relativistic behavior of an electron placed in an electromagnetic potential (φ, A) is described by the Dirac equation where are the Dirac matrices, anti-commuting amongst themselves. Dirac himself noted that if matrices α y and β are swapped, then the complex conjugate of a solution to eq.(1) will be the solution of the same equation, but with opposite charge [15]. Kramers coined the term charge conjugation for this symmetry linking particles and their anti-particles [1], and it has later been elevated to one of the three fundamental symmetries of Nature through the CPT-theorem [10,[16][17][18]. In the Dirac representation, the C-operator is given by where K 0 is the complex conjugation operator. The general form was investigated by Pauli [19]. For static potentials, the solution of eq.(1) has the form ψ(r, Since the action of the charge conjugation operation is Cĥ −e C −1 = −ĥ +e , we get the time-independent positronic equation asĥ +e Cψ(r) = −ECψ(r), with opposite sign of the energy. In passing, we note that the charge conjugation operator can be expressed as C = γ 5 βK, where K is the time-reversal operator. Radial problem We shall limit attention to the central-field case, with the vector potential A = 0, and a radial scalar potential φ(r). Solutions then have the general form where the imaginary number i is introduced to make both radial functions P κ and Q κ real. The Ω κ,m are 2-component complex eigenfunctions of theκ = −h − σ ·ˆ operator [20], and κ represent the corresponding eigenvalue. 
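For reference, the standard central-field and charge-conjugation expressions, in one common textbook convention, are collected below; the sign and phase conventions adopted in this work may differ, so these should be read as a sketch rather than the paper's own equations.

```latex
% Central-field ansatz, the kappa operator, and the C-operator (Dirac representation):
\psi_{\kappa,m_j}(\mathbf{r}) \;=\; \frac{1}{r}
\begin{pmatrix} P_\kappa(r)\,\Omega_{\kappa,m_j}(\theta,\varphi) \\[2pt]
                i\,Q_\kappa(r)\,\Omega_{-\kappa,m_j}(\theta,\varphi) \end{pmatrix},
\qquad
\hat{K}\,\Omega_{\kappa,m_j} = \hbar\kappa\,\Omega_{\kappa,m_j},\quad
\hat{K} = -\hbar - \boldsymbol{\sigma}\cdot\hat{\boldsymbol{\ell}},
\]
\[
\kappa = \mp\left(j+\tfrac{1}{2}\right)\ \text{for}\ j=\ell\pm\tfrac{1}{2},
\qquad
\hat{C} = i\gamma^{2}K_0 \quad (\text{up to an arbitrary phase}),
\]
```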
After separation of radial and angular variables, we obtain the radial Dirac equation, eq. (7). The C-operation shown in eq. (3), when applied to the spherical solution, eq. (6), gives eq. (8) [21]. We observe that the κ and m j quantum numbers in the angular parts have switched sign, and that the radial components are swapped, P κ ↔ Q κ .

Free-particle radial problem

Usually, the free-particle Dirac equation solutions are presented in the form of plane waves. However, we are interested in atomic (and molecular) calculations where we use spherical basis functions centered at the nuclear positions. It is therefore more appropriate to consider the free-particle solutions in spherical symmetry. By setting φ(r) = 0 in the radial Dirac equation, eq. (7), the solution of this problem is found to be eq. (9) [21,22], where the large and the small (upper and lower) components are given by eqs. (10) and (11), in which j ℓ are the spherical Bessel functions of the first kind, S x = x/|x| is the sign function, and k(E) = √(E² − m_e²c⁴)/(cħ) represents the wavenumber. These solutions are normalized to the delta function as they describe a continuum of solutions. By next applying the C-operator to the free-particle solution, we get the C-conjugated partner. Finally, we see that it is possible to connect opposite-energy solutions, in spherical symmetry, by the C-operation, as we expect from the trivial Dirac plane-wave case. Details about C-symmetry and the free electron in the spherical case can be found in [21][22][23].

Finite basis approximation

Generally, the plan is to specify a finite number of basis functions, construct the matrix representation of the Dirac equation, then diagonalize it to get the set of eigenvalues and eigenvectors. We start by introducing radial basis sets for the large and the small components, which means that the radial functions P κ and Q κ are expanded in these bases, giving the matrix representation of the Dirac equation, eq. (16). The matrix elements of the corresponding submatrices are expressed in terms of these basis functions. We note that the κ appearing in superscripts refers to the radial basis functions, whereas the κ appearing as a subscript is associated with the operator. We also note that the −e term appearing in the subscripts denotes the charge that appears in front of the scalar potential. Since we expect a (real) Hermitian matrix representation, the off-diagonal matrices should be related by the transpose operation. Using integration by parts, this implies that the basis functions should vanish at the boundaries, 0 and ∞.

Gaussian type functions

We shall work with Gaussian type functions since they play a central role in quantum chemical calculations. The large and the small component radial Gaussian functions are given by eqs. (21) and (22), with N Xκ i the normalization constants. We choose the exponents γ p and γ q to reproduce the small-r behavior of the radial functions in the case of a finite nucleus [21], that is, the powers given in eq. (23). This also corresponds to the small-r behavior of the free-particle radial solutions, eqs. (10,11). Furthermore, Sun et al., in calculations on Rn85+ using the dual kinetic balance prescription (discussed later in Section 2.3.4), investigated the use of different integer powers γ p and γ q of r for the large and small Gaussian-type functions [13] and concluded that optimal results, in particular avoiding variational collapse and divergent integrals, were obtained using the powers given in eq. (23).
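The small-r scaling just mentioned is easy to verify numerically: the large-component free-particle radial function behaves as r·j_ℓ(kr) ∝ r^(ℓ+1) near the origin, which is the leading power chosen for the large-component Gaussians. The sketch below checks this; the wavenumber and radii are arbitrary illustrative values.

```python
import numpy as np
from scipy.special import spherical_jn

# Small-r behaviour of the free-particle large-component radial function,
# P(r) ~ r * j_l(k r): for small r it should scale as r^(l+1).
k = 1.0e10            # hypothetical wavenumber in 1/m, order of magnitude only
l = 1                 # p-type large component
r = np.array([1e-14, 2e-14, 4e-14])   # metres, deep in the small-r region

P = r * spherical_jn(l, k * r)
# Successive ratios should approach 2**(l+1) = 4 when P ~ r^(l+1) and r doubles.
print(P[1] / P[0], P[2] / P[1])
```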
C-symmetry in the finite basis approximation

We say that a basis set respects C-symmetry, and thus leads to a C-symmetric matrix representation, if the C-conjugate of every element of the basis set belongs to the basis set itself. For simplicity we shall set the phase factor in eq. (8) to 1, since it does not contribute to expectation values. Since C is a map of C4 → C4, the last condition is equivalent to saying that the subspace Φ of C4 consisting of the basis functions {ϕ 1 , ϕ 2 , ..., ϕ n } is preserved by the C-map. We have seen before that in the spherically symmetric case the C-operation replaces κ → −κ, swaps π L ↔ π S, and replaces m j → −m j , which means that the realization of C-symmetry at the basis set level implies the conditions of eq. (25). Under these conditions, we find that the matrix representations associated with opposite charge and opposite κ become related, and, using the last equations, we get the connection between the eigenvalues, ε +e,−κ = −ε −e,+κ , and between the corresponding eigenvectors.

Kinetic balance

Starting from the radial equation, eq. (7), we get two coupled equations, eqs. (29) and (30), that relate the large and small radial components of the Dirac equation. The exact couplings are energy- and potential-dependent and therefore not appropriate for the construction of basis sets prior to the calculation of the energy. The energy dependence can be eliminated by taking the non-relativistic limit c → ∞. It is sometimes stated that the expressions in square brackets go to one provided E ± eφ(r) ≪ m e c 2. However, the correct statement is rather that E ± eφ(r) should have a finite value as the limit is taken. For a point nucleus the scalar potential φ(r) is singular at r = 0, and so this condition is not satisfied. It can be restored by instead considering nuclei of finite charge distributions [24,25]. As it stands, the energy depends quadratically on the speed of light. This dependence can be eliminated by constant shifts, but this implies taking different limits for the positive- and negative-energy branches. For the positive-energy branch, we introduce the shifted energy E + = E − m e c 2 and from eq. (29) obtain eq. (31). For the negative-energy branch, we introduce the shifted energy E − = E + m e c 2 and from eq. (30) obtain eq. (32). Conventional atomic and molecular relativistic calculations in a finite basis focus on positive-energy solutions, and so bases are constructed according to the prescription of kinetic balance, which imposes the non-relativistic coupling, eq. (31), at the level of individual basis functions, that is, eq. (33). The numerical factor a Sκ i , here set to half the reduced Compton wavelength in accordance with eq. (31), is arbitrary. For instance, if one did not introduce an imaginary i in the atomic spinor, eq. (6), our present choice would be multiplied by this factor. This particular choice of basis functions provides a proper representation of the kinetic energy operator in the non-relativistic limit [4], and therefore prevents the appearance of spurious states in the calculation. For calculations at finite values of the speed of light, it is necessary that the basis is sufficiently flexible so that the relativistic coupling can be realized through adjustment of the basis expansion coefficients [26]. The one-to-one correspondence between large and small component basis functions can only be realized in a 2-component basis and is denoted restricted kinetic balance (RKB).
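As an illustration of the kinetic-balance construction, the sketch below applies the non-relativistic coupling to a large-component Gaussian. The radial form (d/dr + κ/r), standing in for σ·p acting on the large component, is the usual textbook realization and is an assumption of this sketch rather than a quotation of eq. (33).

```python
import sympy as sp

r, zeta, a = sp.symbols("r zeta a", positive=True)
kappa = -1          # s_1/2 block, large-component orbital angular momentum l = 0

# Large-component radial Gaussian (normalization omitted): r**(l+1) * exp(-zeta*r**2).
piL = r * sp.exp(-zeta * r**2)

# Restricted kinetic balance in radial form (assumed here):
# piS = a * (d/dr + kappa/r) piL, with a = hbar/(2 m_e c).
piS = sp.simplify(a * (sp.diff(piL, r) + kappa / r * piL))
print(piS)          # proportional to r**2 * exp(-zeta*r**2)
```

For κ = −1 the result is an r² Gaussian, i.e. it carries the small-component power r^(ℓ'+1) with ℓ' = 1, consistent with the small-r behaviour discussed for eq. (23).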
The terminology was introduced by Dyall and Faegri [12] to contrast with the use of scalar basis functions, where the small component basis functions are taken as derivatives of the large component ones, in no particular linear combinations. This latter scheme is denoted unrestricted kinetic balance (UKB). Restricted kinetic balance (eq.(33)) leads to the matrix eigenvalue equation, eq. (16) with elements given in eqs.(A1,A2) of the Appendix. The matrix representation of the Dirac equation in an RKB basis is that of the modified Dirac equation [27] (see also Ref. 28). From this Sun et al. [13] conclude that there is no 'modified' Dirac equation. However, this is formally incorrect, since the modified Dirac equation has an independent existence at the operator level. A corresponding prescription that favors negative-energy solutions has been termed inverse kinetic balance [13] (IKB) and is based on eq.(32). In this prescription the small component basis functions are introduced first, then the large ones are generated using the last equation. This prescription leads to the eigenvalue equation whose elements are given in the Appendix eqs. (A3,A4). In order to respect the C-symmetry we impose the conditions of eq.(25) on the RKB prescription eq.(33), which leads the following equation for large component basis functions Its general solution is and the small components are then π S κ i = π L −κ i . The c i are arbitrary coefficients, j α (z) and y α (z) are spherical Bessel functions of the first and second kind respectively. If, on the other hand, we impose C-symmetry on the IKB prescription eq.(34), we obtain the same general solution, now with We also note that if we combine the RKB and IKB prescriptions, to describe both positive and negative energy solutions on the same footing, we again get the same general solution, The problem with this specific choice of functions is that the boundary conditions π L κ /S κ i (0) = 0 and π L κ /S κ i (∞) = 0, are not obeyed simultaneously. Therefore they are not useful for atomic and molecular calculations. Dual kinetic balance The kinetic balance prescription is widely employed in atomic and molecular calculations, but favors the positive-energy solutions. The dual kinetic balance prescription (DKB) ensures the right coupling between the large and the small components (in the non-relativistic limit) for both positive and negative energy solutions. It was introduced by Shabaev et al. [14] with the use of B-splines and tested by calculating the one-loop self-energy correction for a hydrogen-like ion. The radial function is expanded as where the first and second set of basis functions have the non-relativistic coupling of positive-and negative-energy solutions, respectively, as indicated by the [±] symbol on the coefficients. This particular expansion leads to a generalized eigenvalue problem whose elements are defined in eqs.(A9-A11) of the Appendix. Contrary to the case of RKB/IKB the conditions for C-symmetry, eq.(25), can now be imposed without putting constraints on the choice of basis functions. The two matrix representations associated with (+e, −κ) and (−e, +κ), become related by leading to the C-connection between the eigenvalues, and the eigenvectors +e,−κ = − −e,+κ , Note, however, that the condition in eq.(25) that ensures the C-symmetry, implies that one has to use the same exponents for both ±κ Gaussian type functions, as has also been pointed out by Dyall [29]. 
This corresponds, in the terminology of Dyall, to j bases, where exponents are optimized for the total angular momentum quantum number j [30], contrary to conventional basis sets where functions are optimized for the orbital angular momentum ℓ.

Computational details

To illustrate our findings we have written numerical codes using the Wolfram Mathematica program [31]. We built the matrix representations of the Dirac equation in the RKB, IKB and DKB schemes, using spherical Gaussian functions, eqs. (21,22), and a point nucleus.

Kinetic balance

We started by doing a simple free-particle calculation, φ(r) = 0, within the RKB scheme. Using spherical Gaussian functions, we specify exponents ζ κ = {1, 2}, with κ = ±1 (s 1/2- and p 1/2-type functions). By solving the generalized eigenvalue problem for each κ-block, we get the eigenvalues ε κ reported in Table 1. At first glance, one gets the impression that C-symmetry is respected since the eigenvalues come in pairs of opposite sign. However, the pairs occur for the same κ and not opposite κ as predicted by C-symmetry. This is confirmed by inspection of the eigenvectors, as exemplified by showing the first two normalized eigenvectors of each κ-block in Table 2. We see clearly that the expected connection (C-conjugation) between positive- and negative-energy solutions does not hold here. In order to understand the reason, we set φ(r) to zero in the RKB matrix equation whose elements are given in eqs. (A1-A2) of the Appendix. Combining the first and second lines of the resulting equation, we see that each eigenvalue λ κ corresponds to two values ε κ = ±√(2m_e c² λ κ + m_e²c⁴). Although the eigenvalues exist in pairs, it is clear from the corresponding eigenvectors that the upper and lower components of two opposite-energy solutions are not related by C-symmetry. In fact, as shown in Section 2.3.3, the RKB prescription does not generally respect C-symmetry. Note that this pairing of energies can already be seen from the exact spherical free-particle solutions in eq. (9). Upon the substitution E → −E, and keeping in mind that E ∈ R \ [−m e c 2 , +m e c 2 ], we see that the solution of flipped energy sign can be expressed in terms of the original one. Doing the same calculation using the IKB prescription, we get the sets of eigenvalues shown in Table 3. The first two eigenvectors of each spectrum are shown in Table 4. By comparing the eight eigenvectors shown in Table 2 and Table 4, we see that positive- and negative-energy solutions that belong to opposite κ-sign blocks are related by C-symmetry. Taking into account the condition in eq. (25), we see that the RKB and IKB matrices, eqs. (A1-A4) in the Appendix, are indeed connected by C-symmetry, leading to the symmetry between the RKB and IKB eigensystems, ε IKB +e,−κ = −ε RKB −e,+κ . Since, as we see, RKB and IKB are related by C-symmetry, a combination of the two prescriptions conserves C-symmetry, and this is exactly what DKB is about (eq. (37)).

Dual kinetic balance

We present two simple atomic calculations within the DKB prescription. For each calculation we get two sets of eigenvalues, one from each κ-block, shown in Table 5. Then we pick the first eigenvalue of each set and show the corresponding eigenvectors in Table 6.
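A quick numerical check of the free-particle pairing relation derived above in the kinetic-balance subsection is sketched below, in Hartree atomic units (m_e = 1, c ≈ 137.036); the λ values are arbitrary illustrative inputs, not eigenvalues from Table 1.

```python
import numpy as np

c = 137.035999        # speed of light in Hartree atomic units, m_e = 1
lam = np.array([0.5, 2.0, 10.0])           # hypothetical reduced eigenvalues (hartree)

eps_plus = np.sqrt(2 * c**2 * lam + c**4)  # epsilon = +sqrt(2 m c^2 lambda + m^2 c^4)
eps_minus = -eps_plus                      # the paired solution of the same kappa block

# In the non-relativistic regime the positive branch approaches m c^2 + lambda.
print(eps_plus - c**2)        # ~ [0.5, 2.0, 10.0] up to small relativistic corrections
print(eps_plus + eps_minus)   # exactly zero: pairs of opposite sign for the same kappa
```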
Conclusion We have investigated three basis set schemes for solving the Dirac equation in a central field: restricted, inverse and dual kinetic balance, and their compatibility with charge conjugation symmetry which connects solutions of opposite κ of the electronic and positronic problem. An interesting observation is that in the free-particle case, where the electronic and positronic problem coalesce, there is further symmetry such that pairs of eigenvalues of opposite sign also occur for same κ. We are not aware of any discussion of this feature in the literature. Charge conjugation symmetry can be realized within restricted and inverse kinetic balance, but only using special functions which do not respect the boundary conditions of the radial Dirac solutions and which are not useful for atomic and molecular calculations. Dual kinetic balance, on the other hand, is compatible with charge conjugation symmetry for any type of radial basis function, provided j bases are used. An alternative to dual kinetic balance, denoted dual atomic balance, has been proposed by Dyall [29]: In this scheme restricted and inverse kinetic balance is used separately for positive-and negative-energy solutions. This requires in principle two separate diagonalizations, followed by a final diagonalization in the dual basis. If one seeks to generate orbitals for use in QED calculations, then a possible simple alternative to the latter scheme is to first carry out a standard 4-component relativistic calculation within restricted kinetic balance and electronic charge q = −e and only retain the positive-energy solutions. Then a second calculation is carried out, again within restricted kinetic balance, retaining only positive-energy solutions and with the same potential, but now using the positronic charge q = +e. This scheme then has the intriguing property that the final set of orbitals is restricted to observable, positive-energy solutions only. We plan to study these schemes in future work.
5,150.6
2020-05-31T00:00:00.000
[ "Physics" ]
Spatial distribution of elements during osteoarthritis disease progression using synchrotron X-ray fluorescence microscopy

The osteochondral interface is a thin layer that connects hyaline cartilage to subchondral bone. Subcellular elemental distribution can be visualised using synchrotron X-ray fluorescence microscopy (SR-XFM) at approximately 1 μm resolution. This study aims to determine the relationship between elemental distribution and osteoarthritis (OA) progression based on disease severity. Using modified Mankin scores, we collected tibia plates from 9 knee OA patients who underwent knee replacement surgery and graded them as intact cartilage (non-OA) or degraded cartilage (OA). We used a tape-assisted system with a silicon nitride sandwich structure to collect fresh-frozen osteochondral sections, and changes in the osteochondral unit were defined using quantified SR-XFM elemental mapping at the Australian Synchrotron's XFM beamline. Non-OA osteochondral samples were found to have significantly different zinc (Zn) and calcium (Ca) compositions from OA samples. The tidemark separating noncalcified and calcified cartilage was rich in zinc. Zn levels in OA samples were lower than in non-OA samples (P = 0.0072). In OA samples, the tidemark had less Ca than the calcified cartilage zone and subchondral bone plate (P < 0.0001). The Zn-strontium (Sr) colocalisation index was higher in OA samples than in non-OA samples. The lead, potassium, phosphate, sulphur, and chloride distributions were not significantly different (P > 0.05). In conclusion, SR-XFM analysis revealed spatial elemental distribution at the subcellular level during OA development.

Within the osteochondral unit, essential elements are involved in maintaining joint stability, preserving mechanical properties, acting as co-factors in signalling pathways, and participating in vital biological activities during pathophysiological processes 19. The element strontium (Sr) exhibits significant similarities to the chemical attributes of Ca, making it another vital constituent of the inorganic mineral accumulation at the osteochondral interface 20. Potassium (K) contributes to molecular homeostasis through its involvement in membrane potential, electrolyte balance, pH regulation, enzymatic reactions, and cell growth 21. Magnesium (Mg) and potassium (K) intake has been shown to have disease-modifying effects in OA 22,23. Phosphate (P) is essential for bone and the CCZ, forming hydroxyapatite and supporting energy metabolism as an ATP component, while aiding in nucleic acid and coenzyme synthesis and acid-base balance 24. Intra-articular basic calcium phosphate (BCP) crystals, present in most OA joints, are associated with severe degeneration 25. Sulphur (S) is pivotal in tissue stability, contributing to amino acids, protein synthesis, redox balance, antioxidant protection, and the synthesis of coenzymes and vitamins 26. Chloride (Cl) has diverse functions in electrolyte and acid-base balance, osmotic pressure regulation, nerve function, and digestion 27. Dysfunction of Cl channels in articular cartilage can disrupt the microenvironment, leading to imbalances in matrix and bone metabolism, partial aseptic inflammation, and progression of OA 28. Additionally, certain heavy metals, including lead (Pb) and cesium (Cs), may impede joint homeostasis through regional deposition 29,30.
In line with these studies, in our previous work we employed EDS and LA-ICP-MS as analytical tools to qualitatively identify the elemental composition during OA disease progression, encompassing Ca, P, Sr, oxygen, carbon, K, Mg, Na, and Cl. However, owing to the limited sensitivity of EDS and LA-ICP-MS, their lack of spatially resolved analysis of the osteochondral interface, and the absence of quantitative data, a more comprehensive investigation is warranted to acquire insights into the underlying mechanisms of the disease, which could ultimately lead to the development of innovative diagnostic tools and therapeutic strategies. A range of methods have been devised for visualising the elemental composition and distribution in biological samples, such as energy-dispersive X-ray spectroscopy (EDS), time-of-flight secondary ion mass spectrometry (TOF-SIMS), and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). These techniques are chosen based on spatial resolution, scanning speed, tissue-specific affinity, and penetration depth. Among the available methods, synchrotron X-ray fluorescence microscopy (SR-XFM) offers subcellular spatial resolution (with an effective pixel size of 1.0 μm, depending on various parameters). It is a non-destructive method capable of imaging large tissue areas (mm2) with adequate penetration depth and minimal tissue preparation 31,32. Despite its advantages, SR-XFM has not been used to study the endogenous distribution changes of osteochondral interface elements during OA, due to challenges in preparing chemical-free fresh-frozen sections. Prior studies 33,34 have employed resin-embedded samples for SR-XFM preparation, but the formalin or ethanol fixation during this process may alter the endogenous elemental distribution in cells 35, soft tissues 36, and hard tissues 37; consequently, this could introduce bias and present an inaccurate picture of elemental distribution during disease progression 38. To accurately examine endogenous changes, we developed a tape-assisted system that accommodated fresh-frozen tissue for SR-XFM 39 and implemented a three-stage approach to improve data accuracy and value, which involved enhancements in sample preparation, image acquisition, and signal processing. This research aims to provide a comprehensive understanding of the spatial distribution and potential variations of elements within the osteochondral interface during OA progression.

Figure 1. Illustrated workflow to analyse the spatial elemental distribution. Subsequent to collecting samples from total knee replacements, the samples were trimmed into 1 cm × 1 cm × 1 cm blocks utilising the EXAKT bandsaw and were plunge-frozen in a hexane-dry ice mixture. The samples were then embedded with SCEM and completely frozen in a hexane-dry ice mixture. Following the Kawamoto technique, the sample blocks were sectioned, and medial OA and lateral non-OA sections were collected via a cryofilm tape-assisted system. The cryofilm tapes were subsequently trimmed using a surgical scalpel blade to fit the dimensions of Si3N4 windows. Lastly, the sections were embedded in a sandwich structure for SR-XFM analysis. The figure was generated with BioRender.com.
Specifically, we chose to analyse the spatial distributions of Zn, Ca, Sr, Pb, K, P, S, Cl, and Cs based on our previous studies and because of their roles in cartilage and joint homeostasis. The utilization of synchrotron X-ray fluorescence microscopy (SR-XFM) will enable a more detailed analysis, offering valuable insights into the underlying mechanisms of OA and paving the way for the development of innovative diagnostic tools and therapeutic strategies.

Results

In order to examine the distribution patterns of elements in osteochondral tissues classified by the severity of the disease, we assessed the presence of Zn, Ca, Sr, Pb, K, P, S, Cl, and Cs in nine patient-matched osteochondral tissue sections based on the previous LA-ICP-MS and EDS results from our research group 40, adhering to the process illustrated in Fig. 1. The detailed modified Mankin score comparison is shown in Table 1. Spatial distribution of Zn during OA progression. First, Zn accumulation was stratigraphically localised in the non-OA samples at the osteochondral junction. The TM region of the non-OA sample showed a strong Zn accumulation signal, followed by the SBP zone and the CCZ zone (Fig. 2A). When comparing non-OA and OA samples, Zn is more abundant and wavier in the non-OA samples than in the OA samples in the TM area (P = 0.0072). Zinc follows the TM contour, and we found that the zinc tortuosity index is increased in the OA sample (P < 0.0001, Fig. 2C). However, no difference was observed between the CCZ and SBP when comparing non-OA and OA samples (P > 0.05, Fig. 3A-D). Spatial distribution of Ca during OA progression. The Ca abundance varies depending on the osteochondral stratigraphy, similar to Zn. Both non-OA and OA samples' CCZ and SBP contain the same abundance of Ca (Fig. 2A). Sr distribution changes during OA progression. Sr follows a similar distribution pattern to Ca (Fig. 2A; Fig. 6A-D). We found no statistical difference in the spatial distribution of other elements, such as K, P, S, and Cl, between non-OA and OA samples (P > 0.05). This is due to XFM's inherently higher sensitivity for metal elements, particularly from titanium to uranium, and lower sensitivity for lighter elements (Fig. S1). Colocalisation of elements during OA progression. Next, we used colocalisation analysis to reveal co-distribution changes of elements during OA progression. The results showed that the Ca-Sr, Ca-Zn, and Zn-Sr colocalisation indices are higher in the TM than in the CCZ and SBP in both non-OA and OA samples (PCa-Sr = 0.032; PCa-Zn < 0.001; PZn-Sr = 0.00151, Figs. S2-S4). The OA Zn-Sr relationship in the TM has a higher slope than that of the non-OA samples (P = 0.015, Fig. S4). In both non-OA and OA samples, the Pb-Sr colocalisation index was higher in the TM than in the CCZ and SBP (P = 0.017, Fig. S5), while no correlation was found between Ca and Pb (P > 0.05, Fig. S6). No correlation was found between Zn and Pb in either non-OA or OA samples (P > 0.05, Fig. S7).

Discussion

To our knowledge, this is the first paper that reports changes in the stratigraphy and disease-specific differences of elements in OA graded according to disease severity, in which the lateral side is healthy and the medial side is damaged based on the modified Mankin score system, at a subcellular resolution (1 μm) 33,34. In the present work, we found that unique spatial patterns of element distribution exist at the osteochondral interface.
According to statistical analyses, non-OA osteochondral samples differ significantly from OA osteochondral samples in their elemental compositions, especially for Zn, Ca, and Pb. The TM separating the calcified cartilage from the noncalcified cartilage showed a significant Zn level. Unexpectedly, the Zn content of the OA TM was lower than that of the non-OA TM (P = 0.0072). Additionally, we discovered less Ca in the TM of the OA samples than in the subchondral bone plate and calcified cartilage zone (P < 0.0001 for both). The Zn-Sr colocalisation index was higher in the OA TM region than in the non-OA samples. The TM has traditionally been regarded as a remnant of the growth plate during secondary ossification. Prior studies have identified alterations in the TM, such as an increased tortuosity index, TM duplication, and endochondral ossification occurring during the progression of OA 40,41. These changes have been linked to the redifferentiation of chondrocytes in the CCZ 42. Discontinuity of the TM at the osteochondral interface has also been observed during OA progression 40,41,43,44. However, these studies mainly provided descriptive analyses without quantitative data. Our findings suggest that in non-OA samples, TM discontinuity is closely associated with an uneven distribution of Zn in the TM region. Additionally, a higher Zn-Sr colocalisation index indicates a distinct regional variation in Zn distribution. We also noticed a substantial change in Zn abundance and distribution in the TM area between non-OA and OA samples. Additionally, we found that the tortuosity index of the Zn-delineated TM is significantly increased (Fig. 2A). An increase in TM tortuosity is a characteristic identified through Safranin-O/fast green staining and is often associated with the reactivation of endochondral calcification and bone remodelling 40,44. During biological processes, Zn interacts with various enzymes, such as alkaline phosphatase (ALP) and matrix metalloproteinases (MMPs), and serves as an essential trace element in the reactive cores of numerous enzymes, contributing to healthy skeletal growth 45. Although the precise role of Zn in bone metabolism remains unclear, recent studies suggest that it promotes bone formation by enhancing osteoblastic cell proliferation 46. Conversely, research on Zn-deficient rats found no differences in bone mineral density, turnover, architecture, or biomechanics compared to control subjects 47. In light of these findings, we employed laser microdissection microscopy for proteomics identification in the CCZ and identified several Zn-related proteins, including superoxide dismutase (SOD1), S100 calcium-binding protein A7 (S100A7), ALP, and others 40. The Zn change could also be related to changes in crystal formation, as researchers found a higher mineral crystal thickness in the lateral compartment of OA 48. However, no evidence exists that Zn contributes to crystal growth at this stage; further study is needed to elucidate the specific mechanism behind Zn accumulation in the TM. This research also discovered a higher concentration of Ca in the CCZ of OA joints compared to the TM. The CCZ is an area of active remodelling during OA progression. Earlier studies have shown that the thickness of both the CCZ and the SBP undergoes dynamic changes during OA development 40,44. In the early stages, CCZ thickness increases while SBP thickness decreases, but this reverses in later stages.
These thickness fluctuations lead to active mineral changes within the CCZ and SBP. The exact role of abnormal mineralisation in the interaction between CCZ and SBP is still unclear. Additionally, it is well-established that collagen fibres and mineralisation determine the stiffness of the CCZ and SBP 40 . Our previous research found a reduction in the elastic modulus of both the CCZ and SBP, but the present study did not find any significant differences in Ca levels 40 . This discrepancy may be due to the combined effects of collagen bundles and the mineralisation process on the elastic modulus of the samples. A prior study reported increased stiffness in the collagen fibres of osteoarthritic cartilage, which could explain the observed changes in the elastic modulus 49 . As OA progresses, the reorganisation and entanglement of collagen fibres inevitably lead to a decrease in the elastic modulus. Blood Pb levels are commonly recognised as a risk factor for knee OA 29 . However, the impact of regionally deposited Pb on OA progression is poorly understood. We discovered Pb accumulation in the deeper layers of the TM (TM). It is important to note that the process of Ca 2+ being replaced by Pb 2+ in Ca-hydroxyapatite is wellestablished at high Pb concentrations and is expected to occur similarly at low concentrations 50 . However, our study revealed that Pb does not have the same distribution as Ca, as it exhibits a higher affinity for the TM area, which is consistent with prior reports 33,34 . Earlier studies have suggested that Pb accumulation may contribute to disease progression 33,34 . However, we found no correlation between Pb and other elements in the osteochondral interface, nor between non-OA and OA samples. Our patients were recruited from Brisbane in Australia, a city with a consistently good air quality index (PM 2.5 ranging from 10 to 30). This low exposure to Pb might explain why it does not reach a level that can influence or accelerate OA progression. Future studies analysing local Pb accumulation in the tidemark and its relation to OA progression may help clarify the association. There are some limitations to the current study. All the samples were collected from OA patients undergoing knee replacement surgery. Therefore, some changes may be overlooked or underestimated compared to normal samples without the signs of OA. The stratification of the non-OA and OA samples could omit the dynamic changes in the moderate OA stage, which will be investigated in future studies. Another limitation is that different elements hold different thresholds when performing quantitative XFM 51 . Therefore, some light elements have lower sensitivity, so we cannot measure the difference. However, this does not mean there is no difference between different groups. This study presents novel findings regarding the changes in stratigraphy and disease-specific differences of osteoarthritis (OA) elements at a subcellular resolution. We observed unique spatial patterns of element distribution at the osteochondral interface and significant differences in elemental compositions between non-OA and OA osteochondral samples, highlighting the significance of elemental distribution in OA pathogenesis. Methods Human ethics and sample preparation. All methods were performed in accordance with guidelines and regulations, which were approved by the ethics committee of the Queensland University of Technology (Human ethics number: #1400001024). 
Nine participants provided written informed consent to donate the tissues removed during knee arthroplasty surgery. After the surgery, the medial and lateral bearing surfaces of the tibial plateaus of the human donors were collected from St Vincent's Private Hospital. Based on the Modified Mankin scoring system 52 , the samples from each patient were matched and graded as non-OA (Grade 0-1) and OA (Grade 4) samples by three blinded observers, in which non-OA refers to a relatively intact knee joint with cartilage and SBP, whereas OA contains degraded cartilage and SBP sclerosis (detailed demographic data are shown in Table S1). Patients with inflammatory bone diseases were excluded from the study. Patients with a history of bisphosphonate use or other medication therapy that could affect bone and cartilage metabolism or elemental composition were also excluded. An EXAKT 310 Diamond Band Saw (EXAKT Apparatebau GmbH & Co. KG; Norderstedt, Germany) was used to cut the intact and lesioned parts of the cartilage, and 1 cm × 1 cm × 1 cm cubes were trimmed. Samples were plunge-frozen in a hexane-dry ice mixture, embedded in super cryo embedding medium (SCEM; SECTION-LAB, Japan), and frozen completely in the hexane-dry ice mixture using the Kawamoto technique 53 . Sectioning. Following our previously published protocol 39 , we sectioned all samples at 10 μm thickness using a CryoStar NX70 cryostat (ThermoFisher Scientific, USA) with a tungsten carbide knife, D profile (Dorn & Hart Microedge, USA). Because better sections are produced at lower temperatures 54 , we set the specimen at -30 °C and the knife at -28 °C. We then attached the tissue to Kawamoto's cryofilm tapes (3C(16UF), SECTION-LAB, Japan) to support the tissue and cut blocks into 5 mm × 5 mm pieces. We flipped over the tape (with sections on top) and mounted them on a Si3N4 window (600 nm thick; Australian National Fabrication Facility, QLD, Australia) with the matching tissue face exposed for analysis. The windows were then freeze-dried and stored at room temperature in a sealed container to avoid protein degradation. We randomly positioned the samples in the frame by an observer-blind method to avoid selection bias. X-ray fluorescence microscopy image quantification. Prior to the beamtime, we identified the osteochondral interface from optical and fluorescence mosaics of the windows, acquired with both an Olympus VS120 slide scanner and a Zeiss LSM 710 confocal laser scanning/multi-photon microscope using OlyVIA 2.9 software. Fluorescence mapping was undertaken at ~ 15 keV, with 5 mm × 5 mm areas mapped per sample at low resolution. We selected representative regions of interest of 1 mm × 1 mm from these mappings. Next, we scanned these regions at high resolution and high sensitivity, with parameters chosen to achieve the best possible elemental maps. XFM photons were gathered at the Australian Synchrotron's XFM beamline as an event-mode data stream 55 using the Maia detector system 56 and processed using the dynamic analysis method 57 as implemented in GeoPIXE 58 . The data were quantified using well-characterised metallic foils and exported as 32-bit tiffs with units of areal density (ng/cm2). The tortuosity index was defined as the ratio of the length of the meandering curve to the straight-line distance between its endpoints 59 .
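The tortuosity index defined above reduces to a simple ratio of arc length to chord length once the tidemark has been traced. The following minimal sketch (illustrative only, not the study's code; the traced coordinates are hypothetical) shows the computation:

```python
import numpy as np

def tortuosity_index(points):
    """Tortuosity index of a digitized curve.

    points : (N, 2) array of x, y coordinates along the traced tidemark,
             ordered from one endpoint to the other.
    Returns the ratio of the curve's arc length to the straight-line
    distance between its endpoints (>= 1; equal to 1 for a straight line).
    """
    pts = np.asarray(points, dtype=float)
    arc_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc_length / chord

# Hypothetical tidemark trace (pixel coordinates): a gently wavy line.
x = np.linspace(0.0, 100.0, 200)
y = 3.0 * np.sin(x / 5.0)
tm_trace = np.column_stack([x, y])
print(f"Tortuosity index: {tortuosity_index(tm_trace):.3f}")
```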
Colocalisation analysis. Following the previously published protocol 60 , we analysed colocalisation of the Zn, Ca, Pb, and Sr elemental maps using Pearson's r, Costes' regression threshold, and Mander's overlap coefficients, computed in Fiji with the 'Coloc 2' plugin. Statistical analysis. Statistical analyses were computed using GraphPad Prism 8 (San Diego, USA). Experimental replicates were grouped within each group, and mean values were calculated at the sample level for further statistical comparison. A Shapiro-Wilk test was performed to assess the normality of the data, and all tested data groups passed this test. Paired t-tests were then conducted, with statistical significance defined as p < 0.05 for the above procedures. All data are reported as mean values with their standard deviation (SD). Data availability The manuscript contains all the necessary data. Any remaining information can be obtained from the corresponding author upon reasonable request.
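As a rough illustration of the two analysis steps just described, the sketch below computes a pixel-wise Pearson correlation between two elemental maps (one of the metrics reported by the 'Coloc 2' plugin) and runs a Shapiro-Wilk check followed by a paired t-test on sample-level means. All arrays and values are synthetic placeholders, not study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic elemental maps (areal densities on the same pixel grid).
zn_map = rng.gamma(shape=2.0, scale=30.0, size=(128, 128))
sr_map = 0.6 * zn_map + rng.normal(0.0, 10.0, size=(128, 128))  # partly colocalised

# Pixel-wise Pearson correlation (one of the Coloc 2 outputs).
r, _ = stats.pearsonr(zn_map.ravel(), sr_map.ravel())
print(f"Pearson's r (Zn vs Sr): {r:.3f}")

# Paired comparison of sample-level means (non-OA vs OA from the same nine donors).
non_oa = np.array([1.20, 0.95, 1.10, 1.30, 1.05, 1.15, 1.00, 1.25, 1.18])
oa     = np.array([0.90, 0.80, 1.00, 1.05, 0.85, 0.95, 0.88, 1.02, 0.97])
w_stat, w_p = stats.shapiro(non_oa - oa)    # normality of the paired differences
t_stat, t_p = stats.ttest_rel(non_oa, oa)   # paired t-test
print(f"Shapiro-Wilk p = {w_p:.3f}, paired t-test p = {t_p:.4f}")
```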
4,608.8
2023-06-23T00:00:00.000
[ "Medicine", "Materials Science" ]
Use of the cefepime-clavulanate ESBL Etest for detection of extended-spectrum beta-lactamases in AmpC co-producing bacteria Background: Extended-spectrum beta-lactamases (ESBLs) may not always be detected in routine susceptibility tests. This study reports the performance of the cefepime-clavulanate ESBL Etest for the detection of ESBLs in Enterobacteriaceae, including those producing AmpC enzyme. Methodology: Consecutive non-duplicate isolates of Escherichia coli, Klebsiella pneumoniae, and Proteus mirabilis isolated from bloodstream infections from January to June 2008 were tested for ESBL by both the standard CLSI double-disk diffusion method using ceftazidime and cefotaxime disks and by Etests using ceftazidime/ceftazidime-clavulanate, cefotaxime/cefotaxime-clavulanate and cefepime/cefepime-clavulanate gradients. Isolates were also tested for the presence of transferable AmpC beta-lactamase by the AmpC disk test, and the efficacies of the different Etests in detecting ESBL production were compared. Results: A total of 113 bacterial isolates (61 K. pneumoniae, 50 E. coli, and 2 P. mirabilis) were recovered. Respectively, 42 (37.2%) and 55 (48.7%) isolates were positive for ESBL by the ceftazidime-clavulanate and cefotaxime-clavulanate combined disk tests. The cefepime/cefepime-clavulanate Etest strip detected the maximum number of isolates (70/113, 61.9%) as ESBL-positive compared to the ceftazidime/ceftazidime-clavulanate and cefotaxime/cefotaxime-clavulanate strips, which detected 57 (50.4%) isolates each as ESBL-positive. All three ESBL Etest strips were equally effective in detecting ESBL in the isolates that were AmpC negative. In the 66 (58.4%) isolates that co-produced AmpC in addition to the ESBL enzymes, the cefepime/cefepime-clavulanate Etest strip detected ESBL in an additional 13 (11.4%) isolates as compared to the other ESBL Etest strips. Conclusions: The cefepime-clavulanate ESBL Etest is a suitable substitute for ESBL testing, especially in organisms producing AmpC beta-lactamases. Introduction Since their first description more than twenty years ago, pathogens producing extended-spectrum beta-lactamases (ESBLs) have become an increasing cause of clinical concern for several reasons [1][2][3]. First, systemic infections due to ESBL-producing Enterobacteriaceae are associated with severe adverse clinical outcomes. Second, initially restricted to certain geographical areas, these enzymes have spread globally and their prevalence varies by geographic region. Third, primarily characterized in limited bacteria such as Escherichia coli and Klebsiella spp., ESBLs have been spreading and reaching other genera, principally Enterobacter and Proteus spp. Finally, besides the growing species diversity, ESBL phenotypes have become more complex due to the production of multiple enzymes, including inhibitor-resistant ESBL variants, plasmid-borne AmpC, production of ESBLs in AmpC-producing bacteria, production of ESBLs in KPC-producing bacteria, enzyme hyperproduction and porin loss [1][2][3][4].
The ESBLs are typically plasmid-mediated enzymes that hydrolyse penicillins, third-generation cephalosporins and aztreonam [5]. They are not active against cephamycins (cefoxitin and cefotetan), but are susceptible to β-lactamase inhibitors (clavulanic acid). In contrast, AmpC β-lactamase is usually chromosomally encoded, poorly inhibited by clavulanic acid, reversibly inhibited by boronic acid, and can be differentiated from ESBLs by its ability to hydrolyse cephamycins as well as other third-generation cephalosporins [5,6]. Plasmid-mediated AmpC β-lactamases have arisen through the transfer of chromosomal genes for the inducible AmpC β-lactamase onto plasmids. This transfer has resulted in plasmid-mediated AmpC β-lactamases in isolates of E. coli, Klebsiella pneumoniae, Salmonella spp., Citrobacter freundii, Enterobacter aerogenes, and Proteus mirabilis [5]. Recently, Gram-negative organisms that produce both ESBLs and AmpC β-lactamases have been increasingly reported worldwide [7,8]. These organisms usually exhibit multidrug resistance that is not always detected in routine susceptibility tests. The inability to detect such complex resistance phenotypes is a serious challenge facing clinical laboratories and may have been a major factor in the uncontrolled spread of ESBL-producing organisms and related treatment failures. Hence, there is a need for better detection of ESBLs in the clinical laboratory. The Clinical and Laboratory Standards Institute (CLSI) recommendation for phenotypic confirmation of ESBL still relies on the minimum inhibitory concentration (MIC) difference test, in which a β-lactamase inhibitor is used to protect the activity of an indicator drug against an ESBL-producing strain [9]. Laboratory tests that have been developed include double-disk diffusion using cefotaxime and ceftazidime disks with or without clavulanic acid, microdilution, and MIC determination using Etest or automated systems such as Vitek [10]. Etest is a convenient method for detection of ESBL by MIC reduction. Two different Etest gradient formats, based on the reduction of ceftazidime or cefotaxime MICs by ≥ 3 two-fold dilutions in the presence of clavulanic acid, have been in use and have been applied successfully for ESBL detection [10,11]. However, in isolates that co-produce both ESBL and AmpC β-lactamase, high-level expression of AmpC may mask recognition of ESBL by the inhibitor-based method. Cefepime, a fourth-generation cephalosporin, is known to be a poor substrate for AmpC β-lactamases, making this drug a more reliable agent for ESBL detection in the presence of an AmpC enzyme [11]. Recently, a new Etest ESBL strip based on clavulanate synergy with cefepime has been reported to be a valuable supplement to current methods for detection of ESBLs in Enterobacteriaceae [12]. In this study, we aim to report on the performance, in our laboratory, of the cefepime-clavulanate ESBL Etest for detection of ESBLs in Enterobacteriaceae, including those producing AmpC enzyme. Bacterial strains The study was conducted on consecutive non-duplicate isolates of E. coli, K. pneumoniae and P.
mirabilis isolated from bloodstream infections over a six-month period from January to June 2008. The study was limited to these organisms since CLSI recommends ESBL testing and reporting only for these organisms [9]. Isolates from bloodstream infections were chosen for the study since they reflect systemic infections, and inadequate detection of ESBLs may lead to inappropriate therapy resulting in therapeutic failure [1,2]. Organism identification was performed by conventional biochemical tests using standard microbiological techniques [13]. The minimum inhibitory concentration (MIC) to ceftazidime, cefotaxime and cefepime was determined for all isolates by the Etest (AB Biodisk, Solna, Sweden). ESBL Detection All isolates showing reduced susceptibility to ceftazidime (zone diameter ≤ 22 mm and/or MIC ≥ 2 mg/L) and cefotaxime (zone diameter ≤ 27 mm and/or MIC ≥ 2 mg/L) were selected for ESBL confirmatory testing. Isolates were tested for ESBL by both the standard CLSI double-disk diffusion method and by Etests using ceftazidime/ceftazidime-clavulanate, cefotaxime/cefotaxime-clavulanate and cefepime/cefepime-clavulanate gradients. The tests were quality controlled using the standard strains E. coli ATCC 25922 (ESBL negative), Pseudomonas aeruginosa ATCC 27853 (ESBL negative) and K. pneumoniae ATCC 700603 (ESBL positive). CLSI disk method [9] For the CLSI method, ceftazidime (30 µg) and cefotaxime (30 µg) disks were used, each with and without clavulanate (10 µg). ESBL production was indicated by an increase in zone size of ≥ 5 mm for the disk with clavulanic acid. ESBL Etest [12] The ceftazidime/ceftazidime-clavulanate (CAZ-CLA) ESBL Etest strip generates a stable concentration gradient of ceftazidime (MIC test range, 0.5-32 mg/L) on one end, while the remaining end generates a gradient of ceftazidime (MIC test range, 0.064-4 mg/L) plus 4 mg/L clavulanic acid. Similarly, the cefotaxime/cefotaxime-clavulanate (CTX-CLA) Etest ESBL strip contains cefotaxime (MIC test range, 0.25-16 mg/L) and cefotaxime (MIC test range, 0.016-1 mg/L) plus 4 mg/L clavulanic acid. The recently introduced cefepime/cefepime-clavulanate (PM-CLA) Etest ESBL strip contains cefepime (MIC test range, 0.25-16 mg/L) and cefepime (MIC test range, 0.064-4 mg/L) plus 4 mg/L clavulanic acid. The Etest procedure, reading, and interpretation were performed according to the manufacturer's instructions. Isolated colonies from an overnight plate were suspended in saline (0.85% NaCl) to achieve an inoculum equivalent to a 0.5 McFarland standard. This suspension was swabbed onto a Mueller-Hinton agar plate and allowed to dry completely. An ESBL Etest strip was then applied to the agar surface with sterile forceps and the plate was incubated at 35 °C overnight. ESBL results were read either as MIC values or by observation of "phantom zones" or deformation of inhibition ellipses. Reduction of the MIC by ≥ 3 two-fold dilutions in the presence of clavulanic acid is indicative of ESBL production. Deformation of ellipses or the presence of a "phantom zone" is also indicative of ESBL production even if the MIC ratio is < 8 or cannot be read. Test for transferable AmpC β-lactamase After screening with cefoxitin (30 µg disk), all isolates were tested for the presence of transferable AmpC enzyme by the AmpC disk test [14]. The test was performed by preparing a lawn of a 0.5 McFarland suspension of E.
coli ATCC 25922 on Mueller-Hinton agar plates. Sterile disks (6 mm) were moistened with 20 µl of a 1:1 mixture of saline and 100× Tris-EDTA and inoculated with several colonies of the test organism. The inoculated disk was placed beside a 30 µg cefoxitin disk on the inoculated plate. After overnight incubation at 37 °C, a positive test appears as a flattening or indentation of the cefoxitin inhibition zone in the vicinity of the test disk. Results A total of 113 bacterial isolates were recovered during the study period, which included 61 K. pneumoniae, 50 E. coli, and 2 P. mirabilis. Forty-one isolates were from the neonatal unit, 38 from the pediatric unit, 13 from the intensive care unit, and 21 from the adult medical unit. Forty-two (37.2%) and 55 (48.7%) isolates were positive for ESBL by the ceftazidime-clavulanate and cefotaxime-clavulanate combined disk tests, respectively (Table 1). When the ESBL Etest results were compared, the cefepime/cefepime-clavulanate (PM-CLA) strip detected the maximum number of isolates (70/113, 61.9%) as ESBL-positive, compared with 57 (50.4%) each for the CAZ-CLA and CTX-CLA strips. Among the 70 ESBL-positive isolates detected by PM-CLA, 66 also tested positive for transferable AmpC β-lactamases and 4 were lone ESBL producers. Thus, co-production of ESBL and AmpC β-lactamases was observed in 66 (58.4%) isolates. AmpC β-lactamase alone was detected in an additional 23 isolates, bringing the total number of AmpC-producing isolates to 89 (78.7%). All AmpC producers were found to be cefoxitin resistant. It was further observed that in the four ESBL-positive isolates that were AmpC negative, all three ESBL Etest strips were equally effective in detecting ESBL. However, in the 66 isolates that co-produced AmpC in addition to the ESBL enzymes, the PM-CLA Etest strip detected ESBL in an additional 13 (8 K. pneumoniae and 5 E. coli; 11.4%) isolates as compared to the CAZ-CLA and CTX-CLA ESBL Etest strips (Table 1). Thus the PM-CLA ESBL Etest strip was found to be particularly useful for detecting ESBLs in AmpC-producing bacteria, whereas the CAZ-CLA and CTX-CLA strips yielded a high number of non-determinable or negative results and thus showed a marked inability to detect ESBL production in this group of isolates (Fig. 1). When the MICs of ceftazidime, cefotaxime and cefepime in the ESBL and AmpC co-producing isolates (n = 66) were compared, it was observed that the MICs of 22 (33.3%) isolates were in the susceptible range (< 8 mg/L) for cefepime, in contrast to one (1.5%) and no (0%) isolates in the susceptible range for ceftazidime and cefotaxime, respectively (Table 2). This observation indicates the stability of cefepime in the presence of AmpC beta-lactamases as compared to ceftazidime or cefotaxime. Discussion The present study demonstrated that the new Etest ESBL strip containing cefepime-clavulanate was the most sensitive in detecting ESBL, especially in isolates producing AmpC β-lactamase. The presence of ESBLs can be masked by the expression of AmpC β-lactamase, which can be encoded by chromosomal (e.g., in most Enterobacter, Serratia, C. freundii, Morganella, Proteus and Pseudomonas species) or plasmid genes (mostly in E. coli and Klebsiella) [15]. Even though they are not inducible, plasmid-encoded AmpC β-lactamases are typically expressed at moderate to high levels [16]. Like their chromosomal counterparts, plasmid-encoded AmpC β-lactamases provide a broader spectrum of resistance than ESBLs and are not blocked by commercially available inhibitors [16]. Thus, high-level expression of a plasmid-mediated AmpC enzyme, as in E. coli and Klebsiella, may also prevent recognition of an ESBL. In our study, dominant AmpC production also masked underlying ESBL production in 13 additional strains of E. coli and Klebsiella spp., which were initially labeled as ESBL negative by the CAZ-CLA and CTX-CLA ESBL Etests.
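The Etest interpretation rule used above, ESBL positive when clavulanate lowers the MIC by at least three two-fold dilutions (an MIC ratio ≥ 8) or when a phantom zone or deformed ellipse is seen, can be written as a short helper. This is an illustrative sketch rather than the study's procedure, and the example MIC readings are hypothetical:

```python
def etest_esbl_positive(mic_alone, mic_with_clav, phantom_zone=False):
    """ESBL Etest interpretation.

    mic_alone     : MIC (mg/L) of the cephalosporin alone (CAZ, CTX, or FEP end).
    mic_with_clav : MIC (mg/L) of the same drug plus 4 mg/L clavulanate.
    phantom_zone  : True if a phantom zone or deformed ellipse was observed.

    Positive if the MIC ratio is >= 8 (a reduction of at least three
    two-fold dilutions) or if a phantom zone/deformation is present.
    """
    if phantom_zone:
        return True
    return mic_alone / mic_with_clav >= 8

# Hypothetical isolate: an AmpC co-producer masking synergy on the CAZ and CTX
# strips but revealed on the cefepime strip.
readings = {
    "CAZ-CLA": (32.0, 8.0),    # ratio 4  -> negative
    "CTX-CLA": (16.0, 4.0),    # ratio 4  -> negative
    "PM-CLA":  (4.0, 0.25),    # ratio 16 -> positive
}
for strip, (alone, with_clav) in readings.items():
    print(strip, "ESBL positive:", etest_esbl_positive(alone, with_clav))
```

In this hypothetical co-producer, only the cefepime strip reveals the ESBL, mirroring the pattern reported for the 13 additional isolates above.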
Possible approaches to overcome the difficulty of ESBL detection in the presence of AmpC include the use of tazobactam or sulbactam, which are much less likely to induce AmpC β-lactamases and are therefore preferable inhibitors for ESBL detection tests with these organisms, or testing cefepime as an ESBL detection agent [11]. Cefepime, a fourth-generation cephalosporin, is a more reliable detection agent for ESBLs in the presence of an AmpC β-lactamase, as this drug is stable to AmpC β-lactamase and will thus demonstrate the synergy arising from the inhibition of the ESBL by clavulanate in the presence of the AmpC enzyme. This was also observed in our study, in which the MIC to cefepime was in the susceptible range for 33.3% of isolates producing both ESBL and AmpC, in contrast to ceftazidime and cefotaxime, for which one and none of the isolates, respectively, had MICs in the susceptible range. This reinforces the stability of cefepime in the presence of the AmpC enzyme. Cefepime in double-disk synergy tests was first used for the detection of ESBLs among AmpC producers by Tzelepi et al. [17]. In that study [17], the use of cefepime increased the sensitivity of the double-disk synergy test with expanded-spectrum cephalosporins for the detection of ESBLs in enterobacters from 16 to 61% when the disks were applied at the standard distance of 30 mm from clavulanate, and from 71 to 90% with closer application of the disks. More recently, the performance of a modified double-disk test (MDDT) utilizing cefotaxime, ceftazidime, cefepime and aztreonam along with an amoxicillin-clavulanate disk was evaluated for the detection of ESBLs in clinical isolates of E. coli and K. pneumoniae [18]. Of the 136 isolates, 112 (82%) and 102 (75%) were positive for ESBL by the MDDT and NCCLS/CLSI methods, respectively. Ten (7.4%) isolates (eight E. coli and two K. pneumoniae), all of which were positive for ESBL by the MDDT, yielded negative results with the NCCLS/CLSI disk method [18]. These strains showed a clear extension of the edge of inhibition produced by cefepime towards the amoxicillin-clavulanate disk, thus revealing the superior activity of cefepime for detecting ESBLs. Similarly, in another study [19], two K. pneumoniae isolates out of 100 consecutive isolates of E. coli and Klebsiella were positive by the double-disk synergy test for ESBL with cefepime only, but not with any of the other third-generation cephalosporins used. With regard to the detection of ESBLs by Etest, Stürenburg et al. [12] evaluated the performance of the cefepime-clavulanate ESBL Etest to detect ESBLs in an Enterobacteriaceae strain collection. The ESBL Etest was 98% sensitive with cefepime-clavulanate, 83% with cefotaxime-clavulanate, and 74% with ceftazidime-clavulanate strips. The cefepime-clavulanate strip was observed to be the best configuration for detection of ESBLs, particularly in Enterobacter spp., where inducible chromosomal AmpC β-lactamase can interfere with clavulanate synergy [12]. In conclusion, the results of the study indicate that the current CLSI-recommended methods to confirm ESBL enzymes by conducting clavulanate synergy tests with ceftazidime and cefotaxime may be insufficient for ESBL detection in clinical isolates of E. coli and K.
pneumoniae, since these organisms often produce multiple β-lactamases. In such situations, where AmpC β-lactamase can interfere with clavulanate synergy, the new cefepime-clavulanate strips could be a more sensitive alternative for the detection of ESBL-producing organisms. Thus, in our opinion, the cefepime-clavulanate Etest is a suitable substitute for ESBL production testing, especially in organisms producing AmpC β-lactamase. Optimum identification of ESBL-producing isolates would allow clinical microbiologists and infectious disease specialists to formulate policies for empirical antimicrobial therapy, especially in high-risk units where infections due to these organisms are common. It also helps in monitoring the development of antimicrobial resistance and in the implementation of proper hospital infection control measures. Table 1. ESBL test results for the Enterobacteriaceae isolates studied. Table 2. Number of ESBL- and AmpC-producing isolates with different MICs to ceftazidime, cefotaxime and cefepime.
3,470.6
2010-02-01T00:00:00.000
[ "Medicine", "Biology" ]
Elastic symmetry with beachball pictures The elastic map, or generalized Hooke’s Law, associates stress with strain in an elastic material. A symmetry of the elastic map is a reorientation of the material that does not change the map. We treat the topic of elastic symmetry conceptually and pictorially. The elastic map is assumed to be linear, and we study it using standard notions from linear algebra—not tensor algebra. We depict strain and stress using the ‘beachballs’ familiar to seismologists. The elastic map, whose inputs and outputs are strains and stresses, is in turn depicted using beachballs. We are able to infer the symmetries for most elastic maps, sometimes just by inspection of their beachball depictions. Many of our results will be familiar, but our versions are simpler and more transparent than their counterparts in the literature. I N T RO D U C T I O N Elasticity is about the relation between strain and stress. We refer to the function T from strain to stress as the elastic map. It expresses the 'constitutive relations' of the material under consideration, or the 'generalized Hooke's Law' (Aki & Richards 2002). The map T describes the strain-stress relation at a particular point p in the material. A symmetry of T is a rotation of the material, about p, that does not change T. We present a treatment of elastic symmetry that we think is more conceptual than the usual approach through tensor analysis. Our approach has its beginnings in the work of William Thomson (1856) (Lord Kelvin) in the mid-19th century. According to Helbig (1994Helbig ( , 2013 and Cowin et al. (1991), Kelvin's insights were largely forgotten by the elasticity community until much later, when they were reintroduced by Rychlewski (1984). Most of the ideas-especially the notion of eigensystem of a linear transformation-were already routine for mathematicians and theoretical physicists of the early 20th century, so it is a bit surprising that they were still regarded as novel in elasticity in the 1980s. Rychlewski himself apparently felt much the same: Thus we deal with the linear symmetric operator α → C · α acting in a finite-dimensional space with a scalar product .. . . The situation has been investigated as fully as possible, and it only remains to translate the information available into the language of mechanics. (Rychlewski 1984, p. 305) Thus, although our exposition of elasticity is non-traditional visa-vis older expositions, it will be unremarkable to mathematicians and physicists. Our exposition does not use the Voigt matrix (eq. S13), and it requires no knowledge of tensors. What it does rely on is introductory linear algebra, which we review. Specifically, we rely heavily on orthogonality, on matrix representations of the elastic map T, and on eigensystems of T. Mathematically, strains and stresses are 3 × 3 symmetric matrices and can therefore be depicted as 'beachballs', as seismologists do for moment tensors. Because the strain-stress relation is assumed to be linear, any elastic map T can then be depicted using beachballs. The depiction in principle determines T completely, but of course one cannot just glance at the depiction and expect to infer T quantitatively. In Sections 4-9 we characterize elastic maps T that have as a symmetry the rotation Z ξ through angle ξ about the z-axis. There are five cases to consider: ξ = ±2π/n for n = 1, 2, 3, 4, as well as ξ regular, meaning none of the preceding; see Fig. 1. 
For each case there is an intrinsic characterization of T and a more conventional characterization using matrices. Figs 6, 9, 10 and 11 illustrate the intrinsic characterizations, and Table 1 lists the matrix characterizations. In Section 14 we give a relatively elementary proof that any material can be oriented so that its group of elastic symmetries is one of eight reference groups. The proof is largely a matter of looking at the intersections of circles on a sphere, as in Figs 16-18. Matrix characterizations for elastic maps associated with the reference groups are given in Table 4, and intrinsic characterizations are given in Section 12.1. The simplicity of the matrix characterizations relative to their traditional counterparts (e.g. Nye 1957Nye , 1985 is due to our use of the basis B defined in eq. (3). Nowhere do we assume that elastic symmetry groups arise from crystallographic symmetry groups. We nevertheless find that if an elastic map T has a symmetry with rotation axis v and rotation Figure 1. The six non-regular angles, here depicted as points on the unit circle. All other angles are regular. If the rotation Z ξ is a symmetry of an elastic map T for some regular ξ , then Z ξ is a symmetry of T for all ξ (Theorem 5). angle ξ , where ξ is regular, then all rotations about v, regardless of rotation angle, are symmetries of T. We also find that if T has a threefold or fourfold symmetry with axis v, then it has three or four (respectively) twofold symmetries with axes perpendicular to v. These facts go into deriving the eight reference groups mentioned above. In Section 15 we show by example how to find the symmetry group of virtually any elastic map T. We say 'virtually', because the method can be defeated by a carefully and maliciously constructed T (Section 15.8). Our method is related to that of Bóna et al. (2007), but we think that our beachball pictures offer a useful complement to the Bona approach. Many of our results will be familiar, at least to the experts. Fig. 6, for example, which characterizes elastic maps that have symmetry Z ξ for some regular ξ , would have been immediately recognizable to Rychlewski (1984). Likewise, the number eight for the number of elastic symmetry groups is now generally agreed upon (Forte & Vianello 1996;Chadwick et al. 2001). An idealized seismic plane wave travelling in an arbitrary direction in an anisotropic elastic material is apt to be neither a P wave nor an S wave. That is, the wave's vibration direction is neither parallel nor perpendicular to the direction of travel. If, however, the direction of travel is an elastic symmetry axis, then, with some unlikely exceptions, the wave must indeed be either a P wave or an S wave (Fedorov 1968). If also the relevant elastic map T has for its symmetry group one of the reference subgroups of Section 12, then in most cases both the vibration direction and the speed of the wave are simply related to the intrinsic parameters for T. (We do not treat these topics here.) Treatments of elasticity can be found in Fedorov (1968), Nye (1957Nye ( , 1985, Auld (1973), Musgrave (1970), Helbig (1994), Chapman (2004), Slawinski (2015), and many others. A reference for linear algebra is Hoffman & Kunze (1971). Our Appendix G is a glossary of notation. 
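Before the formal development, a rough numerical illustration of the beachball convention may help: the ball for a symmetric matrix E is coloured by the sign of (Ev)·v over unit directions v, with the nodal curves at (Ev)·v = 0. The sketch below is a schematic of that convention, not the authors' plotting code, and it assumes the red/positive shading used for seismological beachballs:

```python
import numpy as np

def beachball_sign(E, n_theta=90, n_phi=180):
    """Sign of (E v) . v on a grid of unit directions v.

    Returns +1 where the quadratic form is positive (shaded 'red',
    assuming the usual seismological convention), -1 where negative
    ('white'), and 0 on the nodal curves.
    """
    theta = np.linspace(0.0, np.pi, n_theta)          # colatitude
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)        # longitude
    T, P = np.meshgrid(theta, phi, indexing="ij")
    v = np.stack([np.sin(T) * np.cos(P),
                  np.sin(T) * np.sin(P),
                  np.cos(T)], axis=-1)                # (n_theta, n_phi, 3)
    q = np.einsum("...i,ij,...j->...", v, E, v)       # (E v) . v
    return np.sign(q)

# Example: a double couple (eigenvalues 1, 0, -1) gives the classic
# four-lune beachball, half positive and half negative.
E = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
signs = beachball_sign(E)
print("fraction positive ('red'):", np.mean(signs > 0).round(3))
```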
The elastic map and the c i jkl Expositions of elasticity are generally based on numbers c i jkl , i, j, k, l = 1, 2, 3, that are assumed to satisfy c i jkl = c jikl (1a) The c i jkl determine a linear mapping T of the 6-D space M of symmetric matrices to itself: where E = (e i j ) and F = ( f i j ) are 3 × 3 symmetric matrices (Aki & Richards 2002, eq. 2.18). If E is the strain matrix at a point in some hypothetical material described by the c i jkl , then F is the corresponding stress matrix. We refer to T as the elastic map. Eqs (1a) and (1b) arise from the symmetry of the strain and stress matrices. Eq. (1c) is due to the assumed existence of a strain-energy function (Aki & Richards 2002). Since the elastic map T is linear, we consider its matrix representation [T] BB with respect to a basis B for M. The calculations will be simplest, and [T] BB will best express T, if the basis vectors are chosen to be orthonormal. The 'vectors' must of course be 3 × 3 symmetric matrices, since they are in M. We take B to be the basis whose elements are The B i are indeed orthonormal, that is, B i · B j = δ i j . Here the dot is the inner product of matrices. The inner product of 3 × 3 matrices M = (m i j ) and N = (n i j ) is defined by (Juxtaposition of matrices, with no dot, signifies matrix multiplication. ) We let t i j be the ijth entry of the 6 × 6 matrix [T] BB . That is, We will find that [T] BB is symmetric. and hence the 21 entries t i j with j ≥ i, in conjunction with the basis B, are enough to determine T and thus to specify the elasticity of the material under consideration. We think that those 21 numbers are better parameters to focus on than the c i jkl . We nevertheless want to be able to translate between the t i j and the c i jkl : As will be explained in Section 2.2, the entries in the jth column of the matrix [T] BB are the coordinates of T(B j ) with respect to the basis B. That is, Since the B i are orthonormal, Hence 972 W. Tape and C. Tape Table 1. Matrices [T] BB for elastic maps T having rotational symmetry Z ξ for ξ as indicated. The rotation Z π/2 , for example, is a symmetry of T if and only if [T] BB = T 4 for some a,c,d,e,f,i,k (Section 6). Blank entries are understood to be zeros. T MONO (ξ = π ) T 4 (ξ = π/2) T XISO (ξ regular) As an example, we calculate t 14 . From eqs (2) and (3) Treating the other 33 entries t i j the same way, we would eventually have [T] BB . Eq. (S29), however, in the Supporting Information has a less painful calculation of [T] BB from the c i jkl . The main point at the moment is that [T] BB turns out to be symmetric. The peculiar form of eqs (10) may at first be regarded as reflecting poorly on the t i j . Conceptually, however, the t i j stand on their own. From eq. (8), the entry t i j tells how much the stress that is associated with strain B j resembles the stress B i . If anything, then, the form of eqs (10) calls for a conceptual justification of the c i jkl , not of the t i j . In this paper, we deal with T and [T] BB rather than with the c i jkl . If the t i j are known-through observation or otherwise-then, from a purely logical point of view, the c i jkl can be dispensed with. If desired, the c i jkl can be found from [T] BB using eqs (1) and (S28). Matrix representations of linear transformations of M Let F be a basis for M with elements (matrices) F 1 , . . . , F 6 . For a matrix E ∈ M we denote its F-coordinate vector by [E] F . 
Thus If the basis F is orthonormal, then Now let S be a linear transformation of M, and let F and G both be bases for M. We define the 6 × 6 matrix [S] G F to be the matrix that takes the F-coordinate vector of E to the G-coordinate vector of S(E): (Think of coordinate vectors as column vectors when matrix multiplication is concerned.) We refer to [S] G F as the matrix of S with respect to the bases F and G. where I is the identity transformation on M, where I 6×6 is the 6 × 6 identity matrix, and where the symbol • denotes composition of functions: Thus eq. (14a) says that matrix multiplication is the matrix analog of composition of functions. Eqs (13) and (14) look innocent enough, but they are the key to many matrix manipulations. Note how their form suggests the correct move. To arrive at a more familiar description of [S] G F : From eqs (11) the coordinate vector for F j with respect to the basis F is where e 1 , . . . , e 6 are the standard basis for R 6 . The jth column of the matrix [S] G F is therefore In words, the jth column of [S] G F consists of the coordinates of S(F j ) with respect to the basis G. The diagram below summarizes the relation between the linear transformation S and its matrix representation. Finally, from eq. (13) with F = G, That is, the 3 × 3 symmetric matrix E is an eigenvector of the transformation S if and only if the coordinate vector [E] F is an eigenvector of the matrix [S] FF . The eigenvalues are the same for both. In terms of the elastic map T We now take S in Section 2.2 to be the elastic map T, and we take both of the bases F and G to be the basis B of eq. (3). From eqs (3), (11), (12), the coordinate vector of the matrix E = (e i j ) ∈ M with Since the elastic map T : M → M is linear, it is determined by its values on any basis for M. To depict T, it is therefore enough to depict some basis elements F 1 , . . . , F 6 together with the corresponding T(F 1 ), . . . , T(F 6 ). This is done by means of 'beachballs', as explained in Section 2.4. In Fig. 2 the basis is B of eq. (3) and T is the elastic map whose matrix with respect to B is that of eq. (23). Beachballs-a picture for strain and stress Since the members of M are 3 × 3 symmetric matrices, they can be depicted as beachballs as is done in seismology. The radius of The lower balls therefore represent strains, and the upper balls represent the corresponding stresses. Since T is linear, it is entirely determined by B i and T(B i ), i = 1, . . . , 6. Since the colouring of a beachball, together with its size, determines its matrix, this picture in principle determines T. See, however, Section 2.4 regarding beachball perturbations. the beachball for E ∈ M is made proportional to E and, for any The nodal curves on the ball, which separate red from white, are The beachball is thus a contour map of the function v → (Ev) · v, but with only one contour, namely the zero contour. If the eigenvalues of the matrix E are of mixed sign, then the beachball for E shows both red and white, and the size and colouring of the ball determine E. If they are all of one sign, though, the ball is all red or all white, and it does not reveal E. In our figures, when we show a beachball for a matrix E whose eigenvalues all have the same sign (but not all equal), we therefore show not the beachball for E itself but for the perturbed matrix where I is the 3 × 3 identity matrix and where the number , positive or negative and not necessarily small, is such as to nudge the resulting beachball into the bicoloured regime. 
The ball then is not strictly correct, but it gives a suggestion of the matrix E. In Fig. 19 the ball for G 6 has been perturbed in this way; instead of being solid red, it has two small white caps. The ball for G 6 in Fig. 21 974 W. Tape and C. Tape has likewise been perturbed, giving it the narrow white band. (The solid red balls for B 6 and T(B 6 ) in Fig. 2 are correct, since those matrices are multiples of the identity.) We could have used a more sophisticated colouring scheme that would have made the perturbations unnecessary, but the existing binary scheme seems enough for what we are trying to show. The perturbations are only for display purposes; all calculations are done with the unperturbed matrices. Matrix of S with respect to an arbitrary orthonormal basis Continuing from Section 2.2, we now assume that the basis G for M is orthonormal. [An example of G would be the basis B of eq. (3).] Denoting the elements (i.e. matrices) of the basis F by F 1 , F 2 , . . . , F 6 and those of G by G 1 , G 2 , . . . , G 6 , we have, for any matrix E ∈ M, The G-coordinate vector for E is therefore From eq. (17) the jth column of the matrix [S] G F consists of the coordinates of S(F j ) with respect to the basis G. From eq. (29), the jth column is therefore with the 6-tuples thought of as column vectors. Hence the ijth entry Explicitly, From eq. (13) the matrix [I] G F takes F-coordinates to Gcoordinates. From eq. (31) with S = I, The jth column of [I] GF is thus the G-coordinate 6-tuple of F j . Two special types of transformation We consider a linear transformation S : V → V. Although V can be any finite dimensional (real) inner product space, the only relevant instances here are V = R 3 and V = R 6 with the standard inner product, and V = M with the inner product defined in eq. (4). The adjoint of S is the linear transformation S * : V → V such that, for all E 1 , E 2 ∈ V, From eq. (31) it follows that for any orthonormal basis G of V, where T = (t ji ) is the transpose of the matrix T = (t i j ). Unitary transformations The unitary transformations are those that preserve inner products, hence distances and angles. From eqs (14) and (34), For a square matrix to be orthogonal means that its transpose is its inverse. Hence eq. (36) Self-adjoint transformations From eq. (34), Since the matrix [T] BB is symmetric and the basis B is orthonormal, then the elastic map T is self-adjoint: Orthogonality terminology Some terminology regarding orthogonality: Vectors v 1 and v 2 in V are orthogonal if v 1 · v 2 = 0. Subspaces W 1 and W 2 of V are orthogonal, written W 1 ⊥ W 2 , if every vector in one subspace is orthogonal to every vector in the other: The orthogonal complement of a subspace W is The subspace W 1 , . . . , W n spanned by W 1 , . . . , W n consists of all the linear combinations of vectors from W 1 , . . . , W n . A subspace W is the orthogonal direct sum of subspaces W 1 , . . . , W n , written W = W 1 ⊥ . . . ⊥ W n , if W is the span of W 1 , . . . , W n and if W 1 , . . . , W n are pairwise orthogonal: [The notation W 1 ⊥ W 2 is therefore ambiguous, with meanings from both eqs (40) If, for example, U is rotation through 30 • about the z-axis in R 3 , then the prime subspaces would be the z-axis and the xy-plane. Those two subspaces would also be prime summands. If, however, the rotation is through 180 • then the z-axis and every horizontal line through the origin would be prime subspaces. The three coordinate axes would be prime summands. 
So also would be the z-axis together with the lines x = y and x = −y in the xy-plane. Finally, when V is the orthogonal direct sum of non-zero subspaces W 1 , . . . , W n , we write T = W 1 The numbers λ 1 , . . . , λ n are then the eigenvalues of T, not necessarily distinct. The eigenspace of T with eigenvalue λ (Section 3.4) is the orthogonal direct sum of the W i having λ = λ i . Orthogonality facts From eqs (41) and (42), for any subspace W of V, Lemma 1. Let U : V → V be unitary and let W be a non-zero subspace of V that is invariant under U. Then W is the orthogonal direct sum of subspaces of V that are prime for U. The lemma is proved in Appendix A. The Spectral Theorem applied to T The Spectral Theorem (Hoffman & Kunze 1971, p. 314) states that for each self-adjoint transformation S : V → V there is an orthonormal eigenbasis-a basis for V consisting of orthonormal eigenvectors of S. Since the elastic map T is self-adjoint, then, according to the Spectral Theorem, there must be an orthonormal basis for M consisting of six eigenvectors of T. Since T : M → M, an 'eigenvector' is now an element of M-a symmetric 3 × 3 matrix. In terms of strain and stress: For any elastic map T, there will be six independent 3 × 3 strain matrices G i such that each of the corresponding stress matrices T(G i ) is a scalar multiple of its strain matrix. Fig. 4 depicts T as did Fig. 2 but with the basis B replaced by an eigenbasis for T. Fig. 2 contain the same information-a complete description of T-but here the orthonormal basis vectors (3 × 3 matrices) G 1 , . . . , G 6 for M are eigenvectors of T. The indicated numbers λ 1 , . . . , λ 6 are the corresponding eigenvalues. For each i = 1, . . . , 6, the beachball for T(G i ) is therefore a resized version of the beachball for G i , with λ i being the resizing factor. Invertibility of T An elastic map T is invertible if and only if its eigenvalues are all non-zero. In that case the eigenvectors of T −1 are the same as those of T, and the eigenvalues of T −1 are the reciprocals of those of T. For T as in Fig. 4, the beachball depiction of T −1 would appear just as in the figure except that the radius of each ball T(G i ) on the top row would change from λ i to 1/λ i . Matrix version of the Spectral Theorem The Spectral Theorem implies that an n × n symmetric matrix S can be written for some numbers λ 1 , . . . , λ n and for some n × n rotation matrix U. The jth column of U is then an eigenvector of S with eigenvalue λ j . Conjugation by a rotation matrix Recall that a square matrix U is orthogonal if UU = I . If also det U = 1 then U is said to be a rotation matrix. We let U be the group of all 3 × 3 rotation matrices. Examples of matrices in U would be the 3 × 3 rotations X ξ , Y ξ , Z ξ through angle ξ about the x, y, z axes, respectively: For U ∈ U, we define a linear transformation U : M → M by In words, U is conjugation by U. From eq. (51), then by comparison with eq. (33), That is, U * is conjugation by U . Then U is unitary (eq. 35), since The beachball for U (E) For E ∈ M the beachball for U (E) is the result of applying the rotation U to the beachball for E. To see this, let E = U (E) and v = U v, with v = (x, y, z) ∈ R 3 . Then Thus, from eq. (25), the rotated point v is red on the ball for E if and only if the original point v is red on the ball for E. See Fig. 5. The rotation U of R 3 operates on the material and operates on the beachballs. The rotation U of M operates on 3 × 3 symmetric matrices (strains and stresses). 
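The claim just illustrated, that the beachball for U E U⊤ is the rotated beachball for E, amounts to the identity ((U E U⊤)(Uv))·(Uv) = (Ev)·v, which is easy to confirm numerically. A minimal sketch, not the authors' code, using the 30° rotation of Fig. 5:

```python
import numpy as np

def z_rotation(xi):
    """3x3 rotation Z_xi through angle xi about the z-axis."""
    c, s = np.cos(xi), np.sin(xi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def conjugate(U, E):
    """Conjugation by U: the map E -> U E U^T on symmetric matrices."""
    return U @ E @ U.T

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
E = 0.5 * (A + A.T)                 # a random symmetric matrix (a strain)
U = z_rotation(np.pi / 6.0)         # the 30-degree rotation of Fig. 5
E_rot = conjugate(U, E)

v = rng.normal(size=3)
v /= np.linalg.norm(v)              # a direction on the beachball for E
v_rot = U @ v                       # the correspondingly rotated direction

# Same value of the quadratic form, so the colouring is carried along by U.
print(np.isclose(v @ E @ v, v_rot @ E_rot @ v_rot))   # True
```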
The matrix of U Eqs (14) and (54) If G is an orthonormal basis for M, then from eq. (36) the matrix [U ] GG is orthogonal, that is, [U ] GG [U ] GG ⊤ = I . Equivalently, Figure 5. Depiction of the rotation U of M, where U is the rotation Z π/6 of R 3 . The operation U is conjugation by U. Whereas U operates on matrices, U should be thought of as operating on their beachballs, regarded as real physical objects. Here, with the z-axis pointing out of the page, the effect of the 30° rotation U = Z π/6 on the beachballs is obvious. The basis B 1 , . . . , B 6 for M is that of eq. (3). From eqs (53), (56), (57), The matrix [U ] GG is found from eq. (31); its ijth entry is More explicitly, If, for example, we take U = Z ξ (eq. 50) and G = B (eq. 3), then with blank entries understood to be zero. An arbitrary U ∈ U has the form U = V Z ξ V ⊤ for some V ∈ U and ξ ∈ R. Since U is unitary, then it is a rotation (of M). Most rotations U of M, however, do not have the form U = U , as can be seen from Section 3.1.3. Retrieving U from U Given a rotation U of M, we ask whether U = U for some U ∈ U. Let E be the diagonal matrix with diagonal entries 3, 2, 1. Let V ∈ U be a TBP eigenframe for U(E), and let μ 1 , μ 2 , μ 3 be the eigenvalues of U(E) in descending order. (See Section 5 for TBP.) If U = U then (μ 1 , μ 2 , μ 3 ) = (3, 2, 1) (63a) Eq. (63b) is from proposition 5 of Tape & Tape (2012). It says that the matrices U and V differ at most by sign changes of two columns. Thus U cannot have the form U = U unless the eigenvalues of U(E) are 3, 2, 1. And when the eigenvalues are in fact 3, 2, 1 there are only four candidates for U; we need only check whether U = V R for R = I, X π , Y π , Z π . Helbig (1994) and Mehrabadi & Cowin (1990) have more complicated approaches to this problem. How T changes when the material is rotated Let's be clear that our entire enterprise deals only with a specific point in some material; we are not interested in how the elasticity changes from one point to another. When we speak of rotations, the rotations should be thought of, intuitively, as rotations about the specified point. Strains and stresses are likewise strains and stresses at the specified point. We imagine the point to be at the origin in R 3 . Suppose now that we use a rotation U ∈ U to rotate our material. We want to compare the elastic maps T and T′ before and after the rotation. Suppose the strains before and after the rotation are E and E′. Both of the matrices E and E′ operate on vectors in R 3 . The output vector assigned to the input vector v by E is Since eq. (64) holds for all v, then, with the analogous fact for stresses included, The maps T and T′ take strain matrices to stress matrices: T(E) = F and T′(E′) = F′. Thus, Since eq. (66) holds for all E then If G is an orthonormal basis for M, then, from eqs (14a) and (58), the matrix equivalent of eq. (67) is The notion of symmetry for T We define V ∈ U to be a symmetry of an elastic map if the map does not change when the relevant material is rotated by V. More precisely, V is a symmetry of T if the two elastic maps T and V • T • V * [from eq. (67)] are the same. V is a symmetry of We require V to be a rotation matrix, that is, an orthogonal matrix with determinant +1. We could have required V to be only an orthogonal matrix, so that perhaps det V = −1. But if V is orthogonal with det V = −1, then −V is a rotation matrix.
Since −V = V , then V being a symmetry of T would be equivalent to −V being a symmetry of T. Allowing det V = −1 would gain nothing. The -test for a symmetry of T From eq. (69), where 0 6×6 is the 6 × 6 zero matrix and Suppose, for example, that we want to find the matrix [T] BB when Z π is a symmetry of T. If T were an arbitrary elastic map, its matrix T with respect to B would be, say, With V = Z π (eq. 50), we therefore want to find the entries a, b, c, . . . of T so that V is a symmetry of T. From eq. (60) with ξ = π and from eq. (71), where blank entries are understood to be zero. From eq. (70a), the rotation V is a symmetry of T if and only if (V, T) is the zero matrix. So h = m = n = q = r = t = u = v = 0, and T in eq. (71) becomes T MONO , where Eigenspaces of T and their role in symmetry For λ ∈ R we let If λ is an eigenvalue of T, then its eigenspace is M T (λ); it consists of the zero vector together with the eigenvectors of T having eigenvalue λ. Theorem 1. Let T be an elastic map and let V be a 3 × 3 rotation matrix. Then V is a symmetry of T if and only if all eigenspaces of T are invariant under V . Proof. Suppose first that V is a symmetry of T. Then T • V = V • T, from eq. (69). Hence if E ∈ M T (λ), then 978 W. Tape and C. Tape Conversely, suppose V (M T (λ)) ⊂ M T (λ) for all eigenvalues λ of T. Then if E is an eigenvector of T with eigenvalue λ, so is V (E), and so Since the eigenvectors E of T span M, then T • V = V • T, by linearity. Then V is a symmetry of T, by eq. (69). Theorem 1 was known to Rychlewski (1984). If T is an elastic map, then, from the Spectral Theorem, M is the orthogonal direct sum of the eigenspaces of T. Thus, if μ 1 , . . . , μ k are the distinct eigenvalues of T, If V is a symmetry of T, then each M T (μ i ) is invariant under V . From Lemma 1, each subspace M T (μ i ) is an orthogonal direct sum (eq. 42) of subspaces prime for V . In the hypothetical illustration in eq. (76), the eigenspace M T (μ 1 ) is the orthogonal direct sum of the prime subspaces W 1 , W 2 , W 3 , whereas the eigenspace M T (μ k ) is itself prime. Thus, (In this example, λ 1 = λ 2 = λ 3 = μ 1 and λ n = μ k .) This means, as in eq. (44), that M is the orthogonal direct sum of the W i , and that on W i the linear transformation T is multiplication by λ i . Here, however, the W i are prime for V . The converse is also seen to be true. Thus, Theorem 2. A rotation matrix V is a symmetry of an elastic map T if and only if, for some numbers λ 1 , . . . , λ n and for some subspaces W 1 , . . . , W n of M, Using eqs (43) and (44), we can paraphrase conditions (i) and (ii) as Some matrices, six-tuples, and subspaces With B 1 , . . . , B 6 the basis B given in eq. (3), we define matrices in M by Thus r, s, t are the angular polar coordinates in the respective x 1 x 2 -, x 3 x 4 -, and x 5 x 6 -planes. With S 1 , S 2 denoting the subspace spanned by S 1 and S 2 , we define subspaces of M by The corresponding subspaces of R 6 are E 12 = e 1 , e 2 = {(x 1 , x 2 , 0, 0, 0, 0) : where e 1 , . . . , e 6 is the standard basis for R 6 . Subspaces of M invariant under Z ξ when ξ is regular The notion of a prime subspace was introduced in Section 2.6.3. If A is a 6 × 6 matrix, a non-zero subspace E of R 6 is prime for A if it is invariant (under multiplication by A) and if it has no proper invariant subspaces. For the 6 × 6 matrix A = [Z ξ ] BB we can try to guess the prime subspaces from inspection of the matrix, and we will usually be right. From eq. 
(60), where R(θ) is the 2 × 2 rotation matrix from eq. (F1) of Appendix F, and where I 2×2 is the 2 × 2 identity matrix. (Blank entries are understood to be zeros.) From eqs (82) and (83), the subspaces E 12 , E 34 , E 56 of R 6 are invariant under [Z ξ ] BB . Since on E 56 the matrix [Z ξ ] BB is the identity, then E 56 itself is not prime (for [Z ξ ] BB ), but all of its 1-D subspaces are prime. Each has the form e 56 (t) for some t. On the subspace E 12 the matrix [Z ξ ] BB is rotation through angle −ξ , and on E 34 it is rotation through angle 2ξ , so E 12 and E 34 are prime for most choices of ξ . But are they always prime, and might there be other prime subspaces? Theorem 3 gives some answers, but in the context of M rather than R 6 . Recall from Fig. 1 that ξ is regular if rotations through angle ξ are neither onefold, twofold, threefold, nor fourfold: ξ is regular ⇐⇒ ξ = ±2π/n (mod 2π ), n = 1, 2, 3, 4 (84) If ξ is regular, they are the prime subspaces for Z ξ . (See Section 3.5 for notation.) Proof. The proof relies on the B-coordinate mapping to go back and forth between M and R 6 . Thus For ξ regular they are the prime subspaces for Z ξ , since E 12 , E 34 , e 56 (t) are then the prime subspaces of R 6 for [Z ξ ] BB ; see Lemma 6 of Appendix B. Prime summands for Z ξ when ξ is regular From eq. (43), subspaces W 1 , . . . , W n of M are prime summands for Z ξ if they are prime for Z ξ and if their orthogonal direct sum is all of M. If ξ is regular, there is not much choice about the prime summands for Z ξ , due to Theorem 3. They can only be, for some t, where Note that, although there is a prime subspace B 56 (t) for each t, there is little choice regarding t in eqs (85)-it must be t ± π/2, since B 56 (t) and B 56 (t ) are to be orthogonal. The prime summands for Z ξ with ξ regular are shown in Fig. 6. The figure illustrates the invariance of the prime summands under Z ξ . Consider, for example, the beachball at θ = 0 in the x 1 x 2plane (the 3:00 position), and rotate it through ξ = 45 • about its own vertical axis (perpendicular to the page). The resulting ball is present in the diagram, and the upper left 2 × 2 submatrix of [Z ξ ] BB (eq. 83), which describes a rotation through −ξ about the origin in the x 1 x 2 -plane, tells where to find it. (It is at the 4:30 position.) The balls in the x 3 x 4 -plane and x 5 x 6 -plane work analogously, but in the x 3 x 4 -plane the matrix [Z ξ ] BB is rotation through 2ξ , and in the x 5 x 6 -plane it is the identity. The xyz spatial coordinates have no logical relation to the Bcoordinates x 1 . . . x 6 . In a diagram like Fig. 6, where the xyz directions must be known in order to orient the beachballs, some decision must therefore be made that relates the two coordinate systems. We chose to have z point out of the page and x to the right. In the same vein, a beachball has no particular location in xyz space. Alternatively, all beachballs can be thought of as centred at the origin in xyz space. The location of a beachball in a diagram like Fig. 6 only serves to indicate the coordinate 6-tuple of the ball. Elastic maps with symmetry Z ξ for regular ξ According to eqs (78), an elastic map T having symmetry V is determined by specifying prime summands for V and by assigning a number to each of them. If V = Z ξ with ξ regular, then the prime summands are B 12 , B 34 , B 56 (t) , B 56 (t ) ; they depend only on t. 
Hence T is determined by giving t to specify B 56 (t) and B 56 (t ) , and then by assigning respective numbers λ 1 , λ 3 , λ 5 , λ 6 to B 12 , B 34 , B 56 (t) , B 56 (t ) . That is, T = T XISO (t), where, in the notation of eq. (44), The repetitions λ 1 λ 1 and λ 3 λ 3 in the orthogonal direct sum are reminders that dim B 12 = 2 and dim B 34 = 2. For T = T XISO (t) as in eq. (86), its symmetry Z ξ is seen in Fig. 6, though the figure itself does not involve T. In Fig. 6b, for example, the effect of T would be to resize the balls by the constant factor λ 3 . One can rotate a ball through angle ξ about its vertical axis, and then resize it, or one can resize it and then rotate it. The result is the same, but only because the rotated ball is in the same subspace as the original ball, so that the resizing factor does not change. is for the matrix whose B-coordinate vector is (0, 0, x 3 , x 4 , 0, 0). (c) Similar, but a ball at (x 5 , x 6 ) is for the matrix whose B-coordinate vector is (0, 0, 0, 0, x 5 , x 6 ). Of the four prime summands, only B 56 (t) and B 56 (t ) depend on t; here t = 30 • . According to Theorem 3 and as is seen in the figure, each of the prime summands is invariant under Z ξ . (The z-axis for the beachballs is perpendicular to the page.) If E is in one of the four subspaces, then not only is Z ξ (E) in the same subspace, but the matrix [Z ξ ] BB (eq. 83) tells where in the diagram to find its beachball; see text. An elastic map T having symmetry Z ξ for regular ξ is determined by giving t to specify the prime summands and then by assigning a number to each of them (eq. 86). Table 2. Selected subspaces W of M relevant to elastic symmetry. The subspace B 3 , B 4 , B 5 , for example, is the set of all deviatoric matrices with a principal axis (i.e. eigendirection) vertical. The matrices B 1 , . . . , B 6 are as in eq. (3). The descriptions in the second and third columns are intrinsic; they do not involve the basis B or any other basis of M. The last column gives the symmetry group S(W) of W, to be explained in Section 13. Subspaces that are orthogonal complements of each other, such as B 1 , B 2 , B 3 and B 4 , B 5 , B 6 , have the same symmetry group. Deviatoric, and z-axis is a principal axis Deviatoric, and xyz-axes are principal axes Double couple, and z-axis is a fault normal e 11 = e 22 = e 33 = e 12 = 0 Double couple, and z-axis is the null axis e 13 = e 23 = e 33 = 0, e 11 = −e 22 Proof. Theorem 4 is the special case of Theorem 7 in which r = 0, The form of the matrix [T] BB in Theorem 4 dictates the definition of the matrix T XISO , namely, The matrices T XISO and [T] BB are the same when Appealing to eq. (F2) of Appendix F, we see that T XISO and [T] BB are also the same when where θ (x, y) is the ordinary angular polar coordinate of the point (x, y). (If e = f and k = 0 then θ ∞ is undefined, but t can be chosen arbitrarily.) For T having symmetry Z ξ for ξ regular, eqs (88) give the matrix entries a, c, e, f, k of T XISO = [T] BB in terms of the 'intrinsic' parameters t, λ 1 , λ 3 , λ 5 , λ 6 of T, and eqs (89) do the reverse. The intrinsic parameters are not unique, but it hardly matters. From Appendix F we see that two 5-tuples of intrinsic parameters give the same T: Thus the expressions for λ 5 and λ 6 in eqs (89) can be swapped if t = θ ∞ is replaced by t = θ ∞ + π/2, but there is generally no reason to do so. One tuple of intrinsic parameters is enough. 
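The symmetry condition itself, T = V ∘ T ∘ V*, can be checked numerically for a candidate elastic map without constructing [T] BB in the basis B. The sketch below applies Hooke's law in conventional Voigt notation (a bookkeeping choice made here for illustration, not the basis B of this paper) to a hypothetical transversely isotropic stiffness with symmetry axis z, and confirms that Z ξ is a symmetry for an arbitrary regular angle ξ; the numerical constants are illustrative only:

```python
import numpy as np

def z_rotation(xi):
    c, s = np.cos(xi), np.sin(xi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical transversely isotropic stiffness (GPa), symmetry axis = z,
# written in conventional Voigt order (11, 22, 33, 23, 13, 12).
C11, C33, C44, C66, C13 = 40.0, 30.0, 8.0, 10.0, 12.0
C12 = C11 - 2.0 * C66
C = np.array([[C11, C12, C13, 0,   0,   0],
              [C12, C11, C13, 0,   0,   0],
              [C13, C13, C33, 0,   0,   0],
              [0,   0,   0,   C44, 0,   0],
              [0,   0,   0,   0,   C44, 0],
              [0,   0,   0,   0,   0,   C66]])

def T(E):
    """Elastic map: strain matrix E -> stress matrix, via Voigt-notation Hooke's law."""
    eps = np.array([E[0, 0], E[1, 1], E[2, 2],
                    2*E[1, 2], 2*E[0, 2], 2*E[0, 1]])   # engineering shear strains
    s = C @ eps
    return np.array([[s[0], s[5], s[4]],
                     [s[5], s[1], s[3]],
                     [s[4], s[3], s[2]]])

# V is a symmetry of T iff T(V E V^T) = V T(E) V^T for every strain E.
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
E = 0.5 * (A + A.T)
V = z_rotation(0.7)   # an arbitrary (regular) angle about the z-axis
print(np.allclose(T(V @ E @ V.T), V @ T(E) @ V.T))   # True: transverse isotropy
```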
We have shown that for ξ regular the following are equivalent: We refer to the condition T = T XISO (t) as an intrinsic characterization of T, in order to distinguish it from the condition [T] BB = T XISO , which involves a basis for M. Although the subspaces B 12 , B 34 , B 56 (t) , B 56 (t ) in the intrinsic characterization appear to involve the basis B, in fact they can be described without B; see Table 2 for B 12 and B 34 , and see eq. (93) for B 56 (t). Intrinsic characterizations of elastic maps go back at least to Rychlewski (1984). Also see Bóna et al. (2007). Transverse isotropy Theorem 5. If Z ξ is a symmetry of an elastic map T for some regular ξ , then Z ξ is a symmetry of T for all ξ . Proof. Let Z ξ be a symmetry of T. To show that the rotation Z β is a symmetry of T, we need only show that the eigenspaces of T are invariant under Z β (Theorem 1). To that end, let W be an eigenspace of T. Then W is invariant under Z ξ , by Theorem 1. Hence W is an orthogonal direct sum of prime subspaces for Z ξ . Since ξ is regular, the prime subspaces for Z ξ are B 12 , B 34 , B 56 (t) . Those subspaces, by Theorem 3, are invariant under Z β , hence so is W. An elastic map T is said to be transverse isotropic (with respect to the z-axis) if Z ξ is a symmetry of T for all ξ . Theorem 5 says that if Z ξ is a symmetry of T for some regular ξ , then T is transverse isotropic. Herman (1945) has a weaker version of Theorem 5. Where our version has ξ regular, Herman has ξ = 2π/n for some integer n > 4. We need the stronger version in deriving the elastic symmetry groups (Section 14, especially Lemma 3). S U B S PA C E S O F M D E S C R I B E D I N T R I N S I C A L LY In Table 2 we list some subspaces of M that will be relevant to elastic symmetry. To describe them we borrow terminology from seismology, which we explain next. We do not intend, however, to discuss applications to seismology. As always, M is the space of 3 × 3 symmetric matrices. A TBP frame for a matrix E ∈ M is a rotation matrix whose first, second, and third columns are eigenvectors T, B, P of E corresponding to the respective largest, intermediate, and smallest eigenvalues of E. A deviatoric matrix in M is one with trace equal to zero. A double couple is a deviatoric matrix with determinant zero. Its eigenvalues therefore have the form μ, 0, −μ. The beachball for a double couple has the classic beachball look, with the ball surface divided into four congruent lunes having alternating colours (e.g. B 2 in Fig. 3). The fault planes of the double couple are the two planes that define the boundaries of the lunes; the normal vectors to the fault planes are T±P. The null axis is the intersection of the two fault planes; it is in the direction of B. A crack matrix is a matrix in M with two equal eigenvalues (not three). Its c-axis is in the direction of the eigenvector with the simple (i.e. non-repeated) eigenvalue. The beachball for a crack matrix has rotational symmetry about the c-axis through all angles; if bicoloured, it looks more like a striped pool ball than a traditional beachball (e.g. B 5 in Fig. 3). An isotropic matrix in M is a multiple of the identity . Its beachball is all red or all white. A generic matrix E ∈ M is neither a double couple (DC), a crack matrix, nor an isotropic matrix. Thus E is generic ⇐⇒ E has distinct eigenvalues but is not a DC The matrix for the beachball in Fig. 7(d) is generic. The subspace B 12 in eq. (81) and Fig. 6(a) consists of all double couples having a fault plane horizontal. 
See also Fig. 7(a). The subspace B 34 in eq. (81) and Fig. 6(b) consists of all double couples with null axis vertical. The subspace B 56 in eq. (81) and Fig. 8 consists of all crack matrices with c-axis vertical. They are the diagonal matrices with diagonal entries of the form p, p, q. The red or white band on the beachball for such a matrix has angular half-width ν given by Figure 8. A suggestion of the subspace B 56 . A beachball at (x 5 , x 6 ) is for the matrix in M whose B-coordinate vector is (0, 0, 0, 0, x 5 , x 6 ). The 1-D subspaces of B 56 correspond to lines through the origin in the x 5 x 6 -plane; two such subspaces appear in Fig. 6(c), though with only two beachballs shown for each. The elements of B 56 are 3 × 3 diagonal matrices with diagonal entries of the form ( p, p, q). The triples of numbers in the diagram give diagonal entries for matrices at the indicated points. The points 1 12 and11 2 on the x 5 -axis are the CLVD points, and 1 1 1 and111 on the x 6axis are the pure isotropic points. Any balls between 0 01 and 1 1 0 would have red bands and white caps, those between 001 and110 would have white bands and red caps. The two distinctive curves have polar equation r = 1 + ν(θ ), where ν(t) is the angular half-width of the red or white band on the beachball for B 56 (t) (eq. 93). Triples such as 1 12 denote unit vectors, with bars indicating minuses, so that 1 12 = (1, 1, −2)/ √ 6. tan ν = √ − p/q (from eq. (26)). The angular half-width of the band for the crack matrix B 56 (t) is therefore, from eqs (3) and (79), The subspace B 6 consists of the isotropic matrices. Its orthogonal complement B 6 ⊥ = B 1 , . . . , B 5 consists of the deviatoric matrices. The subspace B 4 , B 5 , B 6 consists of the diagonal matrices, and B 4 , B 5 consists of the deviatoric diagonal matrices. E L A S T I C M A P S W I T H S Y M M E T RY Z π/ 2 If an elastic map T has the symmetry Z π/2 then it also has the symmetry Z π . Hence from Section 3.3.1 its matrix with respect to B has at least the form of T MONO in eq. (73). With V = Z π/2 , we therefore look for the entries a, b, c, . . . of T MONO such that V is a symmetry of T. From eqs (60) and (70b), W. Tape and C. Tape The rotation V is a symmetry of T if and only if (V, T) is the zero matrix. Setting b = a and g = j = o = p = s = 0 in T MONO gives [T] BB = T 4 with T 4 as in Table 1. We will see momentarily that the intrinsic characterization of elastic maps T having symmetry Z π/2 is T = T 4 (s, t), where Theorem 6. (The matrix for T 4 (s, t)). For T = T 4 (s, t), The theorem is the special case of Theorem 7 in which , and λ 1 = λ 2 . The matrix [T] BB in Theorem 6 is the same as the matrix T 4 in Table 1 when The two matrices are also the same when t, λ 1 , λ 5 , λ 6 are as in eqs (89) (97) For T having symmetry Z π/2 , eqs (96) give the matrix entries a, c, d, e, f, i, k of T 4 = [T] BB in terms of the intrinsic parameters s, t, λ 1 , λ 3 , λ 4 , λ 5 , λ 6 of T, and eqs (97) do the reverse. The intrinsic parameters are not unique, but it rarely matters. The following three 7-tuples all give the same T. We have now shown that the following are equivalent: To see that the five subspaces in the orthogonal direct sum (eq. 95) are invariant under Z π/2 , note that invariance has nothing to do with the λ i . We can therefore assume for a moment that λ 1 , λ 3 , λ 4 , λ 5 , λ 6 are distinct, so that the five subspaces are eigenspaces of T. By Theorem 1 they are therefore invariant under Z π/2 . 
The four 1-D subspaces are then automatically prime for Z π/2 . The other subspace Figure 9. Like Fig. 6 but depicting prime summands for Z ξ when ξ = π/2 rather than for regular ξ . Each of the prime summands If E is in one of the five subspaces, then not only is Z ξ (E) in the same subspace, but the matrix [Z ξ ] BB (eq. 83 with ξ = π/2) tells where in the diagram to find its beachball. An elastic map T having symmetry Z π/2 is determined by giving s and t to specify the five prime summands and then by assigning a number to each of them (eq. 95). Here s = 55 • and t = 40 • . B 12 is also prime, since its only proper subspaces have the form B 12 (r ) , and they are not invariant under Z π/2 (e.g. Fig. 6a). Prime summands for Z π/2 -one quintuple of summands for each s and t-are therefore They are shown in Fig. 9 for s = 55 • and t = 40 • . An elastic map T having symmetry Z π/2 is determined by specifying s and t to give the five prime summands and by attaching a number λ 1 , λ 3 , λ 4 , λ 5 , λ 6 to each. E L A S T I C M A P S W I T H S Y M M E T RY Z π Let U = U 4×4 be the 4 × 4 rotation matrix Let B 2 = B 2 (r, U ) be the orthonormal basis of M whose elements are defined by their B-coordinate 6-tuples as follows: where r = r + π/2. Thus We will see momentarily that the intrinsic characterization of elastic maps T having symmetry Z π is T = T 2 (r, U ), where Theorem 7. (The matrix for T 2 (r, U )). For T = T 2 (r, U ), where [I] B B 2 is the matrix that takes B 2 -coordinates to Bcoordinates. From eq. (32) its jth column is the B-coordinate 6-tuple of the jth element of B 2 (r, U ). Hence from eq. (102), Eq. (105) therefore becomes which is the same as in the theorem. The matrix [T] BB in Theorem 7 has the same form as T MONO in Table 1, but when are the two matrices equal? From the theorem it is obvious how to find the entries a, b, c, . . . of T MONO in terms of the intrinsic parameters r, U, λ 1 , . . . , λ 6 of T. Conversely, one gets r, λ 1 , λ 2 from the submatrix a g g b of T MONO using eqs (F2) Here the z-axis for the beachballs is perpendicular to the page, making obvious the symmetry with respect to Z π . An elastic map T having symmetry Z π is determined by giving r and U to specify the six prime summands and then by assigning a number to each of them (eq. 104). or (F3), and, in principle, one gets U and λ 3 , λ 4 , λ 5 , λ 6 from an eigensystem for the lower right 4 × 4 submatrix of T MONO . Getting the eigensystem symbolically, however, is not appealing, since the characteristic polynomial is quartic. (Finding it numerically is not a problem.) In Section 3.3.1 we showed that an elastic map T has symmetry Z π if and only if its matrix with respect to B has the form T MONO . The following are therefore equivalent: Reasoning as we did from eq. (95), we find from eq. (104) that the prime summands for Z π -one sextuple for each choice of r and U-are (The colour reversal produced by 180 • rotation of the first and second beachballs is acceptable, since the matrix −E is always in the subspace E .) An elastic map T having symmetry Z π is determined by a number r and a 4 × 4 rotation matrix U to specify the six prime summands, and by numbers λ 1 , . . . , λ 6 to be assigned to them. Prime summands for Z 2π/3 Motivated by eq. (C1) of Appendix C, we define the matrix B (θ, u, v) by Then W. Tape and C. Tape For each u and v there is a subspace of M spanned by B (0, u, v) and B(π/2, u, v), namely, where e(θ, u, v) is from eq. (C1). Using eq. 
(113) to translate between M and R 6 , we conclude from Lemma 7 of Appendix C: Theorem 8. The subspaces of M that are prime for Z 2π/3 are B(u, v) and B 56 (t) (any t, u, v). The prime summands of M for Z 2π/3 are then where The prime summands for Z ξ with ξ = 2π/3 are a generalization of those for regular ξ in the sense that We let B(t, u, v) be the orthonormal basis of M whose elements are When feasible, we abbreviate B(t, u, v) to B 3 . For ξ = 2π/3 the 6 × 6 matrix of Z ξ with respect to B(t, u, v) is found from eqs (3) and (59) to be the same as [Z ξ ] BB in eq. (83); it is independent of t u v. Since for ξ = 2π/3 a rotation through 2ξ is the same as a rotation through −ξ , In Fig. 11 the coordinate planes are for coordinates with respect to the basis B(t, u, v). The figure illustrates the prime summands and their invariance under Z 2π/3 . In Fig. 11(b), for example, if the ball at θ = 0 (the 3:00 position) is rotated through an angle of 2π/3 about its own vertical axis (perpendicular to the page), the resulting ball is present in the diagram. According to the middle 2 × 2 submatrix in eq. (117), it should be the ball at θ = −2π/3 (the 7:00 position). The remarkable subspaces B(u, v) Whereas the subspace B 12 consists of the double couples having a fault plane horizontal, and B 34 consists of the double couples with null axis vertical, the subspaces B(u, v) are more subtle and intriguing. Since the matrices B (0, u, v) and B(π/2, u, v) are both orthogonal to B 5 and B 6 in the basis B (eqs 3), the subspace B(u, v) is a (2-D) subspace of B 12 ⊥ B 34 . The matrices in B(u, v) are therefore deviatoric, that is, each has trace zero. (The x 5 x 6 plane is the same as in Fig. 9 Elastic symmetry with beachball pictures 985 A beachball pattern is determined by the eigenvalue triple of the beachball matrix, with the entries of being in descending order. Conjugating a matrix preserves its eigenvalues, and hence, with β = −v/3 in eq. (119), and then Thus the totality of beachball patterns in the subspace B(u, v) is not affected by v. The subspace B(u 0 , v) also has the special property that its matrices all have a common eigenframe. See Section S3. Elastic maps with symmetry Z 2π/3 We now give an intrinsic characterization of elastic maps T that have symmetry Z 2π/3 . The reasoning is the same as for regular ξ in Section 4.3, but now the prime summands are as in eq. (114). From eqs (78), the map T is determined by giving t, u, v to specify the prime summands, and by assigning respective numbers λ 1 , λ 3 , λ 5 , λ 6 , to them. That is, Theorem 9. (The matrix of T 3 (t, u, v)). For T = T 3 (t, u, v), where [I] B B 3 is the 6 × 6 matrix Proof. From eq. (122) the matrix of T with respect to B(t, u, v) is diagonal with diagonal entries λ 1 , λ 1 , λ 3 , λ 3 , λ 5 , λ 6 . It is related to The matrix [T] BB in Theorem 9 dictates the definition of the matrix T 3 in Table 1. The two matrices are the same when the entries of T 3 are B(u, v). For a deviatoric matrix, the number γ determines the pattern on the beachball, with γ = 0 giving a double couple (DC) and with γ = ±π/6 giving a CLVD. The number γ MAX (u) is the maximum γ for matrices in B(u, v). The green segments indicate the range of γ in each of Figs 11(a,b). The very short green segment at u = 105 • is consistent with the beachballs in Fig. 11(b) all being nearly double couples. 
a = λ 1 cos 2 u + λ 3 sin 2 u c = λ 3 cos 2 u + λ 1 sin 2 u cos v e, f, k are as in eqs (88) The two matrices are also the same if h 2 + m 2 = 0 and t, λ 5 , λ 6 are as in eqs (89) Verification of eqs (126) is just a calculation, though best done by computer. (One might nevertheless wonder where the equations come from. See Appendix D). For the case h = m = 0 that is ruled out in eqs (126), the matrix T 3 becomes T XISO and hence is covered by eqs (89). For T having symmetry Z 2π/3 , eqs (125) give the matrix entries a, c, e, f, h, k, m of T 3 = [T] BB in terms of the intrinsic parameters t, u, v, λ 1 , λ 3 , λ 5 , λ 6 of T, and eqs (126) do the reverse. As usual, the intrinsic parameters are not unique. The following 7-tuples of intrinsic parameters all give the same T. One 7-tuple is usually enough, however. We now have the three equivalent conditions: 986 W. Tape and C. Tape a, c, e, f, h, k, m) for some a, c, e, f, h, k, m (128c) E L A S T I C M A P S W I T H S Y M M E T RY Z ξ W H E N ξ = 0 It remains to treat ξ = 0. The matrix Z ξ is then the identity matrix I. Since I is the identity transformation, all subspaces of M are invariant under I , hence all 1-D subspaces are prime for I . Any six 1-D and mutually orthogonal subspaces are therefore prime summands for I . Basis elements for the subspaces can be specified by a 6 × 6 rotation matrix U; the columns of U are the B-coordinate vectors for the basis elements, call them B 1 (U ), . . . , B 6 (U ). An elastic map T with symmetry I-that is, any elastic map whatsoever-is therefore determined by specifying U to give the prime summands B 1 (U ) , . . . , B 6 (U ) and by specifying numbers λ 1 , . . . , λ 6 to be assigned to them: This is not new. The numbers λ 1 , . . . , λ 6 are the eigenvalues of T, and B 1 (U ), . . . , B 6 (U ) are the eigenvectors. The group of 6 × 6 rotation matrices has dimension 15, and so 15 real parameters would be required to specify U. 0 H O W T H E S Y M M E T R I E S C H A N G E W H E N T H E M AT E R I A L I S RO TAT E D Elastic maps T and T are defined to be equivalent if there is a matrix U ∈ U such that Section 3.2 gave some motivation for the definition; the maps T and T can be regarded as describing the elasticity in a material before and after rotating the material using U. Section S2 has a test for equivalence of elastic maps whose eigenvalues are simple. We denote the group of symmetries of T by S T : Then a group U of rotations is said to be an elastic symmetry group if U = S T for some elastic map T. For U and V both in U, and with T = U • T • U * , Thus V is a symmetry of T if and only if U V U is a symmetry of T . If T and T are equivalent, then their symmetry groups S T and S T are conjugate. More precisely, if where U S T U consists of all matrices U V U , V ∈ S T . Orientation information in T 4 , T 3 , T 2 From here up until Section 16, virtually all of the matrix representations are with respect to the basis B. When feasible we therefore drop the subscript and write [T] for [T] BB . Recall that conjugation of T by U formally expresses the effect on T of rotating the material using the matrix U ∈ U. Recall also that T 4 (s, t), T 3 (t, u, v), T 2 (r, U ), and T XISO (t) are the most general elastic maps having the respective symmetries Z π/2 , Z 2π/3 , Z π , and Z ξ for ξ regular. Since Z β is a symmetry of T XISO , rotating by Z β has no effect on T XISO , but for most β it does impact T 4 , T 3 , and T 2 . Thus Eqs (134) and (136) follow by inspection of the matrix [Z β ] BB (eq. 
83) and the matrices of T 4 and T 2 in Theorems 6 and 7. Eq. (135) follows from Theorem 9 and from the fact that (from eqs 83 and 124) From eq. (134), the elastic mappings T 4 (s, t) and T 4 (s + 2β, t) are equivalent. They describe a material having symmetry Z π/2 , before and after being rotated by Z β . The elastic maps T 3 (t, u, v) and T 3 (t, u, v + 3β) likewise describe a material having symmetry Z 2π/3 , before and after being rotated by Z β . Section S1 has the very simple matrix equivalents of eqs (134) and (135). U N A N T I C I PAT E D B U T U N AV O I DA B L E S Y M M E T R I E S We have considered elastic maps T that have symmetry Z ξ . Except when ξ = nπ , the map T turns out to have other (non-trivial) symmetries as well, perhaps unexpected. Fig. 6, for example, showed prime summands for Z ξ when ξ is regular. The beachballs for the prime summands obviously have every horizontal (i.e. in the plane of the paper) axis as a twofold axis of symmetry. Thus, Theorem 10. (A regular axis requires an orthogonal twofold axis.) If an elastic map T has the symmetry Z ξ for some regular ξ , then it also has as symmetries all twofold rotations about horizontal axes. (It also has all rotations about the z-axis as symmetries, by Theorem 5.) Symmetries accompanying a fourfold rotation What about a map T that has symmetry Z π/2 ? Prime summands for Z π/2 are shown in Fig. 9, and here again a horizontal twofold axis is obvious, now that we think to look for it. There are four of them, and Fig. 13 shows how to find them; they are at θ = s/2 + nπ/4. Thus, Theorem 11. (A fourfold axis requires an orthogonal twofold axis.) If an elastic map T has the symmetry Z π/2 , so that T = T 4 (s, t) for some s, t, , then it has four twofold symmetries with horizontal axes at θ = s/2 + nπ/4. In terms of the entries a, c, . . . of the matrix T 4 the horizontal twofold axes of T are at θ = θ 4 /2 + nπ/4, where, from eq. (97), θ 4 = 1 2 θ(c − d, 2i). As one might expect, there is nothing special about the fourfold axis for T being vertical. If T has a fourfold symmetry with axis in the direction of v ∈ R 3 then it also has four twofold symmetries with axes perpendicular to v. Figure 13. The same as Fig. 9(b), but showing one (blue) of the four horizontal twofold symmetry axes of T = T 4 (s, t) guaranteed by Theorem 11. The fourfold axis of T is in the z direction, here pointing out of the paper, and the twofold axes are in the xy-plane, in the directions θ = s/2 + nπ/4. Both of the subspaces B 34 (s) and B 34 (s ) are seen to be invariant under the twofold rotation about the blue axis. Symmetries accompanying a threefold rotation We consider T = T 3 (t, u, v)-the most general elastic map having a threefold symmetry with vertical axis. A twofold symmetry with horizontal axis would have the form V = Z β Y π Z β for some β. Using eqs (59) and (116), we calculate the matrix [V ] B 3 B 3 = (t i j ) of V with respect to the basis B 3 = B(t, u, v) and find that it has the unwanted entry We therefore try setting β = v/3. The prime summands B(u, v), B(u , v), B 56 (t) , B 56 (t ) -and hence the eigenspaces of T-are therefore invariant under V , where V = Z v/3 Y π Z v/3 = Z θ X π Z θ is now the 180 • rotation about the horizontal axis in the θ = π/2 + v/3 direction. The rotation V must be a symmetry of T, by Theorem 1. Thus, Theorem 12. (A threefold axis requires an orthogonal twofold axis.) 
If an elastic map T has the symmetry Z 2π/3 , so that T = T 3 (t, u, v) for some t, u, v, , then it has three twofold symmetries with horizontal axes at θ = π/2 + v/3 + nπ/3. In terms of the entries a, c, . . . of the matrix T 3 , the twofold axes of T are at θ = π/2 + θ v /3 + nπ/3, where θ v = θ(m, −h) (eq. 126). Figure 14. The same as Fig. 11(b), still with u = 15 • and v = 105 • , but showing only the beachballs at θ = 0 and θ = 110 • . Ignoring translations, the two beachballs are images of each other under the twofold rotation V whose axis is the blue segment. The rotation is more easily seen as a reflection in the plane (black segment) perpendicular to the axis. The rotation V is one of the three twofold symmetries of the elastic map T = T 3 (t, u, v) that are guaranteed by Theorem 12. The value θ = 110 • for the second ball was determined from the third column of the matrix in eq. (139); with the one ball at θ = 0 , its image must be at θ = π − 2v/3 = 110 • . The ball at θ = 110 • resembles the θ = 120 • ball in Fig. 11(b). There must be a twofold axis (three in fact) in Fig. 11, but it does not give itself away. Fig. 14 offers some help. Consistent with Theorems 11 and 12, Fedorov (1968) recognized that the distinctions between what are effectively our matrices T 4 and T TET , and between our T 3 and T TRIG , are only distinctions in orientation; it is a matter of where the twofold axes fall. (See Table 4 for T TET and T TRIG .) A baseless division into two groups of the classes in the tetragonal and trigonal systems is used in many works... (Fedorov 1968, p 31) A twofold axis requires no other twofold axes An elastic map T with the twofold symmetry Z π may fail to have a horizontal twofold axis, but it has an orientation marker nonetheless. For T = T 2 (r, U ), the eigenvector B 12 (r ) is a double couple with a horizontal fault plane. Its null axis, necessarily horizontal, is in the direction θ = −r , as for example in Fig. 10, where r = 30 • . The meaning of the parameters r, s, t, u, v Eqs (91b), (99c), (108c), (128b) were supposed to give conceptual characterizations of elastic maps having various rotational symmetries about the z-axis. We can now fulfill that promise by explaining the parameters r, s, t, u, v that appear in the equations. The numbers s and v are orientation parameters; they locate the respective twofold axes of T 4 (s, t) and T 3 (t, u, v), as described in Sections 11.1 and 11.2. The number r is also an orientation parameter, as explained in Section 11.3, but rotating the hypothetical material about the z-axis changes both r and U, as seen in eq. (136). The number u affects beachball patterns in the subspace B(u, v), as explained in Section 8.2. The number t determines the pattern on the beachball for the crack matrix B 56 (t), see eq. (93). The 24 rotational symmetries of the cube with vertices (±1, ±1, ±1) U TET I, Z π/2 , Z π , Z 3π/2 , X π , Y π , 110 π , 110 π U TRIG I, Z 2π/3 , Z 4π/3 , √ 310 π , √ 310 π , Y π U ORTH I, X π , Y π , Z π U MONO I, Z π U 1 I Table 3 lists the 'reference' subgroups of U; they are U 1 , U MONO , . . . , U ISO . In Section 14 we will see that, for any elastic map T, the group S T of its symmetries is a conjugate of one of the reference groups. In that sense there are only eight elastic symmetry groups. 2 T H E R E F E R E N C E S U B G RO U P S O F U To elaborate on the reference groups: The matrices in U XISO are the rotational symmetries of a vertical cylinder. 
The 24 rotational symmetries of any cube (the 'gyroid' group) are the fourfold rotations about the face centres of the cube, the twofold rotations about the midpoints of the edges, and the threefold rotations about the vertices. For U CUBE the cube is oriented with its face centres on the xyz coordinate axes. The matrices in U CUBE are the 3 × 3 rotation matrices having exactly one non-zero entry in each row and column, with that entry being ±1. The 8 members of U TET are the (rotational) symmetries of a square prism. The 6 members of U TRIG are the symmetries of an equilateral triangular prism. The four members of U ORTH are the symmetries of a brick. The two members of U MONO are the symmetries of a wedge, an isosceles triangular prism. From the second column of Table 3 one can read off the containments among the reference groups. In the containment diagram, solid arrows mean 'is a subgroup of' and dashed arrows mean 'is a subgroup of a group conjugate to'. The integers give the number of elements in the group, if finite. The subscripts MONO, ORTH, TET, TRIG are for the terms monoclinic, orthorhombic, tetragonal, and trigonal, which are relics from an era, not yet completely past, when crystallographic symmetries were thought to determine elastic symmetries. More informative terms would be wedge-like, brick-like, square-prismatic, and (equilateral) triangular-prismatic. A material whose elastic symmetry is square-prismatic, for example, can be sculpted into a square prism whose geometric symmetries are the same as its elastic symmetries. The term transverse isotropic would become cylindrical, and isotropic would become spherical.

Elastic maps for each reference group

For each reference group U = U 1 , U MONO , . . . , U ISO we now give both an intrinsic and a matrix characterization of elastic maps whose symmetries are at least those in U. The matrix characterizations are the 'reference' matrices T 1 , T MONO , . . . , T ISO in Table 4. Each matrix characterization can be verified using the Δ-test of eqs (70). In most cases the intrinsic characterization can then be found just by inspection of the matrix characterization. A fancier approach is to appeal to Theorem 7, which pertains to the vertical twofold symmetry Z π . Since all of the reference groups except U 1 and U TRIG contain Z π , the intrinsic characterizations associated with the other six reference groups are special cases of that for U MONO (eq. 147b). (Their six reference matrices are likewise special cases of T MONO , as seen in Table 4.) The intrinsic characterizations are indeed intrinsic, in the sense that the subspaces in their orthogonal direct sums can be described without mentioning B or any other basis of M (see Table 2). We have talked about the notion of prime summands for an individual rotation matrix. The notion also makes sense for a group of rotation matrices; 'invariant' then means invariant under V for all V in the group. In each of the intrinsic characterizations below, the subspaces in the orthogonal direct sum are prime summands for the relevant reference group. Recall that S T is the group of symmetries of the elastic map T.

Reference group U ISO

The condition S T = U ISO is equivalent to each of eqs (141), for some λ 1 and λ 6 (141b). The deviatoric summand in eq. (141b) is described in Table 2; it consists of the deviatoric matrices.
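Before turning to the remaining reference groups, note that the isotropic characterization can be made completely explicit without any basis: the map scales the deviatoric part of a matrix by one constant and its isotropic part by another. The following is a minimal Python sketch; the assignment of λ 1 to the deviatoric summand and λ 6 to the isotropic summand follows the labelling used above, and the numerical values are illustrative.

import numpy as np

def isotropic_map(E, lam1, lam6):
    """Apply an elastic map with full isotropic symmetry to a 3x3 symmetric E:
    scale the deviatoric part of E by lam1 and the isotropic part by lam6.
    (A sketch of the intrinsic characterization of eq. 141b; the assignment of
    lam1 to the deviatoric summand follows the section's labelling convention.)"""
    iso = (np.trace(E) / 3.0) * np.eye(3)   # projection of E onto <I>
    dev = E - iso                           # deviatoric part, trace zero
    return lam1 * dev + lam6 * iso

# Any rotation U leaves such a map unchanged, since dev(U E U^T) = U dev(E) U^T:
E = np.array([[2.0, 1.0, 0.0], [1.0, 0.0, 0.5], [0.0, 0.5, -1.0]])
lam1, lam6 = 2.0, 5.0
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
lhs = U @ isotropic_map(U.T @ E @ U, lam1, lam6) @ U.T   # (U o T o U*)(E)
print(np.allclose(lhs, isotropic_map(E, lam1, lam6)))    # expected: True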
Reference group U XISO

The condition S T ⊃ U XISO is equivalent to each of [T] = T XISO (a, c, e, f, k) for some a, c, e, f, k (142a), and to the corresponding intrinsic characterization for some t, λ 1 , λ 3 , λ 5 , λ 6 .

Table 4. The reference matrices. They are the matrices, with respect to B, of elastic maps associated with the eight reference groups (Table 3). An elastic map T has matrix [T] BB = T XISO , for example, if and only if S T ⊃ U XISO . An elastic map T likewise has matrix [T] BB = T CUBE if and only if S T ⊃ U CUBE , and so forth.

Reference group U CUBE

The condition S T ⊃ U CUBE is equivalent to each of eqs (143), the intrinsic characterization holding for some λ 1 , λ 4 , λ 6 (143b).

Reference group U TET

The condition S T ⊃ U TET is equivalent to each of [T] = T TET (a, c, d, e, f, k) for some a, c, d, e, f, k (144a), and to the corresponding intrinsic characterization for some t, λ 1 , λ 3 , λ 4 , λ 5 , λ 6 . The matrix T TET is the special case of T 4 where the horizontal twofold axes are at θ = nπ/4.

Reference group U ORTH

Here U is a 3 × 3 rotation matrix U = (u i j ), i, j = 4, 5, 6, and matrices B j (U ) are defined in terms of U. Then the condition S T ⊃ U ORTH is equivalent to each of eqs (146), for some U, λ 1 , . . . , λ 6 (146b). Eq. (146b) is simpler than it appears, since the matrices B 4 (U ), B 5 (U ), B 6 (U ) are a basis for the subspace B 4 , B 5 , B 6 consisting of the diagonal matrices (Table 2).

Reference group U MONO

The condition S T ⊃ U MONO is equivalent to each of eqs (147), for some r, U, λ 1 , . . . , λ 6 (147b), where in eq. (147b) the matrix U is now a 4 × 4 rotation matrix and where the B j (U ) are as in eq. (103). This is a repetition of eqs (108).

Reference group U TRIG

The condition S T ⊃ U TRIG is equivalent to each of [T] = T TRIG (a, c, e, f, k, m) for some a, c, e, f, k, m (148a), and to the corresponding intrinsic characterization for some t, u, λ 1 , λ 3 , λ 5 , λ 6 . The matrix T TRIG is the special case of T 3 where the horizontal twofold axes are at θ = π/2 + nπ/3. Thus the y-axis, not the x-axis, is one of the twofold axes.

Reference group U 1 = {I }

The condition S T ⊃ U 1 is satisfied for all T.

Only a one-way test for T 0 MONO

The matrix T 0 MONO is the matrix of T when S T ⊃ U MONO and when the double couple eigenvectors B 12 (r) and B 12 (r′) of T are B 1 and B 2 (in either order), so that their null axes are in the x and y coordinate directions. The condition [T] = T 0 MONO implies S T ⊃ U MONO , but the converse is false. To describe all elastic maps having twofold symmetry with axis vertical, one wants the matrix T MONO . On the other hand, every T having twofold symmetry is equivalent to an elastic map whose matrix with respect to B is T 0 MONO (for some a, b, . . .). The matrices T MONO and T 0 MONO are comparable to the matrices in eqs (3.29) of Helbig (1994).

13 SYMMETRY FOR A SUBSPACE OF M

When a subspace W of M is invariant under V we will also say that V is a symmetry of W. We thus have two notions of symmetry: one for an elastic map T (eq. 69), and one for a subspace W of M. Theorem 13, next, relates the two notions. Due to the close relation, subspace symmetry will be our key to identifying the symmetry of elastic maps, in Section 15. For example, a consequence of Theorems 13 and 16 is that if an elastic map T has a simple eigenvalue whose eigenvector (3 × 3 matrix) is generic (Fig. 7d), then the symmetry of T can only be orthorhombic, monoclinic, or trivial. Thus, trigonal, tetragonal, cubic, transverse isotropic, and isotropic symmetry can often be ruled out by casual inspection of the eigensystem for T. Theorem 13.
A rotation V ∈ U is a symmetry of an elastic map T if and only if V is a symmetry of each eigenspace of T. Proof. The theorem is just a paraphrase of Theorem 1. When V is a symmetry of a 1-D subspace W = E , we will also say that V is a symmetry of E itself. Theorems 14 and 16 show that the symmetries of E are easy to recognize from the beachball for E. Proof. First suppose V is a symmetry of E. Since E ∈ E then so is V (E). Since E is 1-D, then, for some number t, The condition V (E) = −E severely constrains E. If μ 1 , μ 2 , μ 3 are the eigenvalues of E in descending order, then −μ 3 , −μ 2 , −μ 1 are the eigenvalues of −E in descending order. Since for any V ∈ U the matrices E and V (E) have the same eigenvalues, then V (E) = −E implies μ 3 = −μ 1 and μ 2 = 0. Thus, We mentioned in Section 3.1.1 that the beachball for V (E) is the result of applying the rotation V to the beachball for E. Informally, Theorem 14 says that V is a symmetry of E if and only if the rotated ball differs from the original ball by at most a swapping of red with white. This of course assumes that the ball for E is bicoloured, not just one solid colour. Fig. 9 illustrates Theorem 14 and eq. (153). The rotation Z π/2 is a symmetry of each of the five subspaces in the figure. The 1-D subspaces are B 34 (s) , B 34 (s ) , B 56 (t) , and B 56 (t ) . Using Z π/2 to rotate the beachballs for the matrices B 56 (t) and B 56 (t ) has no effect on the appearance of the balls. Doing the same for B 34 (s) and B 34 (s ), which are double couples, has the effect of reversing red and white on each ball. We denote the group of symmetries of E ∈ M by S(E): For a subspace W of M, we likewise use the notation S(W) to refer to the group of symmetries of W. Given W, we can consider the elastic map T such that Since the symmetries of W are the same as those of W ⊥ (eq. 46), they are also the symmetries of T, by Theorem 13. The group S(W) is therefore an elastic symmetry group. There are not many possibilities for a symmetry V of E. If E is diagonal and generic (eq. 92) then S(E) = U ORTH (156d) Proof. The theorem should seem plausible just from beachball pictures. For algebraic proofs of eqs (156b) and (156d) see Appendix A of Tape & Tape (2012). A variation of the argument for eq. (156d) shows that if E is a double couple then the rotations V that give V (E) = −E are the two 180 • rotations about the fault plane normals, together with the ±90 • rotations about the null axis. Together with eq. (156d), this gives eq. (156c). From Theorem 15 we get, more generally, Theorem 16. (S(E) for arbitrary E). (i) If E is generic then its symmetry group S(E) is conjugate to U ORTH . The non-trivial members of S(E) are the three twofold rotations about the principal axes of E. (ii) If E is a double couple, then S(E) is conjugate to U TET . The null axis of E is the fourfold axis of S(E), and the T and P axes of E are two of the twofold axes of S(E). (iii) If E is a crack matrix, then S(E) is conjugate to U XISO . The c-axis of E is the regular axis of S(E). Thus the symmetry of E is obvious from the (perhaps perturbed) beachball for E. Symmetry groups S(W) for selected subspaces W of M were given in Table 2. As an example, we derive S(W) for W = B 4 , B 5 . The subspace W is B 6 ⊥ ∩ B 4 , B 5 , B 6 -the diagonal matrices that are deviatoric. Since conjugation preserves matrix trace, then S B 4 , B 5 , B 6 ⊂ S(W). 
Conversely, S(W) ⊂ S B 4 , B 5 , B 6 , since Elastic symmetry with beachball pictures 991 if V ∈ S(W) and F ∈ B 4 , B 5 , B 6 , then F = E + t I for some E ∈ W and t ∈ R, and Hence S(W) = S B 4 , B 5 , B 6 . From proposition 1 of Tape & Tape (2016) we know that S B 4 , B 5 , B 6 = U CUBE . Thus Section S4 of the Supporting Information has a picture proof of Eq. (158). Theorem 17. The eight reference groups are elastic symmetry groups. That is, for each reference group U (Table 3) there is an elastic map T for which S T = U. is the symmetry group of a subspace of M (Theorem 15 and eq. 158) and hence is an elastic symmetry group; see eq. (155). (vi) For U = U MONO . We can take where, say, s = 55 • , t = 40 • and where λ 1 , . . . , λ 6 are distinct, so that each of B 1 , . . . , B 56 (t ) is an eigenspace of T. The beachballs for B 1 , . . . , B 56 (t ) appear in Fig. 9, the balls for B 1 and B 2 being at (x 1 , x 2 ) = (1, 0) and (x 1 , x 2 ) = (0, 1). The symmetries common to all six beachballs are Z π and I, hence S T = U MONO . (vii) For U = U TRIG . We can take T as in eq. (148b) with λ 1 = λ 6 = 1, λ 3 = 2, λ 5 = 3, t = v = 0, and u = π/4. Then B 5 is an eigenspace of T and has symmetry group U XISO , hence S T ⊂ U XISO . The members of U XISO are the rotations Z ξ (any ξ ) and the horizontal twofold rotations Z θ X π Z θ (any θ ). Which of them are in S T ? The matrix of T with respect to B is Using the -test, we find that Z ξ is a symmetry of T if and only if ξ = n2π/3, and Z θ X π Z θ is a symmetry of T if and only if θ = π/2 + nπ/3. Hence S T = U TRIG . (viii) For U = U 1 . Let T be as in eq. (23). The eigenvalues of T are 3/5, 4/5, . . . , 8/5. Eigenvectors for eigenvalues 3/5 and 4/5 are The matrices G 1 and G 2 are generic and have no principal axis in common. The non-trivial symmetries of any generic matrix are the three twofold rotations about its principal axes, so the only symmetry common to the eigenspaces G 1 and G 2 of T is the identity. Hence S T = U 1 . Fig. 4 is the beachball picture for T. 4 T H E E L A S T I C S Y M M E T RY G RO U P S In Theorem 18 below we show that the symmetry group of every elastic map is the conjugate of some reference group (Table 3). Together with Theorem 17 this means that the elastic symmetry groups are exactly the conjugates of the eight reference groups. In a tour de force in their section 6, Forte & Vianello (1996) detail the long history of the problem of determining the number of elastic symmetry groups. We cannot possibly do justice to their recounting of it. We only mention that in the older literature the seemingly natural groups {I, Z π/2 , Z π , Z 3π/2 } and {I, Z 2π/3 , Z 4π/3 } were incorrectly considered to be elastic symmetry groups (e.g. Nye 1957Nye , 1985Cowin et al. 1991). (The discussions were not explicitly in terms of elastic symmetry groups, so our paraphrase is loose.) This would bring the number of elastic symmetry groups to ten, not eight. This of course counts conjugate groups as the same. Forte & Vianello (1996) gave a proof concluding that the correct number was eight, and other proofs appeared later, also concluding eight (e.g. Chadwick et al. 2001;Bóna et al. 2007). Our proof, also concluding eight, may nevertheless be of interest, due to its pedestrian approach. It mainly involves circles on a sphere, as in Figs 16, 17, 18. The proof is tedious, however, in that Lemma 4 requires consideration of various cases. 
Fortunately, the ideas in the proof of Lemma 4 are not needed elsewhere in the paper, so the proof can be skipped if desired. Neither Theorem 17 nor Theorem 18, however, should be regarded as mere formalities. Their conclusions are not obvious, as illustrated by the historical confusion over the two groups {I, Z π/2 , Z π , Z 3π/2 } and {I, Z 2π/3 , Z 4π/3 } alluded to above. In connection with Lemma 2, a point v on the unit sphere is a 'regular axis' for a group U of rotations if all rotations about v are in U. Thus v = ±001 are the regular axes for U XISO . Lemma 2. If a group U XISO is conjugate to U XISO , then there is no group U strictly between U XISO and U ISO . That is, If not, there is a rotation U in U − U XISO . If v 1 is one of the two regular axes for U XISO , then both v 1 and v 2 = U v 1 are regular axes for U, with v 1 = ±v 2 . Then U is all of U ISO , as illustrated in Fig. 15. Lemma 3. Let U be an elastic symmetry group containing distinct twofold rotations V 1 and V 2 . Let α be the angle between their rotation axes, here considered as lines rather than vectors, so that α ≤ 90 o . If α = 45, 60, 90 • , then U is either U ISO or a conjugate of U XISO . Proof. Since V 1 and V 2 are twofold, the product rotation V 1 V 2 has rotation angle 2α. (So does V 2 V 1 ; as vectors, the rotation axes of V 1 V 2 and V 2 V 1 are oppositely directed.) Since α = 0, 45, 60, 90 • , then 2α is regular. Since V 1 V 2 ∈ U, then U has a subgroup U XISO conjugate to U XISO , by Theorem 10. Thus U XISO ⊂ U, and so U must be U XISO or U ISO , from Lemma 2. We define a point v of the unit sphere to be an available twofold point for a group U of rotations if the angular distances between v and the axes of all twofold rotations in U are 45 o , 60 o , or 90 o . Note that if v is an available twofold point for U then so is −v. Lemma 4. For a subgroup U of U and for a twofold rotation V, let U(V ) be the smallest elastic symmetry group that contains U and V. Then if U is a conjugate of a reference group (Table 3), so is U(V ). Proof. Let v be one of the two points where the rotation axis of V intersects the unit sphere. (i) The case where v is not an available twofold point for U. There is a twofold rotation V ∈ U with rotation axis v such that v · v ≥ 0 and ∠(v, v ) = 45, 60, 90 • . Figure 15. Showing that a group U of rotations that has two regular axes v 1 = ±v 2 must contain all rotations. (A point v on the sphere is a 'regular axis' for U if all rotations about v are in U.) Rotating the regular axis v 1 through all angles about v 2 must result in regular axes for U, so the circle through v 1 with centre v 2 consists of regular axes for U. Rotating the circle through all angles about v 1 , thus covering the orange area, must also result in regular axes for U. Continuing is this fashion will soon cover the sphere with regular axes for U, thus making U = U ISO . If V = V , then V ∈ U and U(V ) = U. (Note that U is itself an elastic symmetry group, by Theorem 17 and eq. (133).) If V = V , then applying Lemma 3 to U(V ) shows that U(V ) is either U ISO or a conjugate of U XISO . (ii) The case where v is an available twofold point for U. If U = U MONO , then the point v is 45 • , 60 • , or 90 • from the north or south pole, and the group U(V ) is a conjugate of U TET , U TRIG , or U ORTH , respectively. (If U is only a conjugate of U MONO rather than being U MONO itself, the conclusion does not change.) If U = U ORTH the available twofold points for U are shown in Fig. 
16; the point v must be one of them. From the figure, the points v 1 = 101 and v 2 = 11 √ 2 are the only two essentially different possibilities for v. The case v = v 1 : Since ∠(v, 100) = 45 • and v × 100 ∝ 010, then U(V ) has a fourfold axis at 010. The group U(V ) is then the conjugate of U TET that has twofold axes at 101, 100, 101, and 001. (It is a subgroup of the group of symmetries of the dashed cube in Fig. 16b.) The case v = v 2 : Since v and the three twofold axes for U are edge midpoints or face centres of the dashed cube in Fig. 16c, then U(V ) must be a subgroup of the rotational symmetry group of the cube. Since ∠(v, 001) = 45 • and v × 001 ∝ 110, then U(V ) has a fourfold rotation with axis at (the face centre) 110. Since ∠(v, 100) = 60 • and v × 100 ∝ 0 √ 21, then U(V ) has a threefold rotation with axis at (the lower right cube vertex) 0 √ 21. The group U(V ) is therefore a conjugate of U CUBE . If U = U TET the group U(V ) is a conjugate of U CUBE . The argument is similar to that for U = U ORTH . If U = U TRIG then U(V ) is a conjugate of U CUBE or U XISO . See Fig. 17. If U = U CUBE there are no available twofold points for U; see Fig. 18. If U = U XISO there are also no available twofold points for U. Theorem 18. (The elastic symmetry groups are conjugates of the reference groups.) For any elastic map T the group S T of its symmetries is a conjugate of one of the eight reference groups U 1 , U MONO , . . . , U ISO in Table 3. That is, for each T there is a reference group U and a rotation matrix U such that S T = U U U . (Table 3). The group U has twofold axes coinciding with the coordinate axes. The available twofold points are the points on the sphere whose angular distances from each twofold axis of U is 45 Proof. The idea of the proof is to start with the trivial group {I } and add twofold rotations from S T one-by-one, and then to see what groups are generated. More precisely, we construct subgroups U k of U by The construction terminates when S T − U k contains no twofold rotation V k+1 to add. Until then, we have Figure 17. (a) Available twofold points (black dots) for U when U is the conjugate of U TRIG (Table 3) that has its rotation axes at 111, 101, 110, and 011 as indicated. The orange, green, and blue curves are arcs of circles of radii 45 • , 60 • , and 90 • centred on the twofold axes, and the available twofold points are the points where an orange, green, and blue curve all meet. Whether a twofold rotation V has its axis v at v 1 or v 2 , it will be in U CUBE , the symmetry group of the dashed cube. Since the rotations in U are also symmetries of the cube, then U(V ) is a subgroup of U CUBE . In fact, U(V ) = U CUBE . (If v = v 1 , then U CUBE is generated by V, 101 π , and 111 2π /3 . If v = v 2 , then U CUBE is generated by V and 111 2π /3 .) If v 1 and v 2 are rotated nπ/3 about 111, then the dashed cube can rotate with them. The conclusion does not change except that U(V ) and U CUBE are then conjugates rather than being equal. Finally, if V is a twofold symmetry with axis at 111, then U(V ) is a conjugate of U XISO , since the axis becomes sixfold and hence regular. Each U k is a subgroup of S T , and each U k is a conjugate of some reference group, by Lemma 4. Then, since the subgroup containments in eq. (163) are strict, there can be at most eight of the U k . (If U k is a conjugate of U XISO , then U k+1 = U ISO , by Lemma 2). 
Thus, for some k ≤ 8, Theorems 10, 11, 12 then tell us that the set S T − U k is not just devoid of twofold rotations, it is in fact empty. Then S T = U k and so S T is a conjugate of a reference group. With S T = U U U as in the theorem, we refer to U as the reference group for T. We then call the symmetry of T trivial, monoclinic, . . ., isometric according to whether the reference group is U 1 , U MONO , . . . , U ISO . Although T uniquely determines its reference group, the rotation matrix U is not unique, since U can always be replaced by UV, where V UV = U. For each elastic map T there is a 'characteristic solid' whose group of (rotational) geometric symmetries is S T . If the solid is sculpted out of the material whose elasticity is described by T, without reorienting it, then its elastic symmetries are the same as its . An orthonormal eigensystem for the elastic map T whose matrix with respect to B is given in eq. (165). The group S T of symmetries of T consists of the symmetries that are common to all of the eigenspaces of T . Since G 3 is generic, then by Theorem 16 the non-trivial symmetries of the eigenspace G 3 are the three twofold rotations about the principal axes of G 3 . The symmetries of G 4 are analogous, and so the only non-trivial symmetry common to G 3 and G 4 is the twofold rotation k π about k . That rotation is also seen to be a symmetry of the other two eigenspaces G 1 and G 2 , G 5 , G 6 , and so S T = {I, k π }; the symmetry of T is monoclinic. This agrees with the more formal treatment in the text, which finds that S T = U U MONO U for U = Y π/4 . The group S T is the group of rotational symmetries of the wedge-shaped solid at upper right. The view is from infinity, with k pointing directly at us, so as to emphasize the twofold symmetry. This view, however, does not show the wedge to advantage; the base of the wedge is in the plane of the paper. The coincident eigenvalues λ 2 = λ 5 = λ 6 are not typical of monoclinic symmetry (eq.147b). geometric symmetries. In Fig. 20 (next section), for example, the characteristic solid for T is the brick at the upper right. The elastic symmetries of T are obvious from the brick. If the material being considered is reoriented, its elastic map T is apt to change, and its elastic symmetry group S T = U U U is apt to change, but its reference group U will not. 5 F I N D I N G T H E S Y M M E T R I E S O F E L A S T I C M A P S In Sections 15.1-15.7 we find the symmetries of seven elastic maps T . To get an impression of the method, it is enough to read just one or two of the seven sections. Readers wanting to use the method themselves, however, will want the full repertoire of seven examples. Figure 20. An orthonormal eigensystem for the elastic map T whose matrix with respect to B is given in eq. (172). The group S T of symmetries of T consists of the symmetries that are common to all of the eigenspaces of T . Since the matrix G 4 is generic, then by Theorem 16 the non-trivial symmetries of the eigenspace G 4 are the three twofold rotations about the principal axes of G 4 . Those rotations are also symmetries of the other four eigenspaces G 1 , G 2 , G 3 , G 5 , G 6 . The symmetry of T is therefore orthorhombic, that is, S T = U U ORTH U for some U ∈ U, in agreement with the more formal treatment in the text. (The matrix U is given by eq. (175b); it is an eigenframe for G 4 , G 5 , and G 6 ). The group S T is the group of rotational symmetries of the brick shown. 
The coincident eigenvalues λ 5 = λ 6 are not typical of orthorhombic symmetry (eq. 146b). Given an elastic map T , we know from Theorem 18 that its symmetry group has the form S T = U UU , where U ∈ U and where U is one of the eight reference groups. For most T the reference group U can be found just by inspection of the beachball picture for T . Initially, we therefore recommend ignoring the main text and just looking at the beachball figures and their captions (e.g. Figs 19 and 20). First, however, review Theorem 16, so as to be able to recognize beachball symmetries. In the figures the rotation U gives the orientation of the beachballs. Although U can usually be guessed approximately and informally from the figure, the analytic approach described in the text is needed to find the matrix U explicitly and thus to give a complete description of the symmetry group S T . We treat the entries in our matrices [T ] as exact. Thus we are ignoring the important practical problem of how to incorporate observational uncertainties into our analyses. See, for example, Danek et al. (2015). Example: monoclinic We will find the symmetry group S T of the elastic map T whose matrix with respect to B is [T ] = An eigensystem of T is shown in Fig. 19. From the figure, The 1-D eigenspaces are G 1 , G 3 , G 4 . The matrix G 1 is a double couple and therefore has tetragonal symmetry, whereas G 3 and G 4 are generic and therefore have orthorhombic symmetry; see Theorem 16. Orthorhombic symmetry is more informative than tetragonal symmetry, in the sense that it puts more constraints on the symmetry of T . We will consider G 3 -the eigenvector of T with eigenvalue equal to 2. From eq. (165), (We have omitted the normalizing factor 1/(4 √ 5), which is inessential.) Diagonalizing gives G 3 = U H 3 U , where The matrix of the elastic map T = U And from eqs (166) and (47), From eq. (168a) the matrix H 3 is diagonal and generic. Hence from eq. (156d) the group S(H 3 ) of symmetries of H 3 is U ORTH = {I, X π , Y π , Z π }. Since H 3 is an eigenspace of T, then from Theorem 13, Using eq. (169) and applying the -test to X π , Y π , Z π , we find that only I and Z π are symmetries of T. Thus S T = {I, Z π } = U MONO , and then S T = U U MONO U ; the symmetry of T is monoclinic. (Here U can be replaced by the more transparent matrix Y π/4 , since U Z 3π/4 = Y π/4 .) The two matrices in the group S T are ⎛ The wedge at the upper right in Fig. 19 is the characteristic solid for T . If it had been sculpted out of the hypothetical material under consideration, without reorienting the material, then its geometric symmetries would be the same as its elastic symmetries. (All symmetries are understood to be rotational, as usual.) Example: orthorhombic We next find the symmetry group S T of the elastic map T whose matrix with respect to B is An eigensystem of T is shown in Fig. 20. From the figure, The 1-D eigenspaces are G 1 , G 2 , G 3 , G 4 . The matrices G 1 , G 2 , G 3 are double couples and therefore have tetragonal symmetry, whereas G 4 is generic and therefore has the more informative orthorhombic symmetry. We therefore consider G 4 -the eigenvector of T with eigenvalue equal to 4. From eq. (172), The matrix of the elastic map T = U And from eqs (173) and (47), The matrix H 4 is diagonal and generic (eq. 175a), and so the group S(H 4 ) of symmetries of H 4 is U ORTH = {I, X π , Y π , Z π }, from eq. (156d). Then The first subset containment is due to the matrix [T] in eq. 
(176) having the form of the reference matrix T ORTH in Table 4, and the second is due to H 4 being an eigenspace of T; see Theorem 13. From eq. (178) we have S T = U ORTH and then S T = U U ORTH U ; the symmetry of T is orthorhombic. Figure 21. An orthonormal eigensystem for the elastic map T whose matrix with respect to B is given in eq. (179). The group S T of symmetries of T consists of the symmetries that are common to all of the eigenspaces of T . Since the matrix G 3 is a double couple, then by Theorem 16 the group S G 3 of symmetries of the eigenspace G 3 is a conjugate of U TET ; here the fourfold axis for G 3 is in the direction of the arrow, and the four twofold axes perpendicular to it are the lines on the disk. The rotations in S G 3 are also symmetries of the other four eigenspaces, though this is harder to see for the eigenspace G 1 , G 2 , since it is 2-D. (See Fig. 9(a) for help.) The symmetry of T is therefore tetragonal, that is, S T = U U TET U for some U ∈ U, in agreement with the more formal treatment in the text. (The matrix U is given by eq. (182).) The group S T is the group of rotational symmetries of the square prism shown. The brick at the upper right in Fig. 20 is the characteristic solid for T . If it had been sculpted out of the hypothetical material under consideration, without reorienting the material, then its geometric symmetries would be the same as its elastic symmetries. Example: tetragonal Next we find the symmetry group S T for the map T whose matrix with respect to B is [T ] = An eigensystem of T is shown in Fig. 21. From the figure, The 1-D eigenspaces are G 3 , G 4 , G 5 , G 6 . The matrices G 3 and G 4 are double couples, and G 5 and G 6 are crack matrices. The double couples are more informative than the crack matrices. We consider G 3 -the eigenvector of T with eigenvalue equal to 3. From eq. (179), W. Tape and C. Tape The matrix of the elastic map T = U And from eqs (180) and (47), From eq. (182) the matrix H 3 is a double couple of the form in eq. (156c), and so S(H 3 ) = U TET . Then The first subset containment is due to the matrix [T] in eq. (183) having the form of the reference matrix T TET , and the second is due to H 3 being an eigenspace of T; see Theorem 13. From eq. (185) we have S T = U TET , and then S T = U U TET U ; the symmetry of T is tetragonal. For example, a fourfold rotation in S T is The square prism in Fig. 21 is the characteristic solid for T . Example: transverse isotropic Next we find the symmetry group S T of the map T whose matrix with respect to B is [T ] = An eigensystem of T is shown in Fig. 22. From the figure, The 1-D eigenspaces are G 5 and G 6 . Both G 5 and G 6 are crack matrices. We consider G 5 -the eigenvector of T with eigenvalue equal to 5. From eq. (187), Diagonalizing gives G 5 = U H 5 U , where Figure 22. An orthonormal eigensystem for the elastic map T whose matrix with respect to B is given in eq. (187). The group S T of symmetries of T consists of the symmetries that are common to all of the eigenspaces of T . Since G 5 is a crack matrix, then by Theorem 16 the group S G 5 of symmetries of the eigenspace G 5 is a conjugate of U XISO ; the group consists of all rotations about the c-axis of G 5 (arrow), together with all 180 • rotations about axes perpendicular to the c-axis. Since those rotations are also symmetries of the other three eigenspaces G 1 , G 2 , G 3 , G 4 , and G 6 , they are also the symmetries of T . 
The symmetry of T is therefore transverse isotropic, that is, S T = U U XISO U for some U ∈ U, in agreement with the more formal treatment in the text. (The matrix U is given by eq. (190b)). The group S T is the group of rotational symmetries of the cylinder shown. We always require U to be a rotation matrix, but here, with G 5 being a crack matrix, independent eigenvectors of G 5 are not necessarily orthogonal, so some care was required in getting U. The matrix of the elastic map T = U And from eqs (188) and (47), From eq. (190a) the matrix H 5 has the form in eq. (156b), and so S(H 5 ) = U XISO . Then The first subset containment is due to the matrix [T] in eq. (191) having the form of the reference matrix T XISO , and the second is due to H 5 being an eigenspace of T. From eq. (193) we have S T = U XISO , and then S T = U U XISO U ; the map T is transverse isotropic. The cylinder in Fig. 22 is the characteristic solid for T . If it had been sculpted out of the hypothetical material under consideration, without reorienting the material, then its geometric symmetries would be the same as its elastic symmetries. Figure 23. An orthonormal eigensystem for the elastic map T whose matrix with respect to B is given in eq. (194). Together, three features of the eigensystem imply cubic symmetry for T . First, there are exactly three eigenspaces. Secondly, the 1-D eigenspace G 6 = I is isotropic (eqs E1). Thirdly, the 2-D subspace G 4 , G 5 consists of matrices having a common eigenframe U (arrows). The symmetry group S T of T is then cubic, with S T = U U CUBE U . The group S T is the group of rotational symmetries of the cube shown. If the matrix [T] in eq. (191) had not had the form of T XISO , we would not have had the benefit of the first containment in eq. (193). We would then test the rotations in U XISO to see which are symmetries of T. The proof of Theorem 17 (vii) describes a comparable calculation. Example: cubic Next we find the symmetry group S T of the map T whose matrix with respect to B is One eigensystem of T is shown in Fig. 23 and given in Appendix E. From the figure, where W = G 1 , G 2 , G 3 . The lone 1-D eigenspace is G 6 = I , whose symmetry puts no constraints on the symmetry of T . All is not lost, however. From Fig. 23, the matrices G 4 and G 5 appear to have a common eigenframe U. Analytically, we find from eq. (194) that where H 4 and H 5 are given in eqs (E1) and where Since H 4 and H 5 are diagonal, then U is indeed a common frame for G 4 and G 5 . Letting T = U * • T • U , we have, from eqs (195) and (47), The 2-D subspace H 4 , H 5 , being orthogonal to I , consists of deviatoric matrices, and since they are diagonal, then H 4 , H 5 must be B 4 , B 5 , whose symmetry group is U CUBE (eq. 158). Then W must be B 4 , B 5 , B 6 ⊥ , whose symmetry group is also U CUBE . Writing the appropriate symmetry group above each summand, we have, from eq. (197), Thus S T = U CUBE (Theorem 13) and then S T = U U CUBE U ; the map T is cubic. The cube in Fig. 23 is the characteristic solid for T . Cubic symmetry in general We have now found the symmetry group of T in eq. (194). We can see from that example how cubic symmetry can arise more generally. For an arbitrary T and for U ∈ U, the map T has symmetry group S T = U U CUBE U if one of the following holds (for some subspace W and some numbers λ 1 , λ 2 , λ 3 ): The equations are not as daunting as they appear, since matrices in U B 4 , B 5 , B 6 U and U B 4 , B 5 U all have the common eigenframe U . 
The subspace U B 4 , B 5 , B 6 U consists of all such matrices, and U B 4 , B 5 U consists of those that are deviatoric. Both subspaces are therefore relatively easy to recognize. Eqs (199) are the only possibilities for cubic symmetry, as follows from eqs (47), (143b), and Theorem 13. Hence, from eq. (199a), if an elastic map T has exactly three eigenspaces, a necessary and sufficient condition for S T to be a conjugate of U CUBE is that one of the eigenspaces be I and that another be 2-D and consist of matrices all with a common eigenframe. (Fig. 23 is typical.) This is theorem 4.2 of Bóna et al. (2007). Similarly, if T has exactly two eigenspaces, a necessary and sufficient condition for S T to be a conjugate of U CUBE is that one of the eigenspaces be 3-D and consist of matrices having a common eigenframe (eq. 199b), or that one of the eigenspaces be 2-D and consist of deviatoric matrices having a common eigenframe (eq. 199c). Example: trigonal Next we find the symmetry group S T of the map T whose matrix with respect to B is [T ] = 1 16 An eigensystem of T is shown in Fig. 24. From the figure, W. Tape and C. Tape The 1-D eigenspaces are G 5 and G 6 . Both G 5 and G 6 are crack matrices. We consider G 5 -the eigenvector of T with eigenvalue equal to 5. From eq. (200), Diagonalizing gives where The matrix of the elastic map The matrix [T 0 ] has the form of T 3 in Table 1, so T 0 has a vertical threefold axis. Its horizontal twofold axes, from Theorem 12, are at θ = π/2 + π/18 + nπ/3. We therefore let The map T has its horizontal twofold axes at θ = π/2 + nπ/3. Its matrix is And from eq. (201), Eq. (203a) remains correct when U is substituted for U 0 , since Z ξ H 5 Z ξ = H 5 . (In changing U 0 to U , we are only rotating the eigenframe for the crack matrix G 5 about its c-axis.) From eq. (203b) the matrix H 5 has the form in eq. (156b), and so S(H 5 ) = U XISO . Figure 24. An orthonormal eigensystem for the elastic map T whose matrix with respect to B is given in eq. (200). The group S T of symmetries of T consists of the symmetries that are common to all of the eigenspaces of T . As in the transverse isotropic example of Fig. 22, the symmetry groups for the eigenspaces G 5 and G 6 are the same, with both of them conjugate to U XISO . Here, however, relying only on the figure, we do not see how to go further to find S T . Proceeding analytically, we find S T = U U TRIG U with U as in eq. (205); the symmetry of T is trigonal. The group S T is the group of rotational symmetries of the triangular prism shown. Then Using the -test and eq. (207) we then examine the members of U XISO to see which are symmetries of T. (The proof of Theorem 17 (vii) describes a comparable calculation.) The result is S T = U TRIG . The map T is therefore trigonal, with S T = U U TRIG U . The triangular prism in Fig. 24 is the characteristic solid for T . Example: trivial symmetry Let T be the elastic map with [T] BB as in eq. (23). In item (viii) in the proof of Theorem 17 we noted that the eigenvalues λ 1 and λ 2 of T were simple and that their eigenvectors G 1 and G 2 were generic, with no principal axis in common. Hence T had only the trivial symmetry. Fig. 4 is the beachball picture for T. The characteristic solid for T, not shown, would be an irregularly shaped solid. A sufficient condition for trivial symmetry of an arbitrary elastic map is that it have simple eigenvalues λ i and λ j with eigenvectors G i and G j that have only the trivial symmetry in common. 
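The sufficient condition just stated is easy to check numerically once two eigenvectors of T are available as 3 × 3 matrices: by Theorem 16, the non-trivial symmetries of a generic matrix are the twofold rotations about its principal axes, so two generic eigenvectors with no principal axis in common force trivial symmetry. The following is a minimal Python sketch; the helper names and the example matrices are ours.

import numpy as np

def is_generic(E, tol=1e-9):
    """Generic in the sense of the text: distinct eigenvalues and not a double
    couple (i.e. not deviatoric with middle eigenvalue zero)."""
    mu = np.sort(np.linalg.eigvalsh(E))
    distinct = np.min(np.diff(mu)) > tol
    double_couple = abs(mu.sum()) < tol and abs(mu[1]) < tol
    return distinct and not double_couple

def no_common_principal_axis(Gi, Gj, tol=1e-9):
    """True if no principal axis of Gi is parallel (up to sign) to one of Gj."""
    _, Vi = np.linalg.eigh(Gi)
    _, Vj = np.linalg.eigh(Gj)
    return not np.any(np.abs(np.abs(Vi.T @ Vj) - 1.0) < tol)

# Sufficient condition for trivial symmetry (see text): Gi and Gj are eigenvectors
# of simple eigenvalues of T, both generic, with no principal axis in common.
Gi = np.diag([3.0, 2.0, 1.0])                                        # generic
Gj = np.array([[1.0, 0.4, 0.2], [0.4, -1.0, 0.3], [0.2, 0.3, 2.5]])  # illustrative
print(is_generic(Gi) and is_generic(Gj) and no_common_principal_axis(Gi, Gj))
# expected: True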
Example: a defeat

Here is an example of an elastic map T where our method fails to identify its symmetry. Two eigensystems of T are shown in Fig. 25, one with orthonormal eigenvectors G 1 , . . . , G 6 , the other with orthonormal eigenvectors J 1 , . . . , J 6 . From the figure,

Figure 25. Two orthonormal eigensystems for the elastic map T whose matrix with respect to B is given in eq. (210). As always, the group S T of symmetries of T consists of the symmetries that are common to all of the eigenspaces of T . Here there are only two eigenspaces, one with eigenvalue λ 1 = λ 2 = λ 3 = 1 and one with λ 4 = λ 5 = λ 6 = 2. (a) The eigenvectors are G 1 , . . . , G 6 . The diagram gives no hint as to the symmetry of T . (b) The eigenvectors are J 1 , . . . , J 6 . The diagram shows that the symmetry of T is at least monoclinic. Mathematical software is unlikely to produce eigenvectors of T like J 1 , . . . , J 6 , and the symmetry of T is therefore undetectable by our method. The view is from infinity, with the vector k pointing directly at us, so as to emphasize the twofold symmetry.

where

As always, the symmetry group S T of T is the intersection of the symmetry groups of the eigenspaces of T . Neither of the eigenspaces W 1 and W 2 , however, is 1-D, which makes their symmetries harder to recognize. In fact in Fig. 25(a) we do not recognize either G 1 , G 2 , G 3 or G 4 , G 5 , G 6 as conjugates of any of the subspaces in Table 2, whose symmetry groups are known and would have helped. With only Fig. 25(a) to work with, we are at a dead end. In Fig. 25(b), however, twofold symmetry for T is clear. Given analytic expressions for J 1 , . . . , J 6 , we can confirm that the symmetry of T is monoclinic, with S T = U U MONO U and U = Y π/4 -the same as for T in Section 15.1. Mathematical software, however, when asked for eigenvectors of T here, is not apt to be so kind as to return J 1 , . . . , J 6 . Our method would therefore fail to find the symmetry of T . (We do, however, have an alternate method that succeeds.)

6 STABILITY

An elastic map T is said to be stable if

Equivalently, the matrix of T with respect to an orthonormal basis G should satisfy

Either of eqs (212) is equivalent to the eigenvalues of T being positive. Thus the elastic map T in Fig. 4 is stable, as seen from its eigenvalues. Had it been unstable, there would have been a colour reversal between the beachballs for G i and T(G i ) for at least one of the eigenvectors G i . Using intrinsic characterizations of elastic maps-for example eqs (143b) or (147b)-we can easily make up examples of stable elastic maps T that have prescribed symmetries; see eq. (49). Attempting the same using matrix characterizations will usually fail. If, for example, we choose each matrix entry a, b, . . . , p of T ORTH randomly between −1 and 1, the probability of getting a stable matrix is only ≈ 0.001. We can get some insight into why this should be so by considering the probability of getting a stable 2 × 2 symmetric matrix T = [a g; g b] when choosing each of the entries a, b, g randomly between −1 and 1. The fractional volume of the unit abg-cube occupied by stable matrices T can be visualized and then found to be only about 0.1. If the same experiment is performed with the arbitrary 6 × 6 symmetric T = T 1 of Table 4, thus choosing each entry a, b, . . . , v randomly between −1 and 1, the probability of getting a stable T is for all practical purposes zero; you cannot construct a stable matrix that way.
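The quoted probabilities are easy to check with a small Monte Carlo experiment (our illustration, not the authors' computation): draw the independent entries uniformly from [−1, 1], assemble the symmetric matrix, and count the draws whose eigenvalues are all positive. For the 2 × 2 case the fraction comes out near 1/9 ≈ 0.11, matching the "about 0.1" above; for a fully general 6 × 6 symmetric matrix it is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_symmetric(dim):
    """Symmetric matrix whose independent entries (upper triangle,
    including the diagonal) are uniform on [-1, 1]."""
    S = np.zeros((dim, dim))
    iu = np.triu_indices(dim)
    S[iu] = rng.uniform(-1.0, 1.0, size=len(iu[0]))
    return S + np.triu(S, 1).T

def stable_fraction(dim, trials=50_000):
    """Fraction of draws whose eigenvalues are all positive."""
    hits = sum(np.all(np.linalg.eigvalsh(random_symmetric(dim)) > 0)
               for _ in range(trials))
    return hits / trials

print(stable_fraction(2))   # ~0.11: the "about 0.1" for the 2x2 case
print(stable_fraction(6))   # tiny (often zero hits), as for the general 6x6 case
```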
Either of eqs (212) is equivalent to the more traditional characterization of stability in terms of the 6 × 6 Voigt matrix C. That is, T is stable if and only if

where C is from eqs (S13). Slawinski (2015) explains the physical meaning of eq. (213).

7 CONCLUSION AND AFTERTHOUGHTS

Two reminders: All of our elastic symmetries are rotational. The vector space M consists of all 3 × 3 symmetric matrices; its members can be thought of as strains or stresses. The elastic map T : M → M, assumed to be linear, relates strain and stress at a point p in some material. A symmetry of T is a rotation of the material about p that leaves T unchanged. Given an arbitrary T, we wish to find the group S T of all its symmetries.

In Sections 4-9, we describe elastic maps having the symmetry Z ξ -rotation through angle ξ about the z-axis. In Section 11, however, we find that the seemingly natural group {(Z ξ ) n : n ∈ Z} of integral powers of Z ξ is not an elastic symmetry group unless the angle ξ is 0 or π . That is, unless ξ = 0 or π , there is no elastic map T such that {(Z ξ ) n : n ∈ Z} = S T . This raises the question of what in fact are the elastic symmetry groups. That is, when is a group of rotations also the group of symmetries of some elastic map? The answer is given in Theorems 17 and 18: The elastic symmetry groups are the conjugates of the eight reference groups in Table 3. The proof of this fact does not assume that elastic symmetries arise from crystallographic symmetries; it is purely mathematical.

We have two notions of symmetry for a rotation. One is as a symmetry of an elastic map, as above, and the other is as a symmetry of a subspace W of M. In beachball terms, a rotation V is a symmetry of W if using V to rotate beachballs whose matrices are in W gives only beachballs whose matrices are also in W. The symmetries of an elastic map T turn out to be the symmetries that are common to its eigenspaces (Theorem 13). Since the symmetries of a subspace are often relatively easy to recognize, we are usually able to realize our original goal of finding the group S T of symmetries of T (Section 15). For more of a summary than that, we recommend the introduction. In this concluding section we only add a few comments that would not have made sense in the introduction.

The orthonormal basis B of M (eq. 3) makes the reference matrices simple (Table 4). In the literature, one encounters the basis defined in our eq. (S23); see, for example, eq. (2.5) of Mehrabadi & Cowin (1990) or eqs (4) and (5) of Bóna et al. (2007). That basis plays the same role as B, but it is less suited than B for the study of symmetry. Historically, it arose because it was orthonormal and because the matrix [T] was closely related to the Voigt matrix for T. The traditional Voigt matrix, defined in eq. (S13), is still used by some authors, but it is undesirable for reasons explained in Section S5.5.3. It has been an obstacle to understanding.

The fact that the groups {I, Z π/2 , Z π , Z 3π/2 } and {I, Z 2π/3 , Z 4π/3 } are not elastic symmetry groups was not always recognized and can cause some confusion. Nye (1957, 1985), for example, has ten matrices, not eight, that would be the analogs of our reference matrices.

The significance of Theorem 5 is apt to be missed. For an elastic map T, the theorem says that if Z ξ is a symmetry of T for some regular ξ (Fig. 1), then Z ξ is a symmetry of T for all ξ . We used the theorem in deriving the elastic symmetry groups.
The proof of the theorem looks easy, but the work had already been done in Lemmas 5 and 6 of Appendix B.

We are intrigued by the prime subspaces B(u, v) for Z ξ , ξ = 2π/3. The contrast with their tame counterparts for ξ = π/2 is striking; compare Fig. 11 with Fig. 9. We suspect that we are still missing some insights.

The results in this paper depend only on the elastic map being linear and self-adjoint. No other assumptions are involved.

ACKNOWLEDGEMENTS

We thank editor Duncan Agnew, reviewer Naofumi Aso, and an anonymous reviewer for their constructive suggestions. Chris Chapman and Michael Slawinski read early drafts of the paper and made helpful comments. CT was supported by National Science Foundation grant EAR 1829447.

DATA AVAILABILITY STATEMENT

There are no new data associated with this article. Mathematica notebook files for generating the beachball pictures for elastic maps are available at https://github.com/carltape/mtbeach.

SUPPORTING INFORMATION

Supplementary data are available at GJI online.

Section S1. Matrix versions of eqs (134) and (135). Section S2. A test for equivalence of elastic maps. Section S3. Cubic symmetry from trigonal. Section S4. A picture proof of Eq. (158). Section S5. Comparison with the traditional Voigt approach.

Figure S1 Beachballs for selected matrices in the subspace B(u 0 , 0) of M. The three arrows on each ball are the frame vectors (column vectors) of the common frame U (eq. S8b). The frame vectors are inclined to the paper at angle u 0 , the same for each, and their angular coordinates in the plane of the paper are −π/3, π/3, π. Compare with Fig. 11, where u = u 0 . Also see Fig. 12.

Figure S2 Beachball, upper left, for a typical matrix E in the subspace B 4 , B 5 . The six beachballs are for the matrices V (E) such that V is a symmetry of the unit cube. A TBP frame is shown for each ball (red, blue, and yellow arrows). The balls are scaled so as to be inscribed in the unit cube.

Please note: Oxford University Press is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the paper.

Lemma 1 Let U : V → V be unitary and let W be a non-zero subspace of V that is invariant under U. Then W is the orthogonal direct sum of subspaces of V that are prime for U.

Proof. The proof is by induction. Let P(n) be the statement that if W is a non-zero subspace of V with dim W ≤ n and if W is invariant under U, then W is the orthogonal direct sum of subspaces that are prime for U. The statement P(1) is true, vacuously. Assume P(n) and prove P(n + 1): Let W be a non-zero subspace with dim W ≤ n + 1 that is invariant (under U). If W is prime, then the sought-after orthogonal direct sum is W itself. If W is not prime, then, among the non-zero invariant subspaces of W, let W 1 be one of smallest dimension. We note: (i) The orthogonal complement W 1 ⊥ of W 1 in W is invariant by eq. (46). (ii) dim W 1 ⊥ ≤ n. (iii) W 1 ⊥ ≠ {0}. Thus P(n) can be applied to W 1 ⊥ , so that W 1 ⊥ is the orthogonal direct sum of prime subspaces W 2 , . . . , W j . Since W 1 is also prime, then W = W 1 ⊕ W 2 ⊕ · · · ⊕ W j is the desired orthogonal decomposition.

APPENDIX B: PRIME SUBSPACES FOR [Z ξ ] BB WHEN ξ IS REGULAR

Let A be a 6 × 6 matrix and let w ∈ R 6 be non-zero. In Appendices B and C we will be considering the smallest subspace w of R 6 that contains w and that is invariant under A. If w, Aw, . . .
, A j w are linearly dependent, then w = w, Aw, . . . , A j−1 w .

Since E is invariant, then w, Aw, A 2 w, A 3 w are all in E, and since dim E ≤ 3 they are linearly dependent. The determinant in eq. (B3) must therefore be zero. The factors involving ξ , however, are nonzero, since ξ is regular. Hence a = b = 0 or c = d = 0.

Lemma 6. If ξ is regular, the prime subspaces of R 6 for (multiplication by) [Z ξ ] BB are E 12 , E 34 , and e 56 (t) (any t).

Proof. Let A = [Z ξ ] BB . We look first for the subspaces of dim ≤ 3 that are prime for A. From eq. (83) we can see that the following subspaces are invariant under A. dim 3 E 12 ⊥ e 56 (t) E 34 ⊥ e 56 (t) dim 2 But are they prime, and have we found all of them? To that end, suppose that a subspace E has dim ≤ 3 and is prime (for A). Since it is prime, it contains a non-zero element w = (a, b, c, d, e, f ). Again since E is prime, the smallest invariant subspace w containing w must be all of E. Since dim E ≤ 3 then w = w, Aw, A 2 w , from eq. (B1). The rows w, Aw, A 2 w of the matrix K (2) in eq. (B2) therefore span E, and the same will be true for the rows of any matrix that is row equivalent to K (2). (i) The case c 2 + d 2 ≠ 0 (and necessarily a = b = 0, from Lemma 5): The matrix K (2) is row equivalent to a matrix showing that the subspace E is spanned by e 3 , e 4 , and e e 5 + f e 6 . If e 2 + f 2 ≠ 0, then E is the 3-D subspace E 34 ⊥ e 56 (t) for some t, but it is not prime, since it has proper invariant subspaces. If e = f = 0, then E is the 2-D space E 34 . (ii) The case a 2 + b 2 ≠ 0: Similar to (i). The only candidate for a prime subspace is E 12 . (iii) The case a = b = c = d = 0: The subspace E must be e 56 (t) for some t. Thus the only possible prime subspaces of dim ≤ 3 are E 12 , E 34 and e 56 (t). Since no one of them contains another, they are indeed prime. Could there be a prime subspace with dim > 3? If so, it would have dimension 4 or 5 (R 6 is not prime) and it would then have to be the orthogonal complement of an invariant subspace of dimension 1 or 2. But the invariant subspaces of dimension 1 and 2 are now known [bottom two rows of eq. (B6)], and their orthogonal complements, shown above them, are not prime:

APPENDIX C: PRIME SUBSPACES FOR [Z ξ ] BB WHEN ξ = 2π/3

We define the unit vector e(θ, u, v) in R 6 by

e(θ, u, v) = (cos θ) (cos u, 0, sin u cos v, sin u sin v, 0, 0) + (sin θ) (0, cos u, − sin u sin v, sin u cos v, 0, 0). (C1)
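As a numerical companion to eq. (C1) and to the subspaces spanned by w, Aw, A 2 w, . . . used in Appendices B and C, the sketch below (ours, with a generic matrix standing in for [Z ξ ] BB, whose explicit form is in eq. 83 of the paper) builds e(θ, u, v), confirms that it is a unit vector, and computes the dimension of the smallest invariant subspace containing w as the rank of a Krylov matrix.

```python
import numpy as np

def e_vec(theta, u, v):
    """Unit vector e(theta, u, v) in R^6 from eq. (C1)."""
    a = np.array([np.cos(u), 0.0, np.sin(u) * np.cos(v),
                  np.sin(u) * np.sin(v), 0.0, 0.0])
    b = np.array([0.0, np.cos(u), -np.sin(u) * np.sin(v),
                  np.sin(u) * np.cos(v), 0.0, 0.0])
    return np.cos(theta) * a + np.sin(theta) * b

def invariant_dim(A, w, tol=1e-10):
    """dim of the smallest A-invariant subspace containing w:
    rank of the Krylov matrix [w, Aw, ..., A^5 w]."""
    cols = [w]
    for _ in range(5):
        cols.append(A @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols), tol=tol)

print(np.linalg.norm(e_vec(0.3, 0.7, 1.1)))           # 1.0: e is a unit vector
A = np.random.default_rng(1).standard_normal((6, 6))  # generic stand-in matrix
print(invariant_dim(A, e_vec(0.3, 0.7, 1.1)))          # 6 for a generic A
```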
32,943.4
2021-05-11T00:00:00.000
[ "Physics", "Geology" ]
Transcriptome Profiling Reveals a Novel Mechanism of Antiviral Immunity Upon Sacbrood Virus Infection in Honey Bee Larvae (Apis cerana) The honey bee is one of the most important pollinators in the agricultural system and is responsible for pollinating a third of all food we eat. Sacbrood virus (SBV) is a member of the virus family Iflaviridae and affects honey bee larvae and causes particularly devastating disease in the Asian honey bees, Apis cerana. Chinese Sacbrood virus (CSBV) is a geographic strain of SBV identified in China and has resulted in mass death of honey bees in China in recent years. However, the molecular mechanism underlying SBV infection in the Asian honey bee has remained unelucidated. In this present study, we employed high throughput next-generation sequencing technology to study the host transcriptional responses to CSBV infection in A. cerana larvae, and were able to identify genome-wide differentially expressed genes associated with the viral infection. Our study identified 2,534 differentially expressed genes (DEGs) involved in host innate immunity including Toll and immune deficiency (IMD) pathways, RNA interference (RNAi) pathway, endocytosis, etc. Notably, the expression of genes encoding antimicrobial peptides (abaecin, apidaecin, hymenoptaecin, and defensin) and core components of RNAi such as Dicer-like and Ago2 were found to be significantly upregulated in CSBV infected larvae. Most importantly, the expression of Sirtuin target genes, a family of signaling proteins involved in metabolic regulation, apoptosis, and intracellular signaling was found to be changed, providing the first evidence of the involvement of Sirtuin signaling pathway in insects’ immune response to a virus infection. The results obtained from this study provide novel insights into the molecular mechanism and immune responses involved in CSBV infection, which in turn will contribute to the development of diagnostics and treatment for the diseases in honey bees. Transcriptome Profiling Reveals a Novel Mechanism of Antiviral Immunity Upon Sacbrood Virus Infection in Honey Bee Larvae (Apis cerana) INTRODUCTION The Asian honey bee Apis cerana is one of two honey bee species that have been truly domesticated and used in apiculture. It has adapted diverse environments, and its natural distribution is also broad, and is widely distributed in complex topographic regions with different habitats, diverse flora, and divergent climate in Asia (Hepburn and Radloff, 2011). Like its western counterpart, Apis mellifera, A. cerana also plays a vital role in agricultural production and biodiversity conservation. However, due to degradation of ecosystems, loss of biodiversity, overexploitation of natural resources, excessive use of pesticides, and introduction of exotic species, the original habitats of the Asian honey bee have shrunk by 75% over the past century and the populations of managed Asian honey bees have declined by 80% (He and Liu, 2011;Chen et al., 2017). As a result, in 2006, A. cerana was listed as an endangered species. A. cerana suffers from a variety of diseases caused by viruses, bacteria, fungi, and parasites. Of all the pathogens causing diseases in honey bees, Sacbrood virus (SBV) is the most dangerous pathogen of A. cerana. SBV belongs to Iflaviridae, a viral family of positive-sense single-stranded RNA viruses infecting insects (Chen and Siede, 2007). While SBV infects both brood and adult stages of honey bees, the larval stage is the most susceptible to SBV infection. 
Infected larvae fail to pupate while ecdysial fluid rich in SBV accumulates beneath the unshed larval cuticle, forming the sac, hence the name "Sacbrood." Sacbrood disease was first described in the western honey bee A. mellifera in 1913 and later in A. cerana in 1972 . Since then, the catastrophic outbreaks of the SBV disease have occurred periodically in a cycle of every 6-7 years, causing massive deaths and collapse of entire colonies in Asia. According to historical records, SBV disease killed 100% of A. cerana colonies in Thailand in 1976, 95% of A. cerana colonies in India in 1978, greater than 90% of A. cerana colonies in China and completely destroyed the Korean apiculture industry in 2010 (Bailey et al., 1982;Verma et al., 1990;Choe et al., 2012). The severe losses of A. cerana populations across Asia due to SBV disease were caused by a variety of strains of SBV reflecting their geographic isolations, namely Thai Sacbrood virus, Chinese Sacbrood virus, Korean Sacbrood virus, etc. (Bailey et al., 1982;Choe et al., 2012). Chinese Sacbrood virus (CSBV) is a geographic strain of SBV isolated from the Asian honey bee Apis cerana in China and has been catastrophic for the Chinese beekeeping industry . Nonetheless, the molecular mechanisms underlying SBV disease pathogenesis and host susceptibility to the viral infection remain poorly understood. The advent of deep sequencing technologies has revolutionized the biological sciences and enable the measurement of unbiased, large-scale, genome-wide gene expression patterns. In order to have a better understanding of host responses to CSBV infections, we employed an RNA-Seq approach (Wang et al., 2009;Marguerat and Bähler, 2010;Mutz et al., 2013) to unravel global host transcriptional changes in CSBV-infected larvae. Furthermore, we conducted RT-qPCR to validate the differential expression of selected differentially expressed genes (DEGs). The knowledge and information gained from this study will provide novel insights into mechanisms about how host antiviral immune responses are generated during the course of a natural infection by CSBV, thereby leading to the development of effective disease management strategies. Samples Collection and CSBV Identification CSBV infected 4-instar larvae with recognized SBV disease symptoms were collected from three A. cerana colonies maintained in the apiary of the Institute of Apicultural Research, Chinese Academy of Agricultural Sciences (CAAS) for transcriptome analysis and qRT-PCR. The three adjacent healthy colonies of 4-instar larvae that were identified to be negative for CSBV infection were used for sampling healthy larvae as a negative control. Individually collected larvae were treated with 75% alcohol and subjected to RNA extraction using the QIAGEN RNeasy Mini Kit following the manufacturer's instructions. The concentration of extracted RNA was measured by using a Nanodrop2000. The quality of RNA was confirmed using 1% TAE (Tris-acetate-EDTA) agarose gel electrophoresis. cDNA was synthesized from RNA using random hexamer primers and reverse transcriptase with the QIAGEN Reverse Transcription Kit following the manufacturer's instructions. PCR assay was performed for each cDNA sample to confirm the status of CSBV infection with a pair of CSBV primers (Forward: 5 -GACCCGTTTTCTTGTGAGTTTTAG-3 ; Reverse: 5 -GTGTAGCGTCCCCCTGAATAGAT-3 ) (Ma et al., 2011). 
The specificity of the PCR product was visualized on 1% agarose gel electrophoresis, sequenced, and analyzed using the BLAST server at the National Center for Biotechnology Information, NIH. Host RNA-Sequencing The Illumina Hiseq sequencing platform was used for transcriptome sequencing of ten CSBV infected larvae and nine health larvae. An Illumina PE library was constructed for 2 × 150 bp sequencing, and the obtained sequencing data were subject to quality control. The full-length cDNA was fragmented and ligated to an Illumina paired-end adapter for PCR enrichment and sequencing. The Illumina reads were processed to remove low-quality sequences, adaptor and to eliminate sequences of rRNA and tRNA. RNA-Seq Data Analysis The information about the methods used to analyze the transcriptomic data is as follow: first, SeqPrep software was used to trim the raw data. (a) removed the adaptor sequence in reads, and removed the reads that have not been inserted due to the adaptor self-connection; (b) the base with low quality (quality value less than 20) at the end (3 end) of the sequence was pruned away; (c) removed reads with N ratio over 10%; (d) discarded the adapter and any sequence whose length was less than 20 bp after quality pruning. SeqPrep softwares 1 were used to deal with quality trimming. Secondly, Mapping reads to reference genome of honeybee (Apis cerana) and analysis profile of gene expression. The TopHat2 (Trapnell et al., 2009) software was used to map reads to reference genome of Apis cerana (version: ACSNU-2.0). RSEM software (Li and Dewey, 2011) was used to calculate gene expression level, and FPKM (Fragments Per Kilobase of Transcript per Million Fragments Mapped) (Mortazavi et al., 2008) was used as criteria of measuring gene expression. For differential expression analysis, the edgeR (Robinson et al., 2010) was used to conduct differential gene expression analysis. Differential expression calculation was based on gene read count and a negative binomial distribution model. In this case, the screening criteria of significantly differentially expressed genes was FDR < 0.05 and | log2FC| ≥ = 1. Clustering analysis was conducted to gain expression patterns of Differentially Expressed Genes (DEGs). The distance algorithm was adopted (Spearman correlation between samples, Pearson correlation between genes, and the complete algorithm were adopted in the distance method). For functional analysis of differentially expressed genes, Goatools (Klopfenstein et al., 2018) was used to perform Gene Ontology (GO) enrichment analysis based on Fisher's exact test. The genome of A. cerana was used as the background to determine GO terms enriched in the DEG dataset using the hypergeometric test. FDR, as a corrected P-value, was used to control the false positive rate, FDR (<0.05) as a threshold to identify significantly enriched terms. DEGs were classified into three categories; biological process (BP), cellular components (CC), and molecular function (MF). Then, the same enrichment method was used to conduct KEGG pathway enrichment analysis. KOBAS (Xie et al., 2011) was performed to identify significantly enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways in the DEG datasets based on Fisher's exact test and a corrected P-value (FDR ≤ 0.05) as a threshold. Next, Qiagen's Ingenuity Pathway Analysis system (IPA 2 ) was used to analyze the significant differentially expressed genes (DEGs, FDR ≤ 0.05). 
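Before the IPA step is described, the edgeR-style screening criterion given above (FDR < 0.05 and |log2FC| ≥ 1) can be written out as a small pandas sketch. This is an illustration with made-up values, not the authors' code or their actual results table.

```python
import pandas as pd

# hypothetical edgeR-style output: one row per gene
results = pd.DataFrame({
    "gene":   ["abaecin", "defensin", "HK", "CS", "geneX"],
    "log2FC": [3.2, 2.1, -1.4, -1.1, 0.3],
    "FDR":    [1e-4, 3e-3, 0.01, 0.04, 0.6],
})

# screening criteria used in the study: FDR < 0.05 and |log2FC| >= 1
degs = results[(results["FDR"] < 0.05) & (results["log2FC"].abs() >= 1)]

up   = degs[degs["log2FC"] > 0]   # upregulated in infected larvae
down = degs[degs["log2FC"] < 0]   # downregulated in infected larvae
print(len(degs), len(up), len(down))   # 4 2 2
```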
Briefly, the DEGs were first converted to their Drosophila melanogaster homologs based on BLAST (BLAST alignment parameter: e-value of 1e-5), then uploaded into the IPA system for core analysis and overlaid with the built-in knowledge base. 1 https://github.com/jstjohn/seqprep 2 www.ingenuity.com

Quantitative PCR for Quantification of Candidate DEGs

The expression levels of selected DEGs were confirmed by quantitative PCR (qPCR) in individual CSBV-infected bees from three colonies (10 bees/colony). Healthy worker larvae (10 bees/colony) were collected from three colonies as a control. The primer pairs evaluated in the study are included in Supplementary Table 9. Total RNA extracted from each bee sample was used in cDNA synthesis for downstream qPCR. qPCR was performed in a total reaction volume of 20 µL containing the following reagents: 10 µL SYBR Green PCR Master Mix (2×), 2 µL QN ROX Reference Dye, 0.8 µL of each primer (10 µmol/L), 4.4 µL ddH 2 O, 2 µL cDNA. The qPCR reactions were performed on a MX3000P system (Axygen) using the following cycle conditions: 95 °C for 3 min, 35 cycles of 95 °C for 30 s, 60 °C for 30 s, and 72 °C for 30 s. A melting curve was observed at the end of each run to confirm the specificity of each primer. Each sample was carried out in triplicate. Each cDNA sample was normalized by the mRNA level of the housekeeping gene beta-actin. The relative expression levels of the selected DEGs were interpreted by the comparative Ct method (2^(−ΔΔCt)) (Livak and Schmittgen, 2001).

Sequencing Data Quality Control and Detection of Chinese Sacbrood Virus in Two Groups

The quality control results of the host sequencing data showed that the Q30 percentage value of the 19 samples was at an average of 94%, and the base error rate was at an average of 1%. After filtering the host RNA-Seq reads, the clean reads of the control samples were at an average of 1.1% and the clean reads of the treatment group were at an average of 52%. The relative abundance of CSBV reads in the CSBV-infected group was up to about 97.6% on average, while that of the healthy group was zero (Figure 1A and Supplementary Tables 1, 2).

RNA-Seq Analysis of the Host Transcriptome

The Analysis of Correlation of All the Nineteen Samples

We used RNA-seq data to analyze the correlation of all nineteen samples. The nine samples of the healthy group showed a similar expression pattern. Seven of the ten samples in the CSBV-infected group showed a similar expression pattern, while the other three samples were much more like those of the healthy group (Figure 1B and Supplementary Table 3).

Differentially Expressed Genes (DEGs) Function Analysis

Using RNA-Seq to obtain genome-wide expression profiles of the DEGs in the larval bodies of the CSBV-infected larval group and the healthy larval group, we identified 2,534 genes that were significantly differentially expressed (FDR < 0.05; Supplementary Table 4). In the CSBV-infected group, 1,689 genes were upregulated and 845 genes were downregulated (Figure 1C). The DEGs of the CSBV-infected group and the healthy group had distinct expression profiles and were clearly separated according to hierarchical clustering analysis. In fact, most DEGs showed opposite expression patterns between the two groups; i.e., the DEGs that were downregulated in the healthy group were upregulated in the CSBV-infected group (Figure 1D).
(B) The correlation of the samples. The closer the correlation coefficient is to an absolute value of one, the higher the similarity among samples. The samples drawn with a bigger size and a darker color have a stronger correlation with each other. The correlation among the samples is calculated from the FPKM values. (C) The volcano plot of the differentially expressed genes. The red dots are upregulated genes, and the blue dots are downregulated genes. The gray dots are genes that were not significantly differentially expressed. The x-axis is the log of the fold change of FPKM between the treatment group and the control group. The y-axis is the negative log of the FDR. FPKM is used to standardize the relative abundance of gene expression. The screening criteria for significantly differentially expressed genes are FDR < 0.05 and |log2(Fold change)| > 1. (D) The expression profile of the differentially expressed genes. Red is upregulated, and blue is downregulated. FPKM is used to standardize the relative abundance of gene expression. The z-score of FPKM is used to draw the heatmap. Genes with P < 0.05 were used for the clustering here.

Specifically, the genes of antimicrobial peptides (AMPs) including abaecin, hymenoptaecin, apidaecin, and defensin had significantly higher levels of expression in the CSBV-infected group, compared to the healthy group. All four AMPs were significantly different between the two groups (P < 0.05; Figures 2A, 3). Gene ontology analysis revealed that the DEGs were significantly enriched in metabolic pathways including pyruvate metabolic process, glycolytic process, gluconeogenesis, carbohydrate metabolic process, oxidoreductase activity, and transmembrane transport (Figure 4 and Supplementary Table 5). The DEGs enriched in the KEGG pathways included carbohydrate metabolism, glycolysis, biosynthesis of amino acids, starch and sucrose metabolism, pyruvate metabolism, galactose metabolism, the pentose phosphate pathway, and neuroactive ligand-receptor interaction (Figure 5 and Supplementary Table 6). The DEGs involved in antiviral response-associated signaling pathways were almost all significantly upregulated. The gene Toll, which encodes a Toll-like receptor, was upregulated. The gene Basket, involved in the Imd pathway, indirectly induced the production of antimicrobial peptides and apoptosis. Two key components of the RNAi pathway, Argonaute-2 (Ago2) and Dicer-like, were also upregulated (Figures 6A,B).

FIGURE 2 | Heatmaps of candidate differentially expressed genes (DEGs). The function "aheatmap" in the R package "NMF" was used to analyze the clustering. The distance measure used in clustering rows and columns was "euclidean," and the clustering method used to cluster rows and columns was hclust with "complete." (A) The expression pattern of the differentially expressed genes including antimicrobial peptides, RNA interference, and Toll-like receptor genes. (B) The expression pattern of the differentially expressed genes involved in the tricarboxylic acid (TCA) cycle of energy metabolism. The rate-limiting enzymes DLD, CS, DLST, and IDH1 were down-regulated in the infected group. (C) The expression pattern of the differentially expressed genes involved in glycolysis. The rate-limiting enzymes HK, PFK, and PK were down-regulated in the infected group. (D) The expression pattern of the differentially expressed genes involved in the Sirtuin pathway. The genes whose RNA-seq data were consistent with the qPCR verification data were ACLY, ATP5F1B, G6PD, and PFK. These genes are also closely involved in energy metabolism.

In particular, the expression of genes involved in carbohydrate metabolism and energy metabolism such as the citrate cycle,
glycolysis, the pentose phosphate pathway and oxidative phosphorylation was significantly altered in CSBV-infected bees. Further, the results also showed that genes encoding citrate synthase (CS), isocitrate dehydrogenase (IDH1), dihydrolipoyl dehydrogenase (DLD), and dihydrolipoyl transsuccinylase (DLST), which are enriched in the TCA cycle, an important aerobic pathway for the final steps of the oxidation of carbohydrates and fatty acids, were significantly downregulated. In the process of glycolysis, all three rate-limiting enzymes, hexokinase encoded by HK, phosphofructokinase encoded by pfkA, and pyruvate kinase encoded by PK, were also downregulated. Some subunits of NADH dehydrogenase, succinate dehydrogenase, cytochrome c oxidase, F-type ATPase, and V-type ATPase of the oxidative phosphorylation pathway were also downregulated (Figures 2B,C and Supplementary Table 7). Most importantly, we found that 15 DEGs were associated with the Sirtuin signaling pathway (Figure 2D and Supplementary Table 8). For example, ATG4D is necessary for autophagy and regulated by Sirt1. Both ACLY and ACSS2 participated in the synthesis of Acetyl-CoA. ACLY was an enzyme of the TCA cycle in the cytoplasm, while ACSS2 was in the mitochondria. SOD2 was involved in oxidative stress and ROS detoxification. The other three DEGs, PGAM, G6PD, and ATP5F1B, were involved in oxidative phosphorylation, the pentose phosphate pathway, and ROS accumulation. The other DEGs were involved in pathways of adipogenesis and apoptosis (Figure 8).

FIGURE 3 | The expression levels of the four AMPs in the treatment and the control. The four antimicrobial peptides are abaecin, apidaecin, defensin, and hymenoptaecin. FPKM is the criterion used to represent the relative abundance of the four genes. The analysis of significant difference was conducted by t-test (abaecin: P = 0.0023; apidaecin: P = 0.0142; defensin: P = 0.0044; hymenoptaecin: P = 0.0039).

DISCUSSION

The association of CSBV with high mortality of Asian honey bee colonies has led to an increased awareness of the risks of viral infections on bee health. It is critically important to understand which host genes were repressed or activated in response to the virus infection, thereby identifying prognostic biomarkers and drug targets for the early diagnosis and treatment of the disease. There are two key findings of the present research. First, the functional analysis of the DEGs revealed that the expression of genes involving immune defense and energy metabolism was significantly altered in response to the CSBV infection (Figure 2), reflecting an intrinsic connection between metabolism and the innate immune system (Ayres, 2020). Second, the regulation of Sirtuin gene expression provides the first evidence of the involvement of the Sirtuin signaling pathway in honey bees' response to CSBV infection (Figure 7). This study represents the first comprehensive analysis of host changes on a global scale upon CSBV infection, and the information obtained from this study will contribute significantly to the rational design of drugs for the treatment of honey bee viral diseases. The innate immunity represents insects' first line of defense against invasion of infectious agents and is composed of cellular and humoral responses (Hoffmann, 1995).
Humoral immune response is characterized by the synthesis and secretion of antimicrobial peptides (AMPs), small cationic peptides that penetrate microbial membranes and kill pathogens directly (Zasloff, 2002). The rapid production of AMPs in response to disease infection is also an integral part of honey bee humoral immunity. The expression of genes encoding AMPs in honey bees is regulated by the intracellular signaling pathways Toll and Imd/JNK and can be induced by infection of bacteria, fungi, and viruses (Evans et al., 2006; Nazzi et al., 2012; Flenniken and Andino, 2013; Chen et al., 2014; Kuster et al., 2014). A previous study reported that expression of AMPs was upregulated in adult honey bees after injections of Escherichia coli, saline buffer, the bee pathogen Paenibacillus larvae, or wounding (Evans et al., 2006). Our transcriptome profiling revealed many DEGs associated with honey bees' immune responses to CSBV infection (Figure 4 and Supplementary Table 4). In accordance with previous reports, the expression of well-known AMPs including apidaecin, hymenoptaecin, abaecin and defensin was found to be upregulated in CSBV-infected larvae (Figure 3). This suggests that AMP genes are induced not only by bacteria and fungi but also by some kinds of viruses like CSBV, DWV, and IAPV.

FIGURE 4 | GO enrichment plot of differentially expressed genes (DEGs) between the infected group and the healthy group. The x-axis is the negative logarithm of the p-value, and the y-axis is the GO term. GO terms show entries for the top 14 (P bonferroni < 0.05).

Honey bee antiviral responses include not only immune pathways but also RNA interference (RNAi) (Mukherjee and Hanley, 2010; Kingsolver et al., 2013; Merkling and van Rij, 2013). Deformed wing virus (DWV) is the most common virus found in honey bee colonies. The immune pathways mounting a response against DWV infection in honey bees include RNAi, Toll, Imd, autophagy, and endocytosis (Nazzi et al., 2012; Chen et al., 2014; Kuster et al., 2014; Ryabov et al., 2014). In our study, the same immune signaling pathways were found to be triggered by CSBV infection. These included the upregulation of AMP genes controlled by the Toll and Imd pathways, as well as changes in autophagy- and oxidative phosphorylation-related genes (Supplementary Table 7). Our study also showed that the expression of two core components of the RNAi pathway, Dicer-like and Argonaute-2 (Ago2), was upregulated in CSBV-infected larvae (Figure 2A). RNAi-mediated antiviral defense has been identified and described in honey bees (Niu et al., 2014; Brutscher and Flenniken, 2015). When honey bees were experimentally infected with Israeli acute paralysis virus (IAPV), the levels of Ago-2 and Dicer-like expression were markedly upregulated as compared to the negative control of uninfected bees (Galbraith et al., 2015). The upregulation of Dicer-like and Ago2 expression in CSBV-infected larvae observed in our study suggests that the RNAi machinery was triggered by the CSBV infection and presumably exerted its function to cleave the viral RNA through the action of Dicer-like and Argonaute (AGO) proteins. This result provides evidence that the RNAi pathway is implicated in honey bees' antiviral response to CSBV infection, and suggests that RNAi is a general antiviral response mechanism in virus infection. Viruses depend entirely on their hosts' cellular metabolism as energy resources to support their replication (Sanchez and Lagunoff, 2015).
As a result, virus infections can dramatically alter host metabolic pathways, and in turn the infection-induced alteration in metabolism can influence hosts' immune responses to viral infections. Our results of RNA-seq and qPCR analyses showed that the DEGs associated with the tricarboxylic acid (TCA) cycle and energy metabolism, including ACLY (ATP citrate lyase), ACSS2, ATP5F1B, SOD2, PGAM, and G6PD, were downregulated in the CSBV-infected bees. Concurrently, a large number of genes encoding NADH dehydrogenase, succinate dehydrogenase, the cytochrome bc1 complex, cytochrome c oxidase, and ATP synthase that are involved in oxidative phosphorylation and fatty acid oxidation were downregulated. Also, genes encoding phosphofructokinase (PFKA), which is one of the most important regulatory enzymes in glycolysis, and G6PD, glucose-6-phosphate dehydrogenase involved in the pentose phosphate pathway (PPP), were found to be down-regulated in CSBV-infected larvae (Figures 8A,B and Supplementary Tables 7, 8). These interconnected metabolic pathways, including the TCA cycle, glycolysis, the pentose phosphate pathway (PPP), oxidative phosphorylation, and fatty acid oxidation, were reported to have a role in the immune responses to various infectious diseases at both the cellular and the organismal levels (Ayres, 2020). The down-regulation of genes associated with diverse functions of cellular metabolic pathways in our study indicated that the energy metabolism/biogenesis required for proper immune functions was seriously altered due to CSBV infection.

FIGURE 5 | KEGG pathway enrichment plot of differentially expressed genes (DEGs) between the infected group and the healthy group. The x-axis is the negative logarithm of the p-value, and the y-axis is the KEGG pathway (P-value < 0.05).

Sirtuins are a highly conserved family of proteins whose activity can prolong the lifespan of model organisms such as yeast, worms and flies (Verdin et al., 2010). Sirtuins are class III histone deacetylases that use NAD + as a co-substrate for their enzymatic activities. In mammals, there are seven Sirtuin members (SIRT1-7) that are present in different cellular compartments, with SIRT1, SIRT6, and SIRT7 predominantly localized to the nucleus, SIRT2 located in the cytoplasm, and SIRT3, SIRT4, and SIRT5 being mitochondrial (Vassilopoulos et al., 2011). The mitochondrial sirtuins function in energy production, metabolism, apoptosis and intracellular signaling (Verdin et al., 2010). There are multiple enzymatic activities associated with SIRTs. In addition to their deacetylase activity, SIRTs are involved in other enzymatic activities including ADP ribosylation (SIRT1, SIRT4, and SIRT6), desuccinylation and demalonylation (SIRT5), delipoylation (SIRT4), and demyristoylation and depalmitoylation (SIRT6) (Koyuncu et al., 2014; Budayeva et al., 2016). The core molecular machinery of autophagy, known as the "autophagy proteins," orchestrates diverse aspects of cellular responses to dangerous stimuli such as infection, and the autophagy pathway and proteins play a crucial role in immunity and inflammation (Levine et al., 2011). A study reported that SIRT1 has been associated with the induction of autophagy and the regulation of inflammatory mediators (Owczarczyk et al., 2015). A more recent study shows that SIRT1 regulates mitochondrial function and immune homeostasis in respiratory syncytial virus infected dendritic cells (Elesela et al., 2020).

FIGURE 6 | (A) The signaling pathways of the canonical innate immune response of the honey bee, including the Toll pathway, the Imd/JNK pathway, the JAK/STAT pathway, and the RNAi pathway.
The genes filled with yellow are differentially expressed genes (DEGs). The genes with a red border are upregulated, and the genes with a blue border are downregulated. (B) The DEGs (Dicer-like and Ago2) were involved in RNA interference (Dicer-like: P = 1.75e-08; Ago2: P = 9.45e-11).

FIGURE 7 | (A) The differentially expressed genes (DEGs) involved in the Sirtuin signaling pathway. Sirtuins are class III histone deacetylase enzymes that use NAD + as a co-substrate for their enzymatic activities. There are 15 DEGs involved in the Sirtuin pathway, and the DEGs are associated with energy metabolism, oxidative stress and apoptosis. (B) ATG4D, associated with autophagy, was significantly differentially expressed between the infected group and the healthy group.

SIRT2, a nicotinamide adenine dinucleotide (NAD + )-dependent class III histone deacetylase, was found to be upregulated in wild-type hepatitis B virus (HBV WT)-replicating cells, leading to tubulin deacetylation (Piracha et al., 2018). SIRT3, SIRT4, and SIRT5 are mitochondrial deacetylases regulating a wide range of metabolic pathways that are known to be altered during viral infection, as confirmed in our study (Koyuncu et al., 2014; Betsinger and Cristea, 2019). Of the fifteen DEGs associated with and regulated by the Sirtuin signaling pathway in our study (Supplementary Table 8), ATG4D (Autophagy Related 4D Cysteine Peptidase), a regulator of autophagy, a mechanism that protects cells from degradation under stress conditions such as energy deprivation and virus infection, was significantly upregulated after CSBV infection. Meanwhile, genes encoding the subunits of dehydrogenase, cytochrome c oxidase, and ATPase of the oxidative phosphorylation pathway, as well as PFKA (phosphofructokinase) of glycolysis, were downregulated in CSBV-infected larvae. It is conceivable that the down-regulation of metabolic pathways that are mediated by SIRTs would lead to insufficient fuel supply and immune suppression in infected bees. In sum, this paper provides valuable insights into honey bee transcriptome responses to CSBV infection. Given the significant finding that CSBV-infected larvae displayed a marked upregulation of many genes involved in the Sirtuin signaling pathway, we speculate that sirtuin inhibitors could be potent antiviral agents against CSBV infection in honey bees.

CONCLUSION

Sacbrood virus has a negative influence on honey bee larvae, and the interaction between the host and the virus is complex. We showed that a large number of genes were greatly changed in their transcriptional regulation. Many genes related to energy metabolism were downregulated, and other differentially expressed genes play important roles in immunity, oxidative stress, autophagy, and apoptosis. Interestingly, these important metabolic processes are involved in a novel regulatory pathway, the Sirtuin pathway. Our results showed that Sacbrood virus infection leads to activation of the immune system and dysfunction of the energy metabolism involved in the respiratory chain. These genes may serve as targets of antiviral therapy and provide a new strategy against honey bee virus infection.
DATA AVAILABILITY STATEMENT The raw sequence data reported in this paper have been deposited in the Genome Sequence Archive (88) in BIG Data Center (89), Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, under accession numbers CRA001558 that are publicly accessible at http://bigd.big.ac.cn/gsa.
6,615
2021-06-02T00:00:00.000
[ "Biology" ]
The Shear Strength of Granite Weathered Soil Under Different Hydraulic Paths: At present, there is no clear understanding of the influence of differences in soil mineral composition, particle size grading, and hydraulic paths on the shear strength of unsaturated soil, and the related strength models are not applicable. The shear strength characteristics of specimens at different saturations under different hydraulic paths were studied on two granite weathered soils. The experimental results show that the shear strength index of the prepared specimen is "arched" with the increase of saturation, and that of the dehydrated specimen decreases linearly with saturation. Considering the cementation of free oxides in soils and the interaction among soil particles at different saturations, it is assumed that there are three different contact modes among soil particles: direct contact, meniscus contact, and cement contact. The difference in contact modes will reflect the different laws of shear strength. A shear strength model capable of distinguishing between the capillary effect and the adsorptive effect was established. The model predicted and verified the shear strength data of granite weathered soil under different hydraulic paths well, and then theoretically explained the evolution law of the shear strength of granite weathered soil under the change of saturation.

Introduction

Shear strength is the ultimate ability of soil to resist shear force, which is one of the most important indexes of soil mechanical properties. The pressure of retaining walls, the bearing capacity of foundations, and the stability of landslides and collapses in a natural environment are all related to the shear strength of the soil. Soils in unsaturated states are much more commonly seen in nature [1][2][3]. The shear strength of unsaturated soils depends on several factors such as soil type, particle distribution, density, hydraulic path, and stress state [3][4][5]. Over the past few decades, a number of laboratory tests have been conducted to study the hydraulic behavior of unsaturated soils in the low suction range using axial translation techniques [6][7][8][9]. The suction was less than 500 kPa in most direct shear or triaxial tests, and the test soil was mainly sand or silt. However, in geotechnical engineering, unsaturated soils with a high or wide suction range are widespread, and soil types contain a lot of silty clay or clay. At the same time, it is challenging to control suction accurately in mechanical tests in the high suction range, and the vapor flow technique or osmotic suction techniques have been adopted to control the high suction of specimens [10][11][12][13]. However, the test results of unsaturated soil mechanical behavior under high suction in the literature are still extremely limited. In addition, most existing shear strength models are based on analysis of test results obtained by direct shear or triaxial shear tests at low suction ranges (typically below 500 kPa). As a result, the shear strength of unsaturated soils in the medium suction range is significantly overestimated, especially for clays [14,15]. Through the verification and comparison of various existing strength models, it is found that most of them are based on an empirical and phenomenological basis, and there are few explanations and simulations for the strength in the medium and high suction range [14].
On the other hand, different soil types, such as sand and clay, exhibit differences in microscopic states, which often significantly affect the strength of the soil. Soils in nature are complex and vary widely in their microscopic structure and mineral composition [16]. Although there are numerous models, there is no widely accepted method for strength influence mechanism and model calculation. Therefore, it is necessary to study the influence mechanism of different soil types, particle distribution, hydraulic path, and stress state on soil strength under high suction or wide suction range and establish a corresponding strength model. The widely distributed granite weathered soil is affected by its soil-forming environment and geotechnical composition [17,18]. Its physical and mechanical properties are unique, especially the microstructure of its profile variation [19,20], which also causes difficulties in numerical simulation of the mechanical behavior and engineering characteristics of weathered granite soil. Therefore, the granite weathered soil can be used as a typical research object in the study of unsaturated strength. There is an urgent need to clarify the mechanism of action of various factors that affect the strength of granite weathered soil, and establish a model directly used to simulate or explain its physical and mechanical characteristics under a wide suction range. Therefore, in this paper, the direct shear strength test of two different weathered granite soils under different hydraulic paths is carried out, and the evolution law of the soil strength on the effect of mineral composition and particle size gradation is analyzed under a wide suction range. The factors affecting the strength change of granite weathered soil are explained by the contact modes between soil particles established, and further verified and explained by scanning electron microscope (SEM) from the perspective of microscopic mechanism. The relationship between soil suction and strength is also discussed, and a shear strength model that is applicable and has a clear physical concept is established. Physical and Chemical Properties of Weathered Granite Soil The test soil was taken from a typical granite weathered soil in a slope in Wuzhou in Guangxi Autonomous Region, China. According to the weathering profile and erosion characteristics, the soil was divided into the residual soil layer and the fully weathered soil layer [21]. Representative samples were taken from the residual soil zone (2-5 m from the ground surface) and the fully weathered soil zone (6-10 m from the ground surface). As shown in Figure 1, the residual soil is brick red or brownish red with a small amount of white quartz particles; the fully weathered soil is mainly yellow-brown, mixed with red and white. Figure 2 indicates the measured particle gradation curve of the two soils. The content of residual soil with particle size less than 0.002 mm reaches 22%, while only about 5% in fully weathered soil and more than 50% of fully weathered soil is sand. The feature of grading curve of the granite weathered soils used in this paper is consistent with the typical granite weathering characteristics [18,21]. According to the soil particle size and structural characteristics, and the classification of granite weathered soils [22], the residual soil and the fully weathered soil adopted in this paper are named silty clay and silty sand, respectively. The basic physical and chemical indexes of the two soils are shown in Tables 1 and 2. 
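To make the grain-size figures read from Figure 2 reproducible, percent-finer values at the clay (0.002 mm) and fines (0.075 mm) boundaries can be interpolated from a digitized gradation curve. The sketch below is an illustration only; the gradation points are hypothetical placeholders, not the measured curve.

```python
import numpy as np

# hypothetical gradation points: particle size (mm) vs. percent finer (%)
size_mm       = np.array([2.0, 0.5, 0.075, 0.02, 0.005, 0.002, 0.001])
percent_finer = np.array([100., 70., 48., 38., 28., 22., 18.])

def percent_finer_at(d):
    """Percent finer at diameter d (mm), interpolated on a log scale."""
    return float(np.interp(np.log10(d), np.log10(size_mm[::-1]),
                           percent_finer[::-1]))

clay  = percent_finer_at(0.002)          # clay fraction (< 0.002 mm)
fines = percent_finer_at(0.075)          # fines fraction (< 0.075 mm)
sand  = percent_finer_at(2.0) - fines    # sand-sized fraction
print(round(clay, 1), round(fines, 1), round(sand, 1))
```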
The main minerals such as potassium feldspar, plagioclase, and mica of granite are enriched by weathering, hydrolyzed and oxidized to form secondary silicates such as kaolinite and illite. Because granite weathering is a process of enriching iron and aluminum, iron oxide and alumina remain, which could be hydrated to form cementation. X-ray diffraction (XRD) quantitative test results of clay minerals show that weathered granite is mainly composed of kaolinite and illite, accompanied by quartz and calcium carbonate [23]. The degree of weathering makes the difference in the content of weathered products distinct, resulting in the specific gravity, liquid-plastic limit, and specific surface area of residual soil being significantly higher than those of fully weathered soil.

Direct Shear Test and The Soil-Water Characteristic Curve Test of Weathered Granite Soil

The test soil was dried and sieved through a 2-mm screen, and the dry density of the specimen took the average value of the two samples, namely 1.55 g/cm3. The test uses a ZJ-2 strain control type direct shear instrument, and the specimens were sheared under unconsolidated-undrained conditions with vertical pressures of 100, 200, 300, and 400 kPa (shearing rate is 0.8 mm/min). The method of the soil-water characteristic curve test follows reference [23]. All the tests were performed at a room temperature of about 20 °C. Hydraulic path 1 (Prepared specimen, referred to as P): Prepare specimens with initial saturations of 20, 40, 60, and 80%, respectively.
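For the prepared specimens, the target water content implied by each saturation can be back-calculated from standard phase relations (void ratio from dry density and specific gravity, then w = Sr·e/Gs). The sketch below is an illustration only; the specific gravity used is a placeholder rather than the measured value in Table 1.

```python
# Back-calculate the target gravimetric water content for a given degree of
# saturation, using standard phase relations: e = Gs*rho_w/rho_d - 1 and
# w = Sr * e / Gs.  GS below is a hypothetical placeholder value.
RHO_W = 1.0          # density of water, g/cm^3
RHO_D = 1.55         # dry density of the specimens, g/cm^3 (from the test setup)
GS = 2.70            # specific gravity of solids -- hypothetical placeholder

e = GS * RHO_W / RHO_D - 1.0          # void ratio
for sr in (0.20, 0.40, 0.60, 0.80):
    w = sr * e / GS                   # gravimetric water content
    print(f"Sr = {sr:.0%}: target w = {w * 100:.1f}%")
```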
Specifically, it is to prepare soils with different initial moisture content corresponding to the target saturation, after sealing and moisturizing for h, weigh the soils of corresponding quality to make the specimens by the static compaction method; the error of saturation is controlled within ±0.5%. The 100% saturated specimen is obtained by vacuum pumping and saturating a specimen with an initial saturation of 20%, and then the direct shear test is carried out after standing for 24 h. Hydraulic path 2 (Dehydrated specimen, referred to as D): The saturated sample is dehydrated slowly in a constant temperature and humidity box with a relative humidity of 25% and a temperature of 20 °C to achieve 20, 40, 60, and 80% saturation. The water content of the specimen is controlled by the mass weighing method, and the volume change of the specimen is measured by the shrinkage instrument and vernier caliper method. The error of saturation is controlled within ±0.5%. After reaching the corresponding saturation, the specimen is taken out for the direct shear test (the volume changes little during the dehydration process, so its influence is ignored).

Simulation of the Existing Strength Model

At present, shear strength equations are established based on Bishop's effective stress [24] or on independent stress variables (mean net stress and matric suction) [25]. Alonso et al. [26] compared and commented on the selection of stress state variables for unsaturated soils; they considered that the two stress variable forms of Bishop stress (effective stress) and suction should be uncontested choices, as in Equation (1). The main difference between many strength equations is the difference in the adsorption strength, as in Equation (2), and the setting of the parameter χ in the strength equation in the Bishop format, which is mainly a function of saturation [14,27]. In order to verify the applicability of the model to the experimental results of weathered granite soils, the suction value corresponding to each saturation was obtained by using the soil-water characteristic curve, and the strength was calculated by the Bishop shear strength equation with χ = S r, where τ f is the shear strength, c is the adsorption strength, c' is the effective saturated cohesion, ϕ' is the effective internal friction angle, χ is the effective stress parameter, σ − u a is the net normal stress, and u a − u w is the matric suction.

Results of the Direct Shear Test and the Soil-Water Characteristic Curve Test

Under different saturation and stress states, the failure mechanism of the specimen is different. When the saturation is low, the specimen fails in a brittle manner, and both peak and residual strengths appear in the direct shear test; at high saturation, the specimen may undergo plastic failure, that is, with increasing shear displacement, the strength continues to increase without a peak. Therefore, the shear strength of the specimen is determined as follows: for brittle failure, the strength value is taken as the peak strength, and for plastic failure, the strength value is taken at a shear displacement of 4 mm. The strength failure lines are shown in Figure 3. It is known from Figure 3 that the corresponding strength values of residual soil at different initial saturations are significantly higher than those of fully weathered soil. The strength of the prepared specimens of both soils increased first and then decreased as saturation increased, and the peak value appears at S r = 40%.
At the same time, the dehydrated specimens show a different trend: as the saturation decreases, the shear strength of the specimen keeps increasing. At the same saturation, a dehydrated sample has a higher shear strength than a prepared sample; at low saturation the trends are even opposite, with the strength of the prepared sample being relatively low while that of the dehydrated sample is at its highest at S_r = 20%.

From Figure 4, we can see that the trends of the cohesion and friction angle of the specimens are consistent with the trend of the strength values along each hydraulic path. However, the friction angle of the prepared specimens does not change much; its range of variation is within 5°, and the variation of the dehydrated specimens does not exceed 10°. When S_r ≤ 60%, the friction angle of the dehydrated specimens of the fully weathered soil is greater than that of the residual soil, which is related to the fact that the fully weathered soil contains more sand. As can be seen from Figure 4b, the cohesion of both soils is more sensitive to saturation, and the residual soil changes more drastically. For the fully weathered soil, the cohesion at S_r = 100% is 5 kPa and the variation of its prepared specimens remains small, whereas the cohesion of its dehydrated specimens increases linearly as the saturation decreases, reaching 98 kPa at S_r = 20%. The cohesion of the residual soil changes markedly for both prepared and dehydrated specimens: the peak value for the prepared specimens reaches 81 kPa, and the rate of the linear change for the dehydrated specimens is also larger than that of the fully weathered soil. This is related to the residual soil containing more clay particles, iron oxide, and other types of cementation, which will be analyzed in detail below.
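The cohesion and friction angle reported in Figure 4 follow from fitting a Mohr-Coulomb failure line to the shear strengths measured at the four vertical pressures. A minimal sketch of that fit is given below; the strength values used here are hypothetical placeholders, not data from this study.

import numpy as np

def mohr_coulomb_fit(sigma_n, tau_f):
    """Least-squares fit of tau_f = c + sigma_n * tan(phi).

    Returns cohesion c (kPa) and friction angle phi (degrees).
    """
    slope, intercept = np.polyfit(sigma_n, tau_f, 1)
    return intercept, np.degrees(np.arctan(slope))

# Vertical pressures used in the direct shear tests (kPa).
sigma_n = np.array([100.0, 200.0, 300.0, 400.0])

# Hypothetical peak shear strengths for one specimen (kPa), placeholders only.
tau_f = np.array([110.0, 165.0, 220.0, 270.0])

c, phi = mohr_coulomb_fit(sigma_n, tau_f)
print(f"cohesion c = {c:.1f} kPa, friction angle phi = {phi:.1f} deg")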
Figure 5 shows the soil-water characteristic curves (SWCC) of the two weathered granite soils over the full suction range, obtained by the pressure plate method and the salt solution method [23]. The soil-water characteristic curve is a functional curve describing the constitutive relation between soil suction and water content (saturation), and it is often used to calculate the strength of the soil [1]. The residual soil has a higher water-holding capacity than the fully weathered soil; the detailed comparison and analysis can be found in [23].

Simulation Results of the Existing Strength Model

The experimental results and calculated values are shown in Figure 6. At low suction, the deviation between the experimental results and the predicted values is small, while at high suction the deviation is large, especially for the dehydrated residual soil. This is consistent with the view of Alonso et al. [26] that strength equations of the Bishop and Fredlund type severely overestimate the strength in the high-suction range. On the other hand, at high suction the strength of the prepared specimens does not increase with increasing suction; on the contrary, the strength decreases, but the strength equation does not reflect this. Although the suction values used in the calculation come from the soil-water characteristic curve of the dehydrated samples rather than the actual curve of the prepared samples, so that there is some deviation in the suction values, this does not affect the overall trend in Figure 6. Therefore, this type of equation is not suitable for calculating the shear strength of weathered granite soil containing cementation, and a new model needs to be established to simulate such soils.
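For reference, the Bishop-type prediction compared against the tests in Figure 6 can be sketched as follows: read the suction corresponding to each saturation off an SWCC, then apply the Bishop equation with χ = S_r. The sketch below assumes a van Genuchten parameterization of the SWCC purely for illustration; the parameters, and the use of van Genuchten instead of the measured curve of [23], are assumptions.

import numpy as np

def vg_suction(Sr, Sr_res=0.05, alpha=0.05, n=1.6):
    """Invert a van Genuchten SWCC, Se = (1 + (alpha*s)^n)^(-(1 - 1/n)),
    to get matric suction s (kPa) from saturation. Illustrative parameters only."""
    m = 1.0 - 1.0 / n
    Se = np.clip((Sr - Sr_res) / (1.0 - Sr_res), 1e-6, 1.0 - 1e-9)
    return (Se ** (-1.0 / m) - 1.0) ** (1.0 / n) / alpha

def bishop_strength(sigma_net, Sr, c_eff, phi_eff_deg):
    """Bishop shear strength with chi = Sr:
    tau_f = c' + (sigma_net + Sr * s) * tan(phi')."""
    s = vg_suction(Sr)
    return c_eff + (sigma_net + Sr * s) * np.tan(np.radians(phi_eff_deg))

# Predicted strength at 200 kPa net stress for the tested saturations.
for Sr in (0.2, 0.4, 0.6, 0.8, 1.0):
    tau = bishop_strength(200.0, Sr, c_eff=5.0, phi_eff_deg=30.0)
    print(f"Sr = {Sr:.1f}: tau_f approx {tau:.1f} kPa")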
Effect of Particle Size Gradation and Mineral Composition on Shear Strength

According to the interaction of soil particles with different mineral compositions and water-holding states, it is assumed that there are three different forms of contact between particles in the soil, as shown in Figure 7. Figure 7a shows that when the saturation is high, the pores are filled with water and the shear strength is mainly provided by a small amount of meniscus water. When the saturation is low, as shown in Figure 7b, there are three modes of inter-particle contact: (1.) the soil particles are in direct contact through the adsorbed water film; because of its relatively high density, the adsorbed water cannot flow and can be regarded as a solid or semi-solid [28]; (2.) there is meniscus contact formed by capillary water between the soil particles; (3.) where cementing substances are present, cement contact forms between the soil particles.
(1) When there is no cementing substance and the saturation is low, the contact types between the soil particles are mainly direct contact and meniscus contact. The amount of capillary meniscus first increases and then decreases as the saturation increases. The optimal saturation occurs when the soil-water contact area in the soil is largest, with the most meniscus contact, thereby increasing the shear strength of the soil. On the other hand, the pore size and pore distribution in the soil affect the characteristics of the menisci, while the particle size of the soil particles and the mineral composition affect the pore size and distribution of the soil [23]. According to the Young-Laplace equation, small pores have a stronger capillary effect than large pores [29]. Therefore, the particle size gradation and the mineral composition of the soil control the contact forms between the soil particles, their relative contents, and the intensity of their action, which in turn governs the shear strength behavior.

(2) When there are cementing materials in the pore fluid, cement contact forms between soil particles in addition to direct contact and meniscus contact. Part of the iron oxide in the soil is distributed over all or part of the surface of the clay particles in the form of an "envelope" and cements them together in the form of "bridges", increasing the strength of the soil, while the remainder exists as free iron oxide. The key to the formation of the "envelope" lies in the saturation of the soil and the content of free iron oxide. At low saturation, the solid iron oxide particles in the soil cannot dissolve in the pore fluid to form a sol; at high saturation, the water film between the soil particles is so thick that cementation cannot develop and the iron oxide exists in a free state. The free iron oxide gradually develops a colloidal "envelope" as the saturation decreases from the wet state to the dry state. It is known from Table 1 that the more severely weathered residual soil contains more residual Fe₂O₃, Al₂O₃, and organic matter than the fully weathered soil (the other oxides cannot be dissolved to form free oxides), which gives the residual soil more cementation. On the other hand, the larger specific surface area and cation exchange capacity of the residual soil allow the soil particles to adsorb more ions and increase the soluble salts in the pore fluid, which produce the same cementation effect as free iron oxide [30]. The cementation makes the shear strength of the residual soil higher than that of the fully weathered soil and makes the shear strength of the dehydrated specimens higher than that of the prepared specimens.
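The Young-Laplace relation cited above makes the pore-size argument concrete: for an air-water meniscus, the capillary component of suction scales inversely with the meniscus radius. The short sketch below uses the standard surface tension of water at about 20 °C; the pore radii are arbitrary illustration values, not measurements from this study.

import math

# Young-Laplace estimate of capillary suction across an air-water meniscus:
# u_a - u_w = 2 * T_s * cos(theta) / r, with contact angle theta taken as zero.
T_S = 0.0728  # surface tension of water at ~20 C, N/m

def capillary_suction_kpa(radius_m, contact_angle_deg=0.0):
    return 2.0 * T_S * math.cos(math.radians(contact_angle_deg)) / radius_m / 1000.0

# Illustrative radii: a large pore between coarse grains vs. a micropore
# inside the floc structure of the clay fraction.
for r in (1e-5, 1e-6, 1e-7):  # metres
    print(f"r = {r:.0e} m -> suction approx {capillary_suction_kpa(r):.1f} kPa")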
Figure 8 shows scanning electron micrographs (at 2000× magnification) of the prepared and dehydrated specimens of the two weathered granite soils at S_r = 20%. It can be seen from Figure 8 that the content of coarse particles (blocks or flakes with smooth surfaces in the images) in the fully weathered soil is relatively high, and the skeleton of the soil is mainly composed of coarse particles. The clay particles fill the large pores between the coarse particles with a floc structure (rough-looking aggregates of various sizes) and control the micropore structure. The residual soil has a high content of clay particles, and its skeleton is mainly composed of floc structure with more micropores, interspersed with some cracks or large pores. Therefore, at low saturation the residual soil contains more menisci than the fully weathered soil, while the fully weathered soil contains more direct contacts; macroscopically, the latter has a larger friction angle. The residual soil has greater cohesion owing to its more abundant meniscus and cement contacts. It is known from Figure 8a,b that the fully weathered soil contains more flaky coarse particles, and the pores of the dehydrated sample contain more floc structure than those of the prepared sample. It is also known from Figure 8c,d that the floc structure of the dehydrated residual soil sample is more compact and uniform than that of the prepared sample; that is, the dehydrated sample forms more cement contacts. Therefore, the structural and mineral composition of the soil determines the contact forms between the particles, and the dominant contact form under a given hydraulic path determines the shear strength behavior. At the same time, these factors also affect the water-holding characteristics of the soil. In unsaturated soil mechanics, suction is usually used as the representative variable, and the influence of suction on strength is studied quantitatively in the following part.

Effect of Suction on Shear Strength and Model Establishment

At present, discussions of the mechanism by which matric suction affects strength are mainly aimed at the capillary pressure caused by surface tension. According to the microscopic soil particle contact model above, matric suction includes not only the suction generated by the capillary phenomenon but also an adsorptive component; other scholars hold similar views [31,32]. The adsorptive part arises from complex soil-water interactions, including short-range van der Waals forces, long-range electrostatic forces, and other hydration effects, namely physico-chemical interactions [16,33]. Cement contact caused by cementing substances such as free oxides, soluble salts, and organic matter is a typical part of the adsorptive suction. At high saturation, the adsorptive part also occupies a large proportion; at low saturation, the capillary effect is no longer effective, and the dominant contributions are the clay minerals and the soil-water interaction that adsorbs water [25,31].
In the existing unsaturated strength equations [14,26], the capillary and adsorptive effects of suction on strength are not distinguished; because the mechanism of capillary action is relatively clear and its mathematical treatment relatively simple, matric suction is often equated with the capillary part in quantitative unsaturated soil mechanics and the adsorptive part is ignored. The constitutive models established for unsaturated soils are therefore mostly based on the capillary mechanism, so these models are not suitable for highly plastic clays, soils containing cementation, or low water contents. Based on the preceding microscopic analysis and the problems of the current models, this paper divides the shear strength of the soil into capillary and adsorptive parts, as shown in Equation (3). Konrad and Lebeau [34] used a capillary model to clarify the relationship between the capillary strength generated by capillary water and the matric suction; a capillary model for a typical silty soil was obtained, in which the capillary strength first increases and then decreases with matric suction. In Equations (3) and (4), τ_f is the total shear strength, τ_f^cap is the shear strength due to the capillary effect, τ_f^ads is the shear strength due to the adsorptive effect, σ_n is the net stress, S_{r,r} is the residual saturation, S_r is the total saturation, and λ is a parameter fitted to the experimental data.

For special soils, such as weathered granite soil containing cementation, the contact forms between the soil particles along the dehydration path include not only meniscus contact but also cement contact. Equation (4) is intended for soils dominated by the capillary model, such as low-plasticity silty soil or sandy soil, so it is necessary to introduce the influence of adsorptive suction. Tang et al. [35] considered that the adsorptive part is mainly a variable structural suction that changes with saturation; the variable structural suction includes the cementation between the soil particles, the electric double-layer suction, the interlocking force, and other physical-chemical forces. Therefore, the strength of the adsorptive part is generally expressed as a function of the cementation strength. It gradually increases as the saturation decreases, and the growth rate varies with soil properties. Kong et al. [36] summarized the cementation function as Equation (6) on the basis of a large number of test data on the influence of saturation on soil strength. Figure 10 shows the characteristic curves of the cementation function for different parameters. From Figure 10 it can be seen that the cementation function curve can be either concave or convex, so it can simulate different growth laws of the cementation strength. The effect of the cementation strength of structured soil is related to stress [37]; considering the degradation of the cementation strength function under the no-stress condition, the cementation strength derived in this paper is given as Equation (5). The overall strength equation is given as Equation (7), which is the sum of the capillary effect and the adsorptive effect.
τ_f = c + σ_n tan ϕ + [(S_r − S_{r,r})/(1 − S_{r,r})]^{0.55λ} s tan ϕ + (σ_n + s_0) a (1 − S_r^b),    (7)

where C(S_r) is the cementation strength function related to saturation, s_0 is the air-entry value, and a and b are parameters fitted to the test data. Figure 10. Cementation function characteristic curve.

Figure 9 is a simulated curve of the relationship between suction and strength in soil, clarifying the contributions and mechanisms of the capillary and adsorptive effects on strength. The soil strength follows an ideal capillary model when the adsorptive strength caused by cementing substances such as clay, free iron oxide, and organic matter is ignored; that is, only the contribution of capillary action to the strength is considered. As shown in Figure 9, the strength of the soil first increases with increasing saturation, and there is an optimal saturation at which the soil strength is maximal; then, as the saturation increases further, the strength of the soil begins to decrease. The variation of the shear strength of the prepared specimens of weathered granite soil follows this law. When there is cementing material in the pore solution, the cementing material dissolved in the pore fluid continuously precipitates as the saturation decreases, increasing the cementation strength of the soil, which explains well the shear strength behavior of the dehydrated specimens of weathered granite soil. For simplicity, considering that the adsorptive part of the prepared specimens contributes little to the strength of the soil, only the capillary effect is considered when calculating the model for them. Since only the soil-water characteristic curve of the dehydrated soil samples is available, the capillary action in the dehydrated samples is assumed to be consistent with that of the prepared samples in the calculation. Figure 11 shows the comparison between the experimental shear strengths of the weathered granite soils and the simulation results of Equation (7), and Table 3 presents the relevant parameters calculated by the model.
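A minimal numerical sketch of the reconstructed Equation (7) is given below, with the capillary term in the Konrad-Lebeau form and the adsorptive (cementation) term a(1 − S_r^b). All parameter values are placeholders chosen for illustration, not the fitted values of Table 3, and the grouping of the capillary term follows the reconstruction above.

import numpy as np

def strength_eq7(sigma_n, Sr, c, phi_deg, s_suction, Sr_res, lam, s0, a, b):
    """Reconstructed Equation (7): saturated + capillary + adsorptive parts.

    tau_f = c + sigma_n*tan(phi)
            + ((Sr - Sr_res)/(1 - Sr_res))**(0.55*lam) * s * tan(phi)   # capillary
            + (sigma_n + s0) * a * (1 - Sr**b)                          # adsorptive
    """
    tan_phi = np.tan(np.radians(phi_deg))
    Se = np.clip((Sr - Sr_res) / (1.0 - Sr_res), 0.0, 1.0)
    capillary = Se ** (0.55 * lam) * s_suction * tan_phi
    adsorptive = (sigma_n + s0) * a * (1.0 - Sr ** b)
    return c + sigma_n * tan_phi + capillary + adsorptive

# Placeholder parameters (illustration only; see Table 3 for fitted values).
for Sr, s in [(0.2, 3000.0), (0.4, 300.0), (0.6, 60.0), (0.8, 10.0)]:
    tau = strength_eq7(200.0, Sr, c=5.0, phi_deg=30.0, s_suction=s,
                       Sr_res=0.05, lam=1.0, s0=2.0, a=0.2, b=2.0)
    print(f"Sr = {Sr:.1f}, s = {s:.0f} kPa -> tau_f approx {tau:.1f} kPa")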
As can be seen from Figure 11, the model fits the test results closely, indicating its correctness and validity; even with the simplifying assumptions above, the degree of fit remains high. Still, it is worth noting the following. The soil-water characteristic curve used in the model calculation is that of the dehydrated specimens, whereas prepared specimens with different initial water contents develop different pore size distributions; this makes the simulated strength at medium and high suction lower than the measured strength. The strength simulation of the dehydrated specimens directly uses the capillary parameters of the prepared specimens, which is one reason why the simulated results are low at high suction. On the other hand, although the saturation and suction of the specimen may change somewhat during the direct shear test, and the test cannot accurately describe the strength characteristics of unsaturated soil under a three-dimensional stress state, the direct shear test can simply and reasonably reflect the strength characteristics of the soil under different hydraulic paths, which provides a valid experimental basis for establishing the model.

Conclusions

In this paper, shear strength tests on two weathered granite soils at different saturations under different hydraulic paths were carried out. The effects of mineral composition and particle size gradation on shear strength were discussed, and a shear strength model for unsaturated soil considering both capillary and adsorptive effects was established. The main conclusions are as follows: (1) The shear strength of the prepared specimens first increases and then decreases with increasing saturation, reaching its peak at the optimal saturation S_r = 40% and showing an "arch"-shaped variation. The shear strength of the dehydrated specimens increases linearly as the saturation decreases. The cohesion of weathered granite soil varies significantly with saturation, while the internal friction angle varies little.
(2) There are three different contact forms between soil particles: direct contact, meniscus contact, and cement contact. Different degrees of granite weathering give the residual soil and the fully weathered soil different particle size gradations and mineral compositions; together with the hydraulic path, these determine the contact forms between the soil particles and ultimately the strength characteristics of the soil at different saturations. (3) A strength model that accounts for both the capillary and the adsorptive effect is established. It explains the mechanisms by which capillarity and adsorption influence soil strength and reproduces the evolution of the shear strength of weathered granite soils with saturation under different hydraulic paths. In view of the deficiencies of existing strength models, this model provides a clear theoretical basis for studying the strength of special soils such as weathered granite soil, and it can readily be generalized and applied to other simple soil types among unsaturated soils. Prediction and verification for other soils will be addressed in future work.
9,907.8
2020-09-22T00:00:00.000
[ "Geology" ]
On the Chern-Gauss-Bonnet Theorem and Conformally Twisted Spectral Triples for $C^*$-Dynamical Systems The analog of the Chern-Gauss-Bonnet theorem is studied for a $C^*$-dynamical system consisting of a $C^*$-algebra $A$ equipped with an ergodic action of a compact Lie group $G$. The structure of the Lie algebra $\mathfrak{g}$ of $G$ is used to interpret the Chevalley-Eilenberg complex with coefficients in the smooth subalgebra $\mathcal{A} \subset A$ as noncommutative differential forms on the dynamical system. We conformally perturb the standard metric, which is associated with the unique $G$-invariant state on $A$, by means of a Weyl conformal factor given by a positive invertible element of the algebra, and consider the Hermitian structure that it induces on the complex. A Hodge decomposition theorem is proved, which allows us to relate the Euler characteristic of the complex to the index properties of a Hodge-de Rham operator for the perturbed metric. This operator, which is shown to be selfadjoint, is a key ingredient in our construction of a spectral triple on $\mathcal{A}$ and a twisted spectral triple on its opposite algebra. The conformal invariance of the Euler characteristic is interpreted as an indication of the Chern-Gauss-Bonnet theorem in this setting. The spectral triples encoding the conformally perturbed metrics are shown to enjoy the same spectral summability properties as the unperturbed case. Introduction In noncommutative geometry [13,14], C * -dynamical systems (A, G, α) have been long studied from a differentiable point of view starting with extending the basic notions of differential geometry and differential topology to a differential structure on a C * -algebra A endowed with an action α : G → Aut(A) of a Lie group G. That is, the notion of a connection, a vector bundle, and Chern classes were introduced for such a dynamical system, a pseudodifferential calculus was developed and the analog of the Atiyah-Singer index theorem was proved in [12]. The noncommutative two torus T 2 θ has been one of the main motivating examples for these developments. In [55], this line of investigation has been taken further focusing on general compact Lie groups and index theory. Following the seminal work of Connes and Tretkoff on the Gauss-Bonnet theorem for T 2 θ [18] and its extension in [25] concerning general translation invariant conformal structures, local differential geometry of non-flat noncommutative tori has been a subject of increasing interest in recent years [3,17,26,27,28,45]. A Weyl conformal factor may be used to perturb a flat metric on noncommutative tori, and Connes' pseudodifferential calculus [12] can be employed along with noncommutative computational methods to carry out calculation of scalar curvature and to investigate the related differential geometric statements, see also [20] for an asymmetric perturbation of the metric. The idea and the techniques were indeed initiated in a preprint [11], where with the help of complicated modified logarithmic functions and a modular automorphism, an expression for the value ζ(0) of the spectral zeta function of the Laplacian of a curved metric on T 2 θ was written. The vanishing of this expression is interpreted as the Gauss-Bonnet theorem [18], which was suggested by the developments in the following intimately related theories. 
In fact, the spectral action principle [9], in particular the related calculations in the presence of a dilaton [10], and the theory of twisted spectral triples, which arise naturally in noncommutative conformal geometry [16,49], indicate independence of ζ(0) from the conformal factor. Connes' index formula for Fredholm modules, which involves cyclic cohomology, is quite broad [13]. It asserts that given a finitely summable Fredholm module over an algebra, the analytic index, given by pairing a K-homology and a K-theory element of the algebra, coincides with the topological index, which pairs the corresponding elements in periodic cyclic cohomology and homology obtained by the Chern-Connes characters. The local index formula of Connes and Moscovici [15] gives a local formula based on residue trace functionals, which is in the same cyclic cohomology class as the Chern-Connes character, and has the advantage that one can perform explicit computations with it (see also [34]). The residue trace functionals are intimately related to the spectral formulation of Wodzicki's noncommutative residue [59,60]. In fact, the formulation of the noncommutative residue as an integration over the cosphere bundle of a manifold also is important for explicit computations with noncommutative geometric spaces, see [22,28,29] for a related treatment on noncommutative tori. The notion of a twisted spectral triple introduced by Connes and Moscovici [16] allows to incorporate a variety of new examples, in particular type III examples in the sense of the Murrayvon Neumann classification of operator algebras. They have shown that the Chern-Connes character of a finitely summable twisted spectral triple is an ordinary cyclic cocycle and enjoys an index pairing with K-theory. Also, they have constructed a local Hochschild cocycle, which indicates that the ground is prepared for extending the local index formula to the twisted case. This was carried out in [47] for a particular class of twisted spectral triples; the analog of Connes' character formula was investigated in [24] for the examples. For treatments using twisted cyclic theory, in particular for relations of the theory with Cuntz algebra [19] and quantum groups, we refer to [6,7,8], see also [37,38]. More recent works related to the twisted version of spectral triples reveal their connections with the Bost-Connes system, Riemann surfaces and graphs [33], and with the standard model of particle physics [21]. Twisted spectral triples associated with crossed product algebras are studied in [16,36,47], see also [23] for an algebraic treatment. Ergodic actions of compact groups on operator algebras are well-studied in the von Neumann setting (see, e.g., [56,57,58]) and in a C * -algebraic context. They were first introduced for C * -algebras by E. Størmer [54] and this initial effort was expanded in various articles. Let us just mention two of them: • In their article [1], Albeverio and Høegh-Krohn investigate in particular ergodic actions on commutative C * -algebras A = C(X) and prove that they correspond to continuous transitive actions on X. • The article [35] by Høegh-Krohn, Landstad and Størmer proves that if G acts ergodically on a unital C * -algebra A, its unique G-invariant state is actually a trace. The article [52] was the first to suggest in 1998 that ergodic actions give rise to interesting spectral triples. This article proceeds with studying the metric induced on state spaces by ergodic actions. 
More recently, the article [30] produced a detailed construction of a so called Lie-Dirac operator on a C * -algebra A, based on an ergodic action of a compact Lie group G on A. It also investigated the analytic properties of these Lie-Dirac operators, proving in particular that they are finitely summable spectral triples. In the present article, we elaborate on the techniques used in [30] in order to prove quite different results. Indeed, in [30] the focus was on a Dirac operator for a "noncommutative spin manifold", whereas here the emphasis is on a sort of Hodge-de Rham operator associated with a conformally perturbed metric, construction of twisted spectral triples and the analog of the Chern-Gauss-Bonnet theorem. The Hodge-de Rham operators constructed here are (in general) not Lie-Dirac operators in the sense of [30]. In this previous article, the algebra structure of A played only a minor role in the analytical properties of the spectral triple. Here, the multiplication of A has a central importance. For a recent approach of Hodge theory using Hilbert modules, we refer to the recent article [44]. See also [42,43]. This article is organized as follows. In Section 2, we recall the necessary statements from representation theory and operator theory, and the notion of ordinary and twisted spectral triples along with their main properties that are used in our arguments and concern our constructions. We associate a complex of noncommutative differential forms to a C * -dynamical system (A, G, α) in Section 3. In the ergodic case, the analog of the Hodge-de Rham operator is studied when the complex is equipped with a Hermitian structure determined by a metric in the conformal class of the standard metric associated with the unique G-invariant trace on A. Inspired by a construction in [18], we construct in Section 4 a spectral triple on A and a twisted spectral triple on the opposite algebra A op , which encode the geometric information of the conformally perturbed metric. We study the Dirac operator of the perturbed metric carefully and prove that it is selfadjoint and enjoys having the same spectral dimension as the non-perturbed case. It should be stressed that ergodicity plays a crucial role for the latter to hold. The existence of an analog of the Chern-Gauss-Bonnet theorem is studied in Section 5 by proving a Hodge decomposition theorem for our complex and showing that its Euler characteristic is independent of the conformal factor. Combining this with the McKean-Singer index formula and small time asymptotic expansions, which often exist for noncommutative geometric spaces, we explain how the analog of the Euler class or the Pfaffian of the curvature form can be computed as local geometric invariants of examples that fit into our setting. Indeed, such invariants depend on the behavior at infinity of the eigenvalues of the involved Laplacians and the action of the algebra. Finally, our main results and conclusions are summarized in Section 6. Preliminaries We start by some reminders about results and notations from various anterior articles. Definition 2.1. Given a strongly continuous action α of a compact group G on a unital C *algebra A, we say that it is ergodic if the fixed algebra of G-invariants elements is reduced to the scalars, i.e., if ∀ g ∈ G, α g (a) = a, then a ∈ C1 A . Among the important results obtained with this notion of ergodic action, let us quote the following [35, Theorem 4.1, p. 82]: Theorem 2.2. 
Let A be a unital C*-algebra, G a compact group and α a strongly continuous representation of G as an ergodic group of *-automorphisms of A; then the unique G-invariant state ϕ_0 on A is a trace.

Another result that will play an important role in our article is [35, Proposition 2.1, p. 76], which we adapt slightly in the following:

Proposition 2.3. Let A be a unital C*-algebra, G a compact group and α a strongly continuous representation of G as an ergodic group of *-automorphisms of A. Let V be an irreducible unitary representation of G, A(V) the spectral subspace of V in A and m(V) the multiplicity of V in A(V). Then we have m(V) ≤ dim(V).

Among our main results, we prove the finite summability of certain spectral triples (ordinary and twisted); we therefore define those terms. An even spectral triple is given by the same data, but we further require that a grading γ be given on H such that (i) A acts by even operators, (ii) D is odd.

Remark 2.5. For a selfadjoint operator D, condition (i) of the definition above is actually equivalent to: there exists λ ∈ ℝ\{0} such that (D + iλ)^{−1} is a compact operator.

To define finitely summable spectral triples, we now need a brief reminder regarding trace ideals (also known as symmetric ideals), for which we follow Chapter IV of [14]. For more details concerning symmetrically normed operator ideals and singular traces we refer the reader to [53] and [46].

Definition 2.6. For p > 1, the ideal L^{p+} (also denoted L^{(p,∞)} in [14] and J_{p,ω} in [53, p. 21]) consists of all compact operators T on H such that σ_k(T) = O(k^{1−1/p}), where σ_k is defined as the supremum of the trace norms of TE when E is an orthogonal projection of rank k; equivalently, σ_k(T) is the sum of the k largest eigenvalues (counted with their multiplicities) of the positive compact operator |T| := (T*T)^{1/2}. The definition extends to the case p = 1: L^{1+} is the ideal of compact operators T such that σ_k(T) = O(log k). A spectral dimension for spectral triples is defined accordingly.

Finally, we will consider twisted spectral triples (also called σ-spectral triples) as introduced in [16, Definition 3.1]. This is a spectral triple just as in Definition 2.4, but for a fixed automorphism σ of A the bounded commutators condition (denoted (ii) above) is replaced by: (ii) the subalgebra A of all a ∈ A such that
• π(a)(Dom(D)) ⊆ Dom(D),
• Dπ(a) − π(σ(a))D extends to a bounded map on H
is dense in A.

In this paper, we will need the subalgebras A^k for k ≥ 0, corresponding to the C^k-differentiability class. Following [5, Section 2.2], we introduce the corresponding spaces A^m. Let us fix a basis (∂_i) of the Lie algebra g. For such a choice of basis, the infinitesimal generators ∂_i act as derivations A^m → A^{m−1}. According to [5], A^m is a Banach algebra with ‖ab‖_m ≤ ‖a‖_m ‖b‖_m. In particular, if h ∈ A^1, then e^{λh} ∈ A^1 for every complex number λ; see also Lemma 3.3 below for a more precise estimate. Following the density properties established in [5] (see, e.g., Definition 2.2.15, p. 47), the intersection A^∞ = ⋂_{j≥0} A^j is a dense *-subalgebra of the C*-algebra A, which is stable under the derivations ∂_i.

3 Hodge-de Rham Dirac operator and C*-dynamical systems

In this article, we consider a fixed A, a C*-algebra with an ergodic action α of a compact Lie group G of dimension n. We write A for A^∞, the "smooth subalgebra" of A, which can alternatively be defined as the set of elements a ∈ A for which the map g ↦ α_g(a) is of class C^∞. The Chevalley-Eilenberg cochain complex with coefficients in A provides a complex that we interpret as "differential forms" on A.
For the reader's convenience and to fix notations, we provide a reminder of this construction. For all k ∈ ℕ, we set Ω^k := A ⊗ Λ^k g*, where g* denotes the space of linear forms on g, the Lie algebra of the Lie group G. Given a scalar product on g* (e.g., obtained from the Killing form), we can extend it to a scalar product on Λ^k g* by setting ⟨α_1 ∧ ⋯ ∧ α_k, β_1 ∧ ⋯ ∧ β_k⟩ := det(⟨α_i, β_j⟩), i.e., the determinant of the matrix of scalar products. We fix an orthonormal basis (ω_j)_{j=1,…,n} of g* for this scalar product and consider its dual basis (∂_j)_{j=1,…,n} in g. Following [41, the model of (4.6), p. 157], we write the exterior derivative d of the complex in terms of the derivations ∂_j and the structure constants c^k_{ij} of the Lie algebra g, determined by [∂_i, ∂_j] = Σ_k c^k_{ij} ∂_k. A lengthy but straightforward computation proves that this exterior derivative satisfies d² = 0.

Remark 3.1. The Chevalley-Eilenberg complex is available even for noncompact groups G and nonergodic actions. In other words, the square d² actually vanishes even when G is not a compact Lie group and when the action of G on A is not ergodic.

The product of a k-form with a k'-form is a (k + k')-form; in particular, for all k, Ω^k is an A-bimodule. The exterior derivative d is compatible with the right module structure in the sense of a graded Leibniz rule. Since we want to treat conformal deformations of the original structure, we follow [18] and fix a positive invertible element e^h ∈ A^1, where h is a smooth selfadjoint element of A^1. Then we define a scalar product on Ω^k by the formula (3.3), where ϕ_0 is the unique G-invariant state on A, which is actually a trace according to [1, Theorem 3.1, p. 8]. We set the scalar product of two forms of different degrees to vanish. The scalar product obtained for h = 0 is the one we call the natural scalar product on forms. We define the Hilbert space H_ϕ as the completion of Ω^• for the scalar product (3.3). In the particular case h = 0, we obtain our reference Hilbert space H. We will also need the Hilbert spaces H_{0,ϕ} := GNS(A, ϕ) and H_0 := GNS(A, ϕ_0), as well as the Hilbert spaces H_k := H_0 ⊗ Λ^k g*, i.e., the completion of the k-forms, and H_{k,ϕ}.

To understand why we choose the form (3.3) for the conformal deformation, we compare with the commutative case of an n-dimensional compact manifold M, where we have the following property: if the Riemannian metric is transformed by g ↦ λg (for λ > 0), then the (pointwise) norm of all vectors is multiplied by λ^{1/2} and thus the pointwise norm of 1-forms is multiplied by λ^{−1/2}. This in turn implies that the pointwise norm of a k-form is multiplied by λ^{−k/2}. Finally, the (global) scalar product of k-forms is the integral of the pointwise scalar products. Since under the conformal deformation the total volume of the manifold M is multiplied by λ^{n/2}, the (global) scalar products of k-forms are multiplied by λ^{n/2−k}. In particular, if n is even and k = n/2, then the scalar product on n/2-forms is left invariant under the conformal deformation.

In order to study d and its adjoint, we introduce the degree 1 maps T_j : Λ^• g* → Λ^{•+1} g*. Let R_x denote the right multiplication operator for any x ∈ A: R_x(a) = ax, and let B^{ik}_{αβ} be the bounded operators on Λ^• g* defined using the basis (ω_j). We can now give an explicit form to the operator d and its (formal) adjoint for the unperturbed metric.

Proof. The only point that is not self-explanatory is the behavior of ∂_j with respect to the trace ϕ_0, where we used the relations ∂_j(a)* = −∂_j(a*) and ϕ_0(∂_j(a)) = 0.
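For orientation, the exterior derivative referred to above can be written in its standard Chevalley-Eilenberg form. The display below is the textbook formula for Lie algebra cohomology with coefficients in the g-module A (the normalization of [41] may differ by a convention); here X_i · a denotes the action of X_i ∈ g on a ∈ A by the corresponding derivation, and the hat marks an omitted argument:

\[
(d\omega)(X_0,\dots,X_k)\;=\;\sum_{i=0}^{k}(-1)^{i}\,X_i\cdot\omega(X_0,\dots,\widehat{X_i},\dots,X_k)
\;+\;\sum_{0\le i<j\le k}(-1)^{i+j}\,\omega\big([X_i,X_j],X_0,\dots,\widehat{X_i},\dots,\widehat{X_j},\dots,X_k\big).
\]

In the bases (∂_j) of g and (ω_j) of g*, the second sum is what produces the structure-constant terms c^k_{ij} mentioned above.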
Let h be an element of A 1 and ∂ be an infinitesimal generator of G, acting as a derivation on A 1 , then ∂(e h ) is in the C * -algebra A and satisfies In particular, for a scalar parameter v → 0, ∂(e vh ) → 0. Proof . As an operator from A 1 to A, the derivation ∂ is continuous, therefore we can estimate ∂(e h ) by using an A 1 -converging sequence, like the partial sums of e h . For this sequence, using the derivation property, we get The property ∂(e vh ) → 0 as v → 0 follows immediately. We call d ϕ the operator defined on H ϕ by the formula (3.1). Once the scalar product (3.3) is defined, we want to define an adjoint d * ϕ to d ϕ for this scalar product. The Hodge-de Rham operator that we would like to study in fine is d ϕ + d * ϕ . However, the unbounded operator d ϕ is a priori arbitrary, so it is not clear that it admits a densely defined adjoint. To clarify the relations between H and H ϕ , we introduce the following lemma: Of course, L extends to L ⊗ Id : H → H ϕ which is still a continuous and invertible map. We denote its adjoint by H : H ϕ → H , whose explicit form is Finally, H and H ϕ are related by the unitary map U : H → H ϕ given on degree k forms by Remark 3.5. The map U defined above is unitary, but it does not intertwine the G-structures on H and H ϕ . Proof . Since the sum H ϕ = k H k,ϕ is finite, it suffices to check that V g is continuous on each H k,ϕ separately. Since V g does not act on • g * , it is enough to prove continuity on GNS(A,φ) whereφ(a) = ϕ 0 (ae −h k ) and h k = −(n/2 − k)h ∈ A 1 (corresponding to forms of degree k). We get for a constant K = e h k /2 α g −1 (e −h k )e h k /2 . The above (scalar) inequality follows from the inequality of operators which is valid since e h k /2 α g −1 (e −h k )e h k /2 is a positive operator. It then suffices to apply the positive functional ϕ 0 . Once we know that the map V g is defined on the full Hilbert space, proving that it is compatible with the left A-module structure is a formality. The process is similar for L: it is clear from the definition that if the map L exists, then it intertwines the G-equivariant left A-module structures on H and H ϕ . It remains to prove that L is well-defined and invertible. We first evaluate by the same argument as above. To prove that L is invertible, consider the norm of its inverse The evaluation of the adjoint H of L and of the unitary map U : H → H ϕ is an easy exercise. With the previous notations, we see that d ϕ = (L⊗Id)d(L⊗Id) −1 and thus (at least formally) d * ϕ = H −1 d * H. However, in order to facilitate the comparison between d ϕ + d * ϕ acting on H ϕ and d + d * acting on H , we "push" d ϕ + d * ϕ to H using the unitary U . This leads us to the operators D u studied in the Proposition 3.6 below. But first, for h = h * , we need to introduce the operators K u : H → H , where u ∈ R, defined by This is a one-parameter group of invertible selfadjoint operators. Moreover, following our remark on A 1 at the end of Section 2, for all u ∈ R, the operators K u preserve the space A 1 ⊗ • g * . In the proof below, we consider the orthogonal projections Π k : H → H k onto the completion of the k-forms, for all k ∈ {0, . . . , n}. Proposition 3.6. 
For all u ∈ [0, 1], we consider two unbounded operators defined on the dense We have: (1) if E +,u and E −,u are, respectively, the closures in H of the images of the operators d u and d * u , then E +,u and E −,u are orthogonal in H ; we denote by Π +,u and Π −,u , respectively, the orthogonal projections on these spaces; (2) the operator is essentially selfadjoint on a common core domain C = A 1 ⊗ • g * . The family of operators D u satisfies the estimate 5) where ω is any vector in the common selfadjointness domain and following Landau's notations, o v (1) stands for functions of v which tend to 0 for v → 0. We can choose these two functions independently of the parameter u ∈ [0, 1]. Proof . Regarding point (1), we start by proving the property for u = 0, i.e., for the untwisted case. There, following Lemma 3.4 the trace ϕ 0 is G-invariant and therefore the action V g of G on H 0 is unitary. Consequently, the G-representation can be decomposed into a direct sum of finite-dimensional G-representations. Let us denote by V one of these finite-dimensional spaces. It is clear from the definition (3.1) that both V ⊗ k g * ⊆ H and its orthogonal are stable under the action of d. Thus the restriction of d to the finite-dimensional space V ⊗ k g * is bounded and admits an adjoint d * whose form is given by Lemma 3.2. Varying the space V , we see that d * is defined on D, the algebraic direct sum of V ⊗ k g * , which is a dense subset of H . If we restrict to the case of forms ω, ω in the space V ⊗ k g * ⊆ H , we have dω, d * ω = d 2 ω, ω = 0, there are no considerations of domains for d and d * , since we consider finite-dimensional spaces. The same argument applied to different finite vector spaces V proves that E +,0 (the image of d) and E −,0 (the image of d * ) are orthogonal. Regarding point (2), let us start by giving a sketch of the proof: we first prove that D 0 = D is essentially selfadjoint on the requested domain. Using the estimate (3.5), we then apply Kato-Rellich theorem to show that if D u is essentially selfadjoint for the domain C, then so is the operator D u+v for all |v| ε, where ε is independent of the point u ∈ [0, 1] chosen. As a consequence, all operators D u are essentially selfadjoint for the fixed domain. We first prove that D 0 is selfadjoint. This is done by using the Peter-Weyl decomposition of H 0 for the unitary action V g of G on H 0 . As mentioned in point (1), the restriction of d to this finite-dimensional space V ⊗ k g * ⊆ H is well-defined, as is its adjoint d * . Varying the space V , we consider D, the direct sum of V ⊗ k g * , which is a dense subset of H . In this situation, we can define D = d + d * on D. If we restrict D to a component V ⊗ k g * ⊆ H , it is formally selfadjoint by definition. It therefore admits an orthonormal basis of eigenvectors with real associated eigenvalues. It follows that Ran(D + i) and Ran(D − i) are dense in H , and this is enough to prove that D is essentially selfadjoint on the domain C (see [50,Corollary,p. 257]). Let us now consider an arbitrary u ∈ [0, 1]. We want to find ε > 0 uniform in u and small enough so that for all v with |v| ε, the operator D u+v is essentially selfadjoint. By definition, If we introduce R v = K v − 1, then we can write It is clear that both R v and R −v are bounded with R v → 0, R −v → 0 for v → 0 and by definition, for all v ∈ R, R v commutes with K u for u ∈ R. We write The sum of the terms K u dK −u and K −u d * K u gives back D u . 
Since E +,u and E −,u are orthogonal, we have Both D u+v and D u are symmetric operators, so their difference (namely the sum Σ of the six remaining terms) is a symmetric operator. By hypothesis, D u is selfadjoint. According to Kato-Rellich theorem as stated in [51, Theorem X.12, p. 162], it therefore only remains to prove that C is also a domain for Σ and that for all ω ∈ C, where both real numbers a, b are positive and a < 1. It is clear from the definition of K ±u and R v that their actions preserve the core C of C 1 -functions on G and thus Σ(ω) has a welldefined meaning for all ω ∈ C. We decompose ω ∈ C into a sum ω = k ω k of C 1 -forms of degree k and start by an estimate of the different terms Σω k for any fixed k. Remember from Lemma 3.2 that d can be written where the different B i k αβ are bounded operators. We remark that the B i k αβ commute with right multiplications, like the one appearing in the definition of K u acting on an element of given degree. For ω = a ⊗ v of degree k, we write Taking a linear combination to treat the case of a sum ω = ω k , we get In this equality, j,k (R ∂ j (e −(n/2−k)hv ) ⊗ T j ) • Π k is a finite sum of bounded operators. As a consequence of Lemma 3.3, the norm of these operators tend to 0 for v → 0. We already know that R −v tends to 0 in norm for v → 0, we therefore get the estimate The two functions o v (1) can be taken uniform in u ∈ [0, 1], since [0, 1] is a compact. The term K u R v dK −u is easily treated: and then the estimate (3.7) enables us to write As a result, we get Lemma 3.2 affords a similar treatment of the term K −u d * R v K u , just replacing T j by T * j , B i k αβ by (B i k αβ ) * and c i k αβ by c i k αβ . We get an estimate Using the equation (3.6), which ensures that d u ω D u ω and d * u ω D u ω , we can combine (3.8) and (3.9) to show that the relation (3.5) is satisfied. We can therefore apply the Kato-Rellich theorem for all u ∈ [0, 1] and this proves that all D u (including D 1 ) have selfadjoint extensions with the same core C = A 1 ⊗ • g * . Remark 3.7. It appears from the proof of point (2) that we could also take A ∞ ⊗ • g * as core for the operator D 0 (using the Peter-Weyl decomposition). If we further assume h ∈ A ∞ , the rest of the proof applies verbatim and shows that all D u have a common core, namely A ∞ ⊗ • g * . Corollary 3.8. For all selfadjoint elements h ∈ A 1 and all parameters u ∈ [0, 1], the operators D u are n + -summable. Proof . In the untwisted case, i.e., for D 0 , we can follow the argument of Theorem 5.5 of [30] to prove that d + d * is n + -summable, where n is the dimension of G. Indeed, according to Proposition 2.3 from [35], as G-vector spaces, we have Moreover, the operator d + d * := D ref on this space is just the Hodge-de Rham operator on G and therefore it is n + -summable. Since D ref also preserves the finite-dimensional spaces V ⊗ k g * obtained by Peter-Weyl decomposition, the eigenvalues of |D| coincide with those of |D ref | except that they may have lower (and possibly zero) multiplicities. Consequently, the same computation as in [30] proves that D is n + -summable. To extend this property to all D u for u ∈ [0, 1], we first note that to prove D u is n + -summable, it suffices to show that the operator (D u + i) −1 is in the symmetric ideal L n + -as mentioned in Remark 2.5. The existence of the operator (D u + i) −1 is a consequence of Proposition 3.6. The discussion above proves that (D 0 + i) −1 is in this ideal. We then use [39,Theorem 1.16,p. 
196] to prove that if (D u +i) −1 ∈ L n + then for some ε > 0 small enough but independent of u ∈ [0, 1], and for any v in |v| ε, then (D u+v + i) −1 ∈ L n + . For all u, v, and to apply Kato's stability property, we need to give a relative bound on D u+v −D u , expressed in terms of D u + i. We are going to obtain this using the relation (3.5). Indeed, since we know that D is selfadjoint, Dξ, ξ = ξ, Dξ and thus which shows that Dξ (D + i)ξ . From this fact and (3.5), we deduce which let us apply [39,Theorem 1.16,p. 196] to D u + i and D u+v − D u , leading to the expression This expression shows that (D u+v + i) −1 is a product of (D u + i) −1 in the ideal L n + and a bounded operator. It is therefore itself in the ideal L n + and this completes the proof. The operator d u of Proposition 3.6 induces a cochain complex: Proof . We first treat the case of d (for h = 0). In this case, if x n → x and y n → x while both dx n and dy n converge, we want to prove that lim dx n = lim dy n . Consider any z ∈ H which lives in a finite-dimensional vector space V ⊗ • g * obtained from the Peter-Weyl decomposition. This ensures that Π + z is in V ⊗ • g * and thus in the domain of D. We then have z, Π + Dx n = DΠ + z, x n → DΠ + z, x ← z, Π + Dy n . Since we know that both dx n and dy n converge in H and that D, the algebraic direct sum of all V ⊗ • g * is dense, it is necessary that lim dx n = lim dy n and this proves that d is closable. It follows that the kernel ker(d) is closed. Since A 1 ⊗ • g * is a core for D, any x in the domain Dom(d) can be approximated by x n ∈ A 1 ⊗ • g * such that x n → x and dx n → dx. The density of A ∞ inside A 1 (as discussed at the end of Section 2) then provides an approximation of the original x ∈ dim(d) by y n ∈ A ∞ ⊗ • g * = Ω • . For this sequence y n , we know from Section 3 that d 2 y n = 0. By density, we obtain that (3.10) is a cochain complex. Similarly, for d u = K u dK −u if x n → x and y n → x while both d u x n and d u y n converge, we have K −u x n → K −u x ← K −u y n and K −u d u x n = dK −u x n , K −u d u y n = dK −u y n . Since d is closable, we get lim K −u d u x n = lim K −u d u y n , which suffices to prove that d u is also closable. The cochain property then follows from d 2 In the rest of this section, we will be interested in the reduced cohomology of the complex (3.10), namely the cohomology groups For any u ∈ [0, 1], let us write E 0,u for the kernel of D u . We have the following Hodge decomposition theorem for the conformally perturbed metric: Theorem 3.11. Let G be a compact Lie group of dimension n acting ergodically on a unital C * -algebra A. With the notations introduced previously, for any parameter u ∈ [0, 1], there is a decomposition of H into a direct sum of orthogonal Hilbert spaces Proof . The operator D u is selfadjoint with compact resolvent, as a consequence of Proposition 3.6 and Corollary 3.8. Thus, we have an orthogonal sum H = E 0,u ⊕ Ran(D u ). Following Proposition 3.6, Ran(D u ) = E −,u ⊕ E +,u and the sum is orthogonal, which proves the result. We call the restriction of D 2 u to H k the Laplacian on H k and denote it by ∆ k , which is thus an unbounded operator on H k , defined on the domain A ∞ ⊗ k g * . Note that ∆ k actually depends on our choice of conformal perturbation h ∈ A 1 . Corollary 3.12. Let H k (d u , H k ) be the cohomology groups introduced in (3.11), they identify naturally with the kernel of ∆ k , i.e., Remark 3.13. 
This Corollary implies in particular that these cohomology groups are finitedimensional, since ker(∆ k ) = ker(D u ) and D u has compact resolvent by Corollary 3.8. Proof . The cohomology group H k (d u , H k ) is defined as ker(d u,k )/Ran(d u,k−1 ). The Hodge decomposition Theorem 3.11 can be combined with the projections Π ± and Π k on H k to prove that H k = ker(∆ k ) ⊕ Ran(d u,k−1 ) ⊕ Ran(d * u,k ). We know that ker(d u,k ) = Ran(d * u,k ) ⊥ . Therefore ker(d u,k ) = Ran(d u,k−1 ) ⊕ ker(∆ k ) from which it follows immediately that H k (d u , H k ) = ker(d u,k )/Ran(d u,k−1 ) ker(∆ k ). Proposition 3.14. The cohomology groups H k (d u , H k ) are abstractly isomorphic to the nonperturbed (h = 0) cohomology groups H k (d, H k ). Proof . It is easy to check that ker(d u ) = K u ker(d) and E +,u = K u E +,0 . Thus, as abstract vector space, ker(d u )/E +,u = K u ker(d 0 )/K u E +,0 is finite-dimensional, with the same dimension as ker(d 0 )/E +,0 . Remark 3.15. The dimensions of ker(d u )/E +,u and ker(d 0 )/E +,0 are the same, but there are not "concretely isomorphic" for the scalar product we consider. The concrete realisation of ker(d u )/E +,u is {ω ∈ ker(d u ) : ∀ ω ∈ E +,u , ω, ω = 0}. However, K u does not preserve scalar products and therefore, ker(d u )/E +,u is not realised concretely by K u E 0,0 . In other words, K u E 0,0 is not the space of harmonic forms for D u . Conformally twisted spectral triples for C * -dynamical systems In the following theorem, we use the selfadjoint operator D u to construct spectral triples for the natural actions of the algebra A (with its left action on H ) and the algebra A op (acting on the right of H ). • the unbounded operator D u is the unique selfadjoint extension of defined on the core C = A 1 ⊗ • g * , the operator K u being defined by (3.4); • the grading operator γ is defined on degree k forms by For any fixed h ∈ A 1 and any u ∈ [0, 1], the data (A op , H 0 ⊗ • g * , D u ) with grading γ defines an even n + -summable twisted spectral triple, with the automorphism β on A given by β(a) = e hu ae −hu -we use this β to define an automorphism on A op . Remark 4.2. The morphism β defined above preserves the multiplication of A op . It also satisfies the relation unitarity condition (see [16, equation (3.4 Proof . It is clear from the definition of π that A is represented on H by bounded operators. The existence and uniqueness of the selfadjoint extension of D u is proved in Proposition 3.6, while the compact resolvent and finite summability properties are shown in Corollary 3.8. We now prove that the commutator of D u with a ∈ A 1 is bounded. To this end, we use the notations of Lemma 3.2 to decompose the operator D u . We call • Part (0) is the "bounded part" of D u , that is the terms • Part (I) consists of the terms • Part (II) consists of the terms Part (0) commutes with the left multiplication by a ∈ A 1 , and thus it does not contribute to the commutator. We therefore only need to estimate Parts (I) and (II) of D u (a ω) for It follows from these considerations that which is clearly a bounded function of ω for any a ∈ A 1 . Moreover, such a ∈ A 1 sends the core C of our selfadjoint operator D u to itself and following [48,Proposition A.1,p. 293], this suffices to ensure that a ∈ A 1 sends the domain of D u to itself. The algebra A of Definition 2.4 thus contains A 1 and is dense in the C * -algebra A. This completes the proof that (A, H , D u ) is a n + -summable spectral triple. 
It remains to study its parity: it is clear from the definition that γ sends the core C to itself and thus it leaves the full domain of the selfadjoint operator D u stable. Clearly, γ distinguishes only between H even := A ⊗ even g * and H odd := A ⊗ odd g * and π(a) leaves both spaces invariant, while D u is an odd operator. This proves that (A, H , D u ) with γ is an even spectral triple. The parity paragraph above applies verbatim to the spectral triple constructed from the right action of A op . The summability property is also conserved. It remains to investigate the bounded twisted commutators. Notice first that if a , h ∈ A 1 then both right multiplications by a and by β(a ) leave the core C of D u invariant and therefore the domain of D u is also stable under these right multiplication. In these two sums, the only terms that could lead to an unbounded contribution are those containing ∂ j (a), but these two terms cancel. At this point, we must perform the same computation on Part (II) to make sure that the automorphism β is also suitable for this case. A very similar computation proves that this it is indeed the case -the key property is that e −(n/2−k)hu e (n/2−(k+1))hu = e −hu = e (n/2−k)hu e −(n/2−(k−1))hu -and thus the operator (defined a priori only on C) Existence of a Chern-Gauss-Bonnet theorem for conformal perturbations of C * -dynamical systems In this section we show that the Hodge decomposition theorem proved in Section 3 indicates the existence of an analog of the Chern-Gauss-Bonnet theorem for the C * -dynamical systems studied in the present article. Let us explain the classical case before stating the statement for our setting. Indeed, because of the natural isomorphism between the space of harmonic differential forms and the de Rham cohomology groups, for a classical closed manifold M , the index of the operator d + d * : Ω even M → Ω odd M is equal to the Euler characteristic of M . On the other hand the McKean-Singer index theorem asserts that the index is given by where i = d * d + dd * is the Laplacian on the space of i-differential forms on M , and t is any positive number. This formula, furthermore, contains local geometric information as t → 0 + , since there is a small time asymptotic expansion of the form The coefficients a 2j ( i ) are local geometric invariants, which depend on the high frequency behaviour of the eigenvalues of the Laplacian and are the integrals of some invariantly defined local functions a 2j (x, i ) against the volume form of M . Independence of the index from t implies that the alternating sum of the constant terms in the above asymptotic expansions for i gives the index. Hence, using the Hodge decomposition theorem, In fact, the integrand in the latter coincides with the Pfaffian of the curvature form, which is a remarkable and difficult identification [2]. With notations and assumptions as in Section 3, we obtain the following result which indicates the existence of an analog of the Chern-Gauss-Bonnet theorem in the setting of C * -dynamical systems studied in this article. Theorem 5.1. The Euler characteristic χ of the complex (d u , H k ) is related to the odd index defined in (4.1) and is independent of the conformal factor e −h . Proof . The first equality is actually the definition of the Euler characteristic χ. The second equality is an immediate consequence of Corollary 3.12. The third equality and the last statement can be justified by using Remark 3.13 and Proposition 3.14. 
That is, ω ∈ ker(∆_k) means in particular that ω is in the domain of ∆_k, which is included in the domain of D_u (by definition). We then have 0 = ⟨∆_k ω, ω⟩ = ⟨D_u² ω, ω⟩ = ‖D_u ω‖², which proves that ω ∈ ker(D_u). For a k-form ω, the converse is obvious. It follows that ker(D_u^+) = ⊕_{k≥0} ker(∆_{2k}) and ker(D_u^−) = ⊕_{k≥0} ker(∆_{2k+1}), which yields ind(D_u^+) = dim ker(D_u^+) − dim ker(D_u^−) = Σ_k (−1)^k dim ker(∆_k) = χ. The dimensions of these groups are independent of the conformal factor e^{−h} as a consequence of Proposition 3.14. 2. An alternative proof of the index property using only bounded operators can be obtained using Sobolev spaces. For a clear account of these spaces and their analytic properties in our setting, we refer the reader to the paper [55]. Also, in order to have a complete analog of the Chern-Gauss-Bonnet theorem, one needs to find a local geometric formula for the index, which is proved above to be a conformal invariant. The heat kernels of the Laplacians of conformally perturbed metrics on certain noncommutative spaces, such as the noncommutative n-tori T^n_Θ, admit small-time asymptotic expansions of the same general form. In fact, for noncommutative tori, each Laplacian ∆_k is an elliptic selfadjoint differential operator of order 2, and asymptotic expansions of this form can be derived by using the heat kernel method explained in [31] while employing Connes' pseudodifferential calculus [12]. This method was indeed used in [17,18,25,26,28] for calculating and studying the term in the expansion that is related to the scalar curvature of noncommutative two and four tori. Going through this process for noncommutative tori T^n_Θ, one can see that the odd coefficients in the latter asymptotic expansion vanish, since their explicit formula in terms of the pseudodifferential symbol of ∆_k involves an integration of an odd function over the Euclidean space R^n (see [31, p. 54 and Theorem 1.7.6, p. 58]). Thus, in the case of the noncommutative torus T^n_Θ, we can write (5.1) in a form in which the local geometric invariants R_k are derived from the pseudodifferential symbols of the Laplacians ∆_k by a heat kernel method. This method was used, for example, in [17,26,28] for the computation of the scalar curvature of noncommutative two and four tori. The alternating sum of the R_k gives a noncommutative analog of the local expression for the Euler class. Summary and conclusions The Chern-Gauss-Bonnet theorem is an important generalization of the Gauss-Bonnet theorem for surfaces: it states that the Euler characteristic of an even-dimensional Riemannian manifold can be computed as the integral of a characteristic class, namely the Pfaffian of the curvature form, which is a local invariant of the geometry. In particular, it shows that the integral of this geometric invariant is independent of the metric and depends only on the topology of the manifold. The results obtained in this paper show that the analog of this theorem holds for a general ergodic C*-dynamical system, whose algebra and Lie group are not necessarily commutative. To be more precise, the family of metrics considered for a dynamical system is obtained by using an invertible positive element of the C*-algebra to conformally perturb a fixed metric defined via the unique invariant trace, and our result asserts the independence of a natural analog of the Euler characteristic from the conformal factor. Results of this type were previously proved for the noncommutative two torus T^2_θ.
That is, the analog of the Gauss-Bonnet theorem was proved in [18] and extended to general translation-invariant complex structures on these very important but particular C*-algebras in [25], where a conformal factor varies the metric. The differential geometry of C*-dynamical systems was developed and studied in [12], where the noncommutative two torus T^2_θ played a crucial role. However, the investigation of the analog of the Gauss-Bonnet theorem for T^2_θ, when the flat metric is conformally perturbed, was pioneered in [11], where, after heavy calculations, some noncommutative features seemingly indicated that the theorem does not hold. However, studying the spectral action in the presence of a dilaton [10], the development of the theory of twisted spectral triples [16], and further studies of examples of complex structures on noncommutative manifolds [40] led to convincing observations that the Gauss-Bonnet theorem holds for the noncommutative two torus. Then, by further analysis of the expressions and functions of a modular automorphism obtained in [11], Connes and Tretkoff proved the desired result in [18] for the simplest translation-invariant conformal structure, and the generalization of their result was established in [25] (where the use of a computer for the heavy computations was inevitable). It is remarkable that a non-computational proof of the Gauss-Bonnet theorem for the noncommutative two torus is given in [17]; it is based on the work [4], where the conformal index of a Riemannian manifold is defined using properties of conformally covariant operators and the variational properties of their spectral zeta functions. Therefore, since computations are enormously more involved in dimensions higher than two, it is of great importance to use spectral methods to show the existence of the analog of the Chern-Gauss-Bonnet theorem, which is presented in this article, not only for noncommutative tori but for general C*-dynamical systems. We have also paid special attention to the spectral properties of the analog of the Hodge-de Rham operator of the perturbed metric: we have proved its selfadjointness and shown that the spectral dimension is preserved. We have then shown that this operator gives rise to a spectral triple with the unitary left action of the algebra, and gives a twisted spectral triple with the unitary action of the opposite algebra on the right, generalizing the construction in [18] on the noncommutative two torus and providing abstractly a large family of twisted spectral triples.
11,852.4
2015-06-25T00:00:00.000
[ "Mathematics" ]
Data report: reconnaissance of bulk sediment composition and clay mineral assemblages: inputs to the Hikurangi subduction system This report provides a reconnaissance-scale assessment of bulk mineralogy and clay mineral assemblages in sediments and sedimentary rocks that are entering the Hikurangi subduction zone, offshore North Island, New Zealand. Samples were obtained from three sites drilled during Leg 181 of the Ocean Drilling Program (Sites 1123, 1124, and 1125) and 38 piston/gravity cores that are distributed across the strike-length of the margin. Results from bulkpowder X-ray diffraction show large variations in normalized abundances of total clay minerals and calcite. The typical lithologies range from clay-rich hemipelagic mud (i.e., mixtures of terrigenous silt and clay with lesser amounts of biogenic carbonate) to calcareous mud, muddy calcareous ooze, and nearly pure nannofossil ooze. Basement highs (Chatham Rise and Hikurangi Plateau) are dominated by biocalcareous sediment, whereas most deposits in the trench (Hikurangi Trough and Hikurangi Channel) and on the insular trench slope are hemipelagic. Clay mineral assemblages (<2 μm) change markedly as a function of geographic position. Sediment entering the southwest side of the Hikurangi subduction system is enriched in detrital illite (>60 wt%) relative to chlorite, kaolinite, and smectite. Normalized proportions of detrital smectite increase significantly toward the northeast to reach values of 40–55 wt% offshore Hawkes Bay and across the transect area for Expeditions 372 and 375 of the International Ocean Discovery Program. Introduction Expeditions 372 and 375 of the International Ocean Discovery Program (IODP) drilled five sites on the overriding and subducting plates of the Hikurangi convergent margin, offshore North Island, New Zealand (Figure F1). The project’s overarching goal is to understand the behavior and spatial distribution of slow slip events (SSE) along the plate interface (Saffer et al., 2017). Drilling focused on recovery of sediments, rocks, and pore fluids, acquisition of logging-while-drilling data, and installation of long-term borehole observatories. Interpretations of new compositional results from the transect area are challenging, however, because comparable information is almost nonexistent from other sectors along the strikelength of the margin. The closest Ocean Drilling Program (ODP) sites (1123, 1124, and 1125) are located far seaward of the Hikurangi Trough (Figure F1). Shipboard X-ray diffraction (XRD) measurements were not completed during that expedition, and only one published report contains postcruise compositional data (Winkler and Dullo, 2002). Far to the southwest, XRD data from the Canterbury Basin and Canterbury slope (Land et al., 2010; Villaseñor et al., 2015) are of limited value to Hikurangi studies because those sites capture a different system of detrital sources and dispersal routes off the South Island of New Zealand (Figure F1). The motivation for regional-scale reconnaissance of sediment composition is to provide better context for forthcoming interpretations of detrital provenance, sediment dispersal, and temporal evolution of sedimentary systems. Quantitative compositional data are also important for several ancillary reasons. 
The geologic hosts for slow slip events near the Hikurangi IODP transect likely include lithified and variably altered volcaniclastic sediments of Late Cretaceous age, but incorporation of other rock types in the fault zone (e.g., siliciclastic mudstone, altered basalt, marl, and nannofossil chalk) is also possible (Davy et al., 2008; Barnes et al., 2010; see the Expedition 372B/375 summary chapter [Saffer et al., 2019]). If chalk and marl are volumetrically significant at the depths where slow slip occurs, then the subducting carbonates might modulate fault-slip behavior by crystal plasticity (Kennedy and White, 2001) or diffusive mass transfer (Rutter, 1976). The purity of the marl/chalk is a critical variable, however, as is the extent of replacement of primary volcaniclastic constituents by expandable clay minerals (smectite group). How much clay is present in these lithologies, and which clay minerals are dominant? How variable are the lithologies along the strike-length of the margin? This report provides some preliminary compositional information to help answer those important questions.
To build an archive of relevant compositional data, samples were acquired for XRD analyses from ODP Sites 1123, 1124, and 1125 ( Figure F1) plus a representative distribution of piston/gravity cores along the strike-length of the Hikurangi margin. Prominent bathymetric features targeted by the sampling include (1) the Bounty Channel, which heads off the southeast coast of South Island and directs gravity flows toward the southeast (Lawver and Davey, 2005); (2) submarine canyons emanating from the Cook Strait sector between South Island and North Island (Mountjoy et al., 2009); (3) the Hikurangi Channel, which funnels gravity flows down the axis of Hikurangi Trough (toward the north-northeast) before bending sharply to the east (Lewis and Pantin, 2002); (4) the Ruatoria debris avalanche, which remobilized accreted trench sediments and slope deposits along the northernmost Hikurangi margin (Collot et al., 2001); and (5) two prominent basement highs on the subducting plate, Chatham Rise and the Hikurangi Plateau (Wood and Davy, 1994;Davy et al., 2008). In addition, the array of sampling sites encompasses the region's most influential ocean current, the Pacific Deep Western Boundary Current (DWBC) (Shipboard Scientific Party, 1999a;McCave et al., 2004). Another reason for reconnaissance-scale sampling was to calibrate shipboard and shore-based XRD computations. The method used during IODP Expeditions 372 and 375 (see the Expedition 372B/375 methods chapter ) depends on anal-yses of standard mineral mixtures (Fisher and Underwood, 1995;Underwood et al., 2003). Data from the standards were used to calculate a matrix of normalization factors with singular value decomposition (SVD), as well as a suite of regression equations that relate values of integrated peak area to mineral abundance (weight percent). Because of differences in XRD hardware, tube fatigue, and software, each individual instrument requires calibration and computation of its own set of normalization factors and/or regression equations. It is also important to blend the "correct" mineral mixtures using individual standards that match as close as possible to the natural mineral assemblages of a particular study area. The "wrong" blend of clay minerals, for example, or the "wrong" crystallinity of calcite will exacerbate errors in their calculated values of weight percent. Underwood et al. (in press) provided details regarding the standard mineral mixtures (both bulk powder and clay size), along with intralaboratory and interlaboratory tests of precision and comparisons of accuracy. The results reported herein were used to inform choices for the Hikurangi-specific standards. Samples A total of 61 specimens from Sites 1123, 1124, and 1125 were acquired from the Gulf Coast Repository in College Station, TX (USA). The lithologies range from hemipelagic mud (defined here as silty clay to clayey silt, or lithified equivalent, with subordinate biogenic carbonate) to marl (defined here as muddy calcareous ooze to calcareous mud or lithified equivalents) and chalk (>75% carbonate). Sample spacing for those specimens was designed to cover a representative spread of burial depths and ages for each site. A total of 91 specimens from 38 piston and gravity cores were acquired Figure F1. Index map of the offshore eastern New Zealand region with major bodies of sediment accumulation and likely pathways of sediment dispersal. Table T1 provides the geographic information for each coring station. 
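The report itself contains no code, but the calibration workflow described above — deriving a matrix of normalization factors from standard mineral mixtures by a singular value decomposition (SVD) based least-squares fit and then applying it to unknown samples — can be sketched roughly as follows. Every number, array, and variable name below is invented for illustration; this is not the Fisher and Underwood (1995) implementation.

```python
import numpy as np

# Invented calibration data: each row is one standard mineral mixture.
# Columns: integrated peak areas for total clay, quartz, feldspar, calcite.
peak_areas = np.array([
    [ 90., 280., 120.,  75.],
    [ 45., 360., 180.,  50.],
    [150., 120.,  60., 125.],
    [ 75., 240., 150., 100.],
    [105., 200., 180.,  50.],
])

# Known weight percent of the same four minerals in each standard.
true_wt_pct = np.array([
    [30., 35., 20., 15.],
    [15., 45., 30., 10.],
    [50., 15., 10., 25.],
    [25., 30., 25., 20.],
    [35., 25., 30., 10.],
])

# Solve peak_areas @ factors ~= true_wt_pct in the least-squares sense;
# numpy's lstsq uses an SVD-based solver, so `factors` plays the role of
# the matrix of normalization factors derived from the standards.
factors, *_ = np.linalg.lstsq(peak_areas, true_wt_pct, rcond=None)

# Apply the factor matrix to an unknown sample, then renormalize so that
# total clay + quartz + feldspar + calcite = 100 wt%.
unknown_areas = np.array([120., 256., 132., 90.])
raw = unknown_areas @ factors
normalized = 100.0 * raw / raw.sum()
print(dict(zip(["total clay", "quartz", "feldspar", "calcite"],
               np.round(normalized, 1))))
```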
The lithologic name assigned to each sample (e.g., lutite, clay-bearing nannofossil ooze, and silty clay) was taken from the original shipboard core descriptions (e.g., Shipboard Scientific Party, 1999c) without regard to potential inconsistencies in terminology or classification scheme. Shipboard descriptions of the piston/gravity cores were downloaded from the National Oceanic and Atmospheric Administration (NOAA) Index to Marine and Lacustrine Geological Samples (https://www.ngdc.noaa.gov/geosamples). Specific sample intervals in those cores cover a representative spread of lithologies, which is indicated by the shipboard descriptions of color, carbonate content, texture, and clay content. The NIWA cores and specific sample intervals in those cores were chosen by NIWA personnel (Lisa Northcote and Philip Barnes). Bulk-powder XRD The specimens of bulk sediment were freeze-dried, and splits were crushed to a fine powder using a mechanical mortar and pestle for 2.5-3.0 min. Those specimens were analyzed at the New Mexico Bureau of Geology and Mineral Resources as back-loaded random powders using a Panalytical X'Pert Pro diffractometer with Cu anode. Continuous scans were run at generator settings of 45 kV and 40 mA over an angular range of 5°-70°2θ. The scan step time was 5.08 s, the step size was 0.008°2θ, and the sample holder was spinning. Slits were fixed at 0.25 mm (divergence) and 0.1 mm (receiving), and the specimen length was 10 mm. Raw data files were processed using MacDiff software (version 4.2.5) to establish a baseline of intensity, smooth counts, and correct peak positions (relative to quartz) and to calculate peak intensities and peak areas. Figure F2 shows representative diffractograms with identification of the diagnostic peaks for total clay minerals, quartz, feldspar, and calcite. Values of integrated peak area were used to compute relative weight percent for total clay minerals, quartz, feldspar, and calcite following two approaches: (1) a set of polynomial regression equations that relate weight percent to peak area in standard mineral mixtures and (2) a matrix of SVD normalization factors also calibrated from analyses of standard mineral mixtures (Table T2). If a natural sample comes close to matching the compositional character of the standards, then the sum of the values of relative weight percent will be close to 100%. Sums greater than 100% usually mean mismatches of mineral crystallinity. Sums significantly less than 100% are usually caused by an abundance of solids (amorphous Figure F2. Representative X-ray diffractograms for random bulk-powder specimens. Diagnostic peaks for computation of weight percent are indicated for total clay minerals (Cl), quartz (Q), feldspar (F), and calcite (Cc). and/or crystalline) not included in the standard mix (e.g., volcanic glass, zeolites). All of the relative abundance values were normalized to total clay minerals + quartz + feldspar + calcite = 100%. The same analytical and computational approach was used for shipboard data acquisition during Expeditions 372 and 375 (see the Expedition 372B/375 methods chapter ). Underwood et al. (in press) provide a thorough assessment of interlaboratory precision between the R/V JOIDES Resolution and New Mexico Tech. Errors of accuracy for the standard mineral mixtures are smaller using the regression equations, as opposed to SVD normalization factors. 
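For concreteness, the regression-equation pathway can be sketched in the same spirit before returning to the accuracy figures. The calibration pairs and placeholder equations below are invented; only the final normalization step, which forces total clay + quartz + feldspar + calcite = 100 wt%, mirrors the procedure described above.

```python
import numpy as np

# Invented calibration pairs for one mineral (calcite): integrated peak
# area versus known weight percent in the standard mixtures.
calcite_area = np.array([40., 90., 170., 260., 380., 520.])
calcite_wt   = np.array([ 5., 15.,  30.,  45.,  65.,  85.])

# One polynomial regression equation per mineral; only calcite is actually
# fitted here, the other three are placeholder linear equations.
calcite_eq  = np.poly1d(np.polyfit(calcite_area, calcite_wt, deg=2))
clay_eq     = np.poly1d([0.09, 2.0])
quartz_eq   = np.poly1d([0.07, 1.0])
feldspar_eq = np.poly1d([0.08, 0.5])

equations = {"total clay": clay_eq, "quartz": quartz_eq,
             "feldspar": feldspar_eq, "calcite": calcite_eq}

# Peak areas measured on a hypothetical natural sample.
sample_areas = {"total clay": 310., "quartz": 240.,
                "feldspar": 210., "calcite": 150.}

raw = {m: float(eq(sample_areas[m])) for m, eq in equations.items()}
total = sum(raw.values())  # a raw sum far from 100% flags a compositional mismatch
normalized = {m: round(100.0 * v / total, 1) for m, v in raw.items()}
print(round(total, 1), normalized)
```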
The average errors (computed weight percent − true weight percent) are total clay minerals = 1.7%, quartz = 1.2%, feldspar = 1.6%, and calcite = 1.2% (Underwood et al., in press). Clay-size XRD The clay-sized fraction (<2 μm) was isolated from representative bulk samples of hemipelagic mud and mudstone. Carbonate-rich specimens (chalk, marl, etc.) were excluded from the clay-mineral reconnaissance because they would have required time-consuming acid digestion steps. Splits of the freeze-dried bulk sediment were transferred to 600 mL beakers, treated with 2% hydrogen peroxide to remove organic matter, and suspended in ~250 mL of sodium hexametaphosphate solution (4 g/1000 mL of distilled H 2 O). Beakers with suspended clay were inserted into an ultrasonic bath for several minutes to promote dispersion and retard flocculation. Suspensions were washed of solutes by two passes through a centrifuge (8200 rpm for 25 min; ~6000× g) with resuspension in distilled-deionized water after each pass. The suspended particles were then transferred to 125 mL plastic bottles and resuspended by vigorous shaking plus insertion of an ultrasonic cell probe for ~2 min. Claysize splits of each suspension (<2 μm equivalent spherical diameter) were separated by centrifugation (1000 rpm for 2.4 min; ~320× g). Preparation of oriented clay aggregates for the XRD scans followed the filter-peel method (Moore and Reynolds, 1989b) using 0.45 μm filter membranes and glass discs. Discs were placed in a closed vapor chamber at room temperature for at least 24 h to saturate the clay aggregates with ethylene glycol. This last step expands the interlayer of smectite to minimize overlap between the smectite (001) and chlorite (001) reflections ( Figure F3). The oriented glycol-saturated clay-size specimens were analyzed at the New Mexico Bureau of Geology and Mineral Resources using the same Panalytical X'Pert Pro diffractometer. Those scans were run at generator settings of 45 kV and 40 mA over an angular range of 2°-28.0°2θ and using a scan step time of 1.6 s, a step size of 0.01°2θ, and a stationary sample holder. Slits were fixed at 0.5 mm (divergence) and 0.1 mm (receiving), and the specimen length was 10 mm. Raw data files were processed using MacDiff software (version 4.2.5) to establish a baseline of intensity, smooth counts, correct peak positions (relative to quartz), and calculate peak intensities and peak areas. Representative diffractograms are shown in Figure F3 with identification of the diagnostic peaks for smectite, illite, undifferentiated chlorite + kaolinite, and quartz. Values of integrated peak area were used to compute relative and normalized abundance values for each of the common claysized minerals following three approaches: (1) Biscaye (1965) peakarea weighting factors, which are equal to 1× smectite, 4× illite, and 2× undifferentiated chlorite + kaolinite; (2) a set of regression equations where smectite + illite + undifferentiated (chlorite + kaolinite) + quartz = 100%; and (3) a matrix of SVD normalization factors, where smectite + illite + undifferentiated (chlorite + kaolinite) + quartz = 100% (Table T2). To permit comparisons among the three computational approaches, the weight percent values were also normalized to a clay-only assemblage of smectite + illite + undifferentiated (chlorite + kaolinite) = 100%. As described more fully by Underwood et al. 
(in press), errors of accuracy for the standard mineral mixtures are largest using the Biscaye (1965) weighting factors (as high as 18.6%), but those values are tabulated here to permit direct comparisons with data from previous studies (e.g., Winkler and Dullo, 2002). The average errors using the regression equations are illite = 3.0%, undifferentiated (chlorite + kaolinite) = 5.1%, and smectite = 3.9% (Underwood et al., in press). Bulk-powder XRD To illustrate spatial changes in the bulk-powder XRD results (Table T3), the data are grouped by geographic sector; all of the values on such plots are normalized abundances computed using the regression equations (Table T2). Figure F4 shows the sector offshore South Island, New Zealand. This sector includes the transect area for IODP Expedition 317 (Canterbury Basin), ODP Site 1119, Deep Sea Drilling Project (DSDP) Site 594, and distal reaches of the Bounty Channel. Samples from Cores VM16-122 and RR0503-06 display the greatest amount of compositional variability with calcite contents of 0.9 to 86.7 wt% and total clay contents of 2.2-38.8 wt%. Judging from those compositional ranges, the cores appear to contain both biocalcareous ooze and hemipelagic mud from Bounty Channel spillover. The remaining samples are typical hemipelagic muds (i.e., mixtures of terrigenous silt and clay plus subordinate biogenic carbonate). Their values of total clay content are 28.0-39.1 wt%; quartz content is 17.0-38.1 wt%, feldspar content is 20.6-36.3 wt%, and calcite content is 0.8-32.1 wt%. Those compositions are in general agreement with bulk-powder XRD data from the Canterbury Basin (Villaseñor et al., 2015), although quantitative comparisons should be made with caution because of differences in methodology. Samples analyzed from the Chatham Rise sector ( Figure F5) include those from nine piston/gravity cores and two ODP sites. The one specimen from Core RC09-111, located in the distal northeast corner of the sector, is nearly devoid of calcite and contains 58.1 wt% total clay. Water depth at that coring station is 4777 m below sea level, so the provisional interpretation is depletion of carbonate due to deposition (and dissolution) below the calcite compensation depth (CCD). Core samples from shallower water depths along the crest and northern flank of Chatham Rise are consistently enriched in calcite, with normalized percentages of 34.3-88.7 wt%. The total clay content is 5.5-34.7 wt%. Normalized percentages of quartz range 3.1-16.5 wt%, and values for feldspar are 2.6-14.5 wt%. These compositions are consistent with a continuum of mixing between biogenic carbonate ooze and subordinate siliciclastic silt + clay (i.e., calcareous mud to nearly pure nannofossil ooze). Cores from the southern flank of Chatham Rise (RR0503-10, RR0503-11, and RC09-108) are more variable in composition, with calcite contents of 4.1-83.7 wt% and total clay contents of 4.8-39.9 wt%. The occurrences of mud with low percent calcite values are consistent with spillover from the distal reaches of Bounty Channel, which meanders across the southern edge of the sector (Figure F5). Site 1123 was drilled on the northeastern flank of Chatham Rise at a water depth of 3290 m ( Figure F5). The main purpose for drilling there was to document the long-term effects of the DWBC on sedimentation (Shipboard Scientific Party, 1999b). Coring extended to a total depth of 632.8 meters below seafloor (mbsf ), and the strata range in age from late Eocene to Quaternary (Shipboard Scientific Party, 1999b). 
The lithologies include clay-rich nannofossil ooze, nannofossil chalk, nannofossil-rich mudstone, and micrite ( Figure F6). Beds are typically 1-1.5 m thick and distinguished by color variations (greenish gray to white) caused by differing proportions of biogenic carbonate and terrigenous clay. Values of CaCO 3 (from shipboard coulometric measurements) are highly scattered throughout the section and range 10%-84% (Shipboard Scientific Party, 1999b). XRD analyses of 20 samples (Table T3) confirm the compositional variability ( Figure F6); normalized calcite abun-dances are 34.8-85.5 wt% (average = 69.3 wt%). Contents of total clay minerals range 6.5-42.0 wt% (average = 17.5 wt%), and the proportions of both quartz and feldspar remain consistently <12 wt%. Site 1125 is located on the north flank of Chatham Rise at a water depth of 1360 m (Figure F5). This site lies beneath a zone of high primary productivity (the Subtropical Convergence), but sedimentation is also modulated by two surface currents, the East Cape Current, which carries suspended terrigenous sediment from eastern North Island, and the Southland Current, which flows up the east coast of South Island before turning east to parallel the crest of Chatham Rise (Shipboard Scientific Party, 1999d). The cores recovered from Site 1125 comprise a Miocene to Quaternary succession of clay-rich nannofossil ooze and chalk alternating with layers more enriched in terrigenous silt and clay (Shipboard Scientific Party, 1999d). Coulometric measurements of CaCO 3 were not completed at this site, but XRD analyses of 18 samples (Table T3) show consistently high concentrations of calcite (58.0-86.6 wt%; average = 73.9 wt%). Normalized proportions of total clay minerals range 4.6-19.9 wt% (average = 11.0 wt%), whereas the contents of both quartz and feldspar are consistently <11 wt%. The contributions of terrigenous silt and clay are slightly higher in the lower 170 m of the section ( Figure F7). The Hikurangi Channel (Figure F8) is the primary conduit for turbidity currents and related gravity flows entering Hikurangi Trough from the southwest (Lewis et al., 1998;Lewis and Pantin, 2002). The channel extends >2000 km from feeder canyons that head in the Cook Strait and along the southeast coast of South Island (Lewis, 1994;Mountjoy et al., 2009). Six hemipelagic mud samples were analyzed from four cores in the channel-levee complex ( Figure F8). The mud specimens contain 33.4-43.7 wt% total clay minerals, 20.4-31.0 wt% quartz, 20.6-28.7 wt% feldspar, and 6.6-14.8 wt% calcite (Table T3). These bulk compositions are similar to the values documented in Cores VM16-123 and RC08-76 offshore southeast South Island ( Figure F4). The Hawkes Bay sector offshore North Island lies farther to the northeast along the subduction margin ( Figure F9). A total of 10 samples were analyzed from giant piston Core MD2121, on the landward trench slope, to test for compositional variability within the Holocene and latest Pleistocene. Normalized concentrations of total clay minerals in that core range from 35.2 to 41.1 wt%. The content of quartz is 23.6-28.2 wt%, and feldspar varies between 20.1 and 23.3 wt%. Values for calcite range from 12.3 to 19.1 wt%. Nearby Cores RR0503-52 and RR0503-56 yielded similar bulk compositions (Table T3). One sample from Core MD06-2997, on the outer continental shelf, contains slightly more quartz (30.1 wt%) balanced by a reduction of calcite (8.0 wt%). 
Two samples from Core VM18-231, recovered from the seaward levee of Hikurangi Channel, contain 42.1-42.4 wt% total clay, 23.5-27.5 wt% quartz, 19.5-23.7 wt% feldspar, and 6.7-14.6 wt% calcite (Figure F9), which is similar to samples from the upstream reaches of the Hikurangi Trough ( Figure F8). The IODP transect region ( Figure F10) includes the five sites drilled during Expeditions 372 and 375 (see the Expedition 372B/375 summary chapter ), the Ruatoria debris avalanche, and a transverse submarine canyon in the Poverty reentrant. Three giant piston cores from this sector (MD06-3003, Table T3. Results of bulk-powder X-ray diffraction and computations of mineral abundance. See Table T1 MD06-3008, and MD06-3009) were described and interpreted in detail by Pouderoux et al. (2012). With one exception (Site U1751; 52.3 wt% calcite), the bulk compositions from these cores mirror those of sediments in the Hawkes Bay sector to the southwest (Figure F9). Normalized abundance is 20.6-43.1 wt% total clay, 23.1-39.1 wt% quartz, 19.1-33.9 wt% feldspar, and 4.1-14.7 wt% calcite ( Table T3). Samples from the Ruatoria debris avalanche and vicinity reveal no significant differences in bulk composition compared to undeformed slope sediments to the southwest ( Figure F10). Core MD06-3004, on the continental shelf, contains less total clay (20.6- Figure F4. South Island sector of the reconnaissance study area with positions of sample sites and normalized proportions of dominant minerals in random bulk-powder specimens (computed using regression equations). Deep Sea Drilling Project (DSDP) Site 594 was cored during Leg 90. Ocean Drilling Program (ODP) Site 1119 was cored during Leg 181 (Shipboard Scientific Party, 1999b). The transect across Canterbury Basin was cored during Integrated Ocean Drilling Program Expedition 317. Geographic information for the piston and gravity cores is listed in Table T1. X-ray diffraction results are tabulated in Table T3. 24.2 wt%) and higher proportions of quartz (37.8-39.1 wt%) and feldspar (31.9-33.9 wt%) ( Table T3). It's reasonable to surmise that relatively small contrasts in composition compared to nearby slope and trench-floor deposits are probably related to the grain size distribution because quartz and feldspar are expected to increase in the silt and fine sand fractions. Hikurangi Plateau is the dominant bathymetric feature seaward of the IODP transect sector (Figure F11). The Hikurangi Channel meanders toward the east across the plateau before bending north in the vicinity of Site 1124 (Lewis, 1994;Davy et al., 2008). Six cores were sampled from along the southern flank of the plateau. With two clay-rich exceptions (Cores RR0503-31 and RR0503-48), these Figure F5. Chatham Rise sector of the reconnaissance study area with positions of sample sites and normalized proportions of dominant minerals in random bulk-powder specimens (computed using regression equations). Geographic information for Ocean Drilling Program (ODP) Sites 1123 and 1125 and the piston-gravity cores is listed in Table T1. X-ray diffraction results are tabulated in Table T3. 178°E 180°178°W sediments cover the spectrum from calcareous mud to muddy calcareous ooze. Contents of total clay minerals range from 15.0 to 40.2 wt%, and the proportion of calcite ranges from 25.4 to 67.1 wt%. Contents of quartz and feldspar are <20 wt%. One core from the southernmost Kermadec Trench was also sampled ( Figure F11). 
Mud from that core contains 37.9-40.8 wt% clay minerals, 24.4-29.4 wt% quartz, 21.7-26.1 wt% feldspar, and 6.7-13.0 wt% calcite. Site 1124 is located on a north-south-trending ridge of drift sediment at a water depth of 3978 m (Shipboard Scientific Party, 1999c). The main goal of drilling at Site 1124 was to obtain a record of Miocene sedimentation under the influence of the DWBC. In addition, a sequence of Pleistocene turbidites laps onto the drift sediments from the west, having spilled over the right bank of Hikurangi Channel (Shipboard Scientific Party, 1999c). Coring reached a total depth of 473 mbsf with recovery of sedimentary rocks as old as Late Cretaceous (Figure F12). The common lithologies include clay-bearing nannofossil ooze, nannofossil-bearing mud, chalk, mudstone, zeolitic mudstone, and chert. Coulometric measurements revealed large variations in the abundance of CaCO 3 ; values range 1.5-88.3 wt% with considerable scatter throughout the section (Shipboard Scientific Party, 1999c). Bulk XRD data from 23 samples are generally consistent with the heterogeneous nature of core descriptions (Figure F12). The normalized abundance of total clay minerals ranges 5.8-60.7 wt%, and the abundance of calcite varies between 3.0 and 91.3 wt% (Table T3). Quartz values range 1.5-17.9 wt%, and feldspar content ranges 1.4-19.6 wt%. In general, the content of total clay increases downsection from the seafloor to ~300 mbsf, with a corresponding decrease in calcite ( Figure F12). Conversely, the lower 125 m of the section is generally more enriched in calcite. Figure F6. Generalized stratigraphic column for Ocean Drilling Program Site 1123 (modified from Shipboard Scientific Party, 1999c) with normalized values of major minerals in random bulk-powder specimens computed using regression equations and singular value decomposition (SVD) normalization factors (Table T2). X-ray diffraction results are tabulated in Table T3 Clay mineral assemblages A smaller subset of the total suite of bulk sediment specimens was used to generate initial results for the clay-sized fraction. This effort concentrated on lithologies with relatively high proportions of total clay minerals (Table T4) to reveal possible first-order changes in clay along the strike length of the Hikurangi margin. Figure F13 shows the geographic distribution of clay-size results plotted as normalized proportions of clay minerals (i.e., where smectite + illite + undifferentiated [chlorite + kaolinite] = 100%). Proportions of smectite overall range 5.0-54.2 wt%. Weight percent values for illite range 35.7-67.4 wt%, and the proportion of undifferentiated chlorite + kaolinite ranges 9.4-27.8 wt%. The spatial changes, however, are noteworthy. The region offshore South Island and continuing into the proximal reaches of Hikurangi Channel is dominated by detrital illite, with normalized proportions of 61-71 wt% (or 53%-64% using Biscaye weighting factors; Table T4). Proportions of smectite begin to increase significantly near 41°S (Core VM18-230), and smectite remains the dominant clay throughout the Hawkes Bay sector and the IODP transect area (Figure F13), with most values >40 wt%. The crystallinity index for illite is consistently between 0.42Δ°2θ and 0.59Δ°2θ ( Table T4). The illite-rich clay assemblage (e.g., South Island sector) tends to contain more crystalline illite (i.e., narrower peak) as compared to smectite-rich specimens (broader peak). 
The range of crystallinity values spans the domain from advanced diagenesis to anchimetamorphism (i.e., incipient greenschist facies). Given the near-seafloor position of the cored intervals, these results should be regarded as indicators of geologic conditions in the detrital source areas rather than in situ burial diagenesis. Smectite expandability ranges 46%-86% (Table T4). In a generic sense, the lower values (less expandability) are consistent with higher proportions of detrital I/S mixed-layer clay in the assemblage, whereas higher values are indicative of more discrete smectite in the assemblage from altered volcanic sources. Percentages of illite within the I/S mixed-layer phase range 1.1%-21.8% (Table T4). Those measurements were possible only for smectite-rich specimens with intensities high enough to resolve the I/S 002/003 peak. Again, these results should be regarded as indicators of geologic conditions in the detrital source areas rather than in situ burial diagenesis. Comparisons between these reconnaissance results for clay minerals and data from published XRD studies in the region are unreliable because of differences in methodology. For example, Land et al. (2010) analyzed the clay mineral assemblages from ODP Site Figure F7. Generalized stratigraphic column for Ocean Drilling Program Site 1125 (modified from Shipboard Scientific Party, 1999e) with normalized values of major minerals in random bulk-powder specimens computed using regression equations and singular value decomposition (SVD) normalization factors (Table T2). X-ray diffraction results are tabulated in Table T3. 1119 in the Canterbury Basin ( Figure F4), but their computations of relative abundance were based on proportions of raw peak-area values without application of weighting factors. Qualitatively, those results show that the mineral assemblage in Canterbury Basin is dominated by illite and chlorite with minor to trace amounts of smectite. In terms of qualitative temporal trends, the data from Site 1119 reveal higher smectite values in the lower Pliocene with consistent decreases upsection into the upper Pliocene and Pleistocene (Land et al., 2010). Accuracy errors in the reported percentages, however, are probably >25%. Computations without weighting fac-tors result in underestimated values for illite, and the values for chlorite are overestimated. Bulk-powder data from Villaseñor et al. (2014) also reveal low concentrations of detrital smectite in the Canterbury Basin sediments relative to illite, muscovite, and chlorite. Carbonate-rich samples from ODP Site 1123 ( Figure F5) were analyzed by Winkler and Dullo (2002) after treating the specimens with acetic acid to remove CaCO 3 . Their computations of relative abundance utilized Biscaye (1965) weighting factors (Table T2), so semiquantitative comparisons are possible with the results reported Figure F8. Proximal Hikurangi Channel sector of the reconnaissance study area with positions of sample sites and normalized proportions of dominant minerals in random bulk-powder specimens (computed using regression equations; Table T2). Geographic information for the piston-gravity cores is listed in Table T1. X-ray diffraction results are tabulated in Table T3. (Winkler and Dullo, 2002). Smectite content decreases from maximum values of ~80% at 33 Ma to ~20%-30% by 1-2 Ma. Conversely, proportions of illite and chlorite both increase over the same timespan. Robert et al. 
(1986) documented a similar temporal trend at DSDP Site 594 (Figure F5;Shipboard Scientific Party, 1986). In sediments younger than 6 Ma, the content of illite ranges 41%-57% (Winkler and Dullo, 2002), overlapping the range of values reported here for offshore South Island and proximal Hikurangi Channel (Table T4). Figure F9. Hawkes Bay sector of the reconnaissance study area with positions of sample sites and normalized proportions of dominant minerals in random bulk-powder specimens (computed using regression equations; Table T2). Geographic information for the piston-gravity cores is listed in Table T1. X-ray diffraction results are tabulated in Table T3. Table T2). Geographic information for the piston-gravity cores is listed in Table T1. X-ray diffraction results are tabulated in Table T3. 177°178°179°E 180°T AN0810 Table T2). Geographic information for Ocean Drilling Program (ODP) Site 1124 and the piston-gravity cores is listed in Table T1. X-ray diffraction results are tabulated in Table T3. Figure F12. Generalized stratigraphic column for Ocean Drilling Program Site 1124 (modified from Shipboard Scientific Party, 1999d) with normalized values of major minerals in random bulk-powder specimens computed using regression equations and singular value decomposition (SVD) normalization factors (Table T2). X-ray diffraction results are tabulated in Table T3. Site 1124 Quat. Table T4. Results of clay-size X-ray diffraction and computations of mineral abundance. See Table T1 for sample locations and lithologies. Download table in CSV format. Conclusions Reconnaissance-scale XRD analysis of bulk sediment reveals large variations in proportions of biogenic calcite and terrigenous clay minerals across the Hikurangi subduction system. Most specimens from subducting bathymetric highs (Chatham Rise, Hikurangi Plateau) are composed of fine-grained biocalcareous sediment (calcareous mud, muddy calcareous ooze, and nannofossil ooze), whereas clay-rich hemipelagic mud typifies the Hikurangi Trough and landward trench slope. Locally, the carbonate-rich and clayrich sediments are interbedded. The clay mineral assemblage changes significantly along the strike length of the margin. Detrital illite is the dominant clay mineral from offshore South Island into the southwest portion of the Hikurangi Trough. Smectite becomes the dominant clay mineral in the trench-forearc domain between 41°S latitude and the Kermadec Trench.
7,686
2020-03-03T00:00:00.000
[ "Geology" ]
Single mode quadrature entangled light from room temperature atomic vapour We analyse a novel squeezing and entangling mechanism which is due to correlated Stokes and anti-Stokes photon forward scattering in a multi-level atom vapour. Following the proposal we present an experimental demonstration of a 3.5 dB pulsed frequency-nondegenerate squeezed (quadrature entangled) state of light using room temperature caesium vapour. The source is very robust and requires only a few milliwatts of laser power. The squeezed state is generated in the same spatial mode as the local oscillator and in a single temporal mode. The two entangled modes are separated by twice the Zeeman frequency of the vapour, which can be widely tuned. The narrow-band squeezed light generated near an atomic resonance can be directly used for atom-based quantum information protocols. Its single temporal mode characteristics make it a promising resource for quantum information processing. Introduction Quadrature entangled states of light occupy an important place in modern quantum information processing, with examples including quantum cryptography [1], teleportation [2], computation [3] and, last but not least, improving Quantum Non-Demolition (QND) measurements [4]. Although continuous-variable entanglement and squeezing is a fragile resource easily washed out by optical losses, proposals for continuous-variable error correction [5,6], entanglement purification [7] and distillation [8] have been developed. A sub-threshold optical parametric amplifier (OPA), first used to generate quadrature entangled light in [9], remains the mainstream approach [10,11]. This approach offers design flexibility but at the same time requires a relatively complicated and sensitive setup consisting of two or more stabilized cavities. Other possibilities include the use of four-wave mixing (4WM) [12]. This and similar methods, however, generate multimode temporal and/or spatial entanglement, that is, the output light of the amplifier contains pairs of frequencies which are independently quadrature entangled. This may have adverse consequences for some applications. For example, if multiple modes are used in quantum cryptography, modes not detected by the legitimate addressee of the information can carry the same signal. In principle they can be detected unnoticed, compromising the safety of the protocol. Also, a multimode character of the squeezed light is frequently a disadvantage for protocols involving photon counting, such as entanglement purification [8] or preparation of squeezed single-photon states [13,14,15]. In this paper we report the proposal and experimental observation of a high purity entangled state which at the same time is a pure two-mode squeezed state. The source is based on an interaction of off-resonant driving light with room-temperature spin-polarised caesium vapour placed in a dc magnetic field in a relaxation-protected environment. The squeezed field is naturally compatible with atomic memories based on the same alkali atom as the source. The non-classical state of light is generated in a pair of frequency sidebands shifted by the Zeeman frequency around the driving field (the carrier), with a unique temporal envelope and the same spatial mode as the driving light beam. The frequency of the entangled modes can be tuned by the magnetic field, and the bandwidth of squeezing can be adjusted by changing the driving light power or the optical depth of the atomic sample. In the present experiment ultra-narrowband squeezing with approximately 1 kHz bandwidth has been generated. These features make it an attractive alternative to existing sources for applications in quantum information processing (QIP), especially those vulnerable to spurious modes. Our method requires almost no alignment and three diode lasers, each of a few mW power, two for optical pumping and one for off-resonant driving.
Fig. 1. Off-resonant double Λ interaction in caesium in the presence of the magnetic field. The interaction is driven by a strong, linearly polarised probe beam detuned by ∆ = 0.85 GHz from the extreme atomic resonance. The ground state levels are split by Ω_L = 322 kHz. The collective scattering leads to weak atomic excitation to the m_F = 3 state, described by an annihilation operator b̂, and to Ω_L sidebands of the probe light, described by annihilation operators â_+ and â_−.
Model of the interaction Let us consider an ensemble of atoms (Fig. 1) with the total ground state angular momentum F > 1/2 driven by a strong off-resonant linearly-polarized probe light. Prior to the interaction we optically pump all atoms into the extreme magnetic sublevel m_F = F in the electronic ground state. The orientation is thus orthogonal to both the propagation and polarization direction of the driving field. The probe light is seen as a superposition of σ_+ and σ_− polarizations in the quantisation basis. Qualitatively, the generation of squeezed light by such a system can be explained via correlations between forward-scattered π-polarized photons. The driving light leads to anti-Stokes scattering into m_F = F − 1 followed by Stokes scattering back to m_F = F. In alkali atoms, for the polarization of the driving field perpendicular to the orientation, the second transition is more probable and therefore almost every anti-Stokes photon is accompanied by a Stokes twin with anticorrelated phase. This property is mediated by the collective atomic excitation, as in the quantum repeater utilizing the DLCZ protocol [16]. The correlation persists over a period of several milliseconds, the typical lifetime T_2 of the atomic ground state coherences in our experiment. If the entire process happens on a timescale shorter than T_2, then the emitted light is in a single mode. The more photon-atom pairs are created and the more atomic excitations are released into light, the higher is the degree of squeezing, which is defined by the number of scattered photons. A general theory of off-resonant interaction of this type neglecting atomic motion has been discussed in [17] and broadband squeezing was predicted in [18].
The interaction between light and atoms in our system can be described using two creation operators -â † for the scattered photons andb † for the collective atomic excitation into the magnetic sublevel m F = F − 1. The Hamiltonian of interaction can be written as:Ĥ where χ a and χ p are the coupling constants, describing Stokes and anti-Stokes scattering. At the elementary interaction level χ aâ †b † + H.c. describes the active part of the interaction, that is photon-atom entanglement, while χ pâ †b + H.c. describes the passive part of the interaction, that is a beam splitter -like exchange of excitations between photons and atoms. In case of realistic multilevel atoms the passive and active coupling χ p and χ a in Eq. (1) can have substantially different magnitudes. The interaction Hamiltonian can be written asĤ int =hχ(p apb + ξ 2x axb ) where χ = χ p + χ a , ξ 2 = (χ p − χ a )/(χ p + χ a ) = 14a 2 /a 1 , a 1 , a 2 are the vector and tensor parts of the atomic polarizability, and being p quadrature operators associated with light and atoms (for details of the Hamiltonian derivation see the Appendix). For short interaction times for alkali atoms with F > 1/2 the second term inĤ int can be neglected and the Hamiltonian reduces to the quantum nondemolition (QND) Hamil-tonianĤ QND = 2hχp apb extensively used for spin squeezing, quantum memory and teleportation protocols [4,19]. For atoms with F = 1/2 the passive and active couplings in Eq. (1) have equal magnitudes χ a = χ p and the Hamiltonian is always of the QND type. However for the stronger coupling/longer interaction time with alkali atoms the complete Hamiltonian Eq. (1) has to be applied which leads to new attractive dynamics. If the passive part of the interaction prevails χ p > χ a , which is the case analyzed in this paper, ξ is real and the interaction leads to swapping of the quantum states between light and atoms as discussed below. On the contrary the case of χ a > χ p , i.e. of imaginary ξ leads to entanglement between light and atoms. For a rigorous derivation of the atom-photon dynamics we start with the Hamiltonian in the presence of the magnetic field. As illustrated in Fig. 1 the ground state sublevels experience Zeeman splitting with the Larmor frequency Ω L . Therefore the Stokes scattering produces a photon in an upper sideband of the probe, described by the creation operatorâ † + , while the anti-Stokes scattering couples to a lower sideband of the probe, described by the creation operatorâ † − . Thus the interaction Hamiltonian is reexpressed in the following way:Ĥ L is the length of the cell and we have omitted the space and time dependence of operators for brevity. The operatorb annihilates an atomic excitation in an atomic slice around a given z: where N a is the total number of atoms in the cell, n is the number of atoms in the slice and k indexes the atoms. In the limit of almost all atoms residing in the m F = 4 stateb is a bosonic operator, [b(z),b † (z)] = δ (z − z ′ ). Analogously,â + andâ − denote an annihilation operator for the upper or lower sideband photonic mode around a given z at a certain time with [â + (z,t),â † + (z,t ′ )] = δ (t − t ′ ). We derive the input-output relations from this Hamiltonian under two assumptions. We assume that the light passes through the cell on the timescale much shorter than the total time of the interaction or the atomic transient time. 
Next we assume that the atoms see an average over the volume polarization of the light since they move fast compared to the evolution of their internal state. Therefore the collective atomic mode interacting with light is uniform across the cell with the corresponding annihilation operatorb = L 0 dzb(z)/ √ L. We then obtain for the light modes: denote the operators at the input/output plane of the cell, and we assume that the passage is instantaneous. Note, that the state of light inside the cell changes linearly with the coordinate z fromâ ± (t) toâ ′ ± (t). To calculate the atomic state evolution we integrate the Heisenberg equation for the local atomic polarisationb over the length of the cell to obtain an equation describing the evolution of the entire ensemble: where the factor γ sw = |χ p | 2 /2 − |χ a | 2 /2 takes into account the linear change of a + and a − along the cell, as follows from Eq. (4) . We will see that γ sw is the rate at which the initial state of the atoms decays and is replaced by the state of the incoming light, hence we refer to it as a swap rate. This equation can be readily solved and yields the single cell input-output relations. However the result is complicated and not useful for our current purpose, because we have disarrayed the simple structure of the Hamiltonian by applying the magnetic field. Fortunately, similarly to the QND case this can be rectified by letting the light interact with two atomic ensembles in series [20,21]. The ensembles are placed in equal magnetic fields, with the atoms pumped into opposite extreme magnetic states. Then the upper and lower sidebands interchange their roles. To obtain the combined two-cell input-output relations we write the Eqs. (4) and (5) for both cells, using the output light of the first cell as the input for the second cell. Instead of sideband operators a + and a − we shall use the sine and cosine combinationŝ For the atoms we use collective annihilation operators defined asb c = (b 1 +b 2 )/ √ 2 andb s = i(−b 1 +b 2 )/ √ 2 whereb 1 andb 2 are annihilation operators for cell 1 and 2 respectively. We arrive at: where ′′ denotes the output face of the second cell. We also get an identical pair of equations forâ s andb s operators. Below we shall focus on the equations for c operators implicitly using the fact, that the other solutions will be identical. Integrating the above equations gives: Those equations can be cast into a simple form once suitable temporal modes for the light are introduced. For probe pulses of duration T we choose: whereX L andP L are quadrature operators for the new modes of light, N in and N out are normalization factors while the dots stand for contribution from modes orthogonal to N in e γ sw t and N out e −γ sw t respectively. We also find it convenient to introduce the quadrature operatorsX A andP A for the atomic state: With these notations the input-output relations reduce to: where κ = 1/ξ 1 − exp(−2γ sw T ) is the coupling constant. These input-output relations have a simple and interesting form. They describe a beam splitter transformation with transmission 1 − ξ 2 κ 2 and squeezing of the inputs by a factor of 1/ξ . This is schematically depicted in Fig. 2. In particular after a long enough interaction time T ≫ γ −1 sw the initial state of the atoms is squeezed and mapped onto the light, and vice versa. If the atoms are initially in the coherent spin state (CSS), the output light will be in a squeezed state with a squeezing factor equal 1/ξ , i.e. Var(P ′ L ) = ξ 2 . 
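The input and output temporal modes and the asymptotic squeezing of the relations above can be made concrete with a short numerical sketch (Python). This is illustrative only: the printed expression for κ is garbled in the text, so the sketch assumes the reading κ = ξ⁻¹√(1 − e^(−2γ_sw T)), which makes the beam-splitter transmission 1 − ξ²κ² = e^(−2γ_sw T); the numerical values (γ_sw = (5.7 ms)⁻¹, ξ⁻² = 6.3, T = 15 ms) are taken from the experimental section further below, and the shot-noise variance is set to 1/2 by convention.

```python
import numpy as np

# Values quoted later in the experimental part of the text (used here as illustration)
gamma_sw = 1.0 / 5.7e-3    # swap rate (s^-1), measured decay rate of the mean values
xi2 = 1.0 / 6.3            # xi^2, from the measured ratio <x_L(0)>/<p_L(0)> = xi^-2
T = 15e-3                  # probe pulse duration (s)
vac = 0.5                  # shot-noise (vacuum) quadrature variance convention

# Normalised exponential temporal modes u_in(t) ~ e^{+gamma t}, u_out(t) ~ e^{-gamma t}
t = np.linspace(0.0, T, 10_000)
u_in = np.exp(gamma_sw * t)
u_out = np.exp(-gamma_sw * t)
u_in /= np.sqrt(np.trapz(u_in**2, t))     # normalisation factor N_in
u_out /= np.sqrt(np.trapz(u_out**2, t))   # normalisation factor N_out

# Assumed reading of the coupling constant kappa (the printed formula is garbled)
kappa_sq = (1.0 - np.exp(-2.0 * gamma_sw * T)) / xi2
print("beam-splitter transmission 1 - xi^2 kappa^2 =", 1.0 - xi2 * kappa_sq)

# For atoms initially in the CSS and T >> 1/gamma_sw, Var(P'_L) -> xi^2 * vacuum
var_out = xi2 * vac
print("Var(P'_L)/shot noise =", var_out / vac)
print("ideal noise reduction = %.1f dB" % (-10 * np.log10(var_out / vac)))
```

With these numbers the ideal reduction is about 8 dB, to be compared with the 3.5 dB observed; the gap is consistent with the losses and atomic decoherence discussed later in the text.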
Note that the above equations describe either the cosine combination of the light sidebands and the symmetric excitation of the two ensembles or the sine combination and the antisymmetric excitation. Therefore both theP ′ L,c andP ′ L,s will be squeezed after the interaction. This is equivalent to generating quadrature entanglement between upper and lower sideband of the probe pulse since 2Var(P ′ L,c ) + 2Var(P ′ L,s ) = Var(P + +P − ) + Var(X + −X − ) < 2 where + and -denote upper and lower sidebands respectively and the last inequality is exactly the quadrature entanglement criterion [24]. When tensor polarizability effects can be neglected, χ p = χ a and ξ = 0, the inputoutput relations reduce to the QND case [21] and entanglement driven by the swap interaction disappears. The duration of the light mode which contains squeezing can be manipulated by changing the driving power and/or the optical depth of our sample. The output mode can be also shaped by varying the driving field intensity during the interaction, so that γ sw becomes time dependent, which is of particular importance for applications for atomic memories. A closer examination of Eq. (7a) reveals that the output mode in this case has a mode function u(t) proportional to u(t) ∝ γ sw (t) exp − t 0 dt ′ γ sw (t ′ ) . In particular, one can shape the driving field pulse such that squeezing is produced in a flat top temporal mode. Experiment The experimental setup is depicted schematically in Fig. 3. We use two 22 mm long cubic paraffin-coated cells containing about 3.6 × 10 11 caesium atoms each. Both cells are placed inside magnetic shields in uniform magnetic fields oriented along x axis. Each measurement cycle begins with optical pumping of the atoms in the two cells into oppositely oriented CSS, as described in [22,23,19]. Next a 15 ms long driving pulse blue detuned by 855 MHz from the 6S 1/2 , F=4→6P 1/2 , F'=5 transition is turned on. Prior to the interaction it is spatially shaped into a circular flat top beam 20 mm in diameter using a telescope beam shaper in order to make the coupling strength uniform across the beam. The driving field polarized along the y axis passes through both cells along the z-direction. Finally the beam goes through half-or quarter wave plates, a polarizing beam splitter (PBS) and onto a pair of balanced detectors. Depending on the settings of the wave plates we can either measure the X L or the P L quadrature of the light. The signals from the detectors are subtracted, sent to a lock-in amplifier and digitized with an integrating A/D converter at a 12.5 kHz rate. The number of pumped atoms in the cells is monitored by measuring the Faraday rotation of very weak auxiliary probe beams propagating through the cells along the magnetic field. As a first test, we check the coupling of the mean value of the atomic spin operators to the light. The initial spin polarized state (CSS) of atoms corresponds to the vacuum state in (X A ,P A ) phase state. A 150 µs long RF-pulse applied orthogonally to the dc magnetic field in one of the cells displaces the atomic spins equally inX A andP A creating several tens coherent atomic excitations. After that we detect firstX L and thenP L in two series of 200 experimental cycles. From Eq. 
(7a) we can find the expected mean values of the output light operators assuming they have zero mean values at the input: where the time-dependent light quadrature operators x ′ L (t)+ip ′ L (t) = √ 2a ′′ c (t) are written with small letters to distinguish them from operators associated with exponential modes. The experimental data showing the exponential decay of both mean values as predicted by Eq. (11) is presented in Fig. 4. The ratio x L (0) / p L (0) found from the figure yields ξ −2 = 6.3. It agrees very well with the theoretical value at our detuning, which itself is not very sharp due to the Doppler broadening. Notice that for a pure QND interaction, p L (t) would be zero independently of either measurement time or input mean values of the atomic operators. Next we proceed to the demonstration of the squeezing of the output light. We prepare the atoms in the CSS, send a 15 ms probe pulse and measure the noise inp L,c (t) andp L,s (t), averaging over typically 10000 cycles. Squeezing can be seen in the power spectrum of the signal (i.e. p 2 L,c (Ω) +p 2 L,s (Ω) ) from the lock-in amplifier as shown in Fig. 5. When the magnetic field is shifted such that the atomic contribution to the noise lies outside the detection bandwidth, we measure the shot noise level, with the spectral shape corresponding to the gain/sensitivity function of our detection system. When the magnetic fields in both cells are adjusted so that the atoms precess at exactly 322 kHz (the center frequency of the detection range), we see an apparent dip in the noise power at that frequency and around it in the bandwidth of a few hundred hertz. The atoms are initialised to the CSS state and then a 15 ms long probe pulse is shined through them. Solid curve was taken with atoms in both cells tuned to the 322 kHz Larmor frequency, while dash-dot curve is the shot noise level reference, taken with atoms detuned far away from the detection bandwidth. The Gaussian shape of the reference spectrum is due to the detection bandwidth, mainly limited by the lock-in amplifier used to demodulate the homodyne signal, while the dip in the middle is the fingerprint of the ultranarrowband squeezing produced in the interaction with the atoms. Analysis of the temporal modes of the squeezing The probe light emerging from the cells has a strong vertically polarised component accompanied by a squeezed state in a horizontally polarised component. The latter is typically contained in a temporal exponentially falling mode. Since all the squeezing is contained in a narrow frequency bandwidth readily detectable by our homodyne setup, it is possible to find the excited modes directly from the experimental data. In each experimental cycle we sample the output of the homodyne detector at a rate much higher than necessary to capture the bandwidth in which the squeezing occurs. This way we obtain values ofp quadratures of light as a function of time,p L,c (t) andp L,s (t). We focus on the cosine quadrature with the results for the sine quadrature being very similar. The two-time correlation function p L,c (t)p L,c (t ′ ) yields the amount of noise in any temporal mode or a correlation between pairs of modes that are contained within the detection bandwidth. Therefore it is natural to ask whether any mode basis is favoured in such case. The answer is provided by the Karhunen-Loéve theorem as detailed in the Appendix. For each correlation matrix one can find a unique basis of modes u n (t) which have no cross-correlations. 
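A minimal sketch of this mode analysis, assuming the homodyne record is available as an array of repeated cycles sampled in time (the variable names and the synthetic stand-in data are illustrative, not part of the original paper). The covariance matrix C(t, t′) is estimated over cycles and diagonalised; its eigenvectors are the uncorrelated temporal modes u_n(t) and its eigenvalues give their quadrature variances, which can then be expressed relative to shot noise in dB as in Fig. 6.

```python
import numpy as np

def eigenmodes_from_record(record, dt, shot_noise_var):
    """record: shape (n_cycles, n_samples), holding p_c(t) for every cycle.
    Returns mode variances (ascending), mode functions u_n(t), and dB below shot noise."""
    C = np.cov(record, rowvar=False)          # two-time covariance C(t, t')
    evals, evecs = np.linalg.eigh(C)          # Karhunen-Loeve (spectral) decomposition
    mode_vars = evals * dt                    # variance of Q_n = int u_n(t) p_c(t) dt
    modes = evecs / np.sqrt(dt)               # normalised so that int u_n(t)^2 dt = 1
    db_below_shot = -10.0 * np.log10(mode_vars / shot_noise_var)
    return mode_vars, modes, db_below_shot

# Illustrative call with a white-noise stand-in; replace `record` with the measured data.
rng = np.random.default_rng(0)
n_cycles, n_samples, dt = 10_000, 188, 80e-6      # ~15 ms pulse sampled at 12.5 kHz
record = rng.normal(size=(n_cycles, n_samples))   # unit-variance shot-noise stand-in
variances, modes, db = eigenmodes_from_record(record, dt, shot_noise_var=dt)
# For the stand-in, any apparent (anti)squeezing is only finite-sample statistics.
print("most squeezed mode: %.2f dB below shot noise" % db[0])
```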
Each of them can be in principle measured separately and it will exhibit a variance of ξ 2 n = Var(P (n) L,c ) that is also found from the p L,c (t)p L,c (t ′ ) . In Fig. 6 we plot the sum of sine and cosine quadrature variances 2Var(P − )/2 obtained from the measured correlation matrices. It is expressed as a noise reduction below shot noise. We can see that there is one dominant squeezed mode, in which the noise is reduced by 3.5(1)dB. Superscript (n) denotes n-th characteristic mode function u n (t). Several other modes are squeezed by about 1 dB. This is due to atomic decoherence taking place during the interaction. The equations (7) describe a perfectly coherent evolution and predict pure single mode squeezing. However, in reality the transverse decoherence time T 2 is comparable to the duration of the fundamental mode. Thus, figuratively speaking, while the squeezed light leaves the cells, the state of atoms in the cells is driven back to the initial CSS state. This starts the process of squeezing of the light anew in an incoherent manner. This is confirmed by the shape of the characteristic mode functions obtained from the experiment shown in Fig. 7. The most squeezed mode has an exponentially decaying shape, which agrees with the theoretical model. The decay rate is found to be (5.5ms) −1 , which is virtually equal to the decay rate of the mean values measured to be γ sw = (5.7ms) −1 . The next mode is rising, supporting the explanation that as the atomic state is brought back to the CSS, an independent squeezing process starts. Conclusions We have developed a model of an off-resonant interaction between light and spin polarized multi-level atomic vapour. We find, that for long interaction times and/or small detunings, the interaction leads to the swap of states between the light and atoms accompanied by two-mode squeezing (entanglement) transformation of the two sidebands of light and the two cells. In particular if the two atomic ensembles are in a CSS prior to the interaction, this state will be mapped with entanglement onto the state of the Ω L sidebands of the outgoing light. This state of light is emitted in an exponentially decaying temporal mode. At the same time a portion of incoming light, which comes in an exponentially rising temporal mode is mapped onto the atoms. We have confirmed experimentally that a room temperature Cs vapour generates a single temporal light mode in the quadrature entangled state with 3.5(1)dB of entanglement. As predicted by theory the temporal mode in which we find entanglement is decaying exponentially. Losses of light and decoherence of the atomic state populate a few other uncorrelated modes weakly squeezed by about 1 dB. We find experimentally that the rate of the leading decoherence process scales with optical power and the density of atoms approximately in the same way as the rate of the coherent interaction γ sw . We attribute this decay which currently limits the degree of entanglement to light-induced collisions. Applying antireflection coating on the cell windows to reduce the losses and reducing atomic decoherence due to collisions and magnetic dephasing should allow to generate even more pure entangled state with a higher degree of squeezing. We expect that the single mode squeezing and two mode quadrature entanglement can be beneficial in QIP protocols where security is a concern. We also expect that it can be very useful in protocols where discrimination of a single mode at the detection stage is not possible. 
Note, that the squeezing produced is readily compatible with the atomic memories [22] and the mode in which it is produced can be shaped by changing the intensity of the local oscillator during the interaction. This research was funded by EU grants COMPAS, QAP, and HIDEAS. C.M. acknowledges support from the Elite Network of Bavaria, QCCC. Paraffin coating of the cells was skillfully performed by M. Balabas. A. Details of the derivation It is possible to derive the Hamiltonian from Eq. (1) by evaluating the Clebsh-Gordon coefficients and adiabatically eliminating excited levels. Without further approximations this leads to a Hamiltonian describing the scalar, vectorial and tensorial polarizability of the atoms, with coupling constants of a 0 , a 1 and a 2 respectively [19]: Above z is the direction of the propagation of the light beam, A is the cross section of the beam, γ is the natural linewidth while ∆ is the detuning. The Hamiltonian (12) can be simplified if the atoms are in a state very close to the CSS oriented along x axis. The result is different depending on the F number. For concretness we will assume here F = 4 with almost all atoms in m F = 4 state with respect to the x-axis, only some in m F = 3 and none in the other states. In this situation we can approximate j 2 z , j 2 + and j 2 − by c-numbers and components ofĵ j j. This way we arrive at: The Hamiltonian above may appear complicated, but in fact only two first terms under the integral a 1 S z j z − 14a 2 S y j y are nontrivial. The other terms generate classical rotations, in particular: the S x j x term in our settings, when both S x and j x are macroscopic merely causes a rotation in the x-p plane for both light and atoms. The total angle of this rotation is typically of the order of a few miliradians and we neglect it.φ is a constant of motion in our system, thus the termφ j x only shifts the Larmor frequency -it represents the Stark shift. The last term, S x , adds to a rotation of in the x c -p c plane for light but is still negligible. Finally we can rewrite nontrivial terms from the above Hamiltonian a 1 S z j z − 14a 2 S y j y using bosonic operators,â for light andb for atoms. This is accomplished using the relations: where Φ is the photon flux per unit time and N a is the total number of atoms and the sign in S z is a consequence of negative sign of S x in our settings. This way we can approximate the interaction Hamiltonian in the form: where The expressions for the vector a 1 and a 2 tensor polarizabilities for Cs D2 line can be found, for example in [21]. It can be directly verified, that the above Hamiltonian is identical with Eq. (2), with χ a = γ sw /2(1/ξ − ξ ) and χ p = γ sw /2(1/ξ + ξ ). Let us note that the sign of ξ 2 can be flipped, that is the χ a and χ p can be interchanged. This is accomplished by rotating the polarisation of the driving field by 90 • , S x → −S x or by switching from blue to red detuning ∆ → −∆. The input-output relations given in Eq. (10) remain valid for both signs of ξ 2 . However for a real ξ they describe the entanglement between two sidebands of the light field, whereas for imaginary ξ they entail the entanglement between the light and atoms. B. Eigenmode decomposition In the experiment we measure directly the covariance matrix of the homodyne signal p c (t): C(t,t ′ ) = p c (t)p c (t ′ ) − p c (t) p c (t ′ ) where p c (t) is a properly scaled signal directly form the lock-in amplifier. 
After the measurement we can calculate the amount of noise in any temporal mode characterised by the mode function u_n(t), where n indexes a set of modes we are interested in. A P-quadrature operator for the n-th mode is simply Q̂_n = ∫ u_n(t) p_c(t) dt. We can calculate both the variance of any Q̂_n and the correlation between any two of them. According to the Karhunen-Loève theorem, one can find a set of mutually uncorrelated modes u_n(t) by simply performing a spectral decomposition of the measured C(t,t′). In this way we obtain eigenvalues ξ_n and eigenfunctions u_n(t): ∫ C(t,t′) u_n(t′) dt′ = ξ_n u_n(t). The eigenvalues ξ_n are equal to the quadrature variances, ξ_n = ⟨Q̂_n²⟩ − ⟨Q̂_n⟩², while the eigenfunctions give the quadrature mode functions. The same procedure can be repeated for the sine component of the homodyne signal, p_s(t), and it yields results identical to within the experimental uncertainties.
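As a complement to the eigenmode analysis, the noise in any chosen temporal mode, not necessarily an eigenmode, follows directly from the measured covariance matrix by the double integral described above. The sketch below does this for an exponentially decaying mode with the decay constant found in the experiment; the covariance matrix here is a white-noise placeholder, and all names are illustrative.

```python
import numpy as np

def mode_variance(C, u, dt):
    """Variance of Q_n = int u(t) p_c(t) dt for a chosen mode function u(t),
    evaluated from the two-time covariance matrix C(t, t') on a grid of step dt."""
    u = u / np.sqrt(np.trapz(u**2, dx=dt))    # normalise: int u(t)^2 dt = 1
    return float(u @ C @ u) * dt**2           # discretised double integral

dt = 80e-6
t = np.arange(0.0, 15e-3, dt)
u_exp = np.exp(-t / 5.5e-3)                   # exponential mode, decay time 5.5 ms

C = np.eye(t.size)                            # placeholder for the measured C(t, t')
print("Var(Q) in the exponential mode:", mode_variance(C, u_exp, dt))
```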
7,008.4
2009-07-01T00:00:00.000
[ "Physics" ]
From Schrödinger Equation to Quantum Conspiracy Schrödinger's quantum mechanics is a legacy of Hamiltonian classical mechanics. But Hamiltonian mechanics was developed from an empty-space paradigm, for which Schrödinger's equation is a timeless (t = 0) or time-independent deterministic equation, which includes his fundamental principle of superposition. When one is dealing with the Schrödinger equation, it is unavoidable to mention Schrödinger's cat, which has been one of the most elusive cats in modern science since the half-life cat hypothesis was disclosed in 1935. Whether the cat is alive or not has been debated by scores of world-renowned scientists, and it is still being debated. Yet I will show that Schrödinger's hypothesis is not a physically realizable hypothesis, for which there is nothing for us to debate about. Since quantum communication and computing rely on the qubit information algorithm, I will show that qubit information logic is as elusive as Schrödinger's cat. It exists only within an empty space, but does not exist within our temporal (t > 0) universe. Since there is always a price to pay within our universe, I will show that every physical subspace needs a section of time Δt and an amount of energy ΔE to create, and it is not free. Although the double-slit hypothesis had been taken as fictitious confirmation that the superposition principle exists, I will show that the double-slit postulation is another non-physically realizable hypothesis that has led us to believe the superposition principle actually exists within our time-space. Yet one of the worst cover-ups must be the claim that particles behave differently within a micro space in order to justify the spooky superposition principle, which is one of the greatest quantum conspiracies in modern science. Nevertheless, since the art of quantum mechanics is all about a physically realizable equation, we see that everything existing within our universe, no matter how small it is, has to be temporal (t > 0), which includes all the laws, principles, and equations. Otherwise, it is as virtual as mathematics, since the Schrödinger equation is mathematics, but mathematics is not equal to science, and relying on a virtual science for solutions does not give reliable answers.

Introduction In modern physics there are two most important pillars of disciplines: it seems to me one deals with the macro-scale objects of Einstein [1] and the other deals with the micro-scale particles of Schrödinger [2]. Instead of speculating that micro- and macro-objects behave differently, we note that they share a common denominator: the temporal (t > 0) subspace. In other words, regardless of how small a particle is, it has to be temporal (t > 0); otherwise it cannot exist within our temporal (t > 0) universe. Nevertheless, as science changed from Newtonian [3] mechanics to statistical [4], to relativistic [1], and to quantum mechanics [2], time has always been regarded as an independent variable with respect to substance or subspace. And this is precisely the same empty-space platform that modern physics has used, in which time has been treated as an independent variable for centuries. Since Heisenberg was one of the earlier starters in quantum theory [5], I have found that his principle was derived on the same empty-space platform as depicted in Figure 1, which is in fact the "same" platform used for developing Hamiltonian classical mechanics [6]. For this very reason, Schrödinger's quantum mechanics is timeless (t = 0) or time-independent, because quantum mechanics is the legacy of Hamiltonian.
And this is the same reason that Heisenberg uncertainty principle is time independent, instead of changes with time [7]. Nevertheless, Figure 1 is not a physically realizable paradigm by virtue of temporal exclusive principle. In other words, emptiness and temporal (t > 0) are mutually exclusive. Strictly every substance or subspace has to be temporal (t > 0) within our temporal (t > 0) universe. For simplicity we assumed momentarily that mass m is a constant and I shall come for this temporal issue in a subsequent discussion. Yet, total energy of a Hamiltonian particle in motion is equal to its kinetic energy plus the particle's potential energy as given by [6], which is the well-known Hamiltonian equation, where p and m represent the particle's momentum and mass respectively, V is the particle's potential energy. Equivalently Hamiltonian equation can be written in the following form as applied for a subatomic particle. which is the well-known "Hamiltonian Operator" in classical mechanics. Where h is the Planck's constant, m and V are the mass and potential energy of the particle and ∇ 2 is a Laplacian operator; Figure 1. Shows a particle in motion within a timeless (t = 0) subspace. v is the velocity of the particle. By virtue of "energy conservation", Hamiltonian equation can be written as, where ψ is the wave function that remains to be determined, E and V are the energy factor and potential energy that need to be incorporated within the equation. And this is precisely where Schrödinger's equation was derived from, by using the energy factor E = hν (i.e., a quanta of light energy) adopted from Bohr's atomic model [8], Schrödinger equation can be written as [6]; In view of this Schrödinger's equation, but it is essentially identical to the Hamiltonian equation. Where ψ is the wave function has to be determined, m is the mass of a photonic-particle (i.e., photon), E and V are the dynamic quantum state energy and potential energy of the particle, x is the spatial variable and h is the Planck's constant. Since Schrödinger's equation is the core of quantum mechanics, but without Hamiltonian's mechanics it seems to me; we would not have the quantum mechanics. The fact is that quantum mechanics is essentially identical to Hamiltonian mechanics. The major difference between them is that; Schrödinger used a dynamic quantum energy E = hν as obtained from a quantum leap energy of Bohr's hypothesis which changes from classical mechanics to quantum leap mechanics or quantum mechanics. In other words, Schrödinger used a package of wavelet quantum leap energy hν to equivalent a particle (or photon) as from wave-particle dynamics of de Broglie's hypothesis [9], although photon is not actually a real particle [10]. Nevertheless, where the mass m for a photonic particle in the Schrödinger's equation remains to be "physically reconciled", after all science is a law of approximation. Furthermore, without the adoption of Bohr's quantum leap hν, quantum physics would not have started. It seems to me that; quantum leap energy E = hν has played a viable role as transforming from Hamiltonian classical mechanics to quantum mechanics which Schrödinger had done to his quantum theory. Timeless (t = 0) Schrödinger equation Nevertheless, Schrödinger equation is a point singularity approximated deterministic time-independent equation, for which we see that any solution and principle come out from Schrödinger equation will be deterministic time-independent. 
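The equations referred to in the derivation above (the Hamiltonian of a particle, its operator form, and the resulting Schrödinger equation) appear to have been dropped from the text. For reference, their standard textbook forms are sketched below; this is a reconstruction from the surrounding description, not a quotation of the original.

```latex
% Standard forms of the equations described in the text (reconstruction)
% Total energy of a particle (Hamiltonian): kinetic plus potential energy
H = \frac{p^2}{2m} + V
% Corresponding Hamiltonian operator (with \hbar = h/2\pi and Laplacian \nabla^2)
\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V
% Time-independent Schrodinger equation with the quantum energy E = h\nu
-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi = E\psi
```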
But science is supposed to change naturally with time or approximated. And this is precisely the reason that quantum scientists had have committed for decades without knowing that solution or principle as obtained from Schrödinger equation is not physically realizable. For which his fundamental principle of superposition is one of them. The reason why Schrödinger equation is not a physically realizable equation is trivial; firstly, since Schrödinger equation is the legacy of Hamiltonian, which is a timeless (t = 0) or time independent classical machine. Secondly, the quantum leap E = hν is not a time limited physically realizable assumption, since Bohr's atomic was developed from an empty subspace platform, which has no time and no space. And this empty virtual subspace had been using it for centuries. Although Schrödinger equation has given scores of viable solutions for practical applications but at the same time it had also produced a number of fictitious and irrational principles and theories that are not actually existed within our temporal (t > 0) universe, such as the paradox of Schrödinger's Cat [11], string theory [12], superposition principle, and others. In order to understand why Schrödinger equation is a timeless (t = 0) or timeindependent equation, we have to understand what is a temporal (t > 0) space paradigm since physically realizable solution comes from a physically realizable subspace. For which every physically realizable subspace must be a subspace within our temporal (t > 0) universe, which changes naturally with time. This includes all the laws, principles, and theories must changes naturally with time, as from strictly physical realizability standpoint. Particularly we are in the era of asking our science to response as instantaneously, for instance as the fundamental principle of Schrödinger equation. For which let me epitomize the nature of our temporal (t > 0) universe as depicted in Figure 2. It shows that our universe was started from a big bang creation theory about 14 billion light years ago. Since past certainty's consequences (i.e., memory subspaces) were happened at specified time within the negative time domain (i.e., t < 0), we see that every specific past time event has been determined with respect to a precise past certainty subspace. For which time can be treated as an independent variable with respect to the past certainty consequences within the pass-time domain (t < 0) as from mathematical standpoint. Which is precisely where Schrödinger equation is, as well all the laws and theories were developed. However, it is reasonable to predict any hypothesis and principle based on our past certainty knowledges, but it is the nature of our time-space tells us that prediction cannot be absolute deterministic, since every physical aspect changes with time. In other words, a deterministic Schrödinger equation should not be used to predict future reality without the constrain of temporal (t > 0) condition, since future physical reality changes naturally with time. And this is the timeless (t = 0) or time-independent past-time certainty subspace that many scientists had used to predict the future out-come with absolute certainty, even though consciously they knew it is incorrect. Although this was the issue that Einstein and his colleagues were strongly opposed Schrödinger's fundamental principle of superposition [13], but Einstein had also committed the same error as Schrödinger did, his general and special theory of relativity are also deterministic theories. 
Nevertheless, the major difference between Schrödinger's fundamental principle and Einstein's theories is that, Schrödinger's principle is essentially to stop the time, such as applied to quantum computing and communication [14,15]. While Einstein's theory is basically to move ahead or behind the pace of time, for instance as applied to wormhole time traveling [16]. Nevertheless, Schrödinger equation is a non-physically realizable equation which is not encouraged to be used without the constrain of temporal (t > 0) condition, particularly as applied on instantaneously and simultaneously supersession position. Since the fundamental principle exists only within an empty space, but not within our temporal (t > 0) space where empty space is not an inaccessible subspace within our temporal universe. From which we see that those application of Schrödinger equation to quantum space-time would have problem to prove that they exist within our temporal (t > 0) universe, since Schrödinger equation is a time-independent equation. Although using past certainties to predict future outcome is a reasonable method that had have been used for centuries, but it is physically wrong if we treated time as an independent variable within our temporal (t > 0) universe. And this is the reason scores of irrational and fictitious solutions emerged, that has already been dominated the world-wide scientific community. This includes Schrödinger 's fundamental principle of superposition, Einstein's special and general relativity theories, and many others, since they were all based on past certainties to predict a deterministic future, which is not a temporal (t > 0) solution that changes with time (i.e., non-deterministic). Nevertheless, the section of time Δt shown in Figure 2 represents an incremental moment after instant t = 0 moved to a new t = 0 + Δt. In which Δt can be squeeze as small as we wish (i.e., Δt ⟶0), but it cannot be squeezed to zero (i.e., Δt = 0) even we have all the energy ΔE to pay for it. In fact, this is the section of time that cannot be delay or moved ahead the pace of time (i.e., t < 0 + Δt or t > 0 + Δt). From which the possibility for time traveling either ahead or behind the pace of time is not conceivable, since we are coexisted with time. Since our temporal (t > 0) universe shows that science is supposed to be approximated but not exact or deterministic, any deterministic solution is not physically real as from absolute certainty of the present. In other words, further away from the absolute certainty the more ambiguous the prediction or uncertainty is. And this exactly why uncertainty principle should have developed based on temporal (t > 0) standpoint, instead Heisenberg principle was derived by observation which is independent from time [7] Temporal (t > 0) Schrodinger equation As any physical substance or subspace requires to be temporal (t > 0), otherwise it cannot be existed within our temporal universe, this includes all the laws, principles, and theories, otherwise those principles and theories would be as virtual as mathematics. For example, as we had shown in the preceding section. Schrödinger equation is essentially the legacy of Hamiltonian, where Hamiltonian is a timeless (t = 0) or time-independent equation. To avoid the ambiguity of timeless and timeindependent equation, that means that timeless and time independent are equivalent, since within a virtual empty space it has no time and no physical space. 
Which is precisely why we had hijacked by an empty space inadvertently for centuries, for not knowingly that empty space paradigm is not a physically realizable paradigm. Since the application of all those timeless (t = 0) principles and theories were never encountered with serious irrationality, it was because we had never thought that temporal (t > 0) issue of those timeless (t = 0) principles, although we knew science is approximated. Which was in part due to our own analytical incline that paradoxes can be alleviated by rigorous mathematics that all theoretical scientists adored. For which we felt that without complicated mathematics it has no theoretical physics. But mathematics is not equaled to science, although science needs mathematics. It turns out to be wrong with theoretical physicists, physically realizable science depends on a physically realizable platform but not on the severity of mathematics. Nevertheless, as we have seen it is mathematics currently leads the theoretical physics, but not science directs mathematics. In other words, if it not how rigorous mathematics is, but it is the physically realizable science that we are searching for. Nevertheless, it must be the demand for instantaneous informationtransmission and simultaneous computing, that had motivated me found that the fundamental principle of Schrodinger had violated the nature of temporal (t > 0) condition of our universe. Since every subspace within our universe changes with time, but not the subspace stops the time. In other words, it is time changes us yet we are coexisted with time. Since time changes subspace, then the respond from subspace cannot be instantaneously (t = 0), but it takes a section of time Δt no matter as small it is (i.e., Δt ! 0), but never able to make it to zero (i.e., Δt = 0), to response. Which is a well-known causality constraint [17], that we may have forgotten. Since Schrödinger equation is one of my typical examples to shown that flaw and limitation as it is implemented within our temporal (t > 0) time-space. Firstly, Schrodinger equation is a time-independent deterministic equation, which is precisely why superposition is a timeless (t = 0) principle. Nevertheless, if we imposed a temporal (t > 0) constraint on the equation as given by, From which we see that any solution comes out from this equation will be temporal (t > 0), since temporal equation produces temporal solution. Nevertheless, as from strict temporal (t > 0) standpoint, mass m, quantum leap energy E = hν, and potential energy V should be temporal. Nevertheless, (t > 0) imposition is showing that solution or principle as derived from this equation should be temporal. For example, fundamental principle of superposition is one of the evidences, since the principle was not constrained by temporal condition. In other words, the adopted quantum leap energy E = hν is not a physically realizable assumption to be used, since it is not a time limited quantum leap. This means the wave function ψ as obtained from Schrödinger equation without the temporal constraint is given by [6]; Which is the well-known Schrödinger wave equation, where ψ 0 is an arbitrary constant, ν is the frequency of the quantum leap hν and h is the Planck's constant. As anticipated, Schrödinger wave equation is also a time unlimited solution with no bandwidth, which is not a physical realizable solution. Yet many quantum scientists had used this wave solution to pursuing their dream for quantum supremacy computing and communication [14,15]. 
But not knowing the dream they are pursuing is not a physical realizable dream. It is trivial where the source of the unlimited quantum leap came from, it is from Bohr atomic model as depicted in Figure 3. Where an atomic model is embedded within a non-physically realizable empty space paper paradigm, it has no time and no space. Yet quantum physicists can implant virtual time and coordinates within the paradigm but not knowing that piece of paper does not actually represents a physically real subspace. From which we see that Bohr's model strictly speaking it is not a physically realizable paradigm should be used. Firstly it is an empty subspace paradigm, secondly E = hν is not a physically realizable quantum leap energy. On the other hand, if we put a temporal (t > 0) constraint on the time unlimited wave equation as given by, From which we have, where t > 0 denotes equation is subjected to temporal (t > 0) condition (i.e., exists only within positive time domain). From which we see that a narrow package of wavelet as shown in Figure 4 is temporal (t > 0) and time limited. Thus, we see that it is unlikely simultaneous wavelets will instantaneously occur at same time. From which we have shown that Schrödinger's fundamental principle of superposition fails to exist within our temporal (t > 0) universe. Nevertheless, major problem of Schrödinger equation is its time-independent or timeless issue, since the equation was derived from an empty space platform as Hamiltonian. From which we see that, Schrodinger equation is not a physically realizable equation, which is precisely why quantum world behaves weirdly as within a timeless wonderland. Since string theory [12] in part was developed from Schrödinger equation, it is trivial to see that string theory is deterministic which is not a physically realizable theory. From which we see that it is not how sophisticated a theory is, but it is the temporal (t > 0) subspace platform that produces physically realizable theories. There is however another essential physical limit cannot be ignored. Within our temporal (t > 0) universe every aspect has a price to pay; a section of time Δt and an amount of energy ΔE [i.e., Δt, ΔE], where ΔE(t) is temporal. In other words, every physically realizable theory or principle needs a section of time Δt to spare and an amount of energy ΔE to realize or to transmit. For instance, every bit of information needs a section of time Δt to create. But without an amount of energy ΔE it is impossible to physically realize a bit of information. For which we have the following by uncertainty relationship as given by [18], where h is the Planck's constant. From which we see that we need to pay a higher amount of energy ΔE for a narrower section of Δt for every bit of informationtransmission. On the other hand, if we want to curve a particle into a string-like shape within our quantum world [12], which is not a physically realizable theory since string theory is a deterministic principle while our universe is temporal (t > 0). Yet, my question is that how long it will take to change a particle to string like equivalent, even though assume we have all the energy (i.e., ΔE) we need. And this is a trivial question that we have to answer, since every physical aspect within our universe has a price (i.e., Δt, ΔE) to pay. In other words, particle-string dynamic is a mathematical equivalent, but physically they are not equaled since every particle is a temporal (t > 0) particle, which has a mass with time. 
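Two of the quantitative claims above can be illustrated numerically: truncating the time-unlimited wave e^(−i2πνt) to a section of time Δt gives it a finite bandwidth of order 1/Δt, and the uncertainty relation ΔE Δt ≥ h sets the minimum energy for a given Δt. The sketch below (Python) is illustrative only; the particular ν and Δt values are assumptions, not numbers taken from the text.

```python
import numpy as np

h = 6.62607015e-34                 # Planck constant (J s)

# Truncate a time-unlimited wave e^{-i 2 pi nu t} to a window of length dt_window
nu = 322e3                         # carrier frequency (Hz), illustrative
dt_window = 5e-3                   # section of time Delta t (s), illustrative
fs = 20 * nu                       # sampling rate well above nu
t = np.arange(0.0, dt_window, 1.0 / fs)
wavelet = np.exp(-1j * 2 * np.pi * nu * t)     # zero outside [0, dt_window]

# Spectrum of the truncated wavelet: a finite width ~ 1/dt_window instead of a delta
n_pad = 8 * t.size                              # zero-padding for a finer frequency grid
spectrum = np.abs(np.fft.fft(wavelet, n=n_pad))**2
freqs = np.fft.fftfreq(n_pad, d=1.0 / fs)
in_band = freqs[spectrum >= spectrum.max() / 2]
print("FWHM bandwidth ~ %.0f Hz (1/dt = %.0f Hz)" % (in_band.max() - in_band.min(),
                                                     1.0 / dt_window))

# Minimum energy to carry one bit in a section of time Delta t: Delta E >= h / Delta t
for dt_bit in (1e-3, 1e-6, 1e-9):
    print("dt = %.0e s  ->  minimum Delta E = %.2e J" % (dt_bit, h / dt_bit))
```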
What timeless space does to wavelets? On the other hand, if we take a set physically realizable wave functions as given by, Which are depicted respectively in Figure 5(a), where we see that wavelets are physically separated. However, if this set of wavelets are submerged within an empty subspace, although physically not realizable as illustrated in Figure 5(b), we see that the wavelets superimposed at t = 0 within an empty space, since within an empty space it has no time and no distance. And this is precisely what a virtual empty space can do for all substances as from mathematical standpoint. Before we move on, let me stress that wave-particle duality is a non-physical realizable dynamic, since it is from statistical mechanics standpoint that a package wavelet energy is equivalent to a particle in motion where momentum of a particle p = h/λ is conserved [6]. However, one should not treat wave or a package of wavelet energy hΔν as a particle or particle as wave. But it is a package of wavelet energy equivalent to a particle dynamic (i.e., photon), but they are not equaled. Similar to mass to energy equation, mass is equivalent to energy and energy is equivalent to mass, but mass is not equaled to energy and energy is not mass. For which a quantum of hν or a photon is a virtual particle. From which we see that a photon has a momentum p = h/λ but no mass, although many quantum scientists regard a photon as a physical real particle. Similarly, we can show that a set of separated particles in motion is situated within a temporal (t > 0) subspace as depicted in Figure 6(a). Since they are embedded within a time-space platform, their locations can be precisely determined. However, if this set moving particles are situated within an empty space as illustrated in Figure 6(b), then particles lost their temporal (t > 0) identities (e.g., such as size, location, and motion), since within an empty space it has no time and no space. For which all the particles' dynamic energy converged at t = 0. From which we see that empty space is a virtual space which does not exist within our temporal (t > 0) universe. But we had used this virtual space for ages since the dawn of our science. And this reason that why we need to change to temporal (t > 0) science otherwise we will forever be trapping within the empty wonderland of timeless (t = 0) science, which does not need to pay a price (i.e., Δt, ΔE). Nevertheless, Schrödinger equation is a non-physical realizable equation, which can be traced back to the development of Hamiltonian mechanics. From which we see that it is the background subspace (i.e., a piece of paper) that we had inadvertently treated as an empty space paradigm. And it is also the same empty space paradigm that Bohr's atomic model was embedded, from which we see that quantum state energy hν is not a physically physical assumption. From which I had shown any application of Schrödinger equation has to be constrained within the temporal (t > 0) condition. Otherwise, the solution would be virtual and fictious, which cannot be implemented within our time-space. From which I had shown that it is not how rigorous mathematics is, it is the physical realizable paradigm determines her solution is physical realizable. Schrödinger's cat When we are dealing with quantum mechanics, it is inevitable not to mention Schrödinger's cat since it is one of the most elusive cats in the modern science since Schrödinger's disclosed it in 1935 at a Copenhagen forum. 
Since then, his half-life cat has intrigued by a score of scientists and has been debated by Einstein, Bohr, Schrödinger, and many others as soon Schrödinger disclosed his hypothesis. And the debates have been persisted for over eight decades, and still debating. For example, I may quote one of the late Richard Feynman quotations as: "After you have leaned quantum mechanics, you really "do not" understand quantum mechanics … ". It is however not the fate of the Schrödinger's half-life cat, but it is the paradox that quantum scientists had have treated the fate of the cat as a physically realizable paradox. In other words, many scientists believed the paradox of Schrödinger's cat is actually existed within our universe, without any hesitation. Or literally accepted superposition is a physically realizable principle, although fictitious and irrational solutions had emerged, it seems like looking into the Alice wonderland. In order to justify some of their believing some quantum scientists even come-up with their own logic; particle behaves weirdly within a microenvironment as in contrast within a macro space. Yet some of their potential applications, such as quantum computing and quantum entanglement communication are in fact in macro subspace environment. Nevertheless, I have found many of those micro behaviors are not existed within our universe, from which paradox of Schrödinger's cat is one of them, as I shall discuss. Let us start with the Schrödinger's box as shown in Figure 7. Inside the box we have equipped a bottle of poison gas and a device (i.e., a hammer) to break the bottle, triggered by the decaying of a radio-active particle, to kill the cat. Since the box is assumed totally opaque of which no one knows that the cat will be killed or not, as imposed by the Schrödinger's superposition principle until we open his box. From which we see that the fate of Schrödinger 's cat is dependent upon the beholder, or consciousness. Nevertheless, as we investigate Schrödinger 's hypothesis, immediately we see that his hypothesis is not a physical realizable postulation, since within the box it has a timeless (t = 0) or time independent radioactive particle in it. As we know that; any particle within our universe subspace has to be a temporal (t > 0) particle or has time with it, otherwise the proposed radioactive particle cannot be existed within Schrödinger's temporal (t > 0) box. It is therefore, the paradox of Schrödinger's cat is not a physical realizable hypothesis and we should not have treated Schrödinger's cat as a physically real paradox. Since every problem has multi solutions, I can change the scenarios of Schrödinger's box a little bit, such as allow a small group of individuals take turn to open the box. After each observation close the box before passing on to the next observer. My question is that; how many times the superposition has to collapse? With all those apparent contradicted logics, we see that Schrödinger 's cat is not a paradox after all. And the root of timeless (t = 0) superposition principle as based on Bohr's quantum leap hν, represents a time unlimited radiator, which is a singularity approximated wave solution. But time-unlimited quantum leap is a non-physically realizable radiator that cannot exist within our universe. Figure 7. Shows Paradox of Schrodinger's Cat: Inside the box we equipped a bottle of poison gas and a device (i.e., hammer) to break the bottle, triggered by the decaying of a radio-active particle, to kill the cat. 
Micro space coverup Two of the important pillars in modern physics must be Einstein's relativity and Schrödinger's Quantum theory; one is dealing with very large object, and the other is dealing with small particles. Since both of Einstein's theories and Schrödinger's mechanics were developed from an empty subspace, they are not physically realizable principles. But it was those theories that had given us the fantasy promises that had led us to believe that physical behaves within a macro and a micro are different, otherwise relativistic theory and quantum mechanics cannot be reconciled. Nevertheless, either was inadvertently or not, it remains to be found. Nevertheless, this is the objective that I will show that particles behave within a macro and a micro space are basically the same regardless of their sizes. From which I wonder that particle behaves differently within a micro space must be a major cover up but not inadvertently in modern scientific history. Although Einstein was strongly opposing Schrödinger's quantum theory [13], but his relativity theory had also committed the same error for using the same empty space paradigm. For which I will show that particle behaves basically the same within a macro and a micro space, regardless of their size. Nevertheless, the major difference between Einstein's theory and Schrödinger 's principle is that, one is to move ahead or behind the pace of time and the other is to stop the time. Yet neither move ahead nor stop time is possible, since our universe changes with time, but not change the time. As commonly agreed, that a picture is worth more than a thousand words, then a viable diagram is worth more hundreds of equations. Once again let me epitomize the creation of our temporal (t > 0) universe as summarized in Figure 8. > 0) space. In which we see that our universe, subspace, galaxy, planet, particle regardless the size changes naturally with time. From which we see that the behaviors within micro and macro are basically the same. Shows our universe was originated by a big bang explosion from a singularity temporal mass m(t) triggered by her own intensive gravitational force within a preexisted temporal (t In which it shows that the origin of our temporal (t > 0) universe was started by a big bang explosion within a preexisted temporal (t > 0) space that allows a singularity mass M(t) to exist and to grow over time. Such that her induced gravitational pressure will eventually trigger the thermo-nuclei explosion of mass M that enables creation of our universe. From which we see that every substance regardless the size changes with time. Where time is the only invisible real variable runs at a constant pace, for which nothing can move ahead or even stop time. And this a physically realizable time-space we live in. Which is different from the Einstein's space-time continuum where he had treated time as an independent variable [1]. The fact is that temporal (t > 0) universe is a newly discovered realizable timespace that closer to truth. From which I would anticipate temporal (t > 0) space will eventually take over the time-independent universe of Einstein. For which we would have a viable physically realizable paradigm for years to come, because principle and theory developed from a temporal (t > 0) space platform will be physically realizable. In view of our temporal (t > 0) universe, it is not possible for particle behavior differently within a micro space, since every particle is temporal that changes naturally with time. 
Since it is time that changes the particle, and not the particle that changes time, time can neither be stopped momentarily, as the superposition principle states, nor changed momentarily, as relativistic theory promises. In other words, every substance, regardless of size, needs a section of time Δt and an amount of energy ΔE to be created. And a micro space cannot behave like a timeless space, since every subspace within our universe has to be temporal, by virtue of the temporal exclusive principle.

Qubit information conspiracy

Qubit information transmission basically exploits Wiener's communication strategy for the purpose of qubit transmission [19], in which the receiver anticipates a rather ambiguous digital signal (e.g., either 0 or 1) from an anticipated sender. In other words, qubit communication treats the receiving-end entropy H(B) as a source entropy H(A) to determine which signal was sent. Since the signal originates with the sender, by maximizing the entropy H(B) under a noiseless condition the receiver can interpret the received signal (e.g., 0 or 1) as equal to one qubit of information. And this is precisely the qubit information principle currently used for quantum communication and computing. For example, a receiver is not certain whether an enclosed message says yes or no until the receiver opens the envelope and finds a yes or a no message, but not both. This is a scenario similar to the paradox of Schrödinger's cat before his box is opened. But the fate of Schrödinger's cat, or the information within the envelope, had been determined before we look into Schrödinger's box or before the receiver opens the envelope. From this we see that it is not our consciousness that changes the outcome of the enclosed message or the fate of the cat, as the superposition principle implies. To guarantee that the envelope is not contaminated during transmission, the transmission time would have to be instantaneous (i.e., Δt = 0), which is equivalent to sending the message within a timeless (t = 0) channel that has no time. It is therefore questionable whether such qubit information can exist within our temporal (t > 0) universe. Since everything within our universe has a price to pay, namely a section of time Δt and an amount of energy ΔE, qubit information transmission cannot be the exception.

Firstly, quantum communication relies on the fundamental principle of superposition, but we have shown that the superposition principle cannot exist within our temporal (t > 0) universe. It then makes no sense to talk about all the capabilities that qubit information could offer. Nevertheless, let us assume a quantum communication channel situated within an empty-space paradigm, as shown in Figure 9, where a binary source ensemble A = {0, 1} is capable of transmitting 0 and 1 instantaneously and simultaneously within an empty space. Notice that this is precisely the same subspace platform from which Schrödinger's fundamental principle of superposition was derived. From this we see that qubit information can only exist within an empty-space platform, which is not a physically realizable information hypothesis, since the platform has no time with which to represent a transmitted signal. The fact is that every temporal piece of information (i.e., a 0 or a 1) needs a section of time (i.e., Δt) to present a time signal.
In other words, if a time signal has no section of time, it has no carrier with which to be represented and transmitted within our temporal (t > 0) universe, since qubit information is a timeless (t = 0) space transmission algorithm. Setting aside the fact that it is not a physically realizable paradigm, let me show how a qubit information channel works, as depicted by the block-box diagram in Figure 10, which is a timeless (t = 0), noise-free channel. Here A = {0, 1} represents the input binary source, H(A) = 1 bit is the input entropy, B = {qubit} is the output quantum bit, and H(B) = one qubit is the output entropy. Quantum qubit information transmission treats the input binary source A = {0, 1} and the output ensemble as a qubit B = {qubit}, such that at the receiving end information can be presented as a quantum bit (i.e., qubit). But since the qubit channel is embedded within a timeless (t = 0) subspace, it has no noise and no time, and hence no channel noise entropy [i.e., H(A/B) = 0]. The mutual information of the qubit channel can therefore be written as I(A; B) = H(A) − H(A/B) = H(A), where the output-end entropy H(B) equals the input entropy H(A) [i.e., H(B) = H(A)]. Thus the intended signal, either 1 or 0 but not both, is received at the receiving end. This is equivalent to recovering an intended input signal that was corrupted within the noisy channel of Wiener's information transmission, except that in this case the channel is noiseless. In fact, a noiseless channel is a virtual channel that only exists within an empty virtual space, and it cannot exist within our temporal (t > 0) universe. Quantum information depends on Schrödinger's superposition principle, such that the binary values 0 and 1 can be transmitted instantaneously and simultaneously; this presents a quantum bit, or qubit, to determine whether the input source ensemble was 1 or 0. But since the quantum information channel is assumed to lie within an empty-space paradigm, we see that the operation is instantaneous and simultaneous but only exists within a timeless (t = 0) space. Qubit information is the anchor principle for quantum computing and communication, but unfortunately qubit information cannot exist within our temporal (t > 0) universe, by virtue of the temporal exclusive principle.

A scenario similar to qubit information transmission is the paradox of Schrödinger's cat, where a received signal depends on observation. For example, the observer (i.e., the receiver) does not know whether the cat within Schrödinger's box is alive or dead until the observer opens the box, and it appears that the observer confirms the outcome by the observation. But the physical fact is that whether the cat is alive or dead had been determined before the observer opened Schrödinger's box. Similarly, we never know whether a boiled egg is hard- or soft-boiled until we crack it open, yet hard- or soft-boiled had been determined before we cracked the egg. Although the paradox of Schrödinger's cat has been debated since the disclosure of the hypothesis in 1935, it seems to me that no one had found the real source of the paradox until the recent discovery of the temporal (t > 0) universe [20,21], from which I have shown that the paradox came from the empty subspace (i.e., a commonly used piece of paper) on which Schrödinger's equation was derived, and that his fundamental principle of superposition is timeless (t = 0) and fails to exist within our universe.
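To make the noiseless-channel bookkeeping above concrete, here is a minimal numerical sketch. It assumes a simple binary symmetric channel as a stand-in model (my illustration, not the author's formulation): with zero crossover probability the conditional entropy vanishes and I(A; B) = H(A) = 1 bit, while any noise strictly reduces the mutual information.

```python
# Numerical check: mutual information of a binary channel.
# The binary-symmetric-channel model here is an illustrative assumption.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(p_a, error_prob):
    """I(A;B) for a binary symmetric channel with crossover probability error_prob."""
    # joint distribution P(a, b)
    joint = np.array([[p_a[0] * (1 - error_prob), p_a[0] * error_prob],
                      [p_a[1] * error_prob,       p_a[1] * (1 - error_prob)]])
    p_b = joint.sum(axis=0)
    return entropy(p_a) + entropy(p_b) - entropy(joint.ravel())

print(mutual_information([0.5, 0.5], 0.0))   # noiseless: 1.0 bit, i.e. I(A;B) = H(A)
print(mutual_information([0.5, 0.5], 0.1))   # noisy: strictly less than 1 bit
```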
On the other hand, if the qubit information channel is situated within a temporal (t > 0) subspace, as shown in Figure 11, then the response of the supposed qubit channel is subject to the boundary condition of temporal (t > 0) space, for which the simultaneous and instantaneous superposition of binary digital transmission (i.e., 0 and 1) fails to exist. Thus the output entropy H(B) at the receiving end cannot be treated as qubit information, since the superposition principle does not hold within our temporal (t > 0) space; the output ensemble is B = {0, 1}, identical to a conventional noisy binary channel, instead of B = {qubit}. Before leaving this section, I would stress that within our universe everything has a price to pay, a section of time Δt and an amount of energy ΔE, and it is not free. However, quantum qubit information pays no price, since it does not take a section of time Δt. Yet qubit information has created a worldwide qubit conspiracy, and it is hard to tell when this conspiracy will end. But I am confident that this fictitious qubit information supremacy will end soon, since information transmission is supposed to be physically realizable.

Double slit paradox

Instead of entering into the argument about the simultaneous existence of particles at the double slit using Young's experiment, which is a non-physically-realizable paradigm from the standpoint of the temporal exclusive principle, let us consider particle-wave dynamics. Particle-wave duality is a mathematical equivalence principle: a particle in motion is equivalent to wave dynamics, or wave propagation is equivalent to particle dynamics. However, a particle is not equal to a wave and a wave is not equal to a particle. In particular, the de Broglie-Bohm theory states, as I quote, that particles have "precise locations" at all times … [9]. But, in contrast, within a temporal (t > 0) subspace a particle changes with time and is not at a precise location, since future prediction is not deterministic. As we have shown earlier, a particle existing within a temporal (t > 0) space is quite different from one assumed within a virtual, non-physically-realizable subspace. For example, a particle existing within our temporal (t > 0) universe, no matter how small, has to be temporal (t > 0). Since a temporal subspace is not empty, a particle cannot be totally isolated: a massive particle induces a gravitational field, a charged particle induces an electric field, and so on, which cannot be ignored. Without preexisting substances such as permittivity and permeability, wave dynamics has no way to exist. From this we see that particle-wave dynamics is a mathematical postulation that exists only within an empty, timeless (t = 0) or time-independent, virtual mathematical subspace, since the assumed wave dynamics is not a time- and band-limited, physically realizable wavelet. Nevertheless, let me show a double-slit set-up as depicted in Figure 12(a), a commonly accepted paradigm that has been used for decades but is not a physically realizable one. Within it, a photonic particle can be shown to exist simultaneously and instantaneously at both slits, since within an empty space there is no time and no distance. And this is precisely the same subspace from which Schrödinger's superposition principle was derived, but we have shown that the superposition principle can only exist within an empty, timeless (t = 0) virtual subspace.
However, if the double-slit hypothesis is situated within a temporal (t > 0) subspace, as depicted in Figure 12(b), then it is very unlikely that particles will exist instantaneously and simultaneously at both slits, because time is distance and distance is time. A wave is equivalent to a particle from the particle-wave dynamics standpoint, but within our temporal (t > 0) universe any physical wave dynamic has to be time- and band-limited, otherwise it is a virtual wave dynamic. From this we see that it is very unlikely that two wavelets (or particles) will arrive at both slits at the same time. Yet a question remains: why does it work for a continuously emitting laser? Apparently a continuous light emitter has a longer time-limited duration. For example, if we assume that humans had a 300-year life expectancy, then there is a good chance we could have coexisted with Einstein, Schrödinger, and perhaps even Newton at some time, but not at the same place. On the other hand, if our universe were a time-independent (i.e., timeless) space, then in principle we could travel back in time to visit them. The point is that within our temporal (t > 0) universe everything has a price to pay, an amount of energy ΔE and a section of time Δt (i.e., ΔE, Δt); this cost is necessary, though not sufficient. From this we see that the superposition principle is limited by a section of time Δt, even though ΔE and Δt coexist. Nevertheless, we can hypothetically show that the instantaneous and simultaneous superposition phenomenon does not hold using the postulated set-up shown in Figure 13, which is a physically realizable paradigm since substance and temporal (t > 0) space are mutually inclusive. If the path-length difference between d1 and d2 exceeds the coherence length D of the coherent illuminator (i.e., the laser), as given by |d1 − d2| = c|Δt1 − Δt2| > D, where the d's are the distances, the Δt's are the corresponding incremental times, and c is the velocity of light, then no interference pattern can be observed at the diffraction screen P. This means that the photonic particles (i.e., photons) emitted from the laser do not arrive at the double slit simultaneously and instantaneously, from the standpoint of coherence theory.

Let me further note that if one submerges any scientific model within a temporal (t > 0) subspace, it is rather easy to find that paradoxes observed within an empty subspace do not exist. Whenever a scientific model is submerged within a temporal (t > 0) subspace, the model becomes a part of that temporal (t > 0) space for analysis, and many of the timeless (t = 0) paradoxes, such as Schrödinger's cat and those of Einstein's theories, can then be resolved rather easily. Nonetheless, this is an inadvertent error that scientists have committed for centuries: all the laws, principles, theories, and paradoxes were developed from the same empty, timeless subspace. That is why most scientists believe we can travel ahead of or behind the pace of time, as Einstein's special theory suggests, and that we can simultaneously and instantaneously exploit photonic particles for computing and communication, as Schrödinger's fundamental principle of superposition indicates. For example, if one plunges two moving spaceships into an empty space, we cannot tell which one is moving with respect to the other.
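As a quick numerical illustration of the coherence-length argument just made, the sketch below checks whether a given path-length difference stays within the coherence length of the source. The relation D = c·Δt_c, with Δt_c a coherence time, is my reading of the condition above, so treat it as an assumption rather than the author's exact formula.

```python
# Illustrative check of the coherence-length condition for observing interference.
# Assumption: coherence length D = c * coherence_time.
C = 299_792_458.0  # speed of light, m/s

def interference_expected(d1_m, d2_m, coherence_time_s):
    """Return True if the path difference is within the source coherence length."""
    coherence_length = C * coherence_time_s
    return abs(d1_m - d2_m) <= coherence_length

# Example: a laser with ~1 ns coherence time (coherence length ~0.3 m)
print(interference_expected(1.000, 1.050, 1e-9))   # 5 cm difference  -> True
print(interference_expected(1.000, 1.500, 1e-9))   # 50 cm difference -> False
```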
However, if we submerge the same scenario within a temporal (t > 0) subspace, we can inevitably figure out the relative position between them, since time is space and space is time within a temporal (t > 0) subspace, while within an empty space there is no time and no distance to distinguish them. And this is precisely why Einstein's special theory is relativistically direction-independent, and why his general theory of relativity is a deterministic principle. From this it is trivial for us to submerge a pair of entangled particles within a temporal (t > 0) subspace; we would then find that instantaneous (i.e., Δt = 0) entanglement does not exist, since within our universe there is always a section of time Δt to pay, aside from an amount of energy ΔE, and they are not free. Let me further stress that the pace of time is one of the most esoteric variables existing within our universe and cannot be changed; it is only the section of time Δt we spend that can be somewhat manipulated. The section Δt that we spend can be squeezed as small as we wish, yet we can never squeeze it to zero (i.e., t = 0), even if we have all the energy ΔE (i.e., ΔE → ∞) to pay for it. And this is the well-known causality constraint within our temporal (t > 0) universe, which cannot be violated. Furthermore, a question remains to be asked: if the slit width of Young's experiment is smaller than the wavelength of the illuminator, would you be able to observe the diffraction pattern? If the answer is no, then we see that wave dynamics is equivalent to a particle in motion but is not equal to the particle, since a photonic particle has no size. A particle in motion is equivalent to a wave dynamic, but a wave dynamic is not a particle and a particle is not a wave. Finally, I would say that when science turns to virtual reality for a solution, the answer is not reliable; but when science turns to physical reality for an answer, the solution is reliable.

Conclusion

I would conclude that quantum scientists have used amazing mathematical analyses, together with fantastic computer simulations, to provide very convincing virtual evidence. But mathematical analyses and computer animations are virtual and fictitious, and many of those animations are not physically realizable; for example, the superposition principle used for quantum computing does not actually exist within our universe. One of the important aspects of our universe is that one cannot get something from nothing: there is always a price to pay, an amount of energy ΔE and a section of time Δt, and they are not free. Since science within our universe is temporal (t > 0), any scientific law, principle, theory, or paradox has to comply with the temporal (t > 0) condition of our universe, otherwise it is unlikely to be physically realizable. Science is mathematics, but mathematics is not equal to science. The Schrödinger equation is a legacy of Hamiltonian classical mechanics, and I have shown that it is a timeless (t = 0), or time-independent, formulation, which means that his superposition is not a physically realizable principle. Since Schrödinger's cat is one of the most controversial paradoxes in modern science, I have shown that the paradox of Schrödinger's cat is not a physically realizable paradox and should not have been postulated. Nevertheless, the most esoteric aspect of our universe must be time, for which every fundamental law, principle, and theory is associated with a section of time Δt.
I have shown that it is the section of time Δt we have expended that cannot be brought back. I have also shown that we can squeeze a section of time Δt close to zero (i.e., Δt → 0), but it is not possible to reach zero (i.e., Δt = 0), even if we have all the energy ΔE to pay for it. In this we see that we can change a section Δt, but we cannot change the pace of time. Quantum computing and communication rely on qubit information logic, but qubit information can only exist within a timeless (t = 0) subspace; I have shown that qubit information is as virtual and illusive as Schrödinger's cat, and it is not a physically realizable form of information that can be used for quantum-supremacy communication and computing. Although the double-slit hypothesis is a well-accepted postulation for showing that the superposition principle holds, the postulation unfortunately holds only within an empty-space paradigm and does not exist within our temporal (t > 0) universe. What I mean is that the double-slit postulation is another false hypothesis, alongside Schrödinger's cat, that has led us to believe superposition actually exists within our universe. Quantum supremacy relies on qubit information transmission, which has caused a worldwide quantum conspiracy; I hope this conspiracy will end soon, otherwise we will be forever trapped within a timeless wonderland of quantum supremacy. From this we see that it is not how rigorous the mathematics is, but the temporal (t > 0) subspace paradigm, that produces viable, realizable solutions.
11,690
2021-08-12T00:00:00.000
[ "Physics" ]
Effective binning of metagenomic contigs using contrastive multi-view representation learning

Contig binning plays a crucial role in metagenomic data analysis by grouping contigs from the same or closely related genomes. However, existing binning methods face challenges in practical applications due to the diversity of data types and the difficulties in efficiently integrating heterogeneous information. Here, we introduce COMEBin, a binning method based on contrastive multi-view representation learning. COMEBin utilizes data augmentation to generate multiple fragments (views) of each contig and obtains high-quality embeddings of heterogeneous features (sequence coverage and k-mer distribution) through contrastive learning. Experimental results on multiple simulated and real datasets demonstrate that COMEBin outperforms state-of-the-art binning methods, particularly in recovering near-complete genomes from real environmental samples. COMEBin also outperforms other binning methods remarkably when integrated into metagenomic analysis pipelines, including the recovery of potentially pathogenic antibiotic-resistant bacteria (PARB) and moderate- or higher-quality bins containing potential biosynthetic gene clusters (BGCs).

Supplementary figure legends:
Fig. S3 COMEBin recovers more known and unknown bins with >50% completeness and <5% contamination at the species level. "Known" genomes refer to bins that can be annotated at the species level using GTDB-Tk, and "unknown" otherwise.
Fig. S4 Comparison of the number of bins with F1-score > 0.9 recovered by each binning algorithm. "Unique" denotes the unique strains (genomes with an average nucleotide identity (ANI) of less than 95% to any other genome) introduced in the benchmark paper [1], and "common" otherwise.
Fig. S5 Comparison of the number of bins with F1-score > 0.9 recovered by each binning algorithm on the Strain Madness GSA dataset. "Unique" and "common" are defined as in Fig. S4.

Table notes: results annotated with an asterisk (*) represent the total runtimes or memory usage across all ten samples in VAMB's multi-sample mode; "BATS (average)" represents the average running time or memory usage across the ten BATS samples; each tool was run on each dataset three times and the mean scores with their standard deviations are reported; "#hidden layers" denotes the number of hidden layers, "#hidden units" the number of hidden units, and "#sequencing samples" the number of sequencing samples; "Q20 (%)" represents the fraction of reads with an average quality > 20.

Algorithm S1 The contrastive learning training process of COMEBin
Input: batch size N_bs; number of views V; neural networks f_cov and f_combine; contig features X^(com) and X^(cov).
Output: trained networks f_cov and f_combine.
for each training iteration do
    for i in {1, 2, ..., N_bs} do
        for v in {1, 2, ..., V} do
            …
        end for
    end for
    update the network parameters of f_cov and f_combine to minimize L (L is given in Equation 12 in the main text)
end for
return f_cov and f_combine

An illustrative code sketch of a training step in this spirit is given below.
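The following is a minimal sketch of one such training step, assuming an NT-Xent / supervised-contrastive style loss in which the augmented views (fragments) of the same contig act as positives. The actual COMEBin loss L is Equation 12 of the main text and is not reproduced here; the architectures of f_cov and f_combine, the feature dimensions, and the temperature are placeholders.

```python
# Hypothetical sketch of a contrastive multi-view training step in the spirit of
# Algorithm S1. The real COMEBin loss (Eq. 12) and network architectures are not
# reproduced; this only illustrates pulling embeddings of views from the same
# contig together and pushing apart views from different contigs.
import torch
import torch.nn.functional as F

def contrastive_step(f_cov, f_combine, x_com, x_cov, optimizer, temperature=0.1):
    """x_com, x_cov: tensors of shape (N_bs, V, d_com) and (N_bs, V, d_cov),
    holding V augmented views (contig fragments) per contig."""
    n_bs, v = x_com.shape[0], x_com.shape[1]
    cov_emb = f_cov(x_cov.reshape(n_bs * v, -1))                     # embed coverage features
    z = f_combine(torch.cat([x_com.reshape(n_bs * v, -1), cov_emb], dim=1))
    z = F.normalize(z, dim=1)                                        # unit-length embeddings
    sim = z @ z.t() / temperature                                    # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                                # ignore self-similarity
    contig_id = torch.arange(n_bs).repeat_interleave(v)              # view -> contig label
    pos = (contig_id.unsqueeze(0) == contig_id.unsqueeze(1)).float()
    pos.fill_diagonal_(0.0)                                          # positives: other views of same contig
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1)             # mean log-prob of positives
    loss = loss.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                                 # update f_cov and f_combine
    return loss.item()
```

In this formulation the coverage features pass through f_cov first and are then fused with the k-mer (composition) features by f_combine, mirroring the heterogeneous-feature integration described in the abstract.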
2 Supplementary Note

2.1 Estimating completeness and contamination of the bins
Similar to MetaBinner [2], we utilized CheckM1 [3] to analyze one binning result and identify contigs containing single-copy genes of the bacterial or archaeal domains. Subsequently, we employed the scoring strategy provided by CheckM1 [3] to estimate the contamination and completeness of each bin in all the clustering results, leveraging the obtained information.

2.2 The binning performance of COMEBin on long-read data
We conducted additional testing to evaluate COMEBin's performance on four long-read datasets. We included SemiBin2, SemiBin2 (long-read mode), MetaDecoder, and MetaBAT2 for comparison. Three of these datasets were previously used in SemiBin2's evaluation. Long-read assemblies were generated using flye (version 2.9.2) with the options "--pacbio-hifi" and "--meta". More details about the long-read datasets can be found in Table S7. These datasets are publicly available in the National Genomics Data Center (NGDC) under the study accession PRJCA007414 (Runs: CRR344871 and CRR344872), in the ENA under the run accession SRR10963010, and in the NCBI under the run accession ERR9769275. It is worth noting that long-read sequencing typically produces highly contiguous assemblies, resulting in fewer contigs and smaller bins (measured by the number of contigs) [4]. According to the results shown in Supplementary Fig. S10, SemiBin2 (long-read mode) performs best, followed by COMEBin.

2.3 Comparison of variants of COMEBin using different clustering methods
We conducted experiments with different variants of COMEBin, replacing the Leiden-based clustering method with Infomap, as implemented in SemiBin1. Additionally, we employed k-means and weighted k-means for clustering, utilizing the embeddings as features, and determined bin numbers based on single-copy genes. In "weighted k-means", we assigned the weight for each contig based on its length. For Infomap, we used the same graphs converted from the embeddings as inputs, following the same methodology for automatically selecting the final result as in COMEBin. The parameters used to generate the graphs included σ in Formula 13, with values of 0.05, 0.1, 0.15, 0.2, and 0.3, along with edge ratios (proportions of edges kept for clustering) with values of 50%, 80%, and 100%. Our comparative analysis revealed that COMEBin outperforms its variants, as illustrated in Supplementary Fig. S13; an illustrative sketch of this embedding-to-graph conversion is given below.
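The following is a minimal sketch of how contig embeddings might be converted into a weighted graph for community detection (Leiden or Infomap). Formula 13 is not reproduced in this extract, so the Gaussian kernel on pairwise distances and the "keep the strongest fraction of edges" rule below are assumptions, with sigma and edge_ratio standing in for the σ values and edge ratios mentioned above.

```python
# Hedged sketch: build a weighted edge list from contig embeddings.
# Assumptions: Gaussian kernel weights and retention of the strongest edge_ratio
# fraction of edges; the real conversion (Formula 13) may differ.
import numpy as np

def embeddings_to_edges(embeddings, sigma=0.1, edge_ratio=0.8):
    """Return a list of (i, j, weight) edges built from contig embeddings."""
    z = np.asarray(embeddings, dtype=float)
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=2)   # squared distances
    w = np.exp(-d2 / sigma)                                    # kernel weights
    iu, ju = np.triu_indices(len(z), k=1)                      # unique pairs
    weights = w[iu, ju]
    keep = max(1, int(edge_ratio * len(weights)))
    order = np.argsort(weights)[::-1][:keep]                   # strongest edges first
    return [(int(iu[k]), int(ju[k]), float(weights[k])) for k in order]

# toy usage: 4 contigs with 2-D embeddings
edges = embeddings_to_edges([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]],
                            sigma=0.1, edge_ratio=0.5)
print(edges)
```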
Supplementary figure legends (continued):
Fig. S1 Comparison of binning methods on four simulated datasets based on the F1-score (bp), Adjusted Rand Index (bp), percentage of binned bp, and accuracy (bp) metrics. a, CAMI Gt dataset; b, CAMI Airways dataset; c, CAMI Skin dataset; and d, CAMI Mouse gut dataset.
Fig. S6 COMEBin outperforms other binners on real datasets in single- and multi-sample binning.
Fig. S7 Comparison of variants of COMEBin. We conducted experiments with different variants of COMEBin by replacing COMEBin embeddings with those from other methods, and then applied the same clustering approach used in COMEBin for binning.
Fig. S8 Comparison of COMEBin with different numbers of views. The number of views indicates the number of sequence fragments extracted from each original contig for augmentation. A view count of six implies that five sequence segments were randomly sampled for augmentation from each original contig, resulting in six views including the original contig. The default setting for COMEBin is six views.
Fig. S9 Comparison of binning methods on two low-complexity datasets. Note that the default settings of VAMB are not applicable to the CAMI mouse gut (10-genome) dataset, as the dataset contains fewer than 4096 contigs.
Fig. S10 Comparison of binning methods on long-read sequencing datasets.
Fig. S12 Sequence length distribution for the real datasets.
Fig. S13 Comparison of variants of COMEBin using different clustering methods.

Supplementary table legends:
Table S1 Running time and memory usage for different datasets and binning modes.
Table S2 Sample information of the MetaHIT (10-sample) and Bermuda-Atlantic Time-series Study (BATS) samples.
Table S3 Hyper-parameters used by the network module in the experiments.
Table S4 Simulated datasets used in the experiments.
Table S5 Real datasets used in the experiments.
Table S7 Long-read sequencing datasets used for extended experiments.
1,573.6
2024-01-17T00:00:00.000
[ "Computer Science", "Biology" ]
A novel similarity measurement for triangular cloud models based on dual consideration of shape and distance

It is important to be able to measure the similarity between two uncertain concepts for many real-life AI applications, such as image retrieval, collaborative filtering, risk assessment, and data clustering. Cloud models are important cognitive computing models that show promise in measuring the similarity of uncertain concepts. Here, we aim to address the shortcomings of existing cloud model similarity measurement algorithms, such as poor discrimination ability and unstable measurement results. We propose the EPTCM algorithm, based on the triangular fuzzy number EW-type closeness and cloud drop variance, which considers both the shape and the distance similarities of cloud models. The experimental results show that the EPTCM algorithm has good recognition and classification accuracy and is more accurate than the existing likeness comparing method (LICM), overlap-based expectation curve (OECM), fuzzy distance-based similarity (FDCM) and multidimensional similarity cloud model (MSCM) methods. The experimental results also demonstrate that the EPTCM algorithm successfully overcomes the shortcomings of existing algorithms. In summary, the EPTCM method proposed here is effective and feasible to implement.

INTRODUCTION

Natural language is a valuable tool for human communication and thinking. However, there can be great uncertainty in the use of language, which can be summarized by the concepts of randomness and fuzziness (Li & Du, 2017). Research on natural language processing is both challenging and poetic. Artificial intelligence is also marked by ambiguity (Müller et al., 2022), especially in the era of big data. Although the development of information transmission and storage technology has improved big data processing, it is still impossible to obtain a complete, real-time picture of all the data. Li, Di & Li (2000) put forward cloud model theory in the early 1990s; it integrates fuzziness and randomness, realizes the mutual conversion between qualitative concepts and quantitative representations, and is intuitive and universal. After several years of exploration and development, the cloud model has become more complete and more universal (Wang, Li & Yang, 2019). The cloud model has been successfully applied and developed in many fields, such as the statistical representation of engineering parameters (Chen et al., 2020; Luo et al., 2022), system evaluation and decision-making (Tong & Srivastava, 2022; Su & Yu, 2020; Wu et al., 2020), data mining (Shehab, Badawy & Ali, 2022; Zarinbal, Zarandi & Turksen, 2014), image processing (Li, Li & Du, 2018; Tversky & Kahneman, 1992), and decision-making problems (Yu et al., 2021; Wang, Huang & Cai, 2020; Zhou, Chen & Ming, 2022). It should be noted that the practical applications of cloud model theory (such as data mining and decision analysis) all involve similarity measurement (Zhang, Zhao & Li, 2004). Therefore, the similarity measurement will directly influence the actual application of cloud model theory.
The comparison between the similarities of cloud model applications is of great interest to researchers (Li, Wang & Yang, 2019). Cloud models express the uncertainty of data intuitively and provide a method with which to analyze qualitative concepts in a way similar to human cognition. The randomness and fuzziness of the cloud model make it advantageous in dealing with uncertain problems such as data clustering (Sheng et al., 2019), data classification (Wang et al., 2021), and similarity searches (Luo et al., 2022). Cloud models have been developed and improved over time, resulting in various similarity measurement methods. Zhang et al. (2007) viewed the digital features of two cloud models as elements of two vectors and characterized the similarity of the cloud models by the cosine of the angle between the two vectors (LICM). Li, Guo & Qiu (2011) proposed the area-proportion method based on the expectation curve (expectation-based cloud model, ECM); this method uses the intersection area enclosed by the expectation curves of two cloud models and the horizontal axis to represent their similar components, and hence the similarity of the cloud models. Inspired by the relationship between the Gaussian distribution and the GCM, researchers have utilized distances between probability distributions, such as the Kullback-Leibler divergence (KLD) (Xu & Wang, 2017), the earth mover's distance (EMD) (Yang, Wang & Li, 2016), and the square root of the Jensen-Shannon divergence (Yang et al., 2018), to describe concept drift as reflected by the distance between two cloud models (EMDCM). Wang et al. (2018) defined a new fuzzy distance measure for cloud models based on α-cuts and proposed a new cloud model similarity measurement method using this fuzzy distance (fuzzy distance-based similarity, FDCM). Yan et al. (2019) used the overlap-based expectation curve of the cloud model (OECM) algorithm to measure the similarity of cloud models; in this algorithm, the overlapping degree describes the overlapping part of two clouds, which is transformed into the similarity of the cloud models by using the membership degree of the "3En" boundary and the intersection of the two clouds. Li et al. (2020) proposed a cloud model similarity measurement method based on uncertain distributions (UDCM). Zhang et al. (2021) put forward a new similarity measurement method for multi-dimensional cloud models based on the fuzzy similarity principle (MSCM). Luo et al. (2022) proposed a new structural damage identification method (MCM) based on a cloud model similarity measurement of response surface model updating. A brief summary is given in Table 1 to illustrate the shortcomings of the existing methods.

Table 1. Main contents and limitations of existing cloud model similarity measurement methods:
- LICM (Zhang et al., 2007): the discrimination of the measurement results is low; when there is a large difference between the numerical characteristics of the cloud models, the calculated similarity error is larger.
- ECM (Li, Guo & Qiu, 2011): the method ignores the role of hyper-entropy (He) in cloud models, and the measurement results generally differ from human cognition; the calculation steps are tedious and the arithmetic is complicated.
- EMDCM: the discrimination of the measurement results is low; the method does not find the difference between two different concepts in some special situations because it neglects the variation of hyper-entropy (He); moreover, it is only partially interpretable because it does not capture the relationship between entropy and hyper-entropy (Li, Wang & Yang, 2019).
- FDCM (Wang et al., 2018): the method has a complex arithmetic process and is costly to run on the CPU; the measurement results remain unstable; the threshold (δ) of cloud droplets is difficult to determine.
- OECM (Yan et al., 2019): the discrimination of the measurement results is not good; the algorithm only considers the overlap between cloud models and does not consider their shape similarity, so it can only be partially explained.
- UDCM (Li et al., 2020): the method still involves integral operations and consumes a large amount of CPU runtime; the calculation results are influenced by the number of cloud droplets and the number of experiments.
- MSCM (Zhang et al., 2021): the algorithm can fail in some special cases; for example, when the expectations (Ex) of the two clouds are equal, the metric result is constant at 1; it ignores the roles of entropy (En) and hyper-entropy (He), which is inconsistent with human subjective cognition and leaves loopholes.
- MCM (Luo et al., 2022): although hyper-entropy is considered in MCM, it fails when the hyper-entropy is very large; the calculation steps are tedious and the arithmetic is complicated.
Currently, there is no consensus on how to evaluate similarity measurement methods for cloud models. However, a good cloud-concept similarity metric needs to be stable and efficient, and able to highlight the differences between different types of clouds; it should ensure greater differentiation and guarantee correct similarity conclusions. In addition, a similarity metric for cloud models with good performance should be generic.

In order to solve the problems of existing cloud model similarity metrics, this study proposes a new cloud model similarity measurement method, taking the triangle cloud model (Gong, Dai & Hu, 2016), an extension of the normal cloud model, as the research object. A triangular cloud model shape similarity (PCM) based on cloud drop variance is proposed as the shape similarity of two groups of cloud models, and it is combined with a triangular cloud model distance similarity (ETCM) based on EW-type closeness (Bao & Bai, 2018). Both the distance and the shape similarity of the cloud models are thus taken into account, and the method returns better discrimination results. The experimental results show that the discrimination is higher, and the simulation results show that the measurements obtained by the EPTCM method are consistent with people's intuitive impressions, which indicates that the method is reasonable and can better distinguish different types of cloud models.

Definitions and notions

Here, we provide definitions, relationships, and necessary lemmas for the normal cloud, the triangular cloud model, and triangular fuzzy numbers. We then describe the variance of the triangular cloud model and the EW-type closeness.

Definition 1. Let U be a non-empty infinite set expressed by accurate numerical values, and let C be a qualitative concept on U. If, for an accurate numerical value x ∈ U, the mapping y = µ_C(x) ∈ [0,1] of x to C is a random number with a stable law, then the distribution of (x, y) on the universe U is called a cloud, and each (x, y) is called a cloud drop (Li, Han & Shi, 1998).

Definition 2.
The three characteristic parameters (Ex, En, He) of the cloud model are the quantitative embodiment of its qualitative concept. The expectation (Ex) of the cloud is the expected value of the cloud over the non-empty infinite set, and it is also the center of gravity corresponding to the maximum value of the membership degree Y. The entropy (En) is a measure of the uncertainty of the cloud model, reflecting the expected dispersion of the cloud drops and the fuzziness of the cloud model data. The hyper-entropy (He) is the entropy of En, a measure of the uncertainty of the cloud model entropy; its value represents the thickness of the cloud and reflects the randomness of the cloud model data.

Definition 3. If the random variable x satisfies x ∼ N(Ex, En′²), where En′ ∼ N(En, He²), and the certainty degree of x with respect to the qualitative concept satisfies Eq. (1), then the distribution of x on the non-empty infinite set U is called a normal cloud (Li, Han & Shi, 1998).

Definition 4. If the random variable x satisfies x ∼ N(Ex, En′²), where En′ ∼ N(En, He²), and the certainty degree of x with respect to the qualitative concept satisfies Eq. (2), then the distribution of x on the non-empty infinite set U is called a triangle cloud.

Definition 5. If the random variable x satisfies x ∼ N(Ex, En′²), where En′ ∼ N(En, He²), and He = 0 (so that En′ = En), then the curve y given by Eq. (3) is called the expected curve of the triangle cloud. The expected curve is obtained from the distribution law of the cloud drops in the horizontal direction; it intuitively describes the shape characteristics of the triangle cloud, and all cloud drops fluctuate randomly around it.

Definition 6. Fuzzy numbers are convex fuzzy sets defined on the real numbers R (Wu & Zhao, 2008). For a fuzzy number whose membership degree satisfies Eq. (4), r = (r_l, r_m, r_u) is called a triangular fuzzy number. The membership function of r is F(x) : R → [0,1], where x ∈ R and R is the real number field; r_l, r_m and r_u are the lower bound, median, and upper bound of the triangular fuzzy number, respectively, with r_l ≤ r_m ≤ r_u. When they are equal, r degenerates into a real value.

Definition 7. For a_, ā ∈ R with a_ ≤ ā, a = [a_, ā] is called an interval number; the relationship between fuzzy numbers and interval numbers is shown in Fig. 1, and the set of all interval numbers is denoted [R]. For a ∈ [R], Eq. (5) holds, where E(a) and W(a) are respectively the expected value and the width of the interval number a (Bao, Peng & Zhao, 2013).

Definition 8. For u ∈ F₀, where F₀ is the fuzzy number space (Bao & Bai, 2018), the r-cut set of the fuzzy number u is a closed interval, as shown in Eq. (6). The r-cut interval of a triangular fuzzy number is shown in Fig. 2. The order relation on F₀ is defined as follows: u ≤ v if and only if, for any r ∈ [0,1], both endpoints of the r-cut of u are no greater than the corresponding endpoints of the r-cut of v; and u ≥ v if and only if the reverse inequalities hold. Definition 8 can be regarded as a bridge between fuzzy numbers and interval numbers, and it is also the theoretical basis for transforming the closeness of interval numbers into the closeness of fuzzy numbers in the EW-type closeness.

Lemma 1. The quantity D(x) = En² + He² is called the variance of the cloud drops of the cloud model. He determines the thickness of the cloud, and En determines the dispersion degree of the cloud drops; the larger the difference in He and En between two clouds, the smaller the shape similarity between them.

Lemma 2. The mapping f defined in Bao & Bai (2018) gives the closeness of two interval numbers u and v.

Lemma 3. The mapping N^p_EW : F₀ × F₀ → [0,1] defined in Bao & Bai (2018) is called the EW-type closeness of the triangular fuzzy numbers u and v.
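As an illustration of Definitions 2 and 3, the following is a minimal sketch of a forward cloud generator that turns the numerical characteristics (Ex, En, He) into cloud drops. The Gaussian certainty degree used here is the standard normal-cloud form; the triangular certainty of Definition 4 (Eq. (2)) is not reproduced, so treat the details as an assumption.

```python
# Minimal forward normal-cloud generator: En' ~ N(En, He^2), x ~ N(Ex, En'^2),
# with the standard Gaussian certainty degree assigned to each drop.
import numpy as np

def normal_cloud_drops(ex, en, he, n_drops=1000, rng=None):
    """Generate cloud drops (x, y) for a normal cloud model C(Ex, En, He)."""
    rng = np.random.default_rng(rng)
    en_prime = rng.normal(en, he, size=n_drops)            # En' ~ N(En, He^2)
    en_prime = np.abs(en_prime) + 1e-12                    # keep the scale positive (assumption)
    x = rng.normal(ex, en_prime)                           # drop position
    y = np.exp(-((x - ex) ** 2) / (2.0 * en_prime ** 2))   # certainty degree
    return x, y

drops_x, drops_y = normal_cloud_drops(ex=0.0, en=1.0, he=0.1, n_drops=5, rng=0)
print(list(zip(np.round(drops_x, 3), np.round(drops_y, 3))))
```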
A novel similarity measure for the triangular cloud model

Many studies consider only one aspect when measuring the similarity of cloud models, and therefore these methods have shortcomings. In fact, the similarity of cloud models can be observed from two aspects: shape and distance. By combining these two perspectives in a scientific way, a more complete similarity measurement method for cloud models can be obtained. The EPTCM method proposed here is formed from this perspective. It consists of a combination of two methods, ETCM and PCM. Since the ETCM method introduces only Ex and En, it is regarded as the distance similarity between cloud models, whereas the PCM method relates precisely to the shape of the two cloud models (only En and He are considered). After establishing these two methods, a weighting scheme is designed in this article to combine them scientifically: the ETCM and PCM methods are combined by weighting to obtain the final EPTCM method.

Similarity measure of the expected curve of the cloud model based on EW-type closeness

Fuzzy closeness is an important concept for triangular fuzzy numbers (Jiang et al., 2019). Compared with the traditional Hausdorff distance (Hossein-Abad, Shabanian & Kazerouni, 2020) and P-distance (You & Yan, 2017) formulas, the EW-type distance formula (d^p_EW) (Bao, Peng & Zhao, 2013) used in the EW-type closeness (Bao & Bai, 2018) considers both the difference between the expectations of two interval numbers and the difference between their widths. The simulation results of Bao, Peng & Zhao (2013) show that this method describes the distance between interval numbers more comprehensively and finely, and that the utilization of information is greatly improved. Compared with the traditional exponential closeness (Eq. (11)), the EW-type closeness introduces the closeness of the interval numbers into the calculation; Bao & Bai (2018) show that the EW-type closeness has better discrimination and practicability.

According to the "3En" rule of the triangle cloud model (Li, Han & Shi, 1998), more than 90% of cloud drops fall in the range [Ex − 3En, Ex + 3En]. Therefore, when calculating the similarity of triangular cloud models, we only need to consider the cloud drops in this range and the expected curve. As shown in Fig. 3, applying the "3En" rule of the forward triangle cloud to the expected curve in Definition 5 gives the transformed curve of Eq. (12). Equation (12) conforms to the definition of a triangular fuzzy number in Definition 6 and is denoted y = <Ex − 3En, Ex, Ex + 3En>; the EW-type closeness can therefore be applied to y. Since Ex − (Ex − 3En) = (Ex + 3En) − Ex = 3En, y is called the symmetric triangular fuzzy number based on the expected curve (Hwang & Yang, 2007) and is denoted ỹ = (Ex, 3En)_T, where Ex and 3En are the expectation and the ambiguity (also called the width) of the triangular fuzzy number based on the expected curve, respectively.

According to Definitions 7 and 8, the upper and lower bounds of the r-cut closed interval [u]_r of the expected curve can be obtained from the triangular fuzzy number y = <Ex − 3En, Ex, Ex + 3En>, and from them the expectation and width of the interval u follow. Setting the triangular fuzzy number v = <Ex₂ − 3En₂, Ex₂, Ex₂ + 3En₂> for the expectation curve of another cloud model, the similarity of the expectation curves of the two groups of cloud models based on the EW-type closeness can then be obtained from Lemma 3.
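To make the r-cut step concrete, here is a small sketch of the interval obtained by cutting the symmetric triangular fuzzy number <Ex − 3En, Ex, Ex + 3En> at level r. Reading the interval's expected value and width as its midpoint and half-width is my assumption about the unreproduced Eq. (5), not necessarily the paper's exact convention.

```python
# r-cut of the symmetric triangular fuzzy number built from the expectation curve,
# plus the (assumed) midpoint/half-width reading of an interval number.
def r_cut(ex, en, r):
    """r-cut [u]_r of the triangular fuzzy number <Ex - 3En, Ex, Ex + 3En>, 0 <= r <= 1."""
    lower = (ex - 3.0 * en) + r * 3.0 * en     # linear interpolation toward the peak
    upper = (ex + 3.0 * en) - r * 3.0 * en
    return lower, upper

def interval_expectation_width(lower, upper):
    """Expected value and width of an interval number (midpoint / half-width assumption)."""
    return (lower + upper) / 2.0, (upper - lower) / 2.0

lo, up = r_cut(ex=10.0, en=2.0, r=0.5)          # -> (7.0, 13.0)
print(interval_expectation_width(lo, up))        # -> (10.0, 3.0)
```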
The ETCM algorithm turns two sets of cloud model expectation curves, restricted by the "3En" principle, into two triangular fuzzy numbers; the EW-type closeness can then be used to calculate the similarity of the two triangular fuzzy numbers for r ∈ [0,1]. Because the ETCM method does not introduce He into the calculation, it is regarded as the distance similarity of the triangular cloud model. The larger the Sim(ETCM) value, the higher the distance similarity between the two triangular cloud models, and vice versa.

Shape similarity measurement of the triangular cloud model based on cloud drop variance

All clouds can be translated to the position x = 0; therefore, the shape of a cloud has nothing to do with its expectation (Ex). As mentioned earlier, the cloud's En and He reflect the shape of the cloud and describe the conceptual extension of the variable. It is clear that the basic skeleton of the cloud model shape is determined by En through the "3En" rule, while He controls the dispersion of the thickness, or conceptual extension, of the cloud. Based on the above, we sought to determine the relationship mathematically. According to Lemma 1, the variance D(x) = En² + He² of the triangular cloud model consists of En and He. Although the variance does not consider Ex (the locational relationship of the cloud models), it fully reflects the shape similarity between cloud models. The greater the difference in En between two clouds, the smaller their shape similarity, so we introduce the mean square error of the cloud model to measure shape similarity. If there are two triangular cloud models C_i(Ex_i, En_i, He_i) and C_j(Ex_j, En_j, He_j), their shape similarity is expressed by Eq. (16). Regardless of the difference between the Ex of the two cloud models, if the En and He of the two clouds are equal, their shape similarity is Sim(PCM) = 1. Although the Ex of the triangular cloud models C2 and C3 are different, the shapes of the two clouds are consistent (Fig. 4). The PCM method has better authenticity and timeliness than the similarity method based on the maximum boundary curve of the cloud model in Li, Guo & Qiu (2011), whose model exaggerates the proportion of He when calculating shape similarity.

The integrated similarity measurement of the triangular cloud model

As previously mentioned, the ETCM method does not consider the influence of He, while the PCM method does not consider the influence of the cloud model's Ex. We defined a weighted calculation method that combines the two methods, to enrich the completeness and authenticity of cloud model similarity and to create an integrated approach incorporating all three main characteristic parameters of the cloud model (Ex, En, He). Referring to the analytic hierarchy process (AHP) and the entropy weight method proposed in Ruan et al. (2017), we defined a method to determine the similarity weights of the integrated cloud model in Eq. (17). The calculated weight coefficients (α and β) are used to weight the triangular cloud ETCM method and the PCM method, respectively. Thus, a similarity measurement algorithm for triangular cloud models based on the dual consideration of shape and distance is defined, as shown in Eq. (18). Equation (18) is the final expression of the proposed EPTCM method, and the complete computational procedure of the EPTCM algorithm is shown in Algorithm 1.
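The following is a minimal sketch of the final combination step: the distance similarity Sim(ETCM) and the shape similarity Sim(PCM) are merged with the weights α and β. The exact weighting scheme of Eq. (17) (AHP/entropy-weight based) is not reproduced, so the convex combination and the example weights below are illustrative assumptions only.

```python
# Hedged sketch of the weighted integration in the spirit of Eq. (18).
# The weights alpha and beta would come from Eq. (17); here they are placeholders.
def eptcm_similarity(sim_etcm, sim_pcm, alpha, beta):
    """Weighted integration of distance (ETCM) and shape (PCM) similarities."""
    if abs(alpha + beta - 1.0) > 1e-9:
        raise ValueError("alpha and beta are expected to sum to 1")
    return alpha * sim_etcm + beta * sim_pcm

# Example with illustrative weights
print(eptcm_similarity(sim_etcm=0.85, sim_pcm=0.60, alpha=0.6, beta=0.4))  # -> 0.75
```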
To verify that the EPTCM method is feasible and effective, simulation experiments were conducted.

EXPERIMENTS AND RESULTS

Here, we verify the feasibility and effectiveness of the proposed EPTCM method. Simulation experiments and time series classification tests were conducted using MATLAB software.

Cloud model discrimination simulation experiment

Four classical cloud models, C1-C4, are given in Zhang et al. (2007) in the context of a collaborative filtering algorithm. These four classical cloud models have been used for simulation experiments in Li, Guo & Qiu (2011) and Yu et al. (2021). To verify the advantages of the proposed EPTCM method over the existing methods (LICM, FDCM, OECM, MSCM), the same four classical cloud models were also used in our simulation experiments.
It was important to understand the data structure before applying the SYNDATA dataset.The synthetic control chart dataset had a labeled dataset.The given dataset D m×n is a matrix with m = 600 rows and n = 60 columns; every 100 rows of data in 600 rows is one category with a total of six categories.In Table 3, we describe in detail the composition of the dataset.When using the SYNDATA dataset for time series classification experiments, each record is treated as a separate query sequence.Each record needs to be calculated with the remaining 599 records for cloud model similarity, then the top k largest ones are 1 Table 3Q 2 SYNDATA-the composition and content details of the data set.Manuscript to be review Computer Science selected according to the similarity ranking, and categorized according to the group to which the k numbers belong. Detailed introduction of time series classification accuracy test To verify the accuracy and rapidity of the EPTCM method in a time series classification, we used the following algorithm and the last 10 rows of data of each main category pattern were used to form the test set.Previous studies interlaced the extraction of grayscale images in order to improve the efficiency of grayscale image processing.Here, the first 90 lines of data of each main category pattern were interlaced, leaving only odd lines.After the interlaced extraction, there were only 45 lines of data of each main category in the training set.The remaining 45 lines of data in each main category were divided into three groups, which were each subdivided into three groups labeled A, B, and C. The groups of data labeled A, B, and C each contained the every ''abnormal trends''.Next, the data groups, A, B, and C, were classified using the k-NN (k-nearest neighbor) (Fuchs et al., 2010;Lin et al., 2007) therefore, the EPTCM method can be introduced into the k-NN algorithm.The traditional k-NN algorithm determines the number of nearest neighbors of a certain eigenvalue by comparing the distances between the eigenvalues and resulting in the classification of a different eigenvalue.The EPTCM method was used to replace the relative distances between the measured eigenvalues in the traditional KNN algorithm for data classification and statistics.The structural framework of the time series classification accuracy experiment is shown in Table 4.The classification accuracy is shown in Eq. ( 20) and the classification results are shown in Figs.7 and 8, and Table 5. P X = Number of correctly classified samples The total number of samples (20) Average Accuracy GroupX = 10 Simulation experimental result analysis The similarity order of the cloud models calculated by the proposed EPTCM method is as follows: 2).This is consistent with the visual impression in Fig. 5. 
Simulation experimental result analysis

The similarity order of the cloud models calculated by the proposed EPTCM method is given in Table 2 and is consistent with the visual impression in Fig. 5. Table 2 shows that the discrimination of the EPTCM method between the cloud models of group A and group B was 0.11; the EPTCM method had the highest discrimination among the five methods, so it can better identify the similarity of the cloud models and better reflect the differences between the cloud models of groups A and B. The MSCM method had a discrimination of 0.0932 between the two groups of cloud models, second only to the EPTCM method; however, the measurement range of the MSCM method is very limited, and some of its results, such as Sim(C2, C3), are inconsistent with human cognition. The OECM method achieved a discrimination of 0.0234 between the cloud models of groups A and B, which indicates that its discriminatory ability is poor compared with the EPTCM method. The discrimination of the FDCM method was also poor, at 0.0104; moreover, it is not difficult to see from Table 2 that the FDCM value of Sim(C2, C4) is NaN, which is due to a limitation of the algorithm itself: it loses the influence of the expectation on the similarity when the En and He of both clouds are equal. The cloud similarities returned by LICM are all close to 1 and the discrimination is not clear, which is obviously inconsistent with intuitive human feeling. The above analysis shows that the EPTCM method proposed in this paper has a certain superiority over the four existing methods.

Accuracy analysis of the time series classification experiment

Figure 7 shows that the proposed EPTCM method has good classification accuracy in the time series classification experiment. When k was 1, 3, 4, 5, or 7, the classification accuracy of EPTCM was over 90% in the experiment with group A. According to the data in Table 5, EPTCM achieved an average accuracy of over 85% in the time series classification experiments with all three training sets. Figure 8 shows the classification accuracy of each method over the range k = 1-10 when the group C training set was used; it clearly shows that EPTCM was the best of the five methods in terms of classification accuracy, and the classification accuracy of the EPTCM method also shows good stability as the value of k changes on the group C training set. The proposed EPTCM method therefore has obvious advantages over the existing methods, which exhibit lower classification accuracy and poorer stability. Figure 9 shows that the EPTCM method is similar to the MSCM method in CPU overhead time; however, the average accuracy of EPTCM was better than that of MSCM. Table 5 shows that the average accuracy of the EPTCM method was significantly better than that of the four existing algorithms when groups A, B, and C were used separately.
A comparative analysis of cloud model similarity metrics

Studies by Li, Wang & Yang (2019) and Li et al. (2020) provide evaluation criteria for cloud model similarity metrics. Table 6 shows how the EPTCM method compares with the existing methods in terms of discriminability, efficiency, stability, and interpretability. Discriminability refers to the ability of the similarity measure to distinguish the differences between two concepts that are not identical; efficiency refers to the time complexity of computing the similarity between two concepts; stability means that the value of the similarity measure is constant over multiple calculations; and interpretability means that the process of calculating the similarity metric is interpretable. LICM has high efficiency and stability, but it cannot distinguish the differences between two concepts whose numerical characteristics are in the same proportion; moreover, treating the numerical characteristics as a vector does not reflect the relationship between them, so the calculation process lacks interpretability, and the simulation experiments also confirm its low discriminatory ability. MCM and OECM are similarity metrics based on the overlap of feature curves; like LICM they have high efficiency and stability, but in some cases they do not find the difference between two different concepts because they rely only on the overlap of the curves and neglect shape information (see Table 1).

Future research will address how to introduce a better fuzzy closeness method into the cloud model similarity calculation for the further development of the triangular cloud model. The code and data for the time series processing and classification experiments are available at: https://github.com/Jianjun158/Time-Series-Data-Processing-and-Classification-Experiment.git.

Figure 3. Schematic diagram of the expected curve structure parameters of the cloud model.
Figure 7. Classification accuracy of the EPTCM method under three kinds of training data with different k values.
Figure 8. Classification accuracy of different methods on the training set of group C when k = 1-10.
Table 2: Measurement results of the similarity of cloud models for the FDCM, OECM, LICM, MSCM and EPTCM methods. The data corresponding to Formula (19) are indicated in bold.
Table 4: Structural framework of the time series classification accuracy experiments. Sim(C_Testi, C_Trainj) denotes the similarity between C_Testi and C_Trainj calculated using the EPTCM algorithm; the similarities are sorted from largest to smallest, and the top 1 ≤ k ≤ 10 values of each row of the test-set similarity matrix Sim(C_Testi, C_Trainj) are selected according to the number of nearest neighbors k. Finally, the time series classification experiments are completed by counting the groups to which the k selected values belong.
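As a rough, hedged sketch of the three-step workflow summarized above (the backward cloud generator estimates and the k-nearest-neighbour vote follow the common textbook formulation of these techniques, and cosine similarity stands in for EPTCM, whose exact formula is not reproduced here), the following Python example extracts (Ex, En, He) features from each series and classifies a test series from one row of the similarity matrix:

```python
import numpy as np
from collections import Counter

def backward_cloud(x):
    """Backward (inverse) cloud generator: estimate (Ex, En, He) from samples.
    Standard certainty-degree-free formulation; the paper may use a variant."""
    x = np.asarray(x, dtype=float)
    ex = x.mean()
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()
    he = np.sqrt(abs(x.var(ddof=1) - en ** 2))
    return np.array([ex, en, he])

def similarity(c1, c2):
    """Placeholder similarity between two cloud feature vectors.
    A cosine measure (LICM-like) is used here as a stand-in for EPTCM."""
    return float(np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2) + 1e-12))

def classify(test_series, train_series, train_labels, k=3):
    """k-nearest-neighbour vote over one row of the similarity matrix."""
    c_test = backward_cloud(test_series)
    sims = [similarity(c_test, backward_cloud(s)) for s in train_series]
    top_k = np.argsort(sims)[::-1][:k]            # indices of the k most similar training series
    votes = Counter(train_labels[i] for i in top_k)
    return votes.most_common(1)[0][0]

# Tiny synthetic example (hypothetical data, not the paper's dataset)
rng = np.random.default_rng(1)
train = [rng.normal(0, 1, 100), rng.normal(0, 1, 100),
         rng.normal(5, 2, 100), rng.normal(5, 2, 100)]
labels = ["normal", "normal", "abnormal", "abnormal"]
print(classify(rng.normal(5, 2, 100), train, labels, k=3))
```

Swapping the placeholder similarity function for any of the five measures compared in Table 2 changes only `similarity`; the feature extraction and the k-NN vote stay the same.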
7,779.8
2023-08-09T00:00:00.000
[ "Computer Science", "Mathematics" ]
Divorce Rate and Economic Factors in Iran
This paper studies the relationship between divorce and Iran's economic and social variables. The results show that there is a significant relation between income distribution and divorce, such that the worse the quality of income distribution, the more divorces occur. Other results of this paper include the direct relationship between the divorce rate and the monthly expenditures of Iranian households, and its inverse relationship with per capita income and the literacy rate.
Introduction:
Divorce is related to marriage and the family and is a social institution; it has been used as an instrument for ending marriages that have failed. Divorce causes personal, domestic and social disintegration and, in most cases, has greater harms for women than for men. The study of the historical trend of the divorce phenomenon among contemporary societies reveals that, as societies shift from a feudal system to a liberal and industrial society, the possibility and frequency of divorce increase. Iran is no exception: according to official statistics of the Organization of Registration and Record of Iran, more registered divorces had happened last
Empirical evidence generally supports a negative relationship between men's income and divorce; Hoffman and Duncan (1997) find that the probability of divorce is lower among families in which the man has a high income. Weiss and Willis (1997) find that positive shocks to men's income increase the possibility of divorce; in the same vein, South and Spitze (1986) found that men's working hours are inversely related to divorce. Many other studies have examined the effects of various economic variables on divorce; for example, Nunley (2007) investigated the effects of inflation, unemployment, the gross domestic product growth rate and changes in women's education on America's divorce rate from the 1960s onward, and concluded that the effect of inflation on the divorce rate is statistically significant and positive. Also, South (1985) analyzed recessions and economic booms and found that divorce increases during recessions and decreases during booms. The paper's results showed that recession periods induce stress in spouses, while boom and expansion periods create more income for the partners. Trent and South, using regression analysis on data from sixty-six countries, also showed that divorce is positively correlated with economic development and with women's share of the labor force. Thus, the literature review shows that there is a significant relation between divorce and various economic variables. This paper aims to study the effects of socio-economic variables, including family income and expenses, on the number of divorces in society.
Data and econometric model
Our time series data cover 33 annual observations from 1974 to 2006; the data were obtained from statistical yearbooks of many years, the time series database of the Central Bank of the Islamic Republic of Iran, the time series database of Iran's Statistical Center, the Organization of Registration and Records, and other sources. The measure of income distribution used in this paper is the Gini coefficient, which is among the most important indices of income distribution inequality. The index falls between 0 (a society whose income distribution is fully equal) and 1 (a society whose income distribution is fully unequal). The other control variables used in this study are: Iranian households' monthly expenses E, the literacy rate B, the urbanization rate T, and per capita income I. Regarding the variable E, by household expenses we mean net expenses, that is, the monetary value of products or services obtained by the household for use by its members or given to other people as gifts; the data for this variable were obtained from the results of the Iranian
household income and expenses surveys carried out by the Statistical Center of Iran.
Empirical results
The model used to investigate the effect of the Gini coefficient and other economic and demographic variables on divorce was specified and estimated as follows. D indicates the number of divorces per ten thousand population; Gin indicates the Gini coefficient; I is per capita income in thousand Tomans; E indicates Iranian households' monthly expenses in thousand Tomans; T indicates the urbanization rate, equal to the ratio of urban population to total population; and B is the literacy rate. The estimation results are as follows:
D = 2.6 + 13 Gin − 0.007 I + 0.01 E + 19.4 T − 0
The estimated coefficients show that there is a direct relationship between the Gini coefficient and the divorce rate, such that the higher the Gini coefficient, the more divorces occur per ten thousand population. The Gini coefficient indicates the level of inequality in income distribution, with higher values showing greater inequality; the estimated coefficient therefore shows that as income distribution becomes more unequal, the number of divorces increases. The estimated coefficient of I is negative, indicating a negative relationship between per capita income and the divorce rate; that is, an increase in per capita income results in a decrease in the number of divorces. The relationship between the divorce rate and E is positive, which indicates that increasing the monthly expenses of Iranian households leads to more divorces. Also, the estimated coefficient of T shows that there is a direct relationship between the urbanization rate and the divorce rate: the more the rural population migrates to cities, and the more the urban population grows relative to the total population for other reasons, the more divorces occur. Finally, there is a negative relationship between literacy and the divorce rate, meaning that an increase in the literate population leads to a decrease in the divorce rate.
Conclusion:
In this paper we aimed to find a relationship between income distribution and the number of divorces in Iran. Given the statistical data available for Iran, we used the Gini coefficient as the measure of income distribution inequality; some other control variables were also used in this research, including Iranian households' monthly expenses, per capita income, the literacy rate and the urbanization rate. The results show that there is a significant relationship between income distribution and divorce, such that an increase in income distribution inequality leads to a higher divorce rate, i.e., a larger number of divorces per ten thousand population. The results also show that increases in Iranian households' monthly expenses and in urbanization increase the divorce rate, while increases in per capita income and literacy decrease it.
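As an illustration of the quantities used above, the following Python sketch computes a Gini coefficient from an income vector and fits an ordinary-least-squares regression of the same form as the reported equation. All series are synthetic placeholders (including the coefficient on B, which is cut off in the source), so this reproduces the structure of the estimation, not the paper's results:

```python
import numpy as np

def gini(income):
    """Gini coefficient of a 1-D income array: 0 = full equality, 1 = full inequality."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Synthetic annual series for 1974-2006 (33 observations); placeholders, not the paper's data
rng = np.random.default_rng(0)
n = 33
Gin = rng.uniform(0.35, 0.45, n)          # Gini coefficient
I = rng.uniform(200.0, 900.0, n)          # per capita income, thousand Toman
E = rng.uniform(100.0, 600.0, n)          # household monthly expenses, thousand Toman
T = rng.uniform(0.45, 0.70, n)            # urbanization rate
B = rng.uniform(0.50, 0.90, n)            # literacy rate
# -5 is an arbitrary stand-in for the coefficient of B, which is cut off in the source
D = 2.6 + 13 * Gin - 0.007 * I + 0.01 * E + 19.4 * T - 5 * B + rng.normal(0.0, 0.5, n)

# Ordinary least squares: D = b0 + b1*Gin + b2*I + b3*E + b4*T + b5*B
X = np.column_stack([np.ones(n), Gin, I, E, T, B])
beta, *_ = np.linalg.lstsq(X, D, rcond=None)
print(dict(zip(["const", "Gin", "I", "E", "T", "B"], np.round(beta, 3))))
print("Gini of a sample income vector:", round(gini(rng.lognormal(3.0, 0.8, 1000)), 3))
```

With real annual series for D, Gin, I, E, T and B in place of the synthetic arrays, the same least-squares call yields the kind of coefficient estimates discussed above.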
1,355.2
2014-03-31T00:00:00.000
[ "Economics" ]
Fabrication of Mn-Doped SrTiO3/Carbon Fiber with Oxygen Vacancy for Enhanced Photocatalytic Hydrogen Evolution
Because of its smooth surface and few active groups, carbon fiber is difficult to load with semiconductor photocatalysts, and the loaded catalysts easily shed off, which has always been a problem in the synthesis of photocatalysts. In this study, SrTiO3 nanoparticles were loaded onto Tencel fibers using the solvothermal method, and then the Tencel fibers were carbonized at a high temperature under inert gas to form carbon fibers; thus, SrTiO3@CF photocatalytic composite materials with a solid core-shell structure were prepared. Meanwhile, Mn ions were added into the SrTiO3 precursor reagent in the solvothermal experiment to prepare the Mn-doped Mn-SrTiO3@CF photocatalytic composite material. XPS and EPR tests showed that the prepared Mn-SrTiO3@CF photocatalytic composite was rich in oxygen vacancies. The existence of these oxygen vacancies formed oxygen defect states (VOs) below the conduction band, which constituted the capture center of photogenerated electrons and significantly improved the photocatalytic activity. The photocatalytic hydrogen experiments showed that the photocatalytic hydrogen production capacity of the Mn-SrTiO3@CF composite material with 5% Mn doping was six times that of the SrTiO3@CF material, and the doping of Mn ions not only promoted the red shift of the light absorption boundary and its extension to visible light, but also improved the separation and migration efficiency of photocarriers. In this paper, the preparation method solves the difficulty of loading photocatalysts on CF and provides a new design method for the recycling of catalysts, and we improve the hydrogen production performance of photocatalysts by Mn doping and the introduction of oxygen vacancies, which provides a theoretical method for the practical application of hydrogen energy.
Introduction
As a sustainable green technology, semiconductor photocatalysis can transform solar energy, purify the environment, and produce renewable energy. At present, there are various kinds of semiconductor photocatalysts. Among them, SrTiO3 semiconductor material has been widely used in fields such as photocatalytic water splitting for hydrogen production, virus inactivation, and pollutant treatment thanks to its excellent chemical stability, non-toxicity, low cost, good photoelectric properties, and environmental friendliness [1]. However, powder SrTiO3 photocatalytic material easily agglomerates into blocks during use, which reduces the photocatalytic performance of SrTiO3 and causes problems such as difficulties in recycling the catalyst, easy consumption, and possible secondary pollution, limiting its practical application. Therefore, some researchers have begun to think about loading photocatalysts on more suitable carriers [2]. Carbon fiber (CF) has
Photocatalytic Evolution
The photocatalytic hydrogen evolution of the samples was tested with a gas chromatograph (GC-7900, Techcomp (China) Co., Ltd., Shanghai, China), which was used to measure the hydrogen production of the samples under simulated sunlight irradiation. Before the experiment, 100 mg of catalyst sample was added into a mixture of 50 mL of Na2S (0.35 M) solution and 50 mL of Na2SO3 (0.25 M) solution, wherein Na2S and Na2SO3 were used as sacrificial agents in the photocatalytic hydrogen production experiment.
During the experiment, a PLS-SE300C 300 W xenon lamp was used to simulate the solar light source for testing.
Results and Discussion
The qualitative phase analysis and crystal structure analysis of the photocatalysts were carried out by X-ray diffraction. Figure 1 shows the XRD patterns of Mn-SrTiO3@CF, SrTiO3@CF, carbon fibers, and pure SrTiO3. It can be observed from Figure 1 that the diffraction peaks correspond to those of SrTiO3 (PDF#35-0734), which indicates that the loaded SrTiO3 has a cubic phase perovskite structure [27,28]. Mn-SrTiO3@CF has basically the same diffraction peak positions and peak intensities as SrTiO3@CF and shows no new diffraction peaks, from which it can be presumed that the doping of 5% Mn does not significantly change the crystal structure of the composite catalyst [13]. The above analysis indicated that the strontium titanate semiconductor material was successfully loaded on the surface of the carbon fibers, but whether Mn was doped into the strontium titanate material needed to be further tested and verified.
The morphology characteristics of the Mn-SrTiO3@CF photocatalytic composite fibers and their preparation process can be further understood through SEM characterization. As shown in Figure 2a,b, Tencel fibers are smooth-faced, circular, elongated fibers with a diameter of about 10 µm. Figure 2c shows the morphology of the SrTiO3@Tencel fiber prepared by the solvothermal method; paste-like SrTiO3 particles are coated on the surface of the Tencel fibers, and the diameter of the SrTiO3@Tencel composite fibers is about 11 µm. It can also be seen from Figure 2c that the SrTiO3 layer on the surface of the Tencel fibers has obvious cracks, which may be caused by high-pressure irradiation in the process of taking the SEM pictures; under high-pressure irradiation, the SrTiO3 material layer can split away from the Tencel fiber. In this solvothermal experiment, the Tencel fibers did not dissolve in the high-temperature solution because ethylene glycol and absolute ethyl alcohol were used as solvents, which effectively prevented the cellulose fibers from dissolving. Figure 2d-f shows the morphology of the SrTiO3@CF composite fibers prepared after the Tencel fibers are carbonized at high temperature to form carbon fibers, from which it can be observed that SrTiO3 particles are closely coated on the surface of the carbon fibers to form a coating layer with a core-shell structure. The SrTiO3 particles are organically combined with the carbon fibers by the high-temperature carbonization process to form a firm and close whole. Interestingly, it can be observed from Figure 2f that the SrTiO3 nanoparticles have a cubic phase structure, which is consistent with the XRD test results of SrTiO3@CF. In addition, it can be found by comparing Figure 2c,d that the diameter of the carbon fibers formed by carbonizing the Tencel fibers at high temperature is reduced, and the diameter of the prepared SrTiO3@CF composite fibers is 6-7 µm. Figure 2g,h shows SEM images of the Mn-SrTiO3@CF composite fibers prepared after doping with 5% Mn, with a surface morphology similar to that of the SrTiO3@CF composite fibers.
Figure 2i shows an SEM diagram of the cross-section structure of Mn-SrTiO3@CF composite fibers, from which it can be observed that the carbon fiber in the core of the composite material has a circular structure, and the thickness of the Mn-SrTiO3 catalyst layer of the shell layer is about 100 nm. To explore the distribution of various elements in the Mn-SrTiO3@CF sample, the SEM mapping test was performed, and the results are shown in Figure 3. It can be observed from Figure 3b that the color of carbon fibers is lighter, which is consistent with the test results in [29]. This is because the SEM mapping test is mainly for the element distribution on the surface of the samples within a certain area, the Mn-SrTiO3@CF composite fibers have a core shell structure, and the carbon fibers are in the coated state, so the content of C element detected in the Mn-SrTiO3@CF is relatively small. As shown in Figure 3, elements such as Sr, Ti, Mn, and O are evenly distributed on the surface of carbon fibers, and the content of Mn is lower, which corresponds to the actual ratio.
TEM and HRTEM of the Mn-SrTiO3@CF sample are shown in Figure 4. SrTiO3 plies observed in Figure 4a are nano-fragments shed from the Mn-SrTiO3@CF sample that was cut into pieces, from which it can be seen that most of the SrTiO3 plies are composed of small particles of 10-20 nm combined together, and these small nanoparticles have a larger specific surface area, which is conducive to improving the photocatalytic activity. The corresponding interplanar spacing shown in Figure 4c is about 0.279 nm, which shall belong to the (110) crystal face of SrTiO3 (PDF#35-0734) [30]. The doping of a small amount of Mn may change the spacing of the crystal face, which is not significant. The above experimental results showed that Mn was successfully doped in the SrTiO3 material [31], and the doping of a small amount of Mn does not significantly change the lattice structure of the SrTiO3 material. The composition and valence state of the Mn-SrTiO3@CF photocatalysts are researched through X-ray photoelectron spectroscopy (XPS). Figure 5 shows the full spectrum of the Mn-SrTiO3@CF sample and the high-resolution XPS spectra of C 1s, Sr 3d, Ti 2p, O 1s, and Mn 2p. Figure 5a shows the full spectrum of the Mn-SrTiO3@CF sample, indicating that there are mainly Sr, Ti, Mn, O, and C in the prepared sample, without other impurities.
In the fitting spectrum of C 1s shown in Figure 5b, the fitting peak at 284.0 eV is a C-C bond, which corresponds to the C-C bond in carbon fibers and is partially similar to the graphite structure, while the fitting peak at 285.2 eV shall belong to a small amount of C-O bonds on the surface of the carbon fibers [31]. It is presumed that the carbon fibers and the SrTiO3 material are bonded partially through a chemical bond formed by O atoms. It can be observed from Figure 5c that the high-resolution XPS spectrum of Ti 2p can be fitted into two characteristic peaks, located at binding energies of 458.0 eV and 463.8 eV, which presumably belong to Ti 2p3/2 and Ti 2p1/2, respectively, corresponding to Ti4+ ions. The difference between the two fitting peaks is 5.8 eV [8]. It can be observed from the spectrum of Sr 3d shown in Figure 5d that the peaks at binding energies of 132.5 eV and 134.3 eV are attributed to Sr 3d5/2 and Sr 3d3/2 of Sr2+, and the difference between the peaks is 1.8 eV [32]. Interestingly, as shown in Figure 5e, the peak of O 1s is fitted into three peaks located at 529.2 eV, 530.9 eV, and 532.4 eV, respectively, wherein the peak at the binding energy of 529.2 eV corresponds to lattice oxygen in SrTiO3, the peak at the binding energy of 530.9 eV may be adsorption oxygen in oxygen vacancies, and the peak at 532.4 eV corresponds to surface adsorption oxygen in the SrTiO3 catalyst. This test indicated that there may be oxygen vacancies in the Mn-SrTiO3@CF catalyst sample [33,34], which needed to be further confirmed by the EPR test. Figure 5f shows the high-resolution XPS diagram of Mn 2p. Because of the low content of doped Mn, the corresponding measured XPS characteristic peak signal is relatively weak. After peak fitting, it is presumed that the characteristic peaks located at binding energies of 641.6 eV and 652.6 eV correspond to Mn 2p3/2 and Mn 2p1/2, respectively, and the difference between the two peaks is 11 eV. The EPR test was conducted to further confirm the existence of oxygen vacancies in the Mn-SrTiO3@CF sample, as shown in Figure 6. It can be observed from Figure 6 that the Mn-SrTiO3@CF sample has a strong peak at g ≈ 2.003, indicating the existence of oxygen vacancies in the Mn-SrTiO3@CF sample [34,35]. The generation of oxygen vacancies in the Mn-SrTiO3@CF sample may be caused by the fact that, in the carbonization process of bamboo pulp fibers loaded with SrTiO3, a part of C reacts with O in SrTiO3, and after SrTiO3 is treated at high temperature in an argon atmosphere, a part of O in the lattice sheds off, thus forming oxygen defects. The existence of these oxygen vacancies helps to expand the light absorption range of the photocatalyst as well as to improve the charge transfer ability of the Mn-SrTiO3@CF material. Combined with the above analysis of results, this indicated that an Mn-SrTiO3@CF composite material rich in oxygen vacancies was successfully prepared, and the doping of Mn will further improve the photocatalytic activity of the material.
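As an illustration of the kind of peak deconvolution behind the O 1s analysis above (three components near 529.2, 530.9 and 532.4 eV), the sketch below fits a sum of three Gaussians to a synthetic spectrum with SciPy. The Gaussian line shape, the widths and the synthetic data are assumptions for demonstration only, not the measured XPS data or the authors' fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def three_peaks(x, a1, c1, w1, a2, c2, w2, a3, c3, w3):
    # Sum of three Gaussian components (lattice O, vacancy-related O, adsorbed O)
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2) + gaussian(x, a3, c3, w3)

# Synthetic O 1s-like spectrum on a binding-energy axis (eV); NOT measured data
be = np.linspace(526.0, 536.0, 400)
clean = three_peaks(be, 1.0, 529.2, 0.6, 0.5, 530.9, 0.7, 0.3, 532.4, 0.8)
spectrum = clean + np.random.default_rng(0).normal(0.0, 0.01, be.size)

# Initial guesses near the component positions reported in the text
p0 = [1.0, 529.2, 0.5, 0.5, 530.9, 0.5, 0.3, 532.4, 0.5]
popt, _ = curve_fit(three_peaks, be, spectrum, p0=p0)
print("Fitted peak centers (eV):", [round(popt[i], 2) for i in (1, 4, 7)])
```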
The photocatalytic hydrogen production performance of composite materials such as SrTiO3@CF and Mn-SrTiO3@CF was tested under simulated sunlight. The test results are shown in Figure 7. Na2S and Na2SO3 were used as sacrificial agents in this experiment. It can be seen from Figure 7 that carbon fibers have no hydrogen production performance under the action of light, while the hydrogen production performance of SrTiO3@CF photocatalytic composite fibers is about 46.90 µmol/g·h. It is presumed that the existence of oxygen vacancies in the SrTiO3@CF material promotes the extension of its light absorption boundary to visible light and enhances the charge transfer ability, which is conducive to improving the photocatalytic property of the material. Meanwhile, the carbon fibers have functions similar to co-catalysts, promoting the migration of photoelectrons [24]. As shown in Figure 7, the hydrogen production capacity of Mn-SrTiO3@CF composite photocatalytic fibers reaches 285.37 µmol/g·h, which is about six times that of the SrTiO3@CF material, which is obviously attributed to the doping of a small amount of Mn ions; the result is similar to the research in [36]. The doping of Mn ions not only promoted the red shift of the light absorption boundary and the extension to visible light, but also improved the separation and migration efficiency of photocarriers. The above test results showed that the existence of oxygen vacancies, the function of carbon fibers similar to a co-catalyst, and the doping of Mn ions significantly improved the hydrogen production performance of Mn-SrTiO3@CF photocatalytic composite fibers.
The cyclic stability of the Mn-SrTiO3@CF composite catalyst was tested, and the results are shown in Figure 8. After four consecutive tests of cyclic photocatalytic water splitting hydrogen production performance, the average photocatalytic hydrogen production performance of the Mn-SrTiO3@CF composite catalyst is about 267.69 µmol/g·h. There was only a slight decrease over the cyclic experiments, indicating that the Mn-SrTiO3@CF composite catalyst can maintain relatively stable photocatalytic performance in the water splitting hydrogen production reaction.
The light absorption of catalyst materials such as Mn-SrTiO3@CF can be measured by the UV-Vis diffuse reflection spectrum test. As shown in Figure 9a, pure SrTiO3 nanoparticles have an obvious characteristic absorption edge at 375 nm, while the light absorption of SrTiO3@CF is significantly enhanced compared with pure SrTiO3, which mainly comes from the strong light absorption of the carbon fibers. The Mn-SrTiO3@CF photocatalytic composite fibers have stronger light absorption than the SrTiO3@CF material, because the doping of Mn reduces the band gap of the composite material and promotes the red shift of the light absorption boundary and its extension to visible light [37]. The Mn-SrTiO3@CF and SrTiO3@CF materials show a significant change in the curvature at the bottom of the light absorption edge, which is presumed to be due to the existence of oxygen vacancies in the material, reducing the band gap of the catalyst and further enhancing the light absorption. Based on the Kubelka-Munk theory, the band gap (Eg) of semiconductor materials can be calculated according to Equation (1) [38]. As shown in Figure 9b, (Ahν)² is plotted against hν, from which it can be obtained that the band gap of single SrTiO3 is about 3.32 eV. Similarly, it can also be calculated that the band gaps of the Mn-SrTiO3@CF and SrTiO3@CF composites are about 2.91 eV and 3.08 eV, respectively.
In Equation (1), (Ahν)² = C(hν − Eg), where A is the absorbance in UV-visible diffuse reflection; hν is the photon energy, taken here as 1240/wavelength; and C is a constant. The band structure is an important factor affecting the photocatalytic property, and the flat band potential of the semiconductor catalyst can be calculated using the Mott-Schottky equation (Equation (2)) [38,39], where C is the interface capacitance and VFB is the flat band potential. As shown in Figure 10, the tangent slope of the Mott-Schottky line of SrTiO3 is positive, indicating that SrTiO3 is an n-type semiconductor material, and the flat band potential of SrTiO3 is −0.69 eV (vs. SCE, calomel electrode). Based on the fact that the potential of a calomel electrode relative to a standard hydrogen electrode at 25 °C is about 0.24 eV, it can be calculated that the Fermi level of SrTiO3 is about −0.45 eV. It is generally believed that the conduction band position of an n-type semiconductor is about 0.1 eV more negative than the Fermi level [40], so the conduction band position of SrTiO3 is −0.55 eV. From Figure 9b, the band gap of SrTiO3 is 3.32 eV, so the valence band position of SrTiO3 can be calculated to be 2.77 eV.
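To make the band-gap and band-edge arithmetic above concrete, the following sketch builds a Tauc plot, (Ahν)² versus hν, for a synthetic direct-gap absorbance curve and then applies the SCE-to-NHE conversion described in the text. The spectrum is invented for illustration; only the −0.69 V flat-band potential, the 0.24 V electrode offset, the 0.1 eV Fermi-level assumption and the 3.32 eV gap are taken from the text:

```python
import numpy as np

# Hypothetical UV-Vis data (NOT the measured spectra): wavelength (nm) -> absorbance A
wavelength = np.linspace(300.0, 500.0, 200)
E = 1240.0 / wavelength                                   # photon energy h*nu in eV
Eg_true = 3.32                                            # gap used to build the toy curve
A = np.sqrt(np.clip(E - Eg_true, 0.0, None)) / E + 0.01   # toy direct-gap absorbance + baseline

# Tauc plot for a direct gap: (A*h*nu)^2 vs h*nu; fit the linear rise, extrapolate to zero
y = (A * E) ** 2
mask = (y > 0.2 * y.max()) & (y < 0.8 * y.max())          # crude choice of the linear region
slope, intercept = np.polyfit(E[mask], y[mask], 1)
print(f"Estimated band gap: {-intercept / slope:.2f} eV")

# Band-edge arithmetic following the Mott-Schottky discussion (values from the text)
V_fb_sce = -0.69                 # flat-band potential vs SCE
E_fermi = V_fb_sce + 0.24        # convert SCE -> standard hydrogen electrode at 25 C
E_cb = E_fermi - 0.1             # conduction band ~0.1 eV more negative than the Fermi level
E_vb = E_cb + 3.32               # valence band = conduction band + band gap
print(f"E_CB = {E_cb:.2f} eV, E_VB = {E_vb:.2f} eV")
```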
Based on the above research results, we proposed the photocatalytic water splitting hydrogen production mechanism of the Mn-SrTiO3@CF photocatalytic composite material, as shown in Figure 11. According to the test results of the UV-Vis diffuse reflection spectrum (Figure 9), it can be calculated that the band gaps of the pure SrTiO3, Mn-SrTiO3@CF, and SrTiO3@CF composite materials are about 3.32 eV, 2.91 eV, and 3.08 eV, respectively. The results indicated that the doping of Mn and the existence of oxygen vacancies reduced the band gap of the corresponding material, expanded the light absorption range into the visible region, and improved the photocatalytic activity. The Mn-SrTiO3@CF composite material has a large number of oxygen vacancies, which create a new donor level below the conduction band, constituting an oxygen vacancy state (VOs) and becoming the capture center of photoelectrons. As shown in Figure 11, under the action of light, the Mn-SrTiO3 composite material produces electrons and holes, and photoelectrons migrate to the conduction band and the oxygen vacancy state (VOs) [41]. Some of the electrons located on the conduction band will migrate to the surface of the carbon fibers, and some will migrate to the oxygen vacancy state (VOs), which promotes the separation and migration of photoelectrons and holes, thus improving the photocatalytic property. The electrons on the conduction band and in the oxygen vacancy state (VOs) will combine with H+ ions in water to produce hydrogen, while the holes on the valence band will combine with the sacrificial agents (Na2S and Na2SO3) in the aqueous solution to promote the separation and generation of photoelectron-hole pairs.
Figure 11. Hydrogen production mechanism of the Mn-SrTiO3@CF photocatalytic composite material.
Conclusions
In this work, Tencel fibers were taken as the substrate, and SrTiO3@CF and Mn-SrTiO3@CF with a firm structure were successfully obtained through a process route of first loading the semiconductor material on the carrier and then carbonizing the Tencel fibers. This solved the problem that semiconductor materials are difficult to load directly on the surfaces of carbon fibers, or easily shed off, because of the smooth surface and few active groups of carbon fibers. The Mn-SrTiO3@CF composite photocatalytic fibers exhibited a higher activity for hydrogen evolution compared with the SrTiO3@CF material. In particular, the photocatalytic hydrogen production of the Mn-SrTiO3@CF composite catalyst is about 267.69 µmol/g·h with 5% Mn doping, which is six times that of the SrTiO3@CF material. After modifying SrTiO3@CF with Mn, the light absorption boundary could be extended toward the visible light, and the separation and migration efficiency of photocarriers could be improved. In addition, SrTiO3@CF and Mn-SrTiO3@CF photocatalytic materials rich in oxygen vacancies were successfully prepared through the high-temperature carbonization process. The existence of oxygen vacancies generates a new donor level below the conduction band, constituting an oxygen vacancy state (VOs) and becoming the capture center of photoelectrons, thus significantly improving the photocatalytic activity. The synergistic effect of Mn doping, oxygen vacancies, the sacrificial agent, and the carbon fibers can efficiently absorb photons, transfer photoinduced electrons, restrain carrier recombination, and improve the efficiency of the catalyst's hydrogen production.
8,209.6
2022-07-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Seismic Vulnerability Assessment of Historic Constructions in the Downtown of Mexico City
Seismic risk is determined by the sum of multiple components produced by a certain seismic intensity, being represented by the seismic hazard, the structural vulnerability and the exposure of assets in a specified zone. Most of the methods and strategies applied to evaluate the vulnerability of historic constructions specialize in buildings of higher importance, either public or private, relegating ordinary dwellings to a second plane. On account of this, this paper presents a seismic vulnerability assessment of a limited urban area of the Historic Downtown of Mexico City (La Merced Neighborhood), covering the analysis of 166 historic buildings. The seismic vulnerability assessment of the area was performed resorting to a simplified seismic vulnerability assessment method, composed of both qualitative and quantitative parameters. To better manage and analyze the human and economic exposure, the results were integrated into a Geographic Information System (GIS) tool, which allowed vulnerability and damage scenarios to be mapped for different earthquake intensities.
Introduction
As is widely known, while seismic hazard involves the probability of occurrence of a seismic event [1], which can be represented by an exposure model [2], seismic vulnerability can be defined as the intrinsic predisposition of an element to suffer damage from a seismic event of a given intensity. Specifically concerning heritage sites, the elements considered are the inherent features of a cultural heritage site, a group of buildings, monuments or objects, as well as their institutional and/or socio-economic context [3]. In terms of the vulnerability of historic buildings, it is fundamental that vulnerability studies address the assessment of potential damages and, based on those, discuss possible rehabilitation and/or retrofit interventions and support pre- and post-disaster decisions [4][5][6]. Aimed at contributing to this discussion, a pilot area of the Mexico City Downtown is comprehensively investigated herein by analyzing and intercrossing its historical seismicity with the most relevant architectural, construction and structural features of the buildings. For this purpose, a matrix of thirty-six typologies of residential and historical buildings was assessed, resorting to a simplified seismic vulnerability assessment. Through this assessment, the identification of the most vulnerable aspects of the building stock allowed the presentation of damage scenarios generated by different macroseismic intensities. Geographical Information Systems (GIS) tools play an essential role in the establishment of urban management, civil protection, and disaster risk strategies. For that reason, this analysis was established by mapping and discussing all outputs through the free and open-source software QGIS ver. 3.8.1 (QGIS Development Team: Zanzibar) [7]. Recently, in 2017, two intense earthquakes occurred, on the 7th of September and the 19th of September. The first (7th September) occurred near the coast of Oaxaca as a subduction event of Ms = 8.2, while the second was a local event with an epicenter located in Axochiapan, Morelos, with Ms = 7.1 (19th September). Due to these seismic events, a large number of losses affecting immovable cultural heritage were reported in different zones of the central and southwestern parts of the country.
The earthquake of the 7th of September 2017 could correspond to the absence of seismic activity located at the Tehuantepec Gap, in the State of Oaxaca, as seen in Figure 1b. Figure 1b shows not only the earthquake near the Tehuantepec Gap but also the seismic activity that occurred during the 20th century and was recorded by the National Seismologic Service of Mexico (SSN). The map (Figure 1b) depicts, along the coast of Guerrero, the absence of seismic activity, which is well known as the Guerrero Gap. This Gap, located about 300 km from Mexico City, can signify possible future seismic events produced by interplate movements (i.e., subduction thrust events), with similar or higher magnitudes than those that occurred in 1985 and 2017, with significant impact on Mexico City. However, the consequences of these seismic events in the city do not depend only on interplate movements, but also on volcanic activity (i.e., the Popocatepetl volcano) [13], denoting a seismic risk between two possible geologic phenomena.
Buildings Exposure Model
Numerous studies have proposed different methodologies to achieve closer approaches to the history of construction and architecture related to buildings from the 16th century to the beginning of the 20th century. Most of the buildings in the historic center are considered of cultural heritage, catalogued by the National Institution of Anthropology and History (INAH) or by the National Institute of the Fine Arts (Instituto Nacional de Bellas Artes-INBA). Nonetheless, over their lifespan, some of the buildings have been refurbished or retrofitted, resorting to different construction technologies and materials, some of them poorly compatible with the original characteristics of these buildings. A categorical example of such inadequate intervention is the use of concrete or cement-based materials, which are chemically, physically and mechanically incompatible with traditional construction technologies. A comprehensive discussion on this aspect was recently given by Correia Lopes et al. [16].
To determine the characterization of the buildings, highlighting the wide ranges and the complex task of collecting the data, the typology matrix presented in Tables 1-3 was adopted. The next on the list is typology T14, with 7% and 12 buildings, which results from the combination of geometry B and material M5; and T13, with approximately 5.8%, corresponding to 10 buildings (B and M4). These are followed by T34 (D and M7) and T22 (C and M4), each with a share of almost 5.2% (nine buildings each). T30 (the combination of D and M3) has 4.6%, equivalent to eight buildings. The share of T10 (B and M1), T28 (D and M1), T29 (D and M2) and T32 (D and M5) is almost 3.4% each (six buildings each). Typologies T2, T3, T5, T8, T9, T15, T16, T17, T18, T19, T20, T21, T23, T24, T25, T26, T27, T31, T33 and T35 have between 0.4% (one building) and 2.9% (five buildings) in the analyzed site.
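A minimal sketch of the bookkeeping behind the typology percentages above: each building is labelled with a geometry class and a material class, and the shares are simple frequency counts. The records below are invented placeholders, not the surveyed inventory:

```python
from collections import Counter

# Invented inventory: one (geometry class, material class) pair per surveyed building
buildings = [("B", "M5"), ("B", "M4"), ("D", "M7"), ("C", "M4"),
             ("B", "M5"), ("D", "M3"), ("B", "M1"), ("D", "M5")]

counts = Counter(buildings)
total = len(buildings)
for (geometry, material), n in counts.most_common():
    share = 100.0 * n / total
    print(f"{geometry}-{material}: {n} building(s), {share:.1f}%")
```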
Seismic Vulnerability Assessment
Following the proposal of the Gruppo Nazionale per la Difesa dai Terremoti (GNDT) [17], a simplified seismic vulnerability assessment approach is used in this work. The method was proposed by Ferreira et al. [18] to assess the seismic vulnerability of traditional masonry buildings and to estimate damages and post-seismic losses for different macroseismic scenarios [19]. The method is based on the assessment of 14 parameters within the vulnerability index, organized in four groups: (1) structural building system; (2) irregularities and interaction; (3) floor slabs and roofs; and (4) conservation status and other elements.
The first group (Group 1) involves the building resisting system, namely type P1, the quality of the resisting system (P2), the shear strength capacity of the building (P3), the maximum distance between walls whose indicator constitutes a potential Seismic Vulnerability Assessment Following the proposal of Gruppo Nazionale per la Difesa dai Terremoti (GNDT) [17], a simplified seismic vulnerability assessment approach is used in this work. The method was proposed by Ferreira et al. [18] to assess the seismic vulnerability of traditional masonry buildings and to estimate damages and post-seismic losses for different macroseismic scenarios [19]. The method is based on the assessment of 14 parameters within the vulnerability index, organized in four groups: (1) structural building system; (2) irregularities and interaction; (3) floor slabs and roofs; and (4) conservation status and other elements. The first group (Group 1) involves the building resisting system, namely type P1, the quality of the resisting system (P2), the shear strength capacity of the building (P3), the maximum distance between walls whose indicator constitutes a potential out-of-plane failure mechanism (P4), the number of floors (P5) and the geotechnical conditions of the foundations (P6). The second group (Group 2) considers the irregularities and interaction between adjacent buildings (P7), the regularities in plan (P8) and height (P9) and the alignment of the openings (P10). The parameters integrated into the third group (Group 3) are the quality of the horizontal supporting structures, namely of the horizontal diaphragms (P11), and the roofing system (P12). Finally, the fourth group (Group 4) is linked to the conservation status, considering the fragilities of the building (P13) and the characteristics of non-structural elements (P14). The vulnerability parameters are influenced by a vulnerability class (A, B, C and D), by choosing the best-described vulnerability option for each parameter, between the values 0 to 50 multiplied by a weight (P i ), which ranges from 0.5 (lower-ranking) to 1.5 (higher-ranking). A vulnerability index (I * v ) value ranging from 0 to 650 can then be obtained. Furthermore, for ease of use, this value is usually normalized (I V ) between 0 and 100. On account of this, the simplified vulnerability assessment method was applied to the study area in Mexico City through a typological-based approach by establishing some empirical facts. As will be discussed further on, this vulnerability indicator can be used as an early step for estimating damages and losses [20]. Seismic Vulnerability Assessment and Damage Scenarios Once data is collected, the vulnerability assessment was performed for a historic area of Mexico City. The vulnerability assessment is performed herein adopting a typological-based procedure which consists of a pre-assessment of the seismic vulnerability of each one of the typologies identified in Section 3, through the assessment of eight specific vulnerability assessment parameters (P1, P2, P4, P5, P8, P9, P11 and P12), which, as can be seen in Table 2, focus on the structural characteristics of the buildings (Group 1), on their irregularities and the interaction between adjacent buildings (Group 2) and the characteristics of their floor slabs and roof (Group 3), see Table 4. Table 4. Vulnerability index, according to [18], modified for the study area. Vulnerability Index (I v ) Class Weight Vulnerability Index 1. 
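To make the index computation concrete, the following Python sketch implements the weighted sum described above. The class-to-value mapping (A, B, C, D mapped to 0, 5, 20, 50) and the individual weights are illustrative placeholders, not the calibrated values of Table 4, which are not reproduced in this excerpt; only the overall structure (14 parameters, weights between 0.5 and 1.5, normalization of I * v to 0-100) follows the text.

```python
# Sketch of the GNDT-type vulnerability index described in the text.
# Class scores and weights below are illustrative placeholders, not the
# calibrated values of Table 4 (which is not reproduced here).

CLASS_SCORES = {"A": 0, "B": 5, "C": 20, "D": 50}      # assumed scoring of the classes

# One illustrative weight per parameter P1..P14 (each must lie in [0.5, 1.5]).
WEIGHTS = {f"P{i}": 1.0 for i in range(1, 15)}

def vulnerability_index(classes, weights=WEIGHTS, skip=()):
    """classes: dict like {"P1": "C", ...}; skip: parameters not evaluated (e.g. P3)."""
    raw = sum(CLASS_SCORES[classes[p]] * weights[p]
              for p in classes if p not in skip)
    max_raw = sum(50 * weights[p] for p in classes if p not in skip)
    return raw, 100.0 * raw / max_raw      # I*_v and its normalized value I_v

# Example: P3 neglected, as in the study. With the calibrated weights of Table 4
# the maximum raw index would be 650 (575 when P3 is skipped); with the flat
# placeholder weights used here it is 700 (650 when P3 is skipped).
example = {f"P{i}": "C" for i in range(1, 15)}
print(vulnerability_index(example, skip=("P3",)))
```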
After the eight referred parameters have been evaluated (according to the aforementioned typological-based approach), the vulnerability analysis is completed by evaluating the remaining parameters of the methodology, namely parameters P6, P7, P10, P13 and P14. Following this strategy, it was thus possible to perform a complete vulnerability assessment of the whole study area. It is worth noting that, because of the nature of the data required to evaluate Parameter 3, this parameter was neglected in the present study. For this reason, instead of 650, the maximum vulnerability index value is limited in this analysis to 575 (I cc v).

Analysis and Discussion of the Results

The vulnerability assessment method was applied to 166 historical buildings, resulting in a mean value of the seismic vulnerability index (I cc v) of 45.91. Non-historic buildings, which include reinforced concrete (RC) and rehabilitated ones, fall outside the scope of the study and are omitted from the data. Figure 2 presents the results of I cc v for the study area, whereas Figure 3a depicts the distribution of the vulnerability index (I cc v) for the 166 buildings. Almost 75% of the assessed buildings have a vulnerability index value (I v) greater than 40 (i.e., equivalent to vulnerability class A in the European Macroseismic Scale (EMS-98) [18]). The maximum and minimum values obtained from the assessment were 75 and 27, respectively, and the standard deviation (σI cc v) was 8.34. The lower and upper bound values of the vulnerability distribution are also used in the analyses presented in the following.

Figure 3b presents the frequency distributions of the most important parameters in terms of their influence on the vulnerability index. As observed, class D exceeds 50% for parameters P4, P9, P10, P12 and P14 (i.e., the distance between walls, regularity in height, openings and alignments, roofing system and non-structural elements), evidencing a significant number of parameters related to the irregularity and interaction of the buildings. The combination of classes D and C covers more than 50% for parameters P1, P6, P8, P11 and P13 (i.e., type of resisting system, location and soil conditions, plan configuration, horizontal diaphragms, and fragilities and conservation state); the share of class D for P1, P6, P8, P11 and P13 is lower than for P4, P9, P10, P12 and P14, but it is still significant, pointing to marked deficiencies in the structural building system (Group 1) and in the irregularities and interaction (Group 2). Classes A and B (i.e., the classes corresponding to lower vulnerability) are dominant in parameters P2, P5 and P7 (quality of the resisting system, number of floors, and aggregate position and interaction). Some of the most vulnerable parameters are related to geometry, such as the alignment of the openings (P10), the height (P9), the characteristics of the foundations when interacting with the soil conditions (P6), the connections between vertical and horizontal systems and the increase of stiffness of the horizontal diaphragm systems. The latter (i.e., the increase of stiffness) is conceivably linked to the incompatibility of the systems (P1, P12), the physical or mechanical properties of the wall itself (P2), and the non-structural elements (P14).
Even though the weight (P i) of parameters P1, P2, P11, P12 and P13 is 1.0 or lower (see Table 4), their individual analysis (i.e., a non-typological-based selection) is essential, because this set of parameters reflects higher levels of individual vulnerability. The following figures illustrate some parameters with a predominant class D, such as P4 (Figure 4a); see Table 1 for a description of the parameters.

Damage Distribution and Loss Scenario

To obtain the damage distribution and loss scenario, the computation of the mean damage grade must be considered, through either absolute or relative vulnerability results, depending on the selected methodology [21]. The absolute vulnerability represents the damage as a function of the seismic intensity, or it can be considered as the damage condition attributed to a given seismic intensity. By contrast, the relative vulnerability is determined from empirical or experimental data, without correlating the damage and the seismic intensity. In this paper, the absolute approach is adopted. Accordingly, to represent the grade of damage linked to a seismic event, the EMS-98 scale can be used [22]. Nevertheless, the damage grade can also be associated with phenomena that occurred at a particular location, which is of particular interest for the assessment of cultural heritage. Thereby, the mean damage grades (µ D) are estimated for different macroseismic intensities based on the previous results of the vulnerability index. Under the analytical expression that correlates the hazard with the mean damage grade (0 ≤ µ D ≤ 5) of the damage distribution, the vulnerability value (V) is obtained through Equations (1) and (2) [23].

According to these equations, the vulnerability value (V) determines the position of the curve, whereas the ductility factor (Q) controls its slope (i.e., the rate at which damage increases with rising intensity). For the computation of the mean damage grades (µ D), the input values were the proposed seismic intensities (I) in the range from V to XII, the vulnerability index (I cc v) calculated previously, with a mean value of 45.91, and the proposed ductility factor (Q) of 2.0. The Q factor is based on similar values recommended by the local code (RCDF-NTC) [24] for equivalent buildings. In summary, the vulnerability index value obtained in the prior assessment (I cc v) is associated with the vulnerability index (V) through the macroseismic approach of Equations (1) and (2). Therefore, the mean damage grades (µ D), and the subsequent estimations of physical, economic and human losses, are calculated from the initial mean vulnerability index value (I cc v) [18]. Figure 5a shows the vulnerability curves obtained for the mean value of the vulnerability index (I cc v mean) and for the lower and upper bound values (I cc v mean − 2σI cc v; I cc v mean − 1σI cc v; I cc v mean + 1σI cc v; I cc v mean + 2σI cc v), for events with macroseismic intensities ranging from V to XII. From an overall view, the estimated mean damage grades range from 1.02 to 2.29 for the earthquake scenario of I EMS−98 = VII, from 2.06 to 3.48 for I EMS−98 = VIII and from 3.28 to 4.31 for I EMS−98 = IX. The evaluation shows alarming results, given the estimates of moderate damage (2 ≤ µ D < 3) at I EMS−98 = VII, severe damage (3 ≤ µ D < 4) at I EMS−98 = VIII and possible collapses (4 ≤ µ D < 5) at I EMS−98 = IX.
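Although Equations (1) and (2) are not reproduced in this excerpt, the macroseismic approach referred to here is commonly written as µ D = 2.5 [1 + tanh((I + 6.25 V − 13.1)/Q)], with V obtained from the normalized vulnerability index through a linear conversion. The Python sketch below uses that commonly cited form and the conversion V = 0.592 + 0.0057 I v as working assumptions; the exact expressions used by the authors should be taken from [23].

```python
import numpy as np

def vulnerability_value(iv_normalized):
    # Assumed linear conversion from the normalized index I_v (0-100) to V,
    # as commonly used in macroseismic studies; not reproduced in the text.
    return 0.592 + 0.0057 * iv_normalized

def mean_damage_grade(intensity, V, Q=2.0):
    # Commonly cited analytical vulnerability curve (0 <= mu_D <= 5).
    return 2.5 * (1.0 + np.tanh((intensity + 6.25 * V - 13.1) / Q))

iv_mean, sigma = 45.91, 8.34
for I in range(7, 10):                          # EMS-98 intensities VII to IX
    mus = [mean_damage_grade(I, vulnerability_value(iv))
           for iv in (iv_mean - 2 * sigma, iv_mean, iv_mean + 2 * sigma)]
    print(I, [round(m, 2) for m in mus])
```

With these assumptions the bounds quoted above (for instance, 1.02 to 2.29 at intensity VII) are reproduced, which supports the working forms chosen for the sketch.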
The damage assessment is an initial step towards measuring the risk linked to economic and human losses. These studies allow the spatial representation of the global damage distribution and the analysis of the building stock by integrating GIS tools. Mapping the damage distribution enables the practical identification of the more vulnerable zones and of the specific constructions within them, thus enhancing decision-making for urban management and civil protection strategies [25]. The damage distribution scenarios are presented in Figure 6a.

Fragility Curves

Based on a probabilistic approach, the physical building damage distributions can be determined through the beta probability function for specific building typologies. Fragility curves are possibly among the most accepted and used methods for representing damage estimations, defining the probabilities of exceeding a specific damage grade D k (∈ [0; 5]) [18]. Fragility curves establish a relationship between five damage states and the earthquake intensity by means of continuous probability functions, which express the conditional cumulative probability of reaching or exceeding a certain damage state. Equation (3) gives the discrete probabilities, P(D k = d), derived from the difference of cumulative probabilities P D [D i ≥ d]. Governed by the parameters of the beta distribution function, the estimation of damage can thus be expressed as a continuous probability function. Figure 7a,b shows the fragility curves obtained by inputting the mean vulnerability index I cc v mean = 45.91 and the mean vulnerability index plus one standard deviation (I cc v mean + 1σI cc v = 54.26), respectively.

Loss Estimation

A wide variety of methods can currently be used to estimate material, human and economic losses [26][27][28][29]. Among these, probabilistic approaches, in which the probability of attaining a specific damage grade for a certain level of action is computed, are the most widely adopted. According to these methods, the construction of a damage scenario can be completed through probabilistic distributions, whose input data are the representative vulnerability index values (I cc v mean − 2σI cc v; I cc v mean; I cc v mean − 1σI cc v; I cc v mean + 1σI cc v; I cc v mean + 2σI cc v). The loss estimation can be considered part of a damage model linking the physical damage grades. Thereby, the physical damage grades include the correlations between the probability of exceeding a certain level of damage and the probability of different loss phenomena. These methods are herein applied to estimate the probability of collapsed and unusable buildings, and to quantify the probable fatalities and severely injured people after a seismic event.

Collapsed and Unusable Buildings

The method used to calculate the probability of collapsed and unusable buildings was proposed by the Servizio Sismico Nazionale (SSN), based on the studies carried out by Bramerini et al. [30]. This approach involves the analysis of data associated with the probability of buildings being considered unusable after minor and moderate seismic actions. Although such events produce lower levels of structural and non-structural damage, higher mean damage grade values are associated with a higher probability of building collapse. Thus, the probabilities of exceeding a certain damage grade are used in the loss estimation and are affected by multiplier factors, which range from 0 to 1.
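The construction of the fragility ordinates can be illustrated with a short numerical sketch. The beta-distribution parameters used in [18] for Equation (3) are not reproduced in this excerpt, so the binomial form often adopted in macroseismic studies is used below as a stand-in for obtaining the discrete probabilities P(D k) from a mean damage grade; the exceedance probabilities are then simply cumulative sums from above.

```python
from math import comb

def damage_probabilities(mu_d):
    # Discrete damage-grade probabilities P(D_k = d), k = 0..5, for a given mean
    # damage grade mu_d. A binomial distribution is used here as a stand-in for
    # the beta distribution of Equation (3), whose parameters are not given above.
    p = mu_d / 5.0
    return [comb(5, k) * p**k * (1.0 - p)**(5 - k) for k in range(6)]

def exceedance_probabilities(mu_d):
    # P[D >= d] for d = 0..5, i.e. the ordinates of the fragility curves.
    P = damage_probabilities(mu_d)
    return [sum(P[d:]) for d in range(6)]

# Example: fragility ordinates for a mean damage grade of 2.8 (one intensity scenario).
print([round(x, 3) for x in exceedance_probabilities(2.8)])
```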
The following Equations (4) and (5) were used for the computation of the probabilities of collapsed and unusable buildings, respectively, where P(D i) is the probability of occurrence of a certain damage grade (from D 1 to D 5), and W ei,j is the multiplier factor that indicates the percentage of buildings associated with D i. Although different values for these factors have been indicated in [30][31][32][33], in this study the values of W ei,3 and W ei,4 were assumed equal to 0.4 and 0.6, respectively. Figure 8a,b presents the resulting probability of building collapse and of unusable buildings, for the mean value of the vulnerability index (I cc v mean = 45.91) and for the other characteristic values of the vulnerability distribution (I cc v mean − 2σI cc v; I cc v mean; I cc v mean − 1σI cc v; I cc v mean + 1σI cc v; I cc v mean + 2σI cc v), respectively. According to the results in Figure 8a, the building collapse probability curve shows that the probability of collapse increases with higher macroseismic values I EMS−98. On the other hand, the number of unusable buildings (Figure 8b) decreases with increasing seismic intensity, as the ultimate state capacity is exceeded and the buildings collapse instead. The overall results for moderate to large intensity seismic events show an exponential rise between VIII and IX, as seen in Table 5, which considers macroseismic intensities from I EMS−98 = VII to I EMS−98 = X [19] and a mean vulnerability index of I cc v mean = 45.91; this output summarizes the number of units affected and the corresponding percentage of the study area.

Human Casualties and Homelessness

To estimate the probability of deaths and severe injuries associated with a disaster, as well as of homelessness, the vulnerability index values are required, namely the mean value of the vulnerability index (I cc v mean = 45.91) and the representative values of the vulnerability distribution (I cc v mean − 2σI cc v; I cc v mean; I cc v mean − 1σI cc v; I cc v mean + 1σI cc v; I cc v mean + 2σI cc v). The calculation is carried out by resorting to Equations (6)-(8) [18]:

P death and severely injured = 0.3 × P(D 5)

P homelessness = P unusable buildings + 0.7 × P(D 5)

where P(D i) is the probability of occurrence of a certain damage grade (from D 1 to D 5), W ei,j is the multiplier factor that indicates the percentage of buildings associated with D i, and D i is the damage grade for which buildings collapse or are considered unusable. In Equation (7), it is assumed that 30% of the population located in a building expected to collapse (i.e., with a probability of exceeding damage grade D 5) will perish or be severely injured. The probability of homelessness is determined by Equation (8), which considers that 100% of the people living in unusable buildings, and the remaining 70% of the residents of collapsed buildings, will not be able to reoccupy their dwellings after an earthquake [18]. Four seismic intensity scenarios, ranging between VII and X according to the EMS-98 scale [19], were analyzed, and the results were expressed in terms of the number of casualties and homeless people. As can be observed in Table 6, the percentage of homelessness becomes relevant for intensities equal to or greater than VIII. With this information, the loss output data can be extrapolated, in relative terms, to the Downtown area of Mexico City.
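Given the discrete damage probabilities, the loss indicators described above reduce to a few weighted sums. Equations (4)-(6) are not written out in this excerpt, so the identification of the collapse probability with P(D 5) is an assumption; the unusable-building factors (W ei,3 = 0.4, W ei,4 = 0.6) and the casualty and homelessness expressions follow the values stated in the text. The binomial stand-in for P(D k) is the same as in the previous sketch.

```python
from math import comb

def damage_probabilities(mu_d):
    # Same binomial stand-in for P(D_k) as in the previous sketch.
    p = mu_d / 5.0
    return [comb(5, k) * p**k * (1.0 - p)**(5 - k) for k in range(6)]

def loss_indicators(mu_d):
    P = damage_probabilities(mu_d)
    unusable = 0.4 * P[3] + 0.6 * P[4]        # W_ei,3 = 0.4, W_ei,4 = 0.6 (Equation (5))
    collapse = P[5]                            # assumed: collapse identified with grade D5
    dead_or_injured = 0.3 * P[5]               # Equation (7)
    homeless = unusable + 0.7 * P[5]           # Equation (8)
    return {"collapsed": collapse, "unusable": unusable,
            "dead_or_injured": dead_or_injured, "homeless": homeless}

# Indicative mean damage grades, consistent with the ranges quoted for VII-IX above.
for intensity, mu in zip(("VII", "VIII", "IX"), (1.6, 2.8, 3.9)):
    print(intensity, {k: round(v, 3) for k, v in loss_indicators(mu).items()})
```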
In other words, if these estimations were extended to the city center, a total of approximately 14,922 homeless people would be obtained, which is undoubtedly a concerning result from the risk mitigation point of view. For that reason, appropriate logistical preparedness is required from the stakeholders (i.e., governmental authorities, civil protection, social entities) in charge of relocating residents, which could be rehearsed through pre-seismic simulation exercises. To this end, a logistical plan is essential for securing financial resources and for identifying the best emergency plan for the inhabitants. Communities and governments should put the same emphasis on planning for post-disaster emergency response, by valuing community engagement and decision-making [25]. Figure 9a shows the probability of casualties and Figure 9b presents the probability of homelessness for different vulnerability values.

Economic Losses and Repair Cost Estimation

The estimated damage grade can be interpreted economically, either as defined by Benedetti and Petrini [34] or as an economic damage index, i.e., the ratio between the repair cost and the replacement cost. The correlation between damage grades and the repair and rebuilding costs is obtained through the processing of post-earthquake damage data [16]. According to Ferreira et al. [18], the repair cost probability for a certain seismic event characterized by intensity I, P[R|I], can be obtained from the product of the conditional probability of the repair cost for each damage level, P[R|D k], with the conditional probability of the damage condition for each level of building vulnerability and seismic intensity, P[D k|I cc v, I], summed over the damage levels, as given by Equation (9). Loss estimation plays an essential role in the implementation of urban planning and retrofitting strategies, enabling costs to be weighed against various beneficial measures such as reduced repair costs and life safety [35,36]. To estimate the repair costs associated with the different vulnerability values used in the loss evaluation (I cc v mean − 2σI cc v; I cc v mean; I cc v mean − 1σI cc v; I cc v mean + 1σI cc v; I cc v mean + 2σI cc v), an average cost per unit area of 506 €/m² (about MXN 11,716/m²) was considered for the building stock in Mexico City (according to BIMSA-Cámara Mexicana de la Industria de la Construcción, 2015). The estimated global repair costs for the 166 buildings analyzed in this work are illustrated in Figure 10 and summarized in Table 7 for the most relevant macroseismic intensities.

Final Remarks

A simplified seismic vulnerability assessment was applied to a set of historical buildings in the selected area of La Merced, in Mexico City. Through an overall description of the study area, an index-based seismic vulnerability assessment methodology was applied to 166 buildings. To this purpose, 31 building typologies were originally defined through a matrix of four geometrical types and nine material types. From the analysis made, it was possible to observe that intrinsic characteristics of the buildings, such as their structural and geometrical features, their current conservation state and their location within the urban mesh, are the factors that contribute most to their seismic vulnerability.
Furthermore, it was possible to notice that, in several cases, massive incompatible refurbishment or retrofit interventions performed over the lifespan of the building also play a significant role in increasing the seismic vulnerability of these buildings. From the vulnerability assessment results, a series of damage scenarios were also computed and plotted for the study area. Among those, the scenarios obtained for macroseismic intensities VII, VIII and IX were mapped using a GIS tool in order to better understand and identify the buildings that, in the case of an earthquake within this range of intensities, will probably suffer more damage. As a final remark, it is worth highlighting that the overall understanding of the selected area (i.e., its historical context and the characterization of the buildings), the vulnerability assessment, the computation of different damage scenarios and the estimation of losses are all valuable outputs that can be used by the local and national authorities to support the development of informed pre- and post-earthquake risk mitigation strategies. Moreover, this kind of large-scale vulnerability assessment output can also guide the action of cultural institutions towards creating and fostering programs for the safeguarding of cultural heritage in historic areas.

Conflicts of Interest: The authors declare no conflict of interest.
7,994
2020-02-10T00:00:00.000
[ "Engineering", "Environmental Science" ]
Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

Introduction

As a popular image processing technology, digital halftoning [1] has found wide application in converting a continuous-tone image into a binary halftone image for better display on binary devices, such as printers and computer screens. Usually, binary halftone images can only be obtained in the process of printing, image scanning and fax, from which the original continuous-tone images need to be reconstructed [2,3], using an inverse halftoning algorithm [4], for further image processing, for example, image classification, image compression, image enhancement and image zooming. However, it is difficult for inverse halftoning algorithms to obtain the optimal reconstruction quality, due to the unknown halftoning patterns in practical applications. Furthermore, a basic drawback of the existing inverse halftoning algorithms is that they do not distinguish the types of halftone images, or can only coarsely divide halftone images into the two major categories of error-diffused halftone images and orderly dithered halftone images. This inability to exploit prior knowledge of the halftone images largely weakens the flexibility, adaptability and effectiveness of inverse halftoning techniques, making the study of the classification of halftone images imperative, not only for optimizing existing inverse halftoning schemes, but also for guiding the establishment of adaptive schemes for halftone image compression, halftone image watermarking, and so forth.

Motivated by the significance of classifying halftone images, several halftone image classification methods have been proposed. In 1998, Chang and Yu [5] classified halftone images into four types using an enhanced one-dimensional correlation function and a back-propagation (BP) neural network, for which the data sets in the experiments were limited to the halftone images produced by clustered-dot ordered dithering, dispersed-dot ordered dithering, constrained average and error diffusion. Kong et al. [6,7] used an enhanced one-dimensional correlation function and a gray-level co-occurrence matrix to extract features from halftone images, based on which the halftone images are divided into nine categories using a decision tree algorithm. Liu et al. [8] combined support regions and the least mean square (LMS) algorithm to divide halftone images into four categories. Subsequently, they [9] used LMS to extract features from the Fourier spectrum of nine categories of halftone images and classified these halftone images using naive Bayes. Although these methods work well in classifying some specific halftone images, their performance largely decreases when classifying error-diffused halftone images produced by the Floyd-Steinberg, Stucki, Sierra, Burkes, Jarvis and Stevenson filters, respectively. These are described as follows.
Different Error Diffusion Filters

Consider the six filters under study, each based on a different error diffusion kernel, as summarized in [10][11][12]: the Floyd-Steinberg, Stucki, Sierra, Burkes, Jarvis and Stevenson filters. Moreover, the existing studies did not consider all types of error-diffused halftone images; for example, only three error diffusion filters are included in [6,7,9] and only one is involved in [5,8]. The idea of halftoning is quite similar for the six error diffusion filters, with the only difference lying in the templates used (the templates appear at the right-hand side of each filter equation, as in the Stucki error diffusion described above). It is difficult to classify the error-diffused halftone images because of the almost inconspicuous differences among the halftone features extracted from images produced by these six error diffusion filters. However, as a scalable algorithm, error diffusion has gradually become one of the most popular techniques, due to its ability to provide a solution of good quality at a reasonable cost [13]. This raises an urgent need to study the classification mechanism for the various error diffusion algorithms, with the hope of improving the inverse halftoning techniques widely used in different fields of graphics processing.

This paper proposes a new algorithm to classify error-diffused halftone images. We first extract the feature matrices of pixel pairs from the error-diffused halftone image patches, according to the statistical characteristics of these patches. The class feature matrices are subsequently obtained, using a gradient descent method, based on the feature matrices of pixel pairs [14]. After applying spectral regression kernel discriminant analysis to reduce the dimension of the class feature matrices, we finally classify the error-diffused halftone images using an idea similar to the nearest centroid classifier [15,16].

The structure of this paper is as follows. Section 2 presents the method of kernel discriminant analysis. Section 3 describes how to extract the pixel-pair features from the error-diffused halftone images. Section 4 describes the proposed classification method for error-diffused halftone images based on spectral regression kernel discriminant analysis. Section 5 shows the experimental results. Some concluding remarks and possible future research directions are given in Section 6.

An Efficient Kernel Discriminant Analysis Method

It is well known that linear discriminant analysis (LDA) [17,18] is effective in solving classification problems, but it fails for nonlinear problems. To deal with this limitation, the approach called kernel discriminant analysis (KDA) [19] has been proposed.

Overview of KDA. Suppose there are m training samples belonging to c classes, mapped into a feature space by a nonlinear map φ. The between-class scatter matrix in the feature space is built from the class centroids, where m_k is the number of samples in the k-th class, µ_k^φ is the centroid of the k-th class and µ^φ = (1/m) Σ_{i=1}^{m} φ(x_i) is the global centroid. In the feature space, the aim of the discriminant analysis is to seek the best projection direction, namely the projective function v maximizing the objective function (10), the ratio of the between-class scatter to the total scatter along v. Equation (10) can be solved by the corresponding generalized eigenproblem (11). According to the theory of reproducing kernel Hilbert spaces, the eigenvectors are linear combinations of φ(x_i) in the feature space: there exist weight coefficients α_i (i = 1, 2, . . ., m) such that v = Σ_{i=1}^{m} α_i φ(x_i).
Let α = [α_1, α_2, . . ., α_m]^T; then it can be proved that (10) can be rewritten as a ratio of quadratic forms in α, and the optimization problem of (11) is equivalent to the eigenproblem K W K α = λ K K α (Equation (12)), where K is the kernel matrix with entries K(x_i, x_j), and W is the weight matrix defined blockwise as W_ij = 1/m_k if x_i and x_j both belong to the k-th class, and W_ij = 0 otherwise (Equation (13)). For a sample x, the projective function in the feature space can be described as f(x) = Σ_{i=1}^{m} α_i K(x, x_i) (Equation (14)).

Kernel Discriminant Analysis via Spectral Regression. To efficiently solve the eigenproblem of the kernel discriminant analysis in (12), the following theorem will be used.

Theorem 1. Let y be an eigenvector of the eigenproblem W y = λ y with eigenvalue λ. If K α = y, then α is an eigenvector of eigenproblem (12) with the same eigenvalue λ.

According to Theorem 1, the projective function of the kernel discriminant analysis can be obtained in the following two steps.

Step 1. Solve the eigenproblem W y = λ y to obtain y.

Step 2. Search for the vector α which satisfies K α = y, where K is the positive semidefinite kernel matrix.

As we know, if K is nonsingular, then, for any given y, there exists a unique α = K^{-1} y satisfying the linear equation described in Step 2. If K is singular, the linear equation may have infinitely many solutions or no solution. In this case, we can approximate α by solving the regularized equation (K + δI) α = y (Equation (15)), where δ ≥ 0 is a regularization parameter and I is the identity matrix. Combined with the projective function described in (14), we can easily verify that the solution α* = (K + δI)^{-1} y given by (15) is the optimal solution of the regularized regression problem (16) of minimizing Σ_{i=1}^{m} (f(x_i) − y_i)² + δ‖f‖², where y_i is the i-th element of y and f lies in the reproducing kernel Hilbert space induced from the Mercer kernel, with ‖ ⋅ ‖ being the corresponding norm. Due to the essential combination of spectral analysis and regression techniques in the above two-step approach, the method is named spectral regression (SR) kernel discriminant analysis.

Feature Extraction of the Error-Diffused Halftone Images

Since its introduction in 1976, the error diffusion algorithm has attracted widespread attention in the field of printing applications. It deals with the pixels of halftone images using neighborhood processing algorithms instead of point processing algorithms. We now extract the features of the error-diffused halftone images produced using the six popular error diffusion filters mentioned in Section 1. The difference with respect to the threshold is diffused ahead to subsequent pixels that have not yet been dealt with. Therefore, for some subsequent pixels, the comparison is implemented between the threshold and the value obtained by adding the diffusion error to the original pixel value. A template matrix can be built using the error diffusion modes and the error diffusion coefficients, as shown in the Stucki error diffusion described above; the template comprises (a) the error diffusion filter and (b) the error diffusion coefficients, which represent the proportion of the diffused errors. If a coefficient is zero, then the corresponding pixel does not receive any diffusion errors. According to the Stucki error diffusion described above, one of the two highlighted neighbourhood pixels suffers from more diffusion errors than the other; that is to say, the pair it forms with the processed pixel has a larger probability of becoming a 1-0 pixel pair. The reasons are as follows.

Statistic Characteristics of the Pixel Pairs. Suppose that the pixel value lies between 0 and 1, and that the pixel has been processed by the thresholding method according to the following equation. In general, the threshold is set to 0.5.
Since the value of each pixel in the error-diffused halftone image can only be 0 or 1, there are four kinds of pixel pairs in the halftone image: 0-1, 0-0, 1-0 and 1-1. Pixel pairs 0-1 and 1-0 are collectively known as 1-0 pixel pairs because of their exchangeability. Therefore, there are essentially only three kinds of pixel pairs: 0-0, 1-0 and 1-1. In this paper, three statistical matrices are used to store the number of the different pixel pairs at different neighboring distances and in different directions; they are square matrices whose (odd) size equals twice the maximum neighboring distance plus one, and they are referred to as the 0-0, 1-0 and 1-1 statistical matrices, respectively. Suppose that the center entry of the statistical matrix template covers a given pixel of the error-diffused halftone image, and that the other entries overlap its neighborhood pixels. Then, we can compute three statistics on the 1-0, 1-1 and 0-0 pixel pairs within the scope of this statistical matrix template. As the position of the central pixel is moved over the whole image, the three matrices, initialized to zero, are updated according to Equation (18), whose indices relate the position of each neighborhood pixel to the corresponding entry of the template. After normalization, the three statistical matrices are ultimately obtained as the statistical feature descriptor of the error-diffused halftone images.

Process of Statistical Feature Extraction of Halftone Images. According to the analysis described above, the process of statistical feature extraction of the error-diffused halftone images can be represented as follows. Step 3. Obtain the statistical matrix of each image block according to (18), and add it to the accumulated matrix. According to the process described above, the statistical features of the error-diffused halftone image are extracted by dividing the image into patches, which is significantly different from other patch-based feature extraction methods. For example, in [20], the brightness and contrast of the image patches are normalized by a z-score transformation, and whitening (also called "sphering") is used to rescale the normalized data in order to remove the correlations between nearby pixels (i.e., low-frequency variations in the images), because these correlations tend to be very strong even after brightness and contrast normalization. In this paper, by contrast, the features of the patches are extracted by counting the statistics of the different pixel pairs (0-0, 1-0 and 1-1) within a moving statistical matrix template, and are then optimized using the method described in Section 3.3.

Extraction of the Class Feature Matrix. The statistical matrices of the samples, after being extracted, can be used as the input of other algorithms, such as support vector machines and neural networks. However, the curse of dimensionality could occur, due to their high dimension, making the classification effect possibly not significant. Thereby, six class feature matrices are designed in this paper for the error-diffused halftone images produced by the six error diffusion filters mentioned above. Then, a gradient descent method can be used to optimize these class feature matrices. Six error-diffused halftone images can be derived from each original image using the six error diffusion filters, respectively. Then, the statistical matrices of the 0-0, 1-0 and 1-1 pixel pairs
can be extracted as the samples from the error-diffused halftone images using the algorithm mentioned in Section 3.2. Subsequently, we label these matrices to denote the type of error diffusion filter used to produce each error-diffused halftone image. Given a sample as the input, the target output vector (one component per filter class), and the six class feature matrices, the square error between the actual output and the target output can be derived according to Equation (19). The derivatives of the error in (19) with respect to the entries of the class feature matrices can be calculated explicitly, where • is the dot product of matrices defined, for any two matrices of the same size, as the entrywise product. The dot product of matrices satisfies the commutative and associative laws, that is, A • B = B • A and (A • B) • C = A • (B • C). Then, the iteration equation (23) can be obtained using the gradient descent method, where the step size is the learning factor and the superscript denotes the iteration number.

The purpose of learning is to seek the optimal class feature matrices by minimizing the total square error, i.e., the sum of the square errors over all samples, and the process of seeking the optimal matrices can be described as follows. Step 1. Initialize the parameters: the numbers of inner and outer iterations, the iteration variables, the nonnegative thresholds used to indicate the end of the iterations, the learning factor, the total number of samples, and the six class feature matrices.

Classification of Error-Diffused Halftone Images Using the Nearest Centroid Classifier

This section describes the details of classifying error-diffused halftone images using spectral regression kernel discriminant analysis, as follows. Step 2. Extract the sample feature matrices according to the steps described in Section 3. Step 4. A label matrix is built to record the type to which each error-diffused halftone image belongs. Step 5. A first subset of the 0-0 sample feature matrices is taken as the training set (the corresponding subset of the 1-0 or 1-1 feature matrices, or of the composite feature obtained by combining all three, can also be used as the training set). Reduce the dimension of these training samples using spectral regression discriminant analysis; the dimension reduction can be described by three substeps. Step 7. The remaining samples are taken as the testing samples, and the dimension reduction is applied to them using the method described in Step 5. Step 8. Compute the squared distance between each (dimension-reduced) testing sample and each class centroid; according to the nearest centroid rule, the sample is assigned to the class whose centroid minimizes this squared distance. In Step 8, the weak classifier (i.e., the nearest centroid classifier) is used to classify the error-diffused halftone images because it is simple and easy to implement. At the same time, in order to prove that the class feature matrices, extracted according to the method of Section 3 and processed by the spectral regression discriminant analysis, are well suited to the classification of error-diffused halftone images, this weak classifier is used in this paper instead of a strong classifier [20], such as a support vector machine or a deep neural network.
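The overall pipeline of Sections 3 and 4 can be summarized in a compact sketch. The implementation below is illustrative rather than the authors' code: the Gaussian kernel, the regularization value and the normalization of the pair-count matrices are arbitrary choices, and the gradient-descent refinement of the class feature matrices (Equation (23)) is omitted, so the raw pixel-pair statistics are fed directly to the spectral-regression step.

```python
import numpy as np

def pixel_pair_features(halftone, l=5):
    """Normalized 0-0, 1-0 and 1-1 pair-count matrices over a (2l+1)x(2l+1)
    template: a sketch of the counting idea of Section 3, not Equation (18)."""
    n = 2 * l + 1
    H = (halftone > 0).astype(np.uint8)
    rows, cols = H.shape
    m00, m10, m11 = np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n))
    for du in range(-l, l + 1):
        for dv in range(-l, l + 1):
            a = H[max(0, du):rows + min(0, du), max(0, dv):cols + min(0, dv)]
            b = H[max(0, -du):rows + min(0, -du), max(0, -dv):cols + min(0, -dv)]
            i, j = du + l, dv + l
            m00[i, j] = np.sum((a == 0) & (b == 0))
            m11[i, j] = np.sum((a == 1) & (b == 1))
            m10[i, j] = np.sum(a != b)            # 1-0 and 0-1 pairs counted together
    return np.concatenate([m.ravel() / m.sum() for m in (m00, m10, m11)])

def rbf_kernel(X, Y, gamma=1e-3):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def srkda_fit(X, labels, delta=0.01, gamma=1e-3):
    """Spectral-regression KDA: the responses y are eigenvectors of the class
    weight matrix W (class indicators orthogonalized against the constant
    vector), then (K + delta*I) alpha = y is solved (Theorem 1, Steps 1-2)."""
    labels = np.asarray(labels)
    m = len(labels)
    classes = np.unique(labels)
    Y = np.hstack([np.ones((m, 1))] +
                  [(labels == c).astype(float)[:, None] for c in classes])
    Q, _ = np.linalg.qr(Y)
    targets = Q[:, 1:len(classes)]                # c - 1 response vectors
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + delta * np.eye(m), targets)
    return alpha, X, gamma, classes

def srkda_project(model, Xnew):
    alpha, Xtrain, gamma, _ = model
    return rbf_kernel(Xnew, Xtrain, gamma) @ alpha   # f(x) = sum_i alpha_i K(x, x_i)

def classify(model, X_train, y_train, X_test):
    """Nearest-centroid rule of Step 8 in the reduced space."""
    y_train = np.asarray(y_train)
    Z_tr, Z_te = srkda_project(model, X_train), srkda_project(model, X_test)
    classes = model[3]
    centroids = np.stack([Z_tr[y_train == c].mean(axis=0) for c in classes])
    d2 = ((Z_te[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(d2, axis=1)]
```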
Experimental Analysis and Results

We carried out various experiments to verify the efficiency of our method in classifying error-diffused halftone images. The computer processor is an Intel(R) Pentium(R) CPU G2030 @ 3.00 GHz, the memory of the computer is 2.0 GB, the operating system is Windows 7, and the experimental simulation software is MATLAB R2012a. In our experiments, all the original images were downloaded from http://decsai.ugr.es/cvg/dbimagenes/ and http://msp.ee.ntust.edu.tw/. About 4000 original images were downloaded and converted into 24000 error-diffused halftone images produced by the six different error diffusion filters.

Classification Accuracy Rate of the Error-Diffused Halftone Images. 5.1.1. Effect of the Number of Samples. This subsection analyzes the effect of the number of training samples on classification. When the template size is 11 and the 0-0, 1-0 and 1-1 feature matrices, as well as their composition, are taken as the input data, respectively, the classification accuracy rates under different conditions are shown in Tables 1 and 2. Table 1 shows the classification accuracy rates for different numbers of training samples when the total number of samples is 12000. Table 2 shows the classification accuracy rates for different numbers of training samples when the total number of samples is 24000. The digits in the first line of each table are the sizes of the training sets. According to Tables 1 and 2, the classification accuracy rates with 12000 samples are higher than those with 24000 samples. Moreover, the classification accuracy rates improve with the increase of the proportion of training samples as long as the number of training samples is lower than 80% of the sample size, and the highest classification accuracy rates are achieved when the number of training samples is about 80% of the sample size. In addition, from Tables 1 and 2, we can also see that each of the 0-0, 1-0 and 1-1 feature matrices can be used as the input data alone; they can also be combined into a composite input, based on which the classification accuracy rates are high.

Comparison of Classification Accuracy Rate. To analyze the effectiveness of our classification algorithm, the mean values of the classification accuracy rates of the four data sets on the right-hand side of each row in Table 2 were computed. The SR algorithm outperforms the other baselines in achieving higher classification accuracy rates, when compared with LMS + Bayes (the method composed of least mean square and a Bayes classifier), ECF + BP (the method based on the enhanced correlation function and a BP neural network), and ML (the maximum likelihood method). Regarding time consumption, the other algorithms need to optimize the associated nonconvex problems, which are well known to converge very slowly, whereas the classifier based on SR performs the classification task by directly computing the squared distance between each testing sample and the different class centroids. Hence its time consumption is very low.
The Experiment of Noise Attack Resistance. In actual operation, the error-diffused halftone images are often polluted by noise before the inverse transform. In order to test the ability of SR to resist noise attacks, Gaussian noise with mean 0 and different variances was embedded into the error-diffused halftone images. Classification experiments were then carried out using the algorithm proposed in this paper, and the experimental results are listed in Table 6. According to Table 6, the accuracy rates decrease as the variance increases. Compared with the accuracy rates listed in Table 7, achieved by other algorithms such as ECF + BP, LMS + Bayes and ML, our classification method has obvious advantages in resisting noise.

Conclusion

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and achieves a high classification accuracy rate, with the added benefit of robustness to noise. A very interesting direction is to handle the disturbance possibly introduced by other attacks, such as image scaling and rotation, in the process of error-diffused halftone image classification.

Figure: the center entry denotes the pixel being processed; the remaining entries indicate the neighborhood pixels.

Error-Diffused Halftone Images. Assume that one value is the gray value of the pixel located at position (i, j) in the original image and another is the value of the pixel located at the same position in the error-diffused halftone image. For the original image, all the pixels are first normalized to the range [0, 1]. Then, the pixels of the normalized image are converted to the error-diffused image line by line; that is to say, if the (possibly error-corrected) pixel value is greater than or equal to the threshold, the corresponding pixel of the error-diffused image is set to 1; otherwise, it is set to 0. The error between the original and the binarized values is then diffused. According to the template of the Stucki error diffusion described above, the diffusion error is the difference between the two values, one neighboring pixel receives 8/42 of this error added to its original value, and another neighboring pixel receives a smaller fraction (also over 42) added to its original value.

Table 4: The training and testing time under different sample sizes (in seconds).
According to Table 3, the mean values of the classification accuracy rates obtained using SR with the 0-0, 1-0 and 1-1 features and with their composition, respectively, are higher than the mean values obtained by the other algorithms mentioned above. 5.2. Effect of the Size of the Statistical Feature Template on Classification Accuracy. Here, the 0-0, 1-0 and 1-1 features of the 24000 error-diffused halftone images are used to test the effect of the size of the statistical feature template. The features are constructed using the corresponding statistical matrices with different sizes (5, 7, 9, 11, 13, 15, 17, 19, 21, 23 and 25). Figure 1 shows that the classification accuracy rate achieves its highest value when the size is 11, no matter which feature is selected for the experiments. It is well known that the time consumption of the classification includes the training time and the testing time. From Table 4, we can see that the training time increases with the number of training samples; on the contrary, the testing time decreases as the number of training samples increases.

Table 5: Time consumption of different algorithms (in seconds).

Table 7: Classification accuracy rates of other algorithms under different variances.
5,227.6
2016-05-01T00:00:00.000
[ "Computer Science" ]
$\textrm{AdS}_{2}\times S^7$ solutions from D0 $-$ F1 $-$ D8 intersections We study an exhaustive analytic class of massive type IIA backgrounds preserving sixteen real supercharges and enjoying $\textrm{SL}(2,\mathbb{R})\times\textrm{SO}(8)$ bosonic symmetry. The corresponding geometry is described by $\textrm{AdS}_{2}\times S^{7}$ warped over a line, which turns out to emerge from taking the near-horizon limit of D0 $-$ F1 $-$ D8 intersections. By studying the singularity structure of these solutions we find the possible presence of localized O8/D8 sources, as well as of fundamental strings smeared over the $S^{7}$. Finally we discuss the relation between the aforementioned solutions and the known $\textrm{AdS}_{7}\times S^{2}$ class through double analytic continuation. Introduction Ever since the discovery of the AdS/CFT correspondence [1], there has been a lot of effort devoted to the classification of supersymmetric AdS vacua in string theory. While there exist very few examples enjoying maximal supersymmetry, a much richer structure opens up once we look into backgrounds preserving half-maximal supersymmetry, i.e. sixteen real supercharges. This fact is mainly due to the possibility of having solutions of the form AdS d+1 × M 9−d , where the corresponding geometries include a non-trivial warping. When focusing on two-dimensional holography though, a possible AdS 2 /CFT 1 correspondence is often thought of as far less understood than its higher-dimensional counterparts. In particular, many issues and exotic features of gravity in a nearly-AdS 2 geometry are encountered along the way [2,3]. Among these we mention the presence of multiple disconnected time-like boundary components, which may represent a crucial obstruction to identifying a correct holographic dictionary in the first place. A renovated interest in the topic has been sparked by the so-called SYK model [4] and its possible realizations in high-energy theory (see [5] and references therein), along with the novel issues which were recently discussed in [6] within this context. The general challenge posed by two-dimensional holography may be viewed as a motivation for looking into supersymmetric AdS 2 vacua in string theory with a clear brane picture, as those might shed a light on the unresolved issues concerning the AdS 2 /CFT 1 correspondence within a controlled framework. The aim of the present work is precisely that of enriching the landscape of supersymmetric AdS 2 string vacua by presenting a new class of such solutions in massive type IIA string theory. The class under consideration here will be identified with geometries given by warped products of AdS 2 and an 8-manifold constructed as a round seven-sphere fibered over a line. The existence of such solutions may be inferred from "double analytic continuation" arguments which would relate them to the ones in the AdS 7 × S 2 class of [7,8], in the same way as type IIB geometries of the form AdS 2 × S 6 warped over a Riemann surface were argued in [9] to be related to previously known backgrounds AdS 6 × S 2 warped over a Riemann surface, through the aforementioned double analytic continuation. The paper is organized as follows. We start out by reviewing some facts concerning D0 -F1 -D8 intersections in massive type IIA string theory and relate them to 1 4 -BPS supergravity solutions discussed in [10]. Subsequently, we show how to make an educated guess in the general Ansatz which directly produces solutions with enhanced supersymmetry thus realizing AdS 2 geometry. 
After integrating the obtained differential equations, we discuss the relation of the obtained solutions to the aforementioned AdS 7 counterparts through double analytic continuation. Then, we discuss the possible singularity structures as well as the range of the warp coordinate. We conclude with some further speculations concerning the possible holographic interpretation of our work.

2 AdS 2 solutions from D0 - F1 - D8 intersections

D0 - D8 bound states were considered in [11,12,13] as UV descriptions of N = (8, 0) superconformal quantum mechanics. Such bound states require a non-trivial B-field sourced by a fundamental string. Note that, contrary to all other D-branes, strings cannot end on a D-particle due to charge conservation issues [14,15]. However, the situation changes in the presence of D8-branes in the background, where in fact an F1 stretched between a D0 and the D8 has to be formed whenever the D0 crosses a D8 [16]. This physical process can be understood as a dual version of the Hanany-Witten (HW) effect [17]. D0 - F1 - D8 brane systems were exhaustively studied in [10] as a class of 1/4-BPS solutions in massive type IIA supergravity. The explicit set-up is summarized in table 1. The complete Ansatz for the 10D fields of massive type IIA supergravity can be completely specified in terms of two arbitrary functions of the (y, r) coordinates, denoted by S & K. This reads as in (2.1), where g s > 0 is the string coupling and dΩ 2 (7) denotes the metric of a unit-radius seven-sphere. For non-zero Romans' mass m the full set of 10D equations of motion and Bianchi identities is implied by the non-linear PDE's (2.2a) & (2.2b), where ∆ (8) is the Laplace operator on R 8, i.e. the common transverse space to the D-particle and the string, whose rotationally invariant form is expressed in polar coordinates. Note that in the massless limit K is no longer determined by equation (2.2a), but it must instead satisfy the PDE (2.4). A particular solution to (2.2b) & (2.4) is given by the semilocalized D0 - F1 intersection constructed by following the prescription in [18], this procedure yielding (2.5), which may in turn be reinterpreted as a background of the class in (2.1), upon performing a suitable identification of the functions S & K. In the remaining part of this section we will show how to extract massive AdS 2 solutions from the PDE's (2.2a) & (2.2b). We will follow the same procedure illustrated in [19], where the AdS 7 solutions of [7] are seen as emerging from NS5 - D6 - D8 systems of the Hanany-Zaffaroni type [20]. The key will be understanding which combination of the (y, r) coordinates plays the role of the radial coordinate of AdS 2, thus transforming the original system of PDE's determining 1/4-BPS flows into a single ODE yielding warped AdS geometries with enhanced supersymmetry as solutions. Analogously to the case studied in [19], we will be able to exploit the insight coming from the massless case also when the Romans' mass is turned on. This will result in an exhaustive classification of all AdS 2 solutions of this type in massive type IIA string theory.

The near-horizon limit of the D0 - F1 system

Let us first start from the massless solution in (2.5) describing the semilocalized intersection between a D-particle and a fundamental string. By performing the coordinate change of [21], we find that the metric, in the ρ → 0 limit, takes the form (2.8), which is the warped product of AdS 2 and an 8-manifold obtained as a fibration of S 7 over a line.
The insight that we can borrow from the massless situation described above is that taking the near-horizon limit where AdS emerges involves the two conditions in (2.9). In the following part of this section we will make use of the above conditions in order to guess the change of variables that translates the PDE's in (2.2a) & (2.2b) into a single ODE to be solved for AdS solutions.

AdS 2 × S 7 solutions in massive type IIA supergravity

Inspired by (2.9), we proceed by making the following Ansatz for the S and K functions: S = r^κ G(y²r⁴), K = (2/(m g_s)) y r^{κ+4} G′(y²r⁴), where G is an arbitrary function of the combination y²r⁴ ≡ ζ, and κ is a constant yet to be determined. For the above choice of S & K the dilaton reads as in (2.11), which stays finite in the limit (2.9) only when κ = −6. With this choice for κ, the 10D metric turns out to be given by AdS 2 × S 7 warped over the ζ coordinate, the warping being specified by the function G(ζ). Upon introducing new coordinates defined through f⁻⁴ ≡ G′(G + 4ζ)√ζ, one can furthermore check that the full set of equations of motion and Bianchi identities is implied whenever G satisfies the ODE (2.14), where a prime denotes differentiation with respect to ζ. This is solved by an expression involving two integration constants γ₁ & γ₂. The metric then takes a form in which we recognize ds²₂ as the line element of AdS 2 of radius 1/4.

Relation to AdS 7 × S 2 through double analytic continuation

From the previous analysis it appears evident that our class of AdS 2 solutions realizes the superalgebra osp(8|2) as an isometry algebra, just like the AdS 7 ones in [7,8], the only difference between the two realizations being the different choice of real form, i.e. so(2, 1) ⊕ so(8) for AdS 2 vs. so(2, 6) ⊕ so(3) for AdS 7. A similar phenomenon has been recently discussed in [9] for AdS 6 × S 2 vs. AdS 2 × S 6 solutions of type IIB supergravity, where in both cases the aforementioned 8-dimensional geometry is warped over a Riemann surface Σ. There it was argued that the two geometries are related by a so-called double analytic continuation involving an interchange of AdS and sphere factors, while at the same time performing a Wick rotation of the coordinate parametrizing the warping. In order to make a similar relation manifest for our case, we introduce a new coordinate z and a suitable function α(z), where a dot denotes differentiation with respect to z. The ODE (2.14) for G then becomes an ODE for α, which is solved by cubic polynomials in z. The metric now becomes (2.20), where ds² AdS 2 is the line element of AdS 2 of unit radius, and the dilaton reads e^Φ = g_s ℓ³ times an expression in α and its derivatives. We recognize the above metric as the double analytic continuation of the AdS 7 solution of [7], [8] as presented in [22, Sec. 2.2.3]. Let us note that although the above solution is obtained in massive type IIA supergravity, we can get the massless limit by taking m → 0 and c₃ → 0 at the same time. Hence we will discuss the massless solution separately.

Analysis of the solutions

In this section we analyze the geometry and the dilaton of the solution. To this end, it turns out to be very convenient to use the z coordinate that directly relates our solutions to the AdS 7 ones in [22]. We keep c₃ ≠ 0, as in the opposite case we would obtain a massless solution, as mentioned at the end of the previous section.
Positivity of the metric (2.20) requires suitable sign conditions on α and its derivatives. The special regions of the geometry, where the warp factor vanishes or tends to infinity, are (i) α = 0, or (ii) α̈ = 0, or (iii) α̇² − 2αα̈ = 0. Note that, in particular, positivity of the metric requires α̇ to go to zero whenever either α or α̈ goes to zero. Hence we have the following special regions:

• a double root of α

• a triple root of α

• a root of α̇ and α̈ (stationary point of inflection of α)

The discriminant of α̇² − 2αα̈ is ∆(α̇² − 2αα̈) = −2⁸ 3³ c₃² ∆(α)², hence we only need to consider a simple root of α̇² − 2αα̈, as a multiple root is also a multiple root of α and this is covered by the first two cases. In particular, α̇² − 2αα̈ can have simple roots, a triple root, which corresponds to a double root of α, or a quadruple root, which corresponds to a triple root of α. Let us now analyze the geometry and the dilaton near the aforementioned regions.

• Near a double root z₀ of α the metric takes a regular form with ̺² ≡ z − z₀, and the dilaton stays finite. Hence the internal space becomes R 8 and the geometry is regular.

• Near a triple zero the metric develops a conical singularity and hence we will dismiss this case.

• Near a stationary point of inflection of α the dilaton becomes ℓ⁻³ e^Φ ∼ 1/6 ⋯, and we recognize this singularity as an O8/D8-brane singularity.

We now look at the range of z. An analysis of the roots of the quartic polynomial α̇² − 2αα̈ shows that it always has two real roots (simple or multiple). Since the coefficient of the z⁴ term of α̇² − 2αα̈ is −3c₃², we conclude that it stays negative for z ∈ [z₀, ∞), where z₀ is a root. The behavior of the solution at infinity is given in (3.6), and for the dilaton ℓ⁻³ e^Φ ∼ β₄ z^{−1/2}, with β₄ a constant. It is worth mentioning that such a behavior at infinity can be understood as that of a curved D8 domain wall carrying D0-brane charge. This type of BIon solution was also investigated in [10]. To make our claim manifest, we describe our asymptotic solution at ∞ as the one corresponding to picking G∞ = (4/3) ζ, and interpret it as a curved domain wall solution whose profile, for S = S∞, integrates to y = c r^{2/3}. One can then check that the metric (3.6) is reproduced by a change of coordinates in which R is identified with the AdS 2 radial coordinate.

Compactifying the range of the warp coordinate

Although z is defined on a half-line, we can compactify it by making a coordinate transformation to a trigonometric function. Let us look at the case where α has a double root, in the neighborhood of which the internal space is regular. We thus take such a profile and perform the coordinate transformation (3.10). The metric then becomes (3.11).

The massless solution

In the massless limit we need to take c₃ = 0, and so α becomes a quadratic polynomial. In this case α̇² − 2αα̈ evaluates to c₁² − 4c₀c₂, which is also the discriminant ∆(α) of α. Since α̇² − 2αα̈ has to be negative, we conclude that α has complex roots and is always different from zero. The metric now reads as in (3.12). Upon making a further coordinate transformation, the metric takes exactly the form in (2.8) obtained as the near-horizon geometry of the massless D0 - F1 semilocalized intersection in [21, Sec. 4.5], provided that the D0-brane charge Q_D0 is identified accordingly.

Conclusions

In this paper we have studied a class of warped supersymmetric AdS 2 solutions in massive type IIA emerging from D0 - F1 - D8 systems as their near-horizon geometries.
All solutions in this class turn out to preserve sixteen real supercharges, and the corresponding internal geometry is given by a fibration of a (round) seven-sphere over a line. The solutions can on the one hand be obtained as special cases of the 1/4-BPS family of backgrounds studied in [10], and on the other hand be reinterpreted as the double-analytically continued version of the AdS$_7$ solutions of [7,8]. This latter relation is further corroborated by the fact that both families offer explicit realizations of the osp(8|2) superalgebra within massive type IIA supergravity, their distinction corresponding only to different choices of real form. Due to the corresponding brane picture of the vacua discussed here, we expect them to be dual to some N = (8, 0) superconformal quantum mechanics. The first interesting check for a possible AdS$_2$/CFT$_1$ correspondence in this context would be to compute the holographic central charge by following the prescription in which both the effective AdS radius $L_{\mathrm{AdS}}$ and the Newton constant $G_N^{(d+1)}$ are averaged over the warping, and to make sense of the answer from a field theory viewpoint. However, it may be worth mentioning that the above prescription for computing $c_{\mathrm{hol}}$ would yield an infinite result in our case. This is mainly due to the range of the z warp coordinate being non-compact. Similar pathologies were also encountered in [23,24] in the context of AdS$_5$ solutions in type IIA obtained by taking a non-Abelian T-duality (NATD) of the type IIB Klebanov-Witten AdS$_5$ × T$^{1,1}$ background [25]. There this issue was resolved on the gravity side by introducing a hard cut-off, which should be interpreted from a physical perspective as inserting suitable branes which interrupt the dual quiver that would otherwise be infinitely long. It would be interesting to investigate whether this feature of our solutions can be resolved in a way similar to the NATD solutions. This could possibly shed light on their holographic dual superconformal quantum mechanical models and the possible emergence of deconstructed extra dimensions underlying this structure. We hope to come back to these issues in the future.
3,983
2018-07-02T00:00:00.000
[ "Physics" ]
Ecofriendly Protic Ionic Liquid Lubricants for Ti6Al4V: Three diprotic ionic liquids (PILs) containing bis(2-hydroxyethyl)ammonium cations and citrate (DCi), lactate (DL), or salicylate (DSa) hydroxy/carboxylate anions were studied as lubricants for the Ti6Al4V–sapphire contact. At room temperature, the neat PILs are non-Newtonian fluids, which show up to a 70% friction coefficient reduction with respect to water. New aqueous lubricants were developed using the PILs as 1 wt.% additives in water. The new (Water + 1 wt.% PIL) lubricants showed friction reductions of higher than 50% with respect to water at room temperature. The lowest friction coefficients at room temperature were achieved with thin lubricant layers deposited on Ti6Al4V from Water + 1 wt.% PIL after water evaporation. At 100 °C, the best tribological performance, with the lowest friction coefficients and wear rates, was obtained for the PILs containing aliphatic anions: DCi and DL. The surface layers on the sapphire balls, with mild adhesion and abrasion wear mechanisms, were observed via scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), and X-ray photoelectron spectroscopy (XPS).

Introduction

The advantages of titanium alloys over other metal alloys, such as high corrosion resistance, high specific mechanical properties, good high-temperature performance, and biocompatibility, have seen their increasing use in a wide range of fields, from aerospace technology to biomedicine. However, titanium alloys present relatively low hardness values and poor tribological performance. Decreasing the friction values and increasing the wear resistance of titanium alloys [1,2] and, in particular, of Ti6Al4V [3,4] is therefore a major technological challenge in critical applications, such as precision instruments, aerospace equipment, or biomedical materials.

Recent advances in titanium tribology have focused on the use of water-based lubricants, vegetable oils, and the combination of base lubricants and nanomaterials. Yang et al. [5] used a 5 wt.% castor oil sodium sulfate water emulsion in the lubrication of Ti6Al4V against WC-Co under reciprocating sliding, obtaining a friction coefficient of 0.2 after a short running-in period. Adsorption of the micelles formed by the additive molecules on the titanium alloy surface was proposed as the main mechanism. The same research group described [6] the effect of the addition of amines containing hydrophilic or hydrophobic substituents on the effectiveness of the castor oil sodium sulfate aqueous lubricant. The formation of ammonium salts by the reaction between the amine and carboxylate groups was proposed, and the lubricating performance was related to the ability to form adsorbed layers.

A recent review [7] has analyzed the increasing need for sustainable, high-tribological-performance metalworking fluids for difficult-to-cut materials, including titanium alloys. The main research lines are based on the use of additives or surfactants to modify vegetable oils or water-based biolubricants and the application of minimum quantity lubrication (MQL) to the cutting regions. Singh et al. [8] reported a 15% friction reduction under MQL with 1.5% graphene in canola oil compared to conventional flood lubrication. Other current research lines aimed at reducing energy loss and increasing the tribological performance of lubricant oils include the use of nanoadditives [9] and surface coatings [10].
Zn nanoparticles in polyethylene glycol have been used to lubricate the steel/Ti6Al4V pair [16].Good lubricating performance was attributed to the formation of ZnO films on the titanium and steel surfaces under sliding conditions. Ionic liquids (ILs) have been investigated for the last two decades as lubricants, lubricant additives, and precursors of surface coatings for a variety of materials [17,18] found among the numerous relevant industrial applications of these ordered fluids, with a unique combination of properties [19].Although a limited number of studies have referred to titanium lubrication with ionic liquids [20][21][22][23][24][25][26][27], they have shown promising potential. Our research group has reported significant reductions in the friction coefficients and wear rates for titanium and titanium alloys using aprotic imidazolium ionic liquids with halide-containing anions [20,21].In this line, Fan et al. [22] have described the use of perfluorosulfonate ILs as lubricants of Ti6Al4V.Phosphorus-containing ILs, such as quaternary ammonium phosphate [23], have also been studied as lubricants in titaniumsteel contacts.However, the presence of the highly reactive aluminum in Ti6Al4V alloy causes tribocorrosion processes when lubricated with these ILs. Davis et al. [24] studied the effect of 1-butyl-3-methylimidazolium hexafluorophosphate on turning titanium grade 2, obtaining a 60% tool wear reduction with respect to unlubricated conditions and 15% with respect to lubrication without IL. Friction and wear for pure titanium have been reduced by coating the surface with dicationic imidazolium ILs [25].Although ILs have made a significant contribution to precision machining fluids, fluorine-containing imidazolium ILs have been shown to suffer tribochemical degradation on Ti6Al4V surfaces [26]. ILs have made recent contributions to the current need for a reduction in the quantity of lubricant used, particularly for cutting fluids and sustainable precision-machining operations [27,28]. Nontoxic, noncontaminant, sustainable, or even biodegradable lubricants could be based on water as a base fluid [29,30].In this case, halogen-free water-soluble IL additives are needed in order to improve the poor tribological performance of water and to avoid corrosion and contamination.New aqueous water + PIL lubricants will also be analyzed in the present study.Moreover, these aqueous lubricants were used as precursors for thin-film lubricants. 
Specifically, three protic ammonium carboxylate ionic liquids (PILs), namely bis(2-hydroxydiethylammonium) salicylate (DSa), tri[bis(2-hydroxydiethylammonium)] citrate (DCi), and bis(2-hydroxydiethylammonium) lactate (DL) (Figure 1), were selected. 2-hydroxyethylammonium lactates have been shown to be nontoxic and highly biodegradable [31]. Recent studies [31,32] have also confirmed the low toxicity of PILs, including those used in the present study. Another relevant aspect is that PILs are readily available through a simple synthetic route and have shown good tribological performance in recent studies. DSa and DCi have been previously used as lubricants under various sliding conditions [33,34]. As far as the authors are aware, this is the first time that PILs have been used in the lubrication of Ti6Al4V and the first study on the tribological performance of DL, although lactate IL lubricants have been previously studied: 1-octyl-3-methylimidazolium lactate has been used as a lubricant additive [35], and triethanolamine lactates have recently shown their ability to form adsorbed layers on iron surfaces [36,37]. The pin-on-disk configuration conditions selected in the present study are similar to those used in previous studies on titanium lubrication with aprotic ILs [20,21]. Although this is a fundamental study rather than an applied one, the Al2O3 counterpart was selected because it is commonly used in the cutting or machining operations of Ti6Al4V.

Rheology

Viscosity measurements were performed with an AR-G2 rotational rheometer from TA Instruments (New Castle, DE, USA) using a plate-plate configuration, with the temperature controlled by a Peltier system with an accuracy of 0.1 °C.

Contact Angle Measurements

A DSA 30B (Krüss, Hamburg, Germany) instrument was used to measure the contact angles on the Ti6Al4V surfaces. Instantaneous contact angles and contact angles after 5 min were determined for each lubricant.

Tribological Tests

Friction values of the sapphire balls against the Ti6Al4V disks were recorded by a pin-on-disk tribometer (Microtest, Madrid, Spain) that was equipped with an oven [21], under ambient conditions (RH 50 ± 5%) at 25 and 100 °C.
To ensure reproducibility, the tribological tests were repeated at least three times under a normal load of 1 N (average contact pressure: 0.99 GPa), with a sliding velocity of 0.05 m s−1, a sliding radius of 5.0 mm, a sliding distance of 200 m, and a lubricant volume of 0.2 mL. After each tribological test, the disks were cleaned with distilled water and ethanol and then dried with hot air.

Surface Analysis

Wear measurements of the Ti6Al4V disks were determined by means of a Talysurf CLI 500 (Taylor Hobson, Leicester, UK) optical profilometer. A scanning electron microscope (SEM) S3500 N (Hitachi, Chiyoda, Japan) was used to obtain the electron micrographs and energy dispersive X-ray (EDX) spectra of the wear tracks. X-ray photoelectron spectroscopy (XPS) analyses were performed using K-Alpha Thermo-Scientific equipment (Waltham, MA, USA), with ±0.1 eV precision.

Viscosity Measurements

The viscosity values of some protic ionic liquids have previously been reported [41] under different conditions. We now report the influence of shear rate and temperature on their rheological behavior. It is important to notice that the protic ionic liquids used in the present study were saturated with adsorbed water, as no drying method was applied before measurements.
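As a quick orientation, the pin-on-disk parameters quoted in the Tribological Tests subsection above (0.05 m s−1 sliding speed, 5.0 mm track radius, 200 m sliding distance, 20 min initial period in the friction records) fix the duration of each test and the number of disk revolutions. The short, self-contained calculation below makes these numbers explicit; it introduces no data beyond the values already stated.

```python
# Back-of-the-envelope numbers implied by the pin-on-disk parameters stated above.
import math

velocity = 0.05          # sliding speed, m/s
radius = 5.0e-3          # wear-track radius, m
distance = 200.0         # total sliding distance, m
running_in = 20 * 60     # initial period quoted in the friction records, s

duration_s = distance / velocity
revolutions = distance / (2 * math.pi * radius)
running_in_distance = velocity * running_in

print(f"test duration: {duration_s:.0f} s (~{duration_s / 60:.0f} min)")            # 4000 s, ~67 min
print(f"number of disk revolutions: {revolutions:.0f}")                              # ~6366
print(f"sliding distance covered in the first 20 min: {running_in_distance:.0f} m")  # 60 m
```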
Figure 2 shows the variation in viscosity with shear rate for the three protic ionic liquids, both at room temperature (Figure 2a) and at 100 °C (Figure 2b). At room temperature, the ionic liquids show non-Newtonian behavior, with a shear-thinning effect. The data were fitted to the Ostwald-de Waele model, according to Equation (1), and the fitted values are shown in Table 1. The parameter n gives information about the deviation from Newtonian behavior. DCi presents a stronger shear thinning, and its lower value of n indicates the presence of reversible interactions that are disturbed at high shear rates [42]. Conversely, at 100 °C (Figure 2b), all ionic liquids show Newtonian behavior, with constant viscosity values (shown in Table 2).
Table 2.

Ionic Liquid    Viscosity (Pa·s)
DCi             1.42 ± 0.05
DL              0.067 ± 0.004
DSa             0.051 ± 0.003

As can be observed in Figure 2, DCi presents much higher viscosity values than DL and DSa, not only at room temperature, where variable water contents can influence the viscosity values, but also at 100 °C once the water has been removed. This is attributed to stronger anion-cation interactions in DCi due to the presence of a tricarboxylate anion and three protic ammonium cations in its molecular composition, while DSa and DL are monocationic with monocarboxylate anions.

Contact Angles

Table 3 shows the contact angle values for the neat PILs and for the Water + 1 wt.% PILs. It can be observed that DSa shows much higher wettability on Ti6Al4V than DL and DCi. This is particularly so after 5 min, when DSa shows the lowest contact angle of all the lubricants. This is attributed to a strong interaction between the chelating salicylate anion and the titanium surface. Due to its very high viscosity (Figure 2) and strong molecular interactions, DCi shows very low wettability, such that its contact angles, both instantaneous and after 5 min, are the highest (Table 3). As expected, the contact angles for all the water-based lubricants are very similar due to the low ionic liquid proportion.
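Returning to the rheological fit described above, the sketch below shows a minimal Ostwald-de Waele (power-law) regression of apparent viscosity against shear rate, eta = K * (shear rate)**(n - 1). The shear-rate and viscosity arrays are illustrative placeholders chosen only to demonstrate the fitting procedure; they are not the measured data of Figure 2 or the fitted parameters of Table 1.

```python
# Minimal sketch of an Ostwald-de Waele (power-law) fit of apparent viscosity data.
import numpy as np
from scipy.optimize import curve_fit

def power_law(shear_rate, K, n):
    """Apparent viscosity predicted by the Ostwald-de Waele model."""
    return K * shear_rate ** (n - 1.0)

shear_rate = np.array([1.0, 3.0, 10.0, 30.0, 100.0])   # 1/s, placeholder values
viscosity = np.array([2.5, 1.9, 1.4, 1.05, 0.80])      # Pa*s, placeholder values

(K_fit, n_fit), _ = curve_fit(power_law, shear_rate, viscosity, p0=(1.0, 0.8))
print(f"K = {K_fit:.3f} Pa*s^n, n = {n_fit:.3f}")  # n < 1 indicates shear thinning
```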
Neat PIL Lubricants

Figure 3 shows that, after the initial period of 20 min, all ionic liquid lubricants show similar steady-state friction coefficient records along a sliding distance of up to 200 m (the corresponding record for water is shown in Figure 4). The average coefficient of friction values after three tests for each lubricant, including water, are reported in Table 4.
As expected, water behaves as a very poor lubricant, with a very high friction value of 0.9. All the neat PILs show a much better friction-reducing performance, with coefficients of friction ≤0.3 and reductions of between 67% and 70% when compared to water. The short-chain, aliphatic DCi and DL also reduced the wear rates of Ti6Al4V, and the best antiwear performance was obtained in the case of DL. In contrast, DSa produced the highest wear rate. This result could be related to the low contact angle of DSa on Ti6Al4V (Table 3) and the strong surface interactions, which could give rise to the formation of coordination compounds, thus justifying the high wear loss. Differences in viscosity and contact angles do not produce significant variations in the average friction values for the PILs.
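The friction reductions quoted above follow directly from the tabulated coefficients of friction. The small script below makes the arithmetic explicit; the 0.3 value is the upper bound quoted for the neat PILs, and 0.13 is the value quoted later for the DL thin-layer lubricant, both compared against the 0.9 measured for water.

```python
# Friction reductions relative to water, computed from the values quoted in the text.
cof_water = 0.9
cof_neat_pil = 0.30          # upper bound quoted for the neat PILs ("<= 0.3")
cof_dl_thin_layer = 0.13     # value quoted later for the DL thin-layer lubricant

def reduction(cof, reference=cof_water):
    """Percentage reduction of the coefficient of friction with respect to the reference."""
    return 100.0 * (reference - cof) / reference

print(f"neat PIL vs water: {reduction(cof_neat_pil):.0f}% reduction")            # ~67%
print(f"DL thin layer vs water: {reduction(cof_dl_thin_layer):.0f}% reduction")  # ~86%
```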
Water + 1 wt.% PIL

Once the efficient tribological performance of the PILs as neat lubricants was established, they were used as additives in water in order to develop new, environmentally friendly aqueous lubricants. A 1 wt.% mass fraction of PIL in water was selected in an attempt to obtain significant results using a low additive proportion. Figure 4 shows that the addition of 1 wt.% ionic liquid reduces the friction coefficient of water at room temperature. Table 5 summarizes the average coefficients of friction and the wear rate values for these water-based lubricants. The average friction values (Table 5) for all water-based lubricants show that a maximum friction reduction of higher than 50% was obtained with the addition of DSa, whereas Water + DL and Water + DCi showed lower friction reductions with respect to water. The order of the wear rates is Water + DSa < Water + DCi < Water + DL. It is interesting to note that the wear rate for Water + DSa is not only the lowest of the three Water + PIL lubricants, but is also lower than that obtained for neat DSa (Table 4). The dilution of DSa in water might thus reduce the strong surface interaction observed for neat DSa, as seen in the increase in contact angle, thereby reducing the amount of material loss.

Previous results for similar short-chain ammonium carboxylate protic ionic liquids in water [43] have shown that, under sliding conditions (for stainless steel-sapphire contacts), water evaporates after a certain sliding distance and the thin ionic liquid film that remains at the contact is able to give ultralow friction values. In the present case, no transition to lower friction is observed in Figure 4 over 200 m. When the sliding distance was extended to 500 m, Figure 5 shows that water maintains high friction values from 300-500 m, while the Water + PIL lubricants show transitions to lower friction values between 350-450 m. This is attributed to the formation of PIL surface layers on Ti6Al4V when water evaporates at the contact zone.
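Assuming the same 0.05 m s−1 sliding speed quoted in the experimental section, the 350-450 m window in which the friction transitions occur corresponds to roughly two to two and a half hours of continuous sliding, which is consistent with a gradual evaporation of water at the contact. The short calculation below makes this timing explicit.

```python
# Sliding time corresponding to the 350-450 m transition window, at 0.05 m/s.
velocity = 0.05  # sliding speed, m/s

for distance in (350.0, 450.0):  # sliding-distance window of the friction transition, m
    minutes = distance / velocity / 60
    print(f"{distance:.0f} m corresponds to ~{minutes:.0f} min of sliding")  # ~117 and ~150 min
```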
Thin Lubricant Layers

In order to take advantage of this lubricating ability of the PILs, the next objectives were to eliminate the presence of water, which causes high friction coefficients, and to reduce the volume of PIL lubricant at the contact. Following the previously described methodology [44] for other water-based lubricants and materials, the surface of the Ti6Al4V disks was covered with Water + 1 wt.% PIL, and thin-film PIL lubricants were obtained after subsequent water evaporation. On the basis of these results, thin PIL layers were generated on the Ti6Al4V disks by water evaporation under mild conditions, as previously described in [44], before the tribological tests. Figure 6 shows the coefficient of friction results for the new thin-layer lubricants.

These new thin-film lubricants achieve the lowest friction coefficients (Tables 4-6), in particular the citrate and lactate derivatives DCi and DL. It is especially relevant that the low friction coefficient obtained for the DL thin layer (0.13; Table 6) represents up to a 76% reduction compared to Water + DL (Table 5) and more than a 50% reduction compared with neat DL (Table 4). The fact that the DSa thin-layer lubricant presents friction coefficient and wear rate values very similar to those obtained for Water + DSa (Table 5) could be attributed to a higher water content in the DSa thin layer than in the rest of the thin-layer lubricants.
The wear rate values for Ti6Al4V at room temperature (Tables 4-6) are in good agreement with the friction coefficient values. While the best performance as a water additive was found for DSa, the lowest wear rate values were obtained for DCi and DL, both as neat lubricants and as thin-film lubricants.

Neat PIL Lubricants

Figure 7a shows the SEM micrograph of the wear track and the titanium element map on the sapphire ball after lubrication at room temperature with neat DCi. The scar on the sapphire ball is covered by a thin, discontinuous titanium layer transferred from the Ti6Al4V disk. As expected, the main peaks in the EDX spectrum of the sapphire ball are those of aluminum and oxygen. Carbon and nitrogen peaks from the DCi lubricant were also detected on the ball's surface (Figure 7b), which, however, was free from adhered titanium in some areas, as shown in the spectrum of Figure 7b.

In contrast to neat DCi, the neat DL lubricant is able to protect the sliding contact from titanium transfer at room temperature. As shown in Figure 8, in this case the entire contact area on the sapphire ball is free from titanium and is covered by a tribolayer containing carbon and nitrogen from the DL lubricant. This is in agreement with the lowest wear rate being observed in the case of neat DL (Table 4).

The wear track on the disk after lubrication with neat DCi (Figure 9) shows only titanium, aluminum, and vanadium peaks, corresponding to the elements present in the Ti6Al4V alloy, without carbon or oxygen from the lubricant. The magnification of the SEM micrograph in Figure 9 shows parallel grooves characteristic of a mild abrasive wear mechanism. A similar mild abrasive wear mechanism was also observed inside the wear track for the neat DL lubricant (Figure 10).

Neat DSa produces very severe abrasive surface damage (Figure 11a), with wear debris adhered to the sapphire ball, which presents a machining-chip morphology. A layer of titanium adhered to the sapphire surface (Figure 11b) covers the wear scar, thus showing an adhesive wear mechanism.
Water + 1 wt.% PILs

Pure water failed to lubricate the sapphire-titanium contact. As seen in Figure 12a, the severe wear produces a flat, circular wear scar on the sapphire ball surface. Figure 12b-d show the scars on the sapphire balls after lubrication with the 1 wt.% solutions of the PILs in water. The smallest wear scar area was obtained for the Water + 1 wt.% DSa lubricant. This is in agreement with the lower wear rate of Ti6Al4V after lubrication with Water + 1 wt.% DSa (Table 5) and with the variation in wear-track width and the severity of the wear damage on the Ti6Al4V disks for each water-based lubricant (Figure 13a-d).

Thin-Layer Lubricants

After lubrication with the DSa thin-layer lubricant, which was deposited onto the Ti6Al4V surface after water evaporation, the sapphire ball surface shows (Figure 14) the presence of adhered particles that do not contain titanium, as the EDX spectrum only shows Al and O from the sapphire and C and N from the lubricant. This is attributed to the strong interaction of DSa with the Ti6Al4V surface after thin-film formation, which would prevent or diminish titanium adhesion to the sapphire ball's surface. This is in sharp contrast to the adhesive mechanism seen for neat DSa (Figure 11b).
Figure 15 shows that when a thin layer of DCi is deposited on Ti6Al4V before the tribological test, the wear scar on the sapphire ball (after the test) is also free from titanium adhesion. This is, again, in sharp contrast with the observation made for neat DCi (Figure 7) and could account for the lower wear rate observed for the DCi thin-layer lubricant (Table 6).

When DL is used as a thin-layer lubricant, the scar on the sapphire ball (Figure 16a) contains both titanium (Figure 16b) and carbon (Figure 16c). The formation of this transfer layer shows that the DL thin-film lubricant is not able to protect the Ti6Al4V alloy as efficiently as neat DL. However, the final wear rate, once the tribolayer is formed, is slightly lower than that obtained for neat DL.
Table 7 shows the XPS surface analysis results after lubrication with DL as a neat lubricant and with a DL thin-layer lubricant at room temperature.The binding energies were assigned according to the literature [21,44] and are in agreement with previous XPS results for protic ammonium carboxylate ionic liquid lubricants on different surfaces.Aliphatic or adventitious carbon is the main C1s peak at 285 eV.C1s binding energies at 286, 287, and 289 eV are assigned, respectively, to the -CH 2 OH, -CH 2 NH, and -COO functional groups, which are present in the composition of DL cations and anions (Figure 1).The three O1s binding energies at 530 eV (the most abundant), 531, and 532 eV correspond to oxide, -OH, and -CO-groups.Although the values of the O1s binding energies are the same for neat DL and thin-layer DL, the relative abundance of each peak changes.Thus, for the DL thin-layer lubricant, a higher atomic percentage is found for the peak assignable to the oxides, and a lower atomic percentage is found for the peaks at 532 eV, which could be due to a reduction in the water proportion present in the thin-layer DL with respect to neat DL. The main N 1s peak at 400.1 eV is due to the protic ammonium -NH group present in the cation.The minor peak at 401.5 eV for neat DL and 401.8 eV for the thin-layer DL could be due to -C-N quaternary ammonium caused by surface interactions and degradation processes under sliding conditions. A total of two Ti 2p 3/2 peaks were observed: the less abundant, being assignable to titanium metal, appears at a lower binding energy, and the second, present in a much higher proportion at a higher binding energy, corresponds to titanium oxide.In a similar way, the two Al 2p 3/2 binding energies correspond to metallic aluminum (the minor peak at a lower binding energy) and to aluminum oxide or hydroxide (present in a higher proportion).A third minor Al 2p 3/2 peak observed in the case of lubrication with the thin-layer DL is tentatively assigned to nonstoiquiometric aluminum oxide (AlOx) [45]. Friction Coefficients and Wear Rates at 100 • C Because the Ti6Al4V-sapphire pair is formed by materials with high-temperature applications, it was important to test the tribological behavior of the new PIL lubricants above room temperature, in particular, at 100 • C, under conditions where water cannot be used. Table 8 shows the tribological results for the three neat PIL lubricants at 100 • C. 
As expected, the friction coefficients and wear rates are higher than those found at room temperature (Table 4) for all lubricants, with the highest increase observed for DSa. The salicylate derivative also presented the highest wear rate as a neat lubricant at room temperature (Table 4). These results could be related to its lower viscosity at both 25 and 100 °C (Tables 1 and 2, respectively). The best lubricants, with the lowest friction coefficients and wear rates at 100 °C, are again, as was observed at room temperature, the aliphatic citrate and lactate PILs, DCi and DL. Here, neat DCi shows the best friction-reducing ability, probably due to the reduction of its viscosity at high temperature (Figure 2; Tables 1 and 2).

3.6. Wear Mechanisms and Surface Analysis after Tests at 100 °C

SEM/EDX studies were carried out for the best lubricants at 100 °C, DCi and DL, analyzing in particular the presence of material adhered from the Ti6Al4V disk to the sapphire ball. Figure 17 shows that, after lubrication with neat DCi at 100 °C, the surface of the sapphire ball is covered by a titanium-free layer. The strong carbon and oxygen peaks in the EDX spectrum could be assigned to the presence of a DCi surface layer. A peak around 1 keV, which is assignable to sodium, could be due to sample contamination. This result is similar to that observed for the DCi thin-layer lubricant at room temperature (Figure 15). However, under these more severe conditions, at 100 °C and in the presence of a thick layer of neat DCi lubricant, a more continuous tribolayer covers the contact region on the sapphire ball surface.
After lubrication with DL at 100 °C (Figure 18), some wear debris particles composed of Ti, Al, and V from the Ti6Al4V disk adhered to the ball's surface. A lower carbon and oxygen proportion is present in this case compared with DCi (Figure 17). This shows that, in contrast to the results observed at room temperature, the more stable surface layer, giving the lowest friction coefficient at 100 °C (Table 8), is formed by DCi.

Conclusions

Three sustainable and ecofriendly protic ionic liquids containing 2-hydroxyethyl diprotic ammonium cations and carboxylate anions derived from natural products (citrate, salicylate, and lactate) were studied. The rheological study showed that the neat protic ionic liquids are non-Newtonian fluids at room temperature and present Newtonian behavior at 100 °C. The tricarboxylate citrate derivative shows higher viscosity values and lower wettability on Ti6Al4V surfaces than the monocarboxylate versions. The three protic ionic liquids were studied as neat lubricants, as lubricant additives in water, and as thin lubricant layers for Ti6Al4V sliding against sapphire at room temperature, as well as neat lubricants at 100 °C.
Figure 3. Coefficient of friction (COF) sliding distance records for neat ionic liquids at room temperature.
Figure 4. Coefficient of friction records for water and the Water + 1 wt.% ionic liquids at room temperature (sliding distance: 200 m).
Figure 5. Coefficient of friction records for water and Water + 1 wt.% ionic liquids at room temperature (sliding distance: 500 m).
Figure 6. Evolution of coefficients of friction with sliding distance for thin-layer PIL lubricants at room temperature (sliding distance: 200 m).
Figure 7. Lubrication with neat DCi at room temperature: (a) SEM micrograph and titanium element map on the sapphire ball; (b) SEM micrograph and EDX spectrum of the selected (white box) region of the layer on the sapphire ball.
Figure 8. SEM micrograph and Ti, C, and N element maps of the sapphire ball after lubrication with neat DL at room temperature.
Figure 9. SEM micrographs and EDX spectrum of the Ti6Al4V disk after lubrication with neat DCi at room temperature, with a higher magnification of a region inside the wear track.
Figure 10. SEM micrograph of the wear track on the Ti6Al4V disk after lubrication with neat DL at room temperature.
Figure 11. (a) SEM micrograph and (b) titanium element map on the sapphire ball after the test with neat DSa at room temperature.
Figure 14. SEM image and EDX spectrum of the selected region (white box) of the sapphire ball after lubrication with the DSa thin layer.
Figure 15. (a) SEM micrograph and (b) Ti element map of the sapphire ball after lubrication with the DCi thin-lubricant layer at room temperature.
Figure 16. Lubrication with a DL thin film at room temperature: (a) SEM micrograph of the sapphire ball; (b) titanium element map; (c) carbon element map.
Figure 17. SEM micrograph and EDX spectrum of the selected area (white box) on the sapphire ball after lubrication with neat DCi at 100 °C.
Figure 18. SEM micrograph and EDX spectrum of wear debris on the selected area (white box) on the sapphire ball after lubrication with neat DL at 100 °C.
Table 4. Coefficients of friction and wear rates for neat lubricants at room temperature.
Table 5. Coefficients of friction and wear rates for the Water + 1 wt.% PIL lubricants.
Table 6. Coefficients of friction and wear rates for the thin-layer lubricants.
Table 7. XPS results for the Ti6Al4V disk after lubrication tests at room temperature.
12,114.2
2022-12-22T00:00:00.000
[ "Materials Science", "Chemistry", "Engineering" ]
Proper symmetric and asymmetric endoplasmic reticulum partitioning requires astral microtubules Mechanisms that regulate partitioning of the endoplasmic reticulum (ER) during cell division are largely unknown. Previous studies have mostly addressed ER partitioning in cultured cells, which may not recapitulate physiological processes that are critical in developing, intact tissues. We have addressed this by analysing ER partitioning in asymmetrically dividing stem cells, in which precise segregation of cellular components is essential for proper development and tissue architecture. We show that in Drosophila neural stem cells, called neuroblasts, the ER asymmetrically partitioned to centrosomes early in mitosis. This correlated closely with the asymmetric nucleation of astral microtubules (MTs) by centrosomes, suggesting that astral MT association may be required for ER partitioning by centrosomes. Consistent with this, the ER also associated with astral MTs in meiotic Drosophila spermatocytes and during syncytial embryonic divisions. Disruption of centrosomes in each of these cell types led to improper ER partitioning, demonstrating the critical role for centrosomes and associated astral MTs in this process. Importantly, we show that the ER also associated with astral MTs in cultured human cells, suggesting that this centrosome/astral MT-based partitioning mechanism is conserved across animal species. Introduction Cells face a significant challenge each time they divide, because not only must they faithfully replicate and partition their genomic DNA, they must also expand and partition their cytoplasmic contents and organelles as well. It is particularly important that cells inherit sufficient quantities of functionally competent organelles with each division as cells cannot generate many of their organelles de novo [1]. Despite this, mechanisms that ensure proper partitioning of organelles during cell division, particularly membrane-bound organelles like mitochondria and the endoplasmic reticulum (ER), are poorly understood. Delineating these mechanisms is key to understanding how organelle-specific functions regulate proper development, tissue homeostasis and injury repair [2]. The ER is the largest membrane-bound organelle in the cell, and its functions include folding and trafficking of secretory proteins, lipid synthesis and transport, and regulation of cytoplasmic Ca 2þ . During interphase, the ER is continuous with the nuclear envelope (NE) and is distributed throughout the cytoplasm as a network of broad sheets, or cisternae, and thin tubules [3]. This interphase ER distribution depends in large part on numerous associations with the microtubule (MT) cytoskeleton, which involve MT motor-dependent transport, connections with growing MT tips and stable attachments along MT filaments [4]. Importantly, the roles of particular ER morphologies and MT associations in specific ER and cellular functions are poorly understood. It is also unclear whether specific regulation of ER morphogenesis or distribution is required during cell division for the proper execution of mitosis or to ensure functional ER partitioning to progeny cells. Two hypotheses have been proposed to explain ER partitioning and inheritance during cell division [2]. The first proposes that the ER is actively segregated during division, probably through interactions with cytoskeletal elements. This would provide a mechanism for specific regulation of ER partitioning to progeny cells. 
In support of this, and consistent with the association of the ER with MTs during interphase, the ER localizes to the MT-based mitotic spindle in a variety of cell types from different species including sea urchin [5] and Drosophila [6] embryos and mammalian tissue culture cells [7]. Thus, it is expected that disruption of ER-spindle interactions would disrupt ER functions in progeny cells. However, the specific factors that physically link the ER with spindle MTs have not been identified in any animal cell type, and this has precluded a direct test of whether the ER-spindle association is required for functional ER partitioning. Further, several recent studies showing that the ER remains mostly peripheral to the mitotic spindle with no obvious MT contacts, particularly in cultured human cells [8,9], have challenged the idea that spindle association is a universal requirement for ER partitioning. These findings support the second hypothesis, which proposes that stochastic distribution of the ER throughout a dividing cell is sufficient to ensure adequate partitioning to progeny cells. Thus, although the ER is associated with MTs in some dividing cells, this active segregation may not be strictly required as long as each progeny cell acquires enough organelle material. However, it is notable that dissociation of the ER from spindle MTs is most readily apparent in cultured cells such as HeLa and Cos-7, and these cells may not have strict requirements for precise ER inheritance. By contrast, when cells divide in the context of a developing organism in which spatial and temporal coordination of cellular events is crucial, small alterations to ER partitioning may have far-reaching effects. This illustrates the critical importance of studying mitotic ER partitioning in cells dividing within intact, developing tissues, in order to understand how the partitioning mechanisms function within physiological cellular processes. A striking example of how active segregation of cellular components during cell division can have significant consequences for progeny cells within a developing or functional tissue is asymmetric stem cell division. During asymmetric stem cell division, differential partitioning of specific factors results in two progeny cells with different identities or fates, most commonly with one cell programmed to remain a stem cell and the second cell becoming a tissue-specific effector [10]. The establishment of asymmetry in these dividing cells raises an important question that has never been addressed: is the ER asymmetrically partitioned during asymmetric stem cell division? If so, then this would strongly support the hypothesis that highly regulated, active segregation of the ER is required during in vivo cell division. Further, by integrating ER dynamics with known mechanisms that establish asymmetry in these cells, we may be able to glean novel insights into ER partitioning mechanisms. We have taken this approach in the current study by analysing ER partitioning in asymmetrically dividing Drosophila neural stem cells known as neuroblasts (NBs). Asymmetric NB divisions produce a large cell that retains NB identity, and a much smaller ganglion mother cell (GMC) that differentiates to form a functional neuron or glial cell [11]. Our analyses define an asymmetric segregation of the ER to the mitotic spindle poles that results in a larger proportion of the organelle being partitioned to the future stem cell. 
We also show that active, MT-dependent spindle pole segregation is required in vivo for proper ER partitioning in both asymmetrically and symmetrically dividing cells, as well as in cultured human cells. Thus, active spindle pole segregation may be a highly conserved mechanism of ER partitioning that can be subject to precise regulation during specific developmental processes, such as asymmetric stem cell division. Live imaging of Drosophila tissues Whole brains or testes were dissected from third-instar larvae in Drosophila Schneider's medium (Life Technologies) containing Antibiotic-Antimycotic (Life Technologies) and were mounted in the same medium for imaging on a 50 mm gas-permeable lumox dish (Sarstedt). The medium was surrounded by Halocarbon 700 oil (Sigma) to support a glass coverslip (22 × 22 mm, #1.5, Fisher) that was placed on top of the medium [14]. The dish was flipped and placed in a stage incubator heated to 25°C (Bionomic System BC-110, 20/20 Technologies) on the stage of an inverted microscope (Eclipse Ti, Nikon) equipped with a spinning disc confocal head (CSU-22, Yokogawa) and a cooled charge-coupled device camera (Flash4, Hamamatsu). Filters were controlled by an automated controller (MAC 6000, Ludl), and excitation light was provided by 491, 561 and 642 nm solid-state lasers housed in a single laser merge module (VisiTech International). All components were run by METAMORPH software (Molecular Devices). Fifteen images at 1 µm intervals were captured every 2 min. Images were processed and videos were compiled using IMAGEJ software (NIH). For embryo imaging, 20-30 mated female flies were housed at 25°C for 1 h on grape-juice agar plates with a smear of active yeast paste covered by a perforated beaker. Embryos were collected from the plate and washed three times in PBS, de-chorionated for 5 min in 100% bleach and washed an additional three times in PBS. A drop of embryos in PBS was placed on a glass coverslip, excess PBS was removed and the embryos were covered in a drop of Aqua-Poly/Mount mounting medium (Polysciences). A lumox dish was then placed on top of the mounting medium and rotated to disperse the medium and embryos. The mounted embryos were imaged at 25°C by spinning disc confocal microscopy as described above. Embryo microinjection Embryos were collected on grape-juice agar plates, aged on collection plates and dechorionated by hand. Dechorionated embryos were briefly desiccated and microinjected as previously described [15]. The needle concentration of rhodamine-labelled tubulin (Cytoskeleton Inc.) was 2 mg ml⁻¹. Confocal images of injected embryos were obtained with a Zeiss Cell Observer instrument (Carl Zeiss MicroImaging, Inc.) using the 488 nm and 543 nm wavelengths from an argon laser and a C-Apochromat 1.2 NA 100× objective. Images were analysed with IMAGEJ and AXIOVISION (Carl Zeiss MicroImaging, Inc.). Culture and live imaging of S2 and HeLa cells S2 cells (Drosophila Genomics Resource Center) were grown in Drosophila Schneider's medium supplemented with 10% fetal bovine serum (Life Technologies) and Antibiotic-Antimycotic at room temperature in air. For live imaging, cells were plated for 1 h in supplemented Schneider's medium in glass-bottom dishes (MatTek) coated with concanavalin A (0.5 mg ml⁻¹, Sigma). Cells were imaged at 25°C by spinning disc confocal microscopy as described above.
HeLa cells (ATCC) were grown in DMEM (Life Technologies) supplemented with 10% fetal bovine serum at 37°C in 5% CO2. Cells were plated in glass-bottom dishes overnight for live spinning disc confocal imaging. Prior to imaging, culture medium was replaced with Leibovitz's L-15 medium (Life Technologies) supplemented with 10% fetal bovine serum, and cells were imaged at 37°C in air. Fixation and immunofluorescence Whole third-instar larval Drosophila brains and testes were fixed for 20 min at room temperature in PBST (PBS with 0.3% Triton X-100) containing 8% paraformaldehyde (Electron Microscopy Sciences). Fixed tissues were then washed three times in PBST, incubated overnight at 4°C with rotation in primary antibodies diluted in PBST with 5% bovine serum albumin (BSA), washed three times in PBST, incubated 4 h at room temperature in secondary antibodies in PBST with 5% BSA, washed three times in PBST and mounted in Aqua-Poly/Mount mounting medium. S2 cells were prepared for immunofluorescence by plating them in Schneider's medium on concanavalin A coated glass coverslips for 1 h at room temperature. Cells were then fixed in 100% methanol at −20°C for 20 min and washed three times in PBS. HeLa cells for immunofluorescence were grown on glass coverslips overnight and fixed as described [16]. Fixed HeLa and S2 cells were blocked for 1 h at room temperature in PBT (PBS with 0.1% Tween-20), incubated in primary antibody diluted in PBT with 1% BSA for 1 h at room temperature, washed three times in PBT, incubated in secondary antibody diluted in PBT with 1% BSA, washed three times and mounted in Aqua-Poly/Mount mounting medium. Primary antibodies used were mouse anti-a-tubulin (DM1a, 1 : 200, Sigma), anti-phosphorylated histone H3 (1 : 1000, EMD Millipore), guinea pig anti-asl (1 : 10 000, gift from G. Rogers, University of Arizona Cancer Center) and anti-baz (1 : 2000, gift from T. Harris, University of Toronto). Secondary antibodies were Alexa Fluor 488, 568 or 647 (Life Technologies) at 1 : 500. Plasmids and transfections The coding sequence of Drosophila Sec61a was cloned by PCR from EST LD29847 from the DGRC Gold Collection and inserted into the pENTR/D-TOPO plasmid (Life Technologies) according to the manufacturer's instructions. This was then recombined with Gateway Destination Vectors that placed EGFP or tagRFP at the C-terminus of Sec61a under control of the Drosophila ubiquitin promoter sequence. S2 cells were transfected with 2 µg plasmid using Effectene (Qiagen) or Amaxa nucleofection according to the manufacturers' instructions. For combined expression of RFP-Sec61a and GFP-a-tubulin, an S2 line stably expressing GFP-a-tubulin (DGRC) was transfected with RFP-Sec61a. Plasmid encoding EGFP-tagged human Sec61b was obtained from Addgene and transfected into HeLa cells with Lipofectamine 2000 according to the manufacturer's instructions. Quantification of endoplasmic reticulum asymmetry GFP-Sec61a expressing NBs were imaged live, and Z-projections that encompassed the entire ER network were created for each cell at metaphase. The total fluorescence within equally sized regions (approx. 2.25 µm in diameter) centred around each spindle pole was calculated using the Integrated Density function of IMAGEJ. Background from identical regions was subtracted from each pole measurement to obtain corrected ER fluorescence intensities at each apical and basal pole. Total cellular ER fluorescence intensities were similarly calculated from the same images using regions that encompassed all of the ER.
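A minimal sketch of this pole-intensity quantification (including the percentage normalization described in the next paragraph), assuming the metaphase Z-projection has been exported from ImageJ as a 2D array and that pole, background and whole-ER regions are supplied by the user; the function and variable names are illustrative and are not the authors' actual script:

```python
import numpy as np

def integrated_density(img, center, diameter):
    """Sum of pixel intensities inside a circular ROI (analogue of ImageJ's Integrated Density)."""
    yy, xx = np.indices(img.shape)
    r = diameter / 2.0
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= r ** 2
    return img[mask].sum()

def pole_er_fractions(zproj, apical_rc, basal_rc, bg_rc, er_mask, roi_diam_px):
    """Return background-corrected apical/basal ER signal as percentages of total cellular ER."""
    bg = integrated_density(zproj, bg_rc, roi_diam_px)
    apical = integrated_density(zproj, apical_rc, roi_diam_px) - bg
    basal = integrated_density(zproj, basal_rc, roi_diam_px) - bg
    total = zproj[er_mask].sum()          # region encompassing all ER in the cell
    return 100.0 * apical / total, 100.0 * basal / total
```

Per-cell apical and basal percentages collected from many neuroblasts could then be summarized as mean ± s.e.m. and compared with a Student's t-test (for example scipy.stats.ttest_rel), matching the statistics reported below.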
Each apical and basal measurement was then expressed as a percentage of the total ER measurement for each cell. Data are presented as mean + s.e.m. and statistical significance was calculated using the Student's t-test. Spindle poles asymmetrically partition the endoplasmic reticulum in dividing Drosophila neuroblasts Asymmetrically dividing stem cells provide a novel and powerful physiological system in which to investigate the cellular mechanisms that regulate ER partitioning during cell division. To carry out this analysis, we first analysed metaphase third-instar larval Drosophila NBs to determine whether there are specific differences in ER localization or distribution along the cell's apico-basal axis. Green fluorescent protein (GFP)-tagged Sec61a was used to label ER membranes relative to apical and basal domains of mitotic NBs as defined by immunostaining for the Drosophila PAR3 orthologue Bazooka (Baz). We found that the ER was predominantly organized as an envelope immediately surrounding the mitotic spindle MTs (figure 1a, arrowheads); we will refer to this structure as the 'ER envelope'. Consistently, we also identified an extension of ER membrane outside of the ER envelope, specifically positioned near the apical spindle pole (figure 1a, arrow). An analogous ER extension was not seen at the opposite, basal pole. This was a surprising result that led to the hypothesis that asymmetrically dividing NBs organize and distribute ER asymmetrically to the two daughter cells at each division. To test this hypothesis, we carried out live in vivo imaging of NBs [14] with the goal of determining when ER asymmetry is established in the cell cycle and how the ER is distributed to the progeny NB versus GMC. We determined cell-cycle stage based on the rsob.royalsocietypublishing.org Open Biol. 5: 150067 shape of the ER envelope and by timing relative to anaphase onset, while apico-basal polarity was determined based on the size of the progeny cells following division. We found that the ER is uniformly distributed throughout the cytoplasm and around the nucleus during interphase (figure 1b; electronic supplementary material, video S1), though the small size of NBs and dim fluorescence prevented us from differentiating between sheet and tubular ER. During prophase, the NE became spherical with the exception of two conspicuous indentations, probably formed by the two centrosomes as they migrated around the NE to opposite poles (figure 1b, arrow and arrowhead). As the amount of ER increased around the nucleus and the centrosomes, there was a concomitant decrease in cytoplasmic ER. This observation is consistent with the presence of an active mechanism that rearranges the ER during mitotic entry, similar to the tubule to sheet transformation documented in mammalian cultured cells [9]. Notably, ER was recruited to the centrosomes early in mitosis, suggesting that centrosomes are key regulators of ER redistribution. Remarkably, one of the centrosomes (figure 1b, arrow) was associated with significantly more ER than the other (figure 1b, arrowhead). Furthermore, tracking the two centrosomes and associated ER as they fully separated to opposite sides of the nucleus allowed us to determine that the centrosome with more associated ER formed the apical spindle pole, destined for the NB. 
Similar analysis of GFP-atubulin showed that the centrosome that formed the apical spindle pole also had a higher density of MTs during prophase (figure 1c, arrow; electronic supplementary material, video S2), consistent with previously shown centrosome asymmetry at this stage [17,18]. This relationship between ER and MT asymmetry at the two centrosomes suggests that centrosomal MTs may establish ER asymmetry early in mitosis (prophase prior to NEB)) in NBs. Once spindle assembly was complete in metaphase, a prominent ER extension from the apical, but not the basal, spindle pole was detected (figure 1b, asterisk, rsob.royalsocietypublishing.org Open Biol. 5: 150067 concentration in the NB versus GMC. Collectively, we show that the ER is recruited to centrosomes early in mitosis with more organelle material organized by the apical centrosome and inherited by the neural stem cell, potentially resulting in a higher ER concentration needed for NB function. Our results suggest that centrosomes do not merely correlate with the position of ER membranes, but are critical regulators of this ER asymmetry. To determine whether centrosomes are in fact required for establishing ER asymmetry, we analysed NBs from animals lacking centrosomes due to a mutation in the asterless (asl) gene. Although asl mutant cells can form spindles and divide, they lack astral MTs [19]. NBs from animals homozygous for the loss of function asl mecD allele show normal perinuclear ER distribution (figure 1d; electronic supplementary material, video S3), but lack centrosomal/polar accumulation of ER, as expected. Importantly, during metaphase when the ER envelope adopted the typical diamond shape, no ER extensions or additional accumulations at either spindle pole were detected (figure 1d, arrows). This cell still established an asymmetrically located cleavage furrow, typical of about 95% of asl mutant NBs [17,18], and ER membranes are segregated to both the NB and GMC. However, the NB did not inherit any additional apical spindle pole-dependent ER seen in wild-type cells, suggesting that the concentration of ER in the NB and GMC are equalized. We conclude that spindle poles organized by functional centrosomes are required for asymmetric ER partitioning that is established early in mitosis in NBs, which could lead to higher NB ER concentration that could be physiologically relevant. Spindle pole-organized microtubules are required for proper endoplasmic reticulum partitioning Based on our data, one possibility is that NBs have adapted or modified a universal spindle pole-dependent ER partitioning mechanism to achieve functional asymmetry of the organelle. This prompted us to investigate the spindle pole-dependence of the ER in other Drosophila cell types. We began by analysing meiotic spermatocytes in third-instar larval testes. The ER in these cells exhibits a striking distribution in two large crescents centred around each spindle pole during metaphase (figure 2a), forming domains previously termed astral membranes due to their proximity to astral MTs [20]. Consistent with this terminology, we show that the ER does align very closely with astral MTs (figure 2a, arrows), suggesting that interaction with MTs, in addition to centrosomes, may be responsible for the segregation of the ER to spindle poles, similar to what we found in NBs. To determine the dynamics of ER pole segregation, we imaged GFP-Sec61a (ER marker) live throughout meiosis. Histone 2A-mRFP (H2A-mRFP) was also imaged to accurately track cell-cycle progression. 
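The cell-cycle staging used in these movies, where NEB is called from the sudden loss of background H2A-mRFP fluorescence in the nucleus (as described in the next paragraph and in the figure 2 legend), could be automated along the following lines. This is only a sketch: it assumes a per-frame trace of mean nuclear H2A-mRFP background fluorescence has already been measured, and the 50% threshold is an illustrative choice rather than the authors' criterion:

```python
import numpy as np

def detect_neb_frame(nuclear_bg_trace, baseline_frames=5, drop_fraction=0.5):
    """Return the first frame at which nuclear H2A-mRFP background fluorescence
    falls below `drop_fraction` of its interphase baseline (a proxy for NEB)."""
    trace = np.asarray(nuclear_bg_trace, dtype=float)
    baseline = trace[:baseline_frames].mean()        # interphase level
    below = np.nonzero(trace < drop_fraction * baseline)[0]
    return int(below[0]) if below.size else None     # None if NEB is not reached
```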
Similar to NBs, the ER around the NE intensified during prophase as compared with interphase (figure 2b; electronic supplementary material, video S4), indicative of ER envelope formation. At the time of NEB (determined by the sudden loss of background H2A-mRFP fluorescence in the nucleus), the ER began to organize on opposite sides of the ER envelope. By metaphase, all the ER appeared to be localized either to the spindle poles or the ER envelope layers, suggesting dramatic rearrangement from its interphase distribution. Importantly, the spindle pole domains persisted throughout anaphase and telophase such that each domain was clearly partitioned separately into each of the two progeny cells. Thus, like in NBs, a significant proportion of the ER is segregated to spindle poles early in cell division due to interaction with centrosomes and their associated astral MTs, and this segregation results in specific partitioning of the organelle to the two progeny cells. We next tested the specific role of centrosome-dependent astral MTs in spermatocyte ER segregation by analysing asl mutants. Live imaging clearly showed disrupted ER positioning and segregation in asl mutant spermatocytes. Most notably, the spindle pole enrichment of ER was completely absent in these cells (figure 2c, right panels, arrows; electronic supplementary material, video S5). This is consistent with the loss of astral MTs in the mutants, and was in stark contrast to heterozygous control spermatocytes ( figure 2c, left panels). Importantly, the asl mutant spermatocytes exhibited ER structures that were distant from the spindle poles and that were never seen in controls (figure 2c, arrowheads; electronic supplementary material, video S5). This suggests that loss of ER at spindle poles due to loss of astral MTs in asl mutants leads to major ER positioning and partitioning defects. Our fixed analysis confirms the lack of astral MTs and mislocalized ER (figure 2d). However, this analysis revealed an additional finding-although mislocalized away from the spindle poles, ER accumulations in mutants remain localized with distant clusters of MTs. This strongly supports a model in which the ER remains tightly linked with MTs during mitosis/meiosis, where previous studies have mainly focused on this association during interphase. To explore this further, we also analysed abnormal spindles (asp) mutant spermatocytes. Asp is required for proper spindle formation through a poorly understood MT cross-linking function [21][22][23]. Our fixed analysis revealed that ER organization was also severely disrupted in asp mutant spermatocytes (figure 2e). Notably, instead of being organized around spindle poles as in heterozygous controls, the ER in mutants was associated with abnormal MT structures displaced away from the spindle apparatus (figure 2e, arrowheads). Thus, as in asl mutants, the ER appeared to remain linked with MTs despite the MTs themselves being abnormally organized. These collective results from spermatocytes clearly show a strong association of the ER with MTs during meiotic divisions, and these associations occur specifically with astral MTs when spindles are formed normally. We next investigated ER distribution and segregation behaviour during the syncytial nuclear divisions of the Drosophila embryo. Similar to NBs and spermatocytes, we observed robust spindle pole segregation of the ER. During metaphase, the ER displayed a striking radial organization around the centrosome at each pole of a dividing nuclear unit. 
This pole-associated organization persisted through anaphase and into telophase, following which the ER dispersed throughout the interphase syncytium (figure 3a; electronic supplementary material, video S6). Previous analyses have described the spindle pole segregation of the ER in Drosophila embryos, but failed to document the radial pattern around the centrosomes that is almost certainly astral MT-dependent [6]. This advance over previous studies is probably due to improvements in imaging technology. We were unable to directly test the role of centrosomes in organizing the ER in the embryo as acentrosomal embryos are not viable [24]. Instead, we took advantage of a unique situation resulting from 'nuclear fallout' events in which damaged nuclei dissociate from centrosomes and are reabsorbed into the rsob.royalsocietypublishing.org Open Biol. 5: 150067 centre of the embryo, leaving behind free centrosomes near the embryo cortex [25]. These free centrosomes can still nucleate MT asters in-phase with the embryonic nuclear cycle [26]. By imaging the ER (GFP-Rtnl1) and MTs in live embryos, we reliably identified nuclear fallout events in wild-type embryos (figure 3b, asterisk) that produced a pair of free, cortically anchored MT asters (figure 3b, arrows). As the embryo entered the next mitotic cycle, these two remnant asters clearly recruited ER membranes (figure 3b, arrows). This suggests that centrosomes with their associated astral MTs can autonomously recruit ER membranes, without contributions from other spindle or nuclear components. Endoplasmic reticulum partitioning by spindle pole microtubules is conserved across species The consistent astral MT-dependent segregation of the ER to spindle poles we found thus far in intact Drosophila tissue cells prompted us to examine the conservation of this mechanism. We turned to cultured mammalian cells; we also characterize ER morphology in cultured Drosophila S2 cells to serve as a more direct comparison. We found that the majority of the ER in S2 cells was organized in two discrete clusters around the spindle poles ( figure 4a) Figure 2. The ER associates with astral MTs in meiotic spermatocytes. (a) A fixed GFP-Sec61a (green) spermatocyte at metaphase of the first meiotic division was immunostained for a-tubulin (red), asterless (Asl, blue) to localize centrosomes (blue arrows) and phosphorylated histone 3 ( pH3, blue). ER structures are closely aligned with astral MTs (yellow arrowheads). (b) A spermatocyte expressing GFP-Sec61a (green) and H2A-mRFP (red) was imaged live throughout the course of the first meiotic division. NEB was determined based on the sudden loss of background H2A-mRFP fluorescence throughout the nucleus. The approximate outline of the cell (yellow dotted lines) and the astral ER domains at metaphase (arrow heads) are indicated. (c) asl mecD /TM6 control (left panel) and asl mecD /asl mecD (right panel) GFP-Sec61a expressing spermatocytes were imaged throughout meiosis I. Spindle poles (arrows) recruit ER in control cells, but fail to do so in asl mecD /asl mecD spermatocyte; abnormal ER structures (arrowheads) are prominent in the asl MecD /asl MecD spermatocyte. (d ) A fixed GFP-Sec61a (green) metaphase I asl MecD /asl MecD spermatocyte was immunostained for a-tubulin (red), asl (blue) and pH3 (blue). Indicated are spindle poles (arrows) and abnormal ER structures associated with MTs (arrowheads). 
(e) Fixed GFP-Sec61a (green) metaphase I asp MB /TM6 control (left panel) and asp MB /asp MB (right panel) spermatocytes were fixed and immunostained for a-tubulin (red). DAPI staining of DNA is shown in blue in the merged image. Abnormal ER structures are prominent in asp MB /asp MB spermatocyte (arrows). Times, min:s; scale bar, 5 mm. rsob.royalsocietypublishing.org Open Biol. 5: 150067 colchicine caused partial dispersion of these clusters (figure 4a; electronic supplementary material, video S7), suggesting that MT associations may not be a strict requirement for maintaining ER localization at spindle poles in these cells. Alternatively, MT -ER associations may be important for the initial segregation of the ER to spindle poles early in mitosis in S2 cells but may not be required to maintain the ER at these sites at later stages when our colchicine treatments were conducted. We therefore performed another experiment by analysing mitotic S2 cells that only contained one functional centrosome, a common occurrence in these cells due to inherent dysregulation of the centrosome cycle [27]. These cells formed mitotic spindles with only one pole containing the centrosome, which also had significantly more associated ER compared with the acentrosomal pole ( figure 4b). Importantly, the acentrosomal pole completely lacked astral MTs and recruited significantly less ER, consistent with a requirement for astral MTs for spindle pole segregation of the ER. To address the question of mechanistic conservation, we examined HeLa cells, a cell type in which the ER becomes nearly completely dissociated from MTs during mitosis [9,16], leading to the wide-spread conclusion that segregation of the ER occurs via a MT-independent mechanism in these cells. However, most analyses have focused on inner spindle MTs as opposed to astral MTs. The inner spindle region was in fact nearly completely devoid of ER in metaphase HeLa cells except for two small hubs adjacent to the spindle poles ( figure 4c). However, in the cortical region of the cell where the ER was most densely situated, the ER exhibited conspicuous radial extensions that appeared to be organized around the spindle poles, very similar to the organization of astral MTs in this area. Consistent with a role for astral MTs in this radial ER distribution, MT depolymerization caused a complete loss of the highly organized astral-like array of the ER at the spindle poles (figure 4d ). Importantly, following drug treatment, the organelle now exhibited concentric rings that lacked any particular organization or anchoring. Because prior to treatment the ER was only distributed around astral MTs, this result strongly suggests a specific role for astral MTs in the organization and localization of the ER in human cells. The endoplasmic reticulum envelope membranes are actively segregated by spindle microtubules Our results suggest that astral MT association is a common mechanism of mitotic ER positioning in Drosophila and human cells. However, a notable difference is the persistence of an ER envelope that surrounds the spindle apparatus throughout cell division in Drosophila. The significance of this ER envelope is not known, nor are the mechanisms that regulate its organization. We hypothesized that the ER envelope membranes are linked to spindle MTs, as opposed to astral MTs, thus coupling the entire ER to the spindle apparatus. Consistent with this, the ER envelope aligned perfectly with the outermost bundles of spindle MTs in metaphase Figure 3. 
The ER is organized by centrosomes during embryonic nuclear divisions. (a) A stage 9 embryo expressing GFP-Sec61a was imaged live through the course of a single mitotic division. The location of the two centrosomes (arrows) is clear in the metaphase image; note the radial organization of the ER around the centrosomes. (b) A Rtnl1-GFP (green) expressing embryo was injected with rhodamine-labelled a-tubulin (red). Fallout of a single nucleus ( pink asterisk) can be seen between the 0:00 and 10:50 timepoints. As the syncytium begins the next mitosis (15:50 timepoint), two free centrosomes that organize ER membranes can be detected (arrows). These free centrosomes maintain ER association during metaphase (18:00 timepoint), when other centrosomes still associated with nuclei have formed spindles (arrowheads). Times, min:s; scale bar, 5 mm. rsob.royalsocietypublishing.org Open Biol. 5: 150067 spermatocytes (figure 5a). It is clear that these MTs do not contact the chromosomes, and thus they can be classified as non-kinetochore or interpolar MTs. Because we imaged spermatocytes in intact, non-dissociated tissue, we were also able to image spermatocytes that divided with their spindles oriented perpendicular to the imaging plane. This revealed that the ER envelope formed a continuous layer that immediately surrounded the outermost ring of spindle MTs (figure 5b). Importantly, the ER envelope precisely followed the irregular, non-circular geometry of the outer MT ring, suggesting a physical dependence of the ER envelope membranes on the MTs. Live imaging of spermatocytes dividing in the perpendicular orientation further supported this hypothesis as the ER envelope first achieved a perfectly circular geometry early in meiosis, indicating that these membranes form a complete sphere around the entire spindle (figure 5c; electronic supplementary material, video S8). However, the ER envelope next underwent a striking inward deformation that initiated at several discrete points, appearing as if the membranes were being pulled inward at these sites (figure 5c, arrows). These deformations resulted in a non-circular geometry similar to that seen in fixed images (figure 5b), suggesting that interactions of the ER envelope with outer spindle MTs may drive these deformations. We also observed similar ER envelope deformations in NBs (figure 5d) by imaging live, perpendicular spindles. These results indicate that the entire ER network is linked to both astral and the outermost interpolar MTs. Discussion The mechanisms that regulate ER partitioning in dividing animal cells are far from clear, and the fundamental issue of whether the organelle is actively partitioned or stochastically distributed remains controversial. These controversies may be due to several factors in previous studies that have addressed the issue of ER partitioning: first, several recent studies have relied on cultured, transformed cell lines such as HeLa cells that may not recapitulate physiologically relevant processes [9,16,28]; second, potential active partitioning mechanisms, such as spindle MT interactions, have not been directly tested; and third, most analyses have focused on ER interactions with the inner spindle MTs while largely neglecting astral MTs. Our current study addresses these issues by combining genetic and pharmacological manipulations with analysis of ER partitioning in vivo in intact Drosophila tissues, as well as in cultured cells. Our results show that in the rsob.royalsocietypublishing.org Open Biol. 
5: 150067 Drosophila cell types examined, the ER is recruited to centrosomes early in cell division, probably through interactions with centrosomal MTs. This recruitment in prophase is concomitant with centrosome maturation, the process whereby more pericentriolar material is recruited to the centrosome to afford greater MT nucleation and anchorage in preparation for spindle formation. Several models could explain this increase in recruitment of ER to mature centrosomes and astral MTs. One model is that cells simply use the same mechanism of linking the ER and MTs in both interphase and mitosis. Therefore, by increasing MT density at the centrosome in mitosis, more ER is recruited and concentrated at the developing poles. An alternative model is that a new ER-MT linking mechanism is engaged, or activated specifically in mitosis, potentially through regulation by mitotic cyclin/cdks [15]. There is precedence for controlling the ER-MT linkage in a cellcycle-specific manner-STIM1 phosphorylation in mitosis disengages the ER from spindle MTs [16]. Although this is a form of negative regulation, it is not unreasonable to hypothesize the presence of a parallel positive regulatory mechanism. In either model, it is still puzzling as to why the ER is not recruited to all MTs-why is it specific to astral and peripheral interpolar MTs? One hypothesis is that the MT density, or the viscosity of the spindle, forms a physical barrier that prevents ER entry to the interpolar region. We do not favour this hypothesis because we know that artificially linking the ER to MTs using a non-phosphorylatable STIM1 construct can force the ER deep into the spindle region [16]. We favour an alternative hypothesis, whereby specific subpopulations of MTs convey ER-binding capability, while others do not. How might this occur? We are beginning to appreciate the complexity of MT modifications, which can have dramatic effects on MT behaviour and function. A very relevant finding is the presence of detyrosinated tubulin exclusively within the spindle and not in astral MTs [29,30]. One plausible hypothesis is that an unknown ER-MT linker protein cannot bind detyrosinated MTs, which would result in ER exclusion from the spindle region ( figure 6). This would be an exciting future direction because of its implication in asymmetric stem cell divisions-one might envision that unique MT modifications could exist on the apical versus basal spindle poles. In addition, MT motor-dependent ER sliding events preferentially occur on acetylated MTs, further supporting a role for MT modifications in dynamic ER regulation [31]. As the cell proceeds through subsequent stages of mitosis, the ER remains associated with astral MTs, resulting in active Figure 5. The ER envelope associates with peripheral interpolar spindle MTs. (a) A fixed GFP-Sec61a (green) spermatocyte at metaphase of the first meiotic division was immunostained for a-tubulin (red), asterless (Asl, blue) to localize centrosomes and phosphorylated histone 3 (pH3, blue). The ER envelope (arrowheads) overlaps with the outermost interpolar MTs of the spindle. (b) A metaphase spermatocyte fixed and stained as in (a) was imaged perpendicular to the spindle axis. Clear deformations of the ER envelope (arrowheads) are associated with interpolar bundles of MTs. (c) A meiotic GFP-Sec61a expressing spermatocyte that divided with its spindle perpendicular to the imaging plane was imaged live. Inward deformations of the ER envelope are also seen here (arrowheads). 
(d) A mitotic GFP-Sec61a expressing NB was imaged live perpendicular to the spindle axis to show deformations of the ER envelope (arrowheads). Times, min:s; scale bar, 5 mm. rsob.royalsocietypublishing.org Open Biol. 5: 150067 partitioning of the organelle to the two progeny cells. We further show that disruption of centrosomes and astral MTs leads to ER partitioning defects, confirming the obligate role of these cytoskeletal structures. These mechanisms are not limited to Drosophila cells, as we show that mitotic ER positioning also depends on astral MTs in human cells. We also present the important and novel finding that the ER is asymmetrically partitioned in asymmetrically dividing neural stem cells. This ER asymmetry is probably dependent on centrosomal and MT asymmetry [17,18], and results in a higher concentration of ER in the regenerating stem cell compared with the differentiating GMC. Thus, our data suggest that association of the ER with astral MTs may be a universal mechanism of ER partitioning that can be adapted within specific cellular or developmental contexts such as during asymmetric stem cell division. Further, our NB results present the provocative possibility that ER partitioning may have a specific role in the mechanism of asymmetric cell division or tissue development. Identification of the molecular factors that link the ER with spindle MTs is the key to further understanding the role of ER partitioning in organismal physiology and development. Importantly, it is likely that the molecular features of ER partitioning, including the specific factor that links the ER to astral MTs, are highly conserved as we see similarities in systems as disparate as mitotic HeLa and meiotic Drosophila spermatocytes. We believe that Drosophila, with its powerful genetic tools and in vivo analyses, is an ideal system in which to further investigate these mechanisms. A distinct possibility is that MT motors are involved, and systematic analysis of all known Drosophila MT motors is a clear and tenable approach. Other proteins such as spastin, Climp-63 and REEPs have also been shown to associate the ER with MTs in various cell types [4], and possible roles of these proteins in spindle MT attachment should be studied further. Most significant among these, it was recently shown that REEP3 and REEP4 are required for spindle pole focusing in HeLa cells, suggesting a potential role for these proteins in mitotic ER partitioning [32,33]. In conclusion, our results demonstrate that interaction of the ER with spindle MTs is a conserved mechanism that controls the distribution of the organelle during animal cell division. This facilitates equal partitioning of the organelle during symmetric cell divisions, but also allows for specific regulation of ER distribution during specialized processes such as asymmetric cell division. Drosophila will be a powerful system moving forward to identify the specific molecular mechanisms involved and the functional roles of ER partitioning in organismal physiology. Figure 6. Model depicting the association of the ER (green) with spindle MTs (brown). Centrosomes (yellow) and chromosomes (grey) are indicated. The entire ER network associates with either astral MTs or peripheral interpolar MTs, but not kinetochore MTs. This association is mediated by an ER-MT linker protein (pink), the identity of which is unknown. The exclusion of ER from kinetochore MTs may be due to a MT modification (blue circles) that prevents interaction of the ER-MT linker. 
This model depicts asymmetric centrosomes, and the amount of associated ER is a direct function of asymmetric astral MT density. This gives rise to asymmetric ER partitioning in NBs. In symmetrically dividing cells like spermatocytes (not depicted), symmetric MT densities around the two centrosomes result in symmetric ER partitioning.
8,933.4
2015-08-01T00:00:00.000
[ "Biology" ]
SPATIOTEMPORAL CONVOLUTIONAL LSTM WITH ATTENTION MECHANISM FOR MONTHLY RAINFALL PREDICTION INTRODUCTION Rainfall forecast information is one of the crucial analyses that helps regulate water resources, and it often involves several variables since rainfall is part of meteorological phenomena. This prediction becomes more complicated when dealing with the emergence of climate change in tropical areas such as Indonesia, which lies on the equator and is subject to influences from both north and south. Furthermore, climate change has affected rainfall patterns, causing several natural disasters such as heavy rains that result in flooding or prolonged absences of precipitation that result in droughts [1]. Drought Management Plans (DMPs) are regulatory instruments that establish priorities among different water uses and define more stringent constraints on access to publicly available water during droughts and periods of reduced water supply, which climate change makes more likely. To deal with this problem, rainfall prediction with an excellent and accurate method is needed [2]. Precise rainfall forecasts, both short and long term, have significant benefits in water resource management, flood control, disaster reduction, and agricultural management [3]. However, rainfall is a complicated nonlinear atmospheric phenomenon that depends on space and time; besides, many factors can influence rain in a given area [4]. Therefore, it is never straightforward to capture the complexity and uncertainty of rainfall predictability well enough to produce precise and accurate rainfall forecasts [5]. Forecasting rainfall, the beginning of the rainy season, the duration of the rains, and the end of the rainy season is done on a monthly basis, often using a three-month system known as the SPI (Standardized Precipitation Index) method [6]. In addition, monthly rainfall can provide a more accurate distribution of the mean intra-year rain when compared to seasonal rainfall [7]. Hence, it is vital to periodically estimate rainfall on a monthly time scale, where rainfall predictions are usually made using physically based models and deep learning methods [8]. Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) is a quasi-global (50°S-50°N), high-resolution (0.05°) set of daily, pentadal, and monthly rainfall datasets built from environmental records. These datasets cover the land surface of the earth spatially and the period from 1981 to 2020 temporally, making it possible to visualize rainfall conditions at any location on land. CHIRPS was developed by scientists from various countries to support the United States Agency for International Development Famine Early Warning Systems Network (FEWS NET) [9]. The approach is built on thermal infrared (TIR) precipitation estimation, which has been successful in efforts such as the National Oceanic and Atmospheric Administration's (NOAA) Rainfall Estimate. CHIRPS uses the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis version 7 to calibrate global Cold Cloud Duration (CCD) rainfall estimates. In addition, CHIRPS employs a state-of-the-art 'intelligent interpolation' approach that can work with anomalies in high-resolution climatology [10]. CHIRPS is thus a distinctive spatiotemporal dataset that requires specific consideration when applying deep learning predictors such as LSTM and GRU.
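As a concrete illustration of working with this dataset, the sketch below opens the CHIRPS v2.0 monthly netCDF with xarray and selects the 480-month study period; the file name, the use of xarray, and the variable name are assumptions for illustration (the authors themselves work with the per-month raster products and GIS tools described later):

```python
import xarray as xr

# Assumed local copy of the CHIRPS v2.0 monthly netCDF
# (downloadable from https://data.chc.ucsb.edu/products/CHIRPS-2.0/).
ds = xr.open_dataset("chirps-v2.0.monthly.nc")

# Quasi-global 0.05-degree grid, monthly from 1981 onwards;
# restrict to the 480-month period analysed in this study.
precip = ds["precip"].sel(time=slice("1981-01-01", "2020-12-31"))

print(precip.sizes)  # e.g. {'time': 480, 'latitude': ..., 'longitude': ...}
```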
While deep learning approaches mostly use temporal data to build models, spatiotemporal data always demand a different treatment when choosing suitable algorithms. Understanding the features is necessary: the data are high-dimensional and temporally correlated, meaning they are indexed by up to two dimensions in space and one in time [11]. In general, spatiotemporal data exhibit spatial correlation between nearby locations, like neighbouring pixels in a photo, and temporal correlation between adjacent timestamps [12]. To handle spatiotemporal data, Tao et al. [3] used attention mechanisms to enhance the prediction model; the seminal attention mechanism was introduced by Bahdanau [13] to improve the accuracy of machine translation algorithms. Another attention model, Multi-Head Attention, comes from Vaswani et al. [14] and underpins the robust Transformer architecture. Deep learning has spread over several areas of prediction and classification. In sequential or time-series data, Recurrent Neural Networks (RNNs) and their derivatives maintain hidden-state vectors that carry information between time steps and are trained by backpropagation through time [15]. However, RNNs have trouble dealing with long data sequences, because vanishing gradient problems arise when training a traditional RNN over long lags [16]. LSTM offers a solution, using memory cells to improve the RNN and avoid vanishing gradients, and has become more advanced through modifications such as encoder-decoder structures, attention mechanisms, and so on [17]. In this study, the authors propose a Convolutional LSTM with an additional attention layer to enhance the accuracy of monthly rainfall prediction on gridded CHIRPS data. Hyperparameters are tuned manually over many candidate models and then held constant so that all models are compared on the same footing. We compare and analyze each model's loss and performance according to the evaluation metrics most used in hydrology and deep learning. The results indicate that the proposed Convolutional LSTM-AT model is the best so far. We also analyze the spatial and temporal behaviour of the predictions to interpret the physical causality of our model. RELATED WORKS In this section, the authors review relevant research that inspired the construction of the Convolutional LSTM-AT model, including several fundamental studies in rainfall forecasting, sequential data modelling with LSTM-based models, and the attention-based methods originating in machine translation. Rainfall Forecasting Seasonal prediction models are commonly used for rainfall prediction so that government agencies and their tools can issue early warnings when hydrological extremes threaten people. According to climatologists, climate prediction models can be classified into three approaches: the physical or numerical approach, the empirical or statistical approach, and a hybrid physical-empirical approach [18]. Still, rainfall depends on numerous land, ocean, and atmospheric processes coupled in a loop. On the other hand, physical models are generally developed based on interpretations of atmospheric processes, but they frequently show weak predictability in providing good information on annual climate variability [19]. In general, physical-empirical prediction models, which are the most used by climatologists, are developed utilizing the traditional statistical approach.
For instance, Zhu and Li [20] (2017) applied the regression method to predict the wet season in East Asia, and Li and Wang [21] (2018) studied the forecast capability of summertime highly extreme rainfall days in eastern China by utilizing a stepwise regression model. The ability of traditional regression-based methods is likewise inadequate for forecasting highly nonlinear and nonstationary behaviour. Therefore, the connection of the local climate in a specific area with ocean-atmospheric variables such as SST or sea level pressure cannot be described by employing traditional regression models [22]. LSTM-Based Methods Long Short-Term Memory (LSTM), introduced by Hochreiter and Schmidhuber [23], is a modified version of the recurrent neural network designed to resolve the vanishing gradient problem and the difficulty of learning long-distance (time) dependence in sequential data. Yuan et al. [24] proposed an LSTM network model for building occupancy, simulating energy, operation, and management. ElSaadani et al. [25] used the LSTM model to predict soil moisture and fill gaps between observations. Further, Zhou et al. [26] combined the LSTM model with a machine-translation-based attention mechanism to recognize skeleton-based abnormal behavior; their conclusions indicated that attention-based LSTM recognizes behavior better than an LSTM-only model. Attention-based methods In deep learning, one way to increase accuracy in the model learning process is through attention mechanisms, inspired by selective human vision, which chooses which information to pay special attention to and which to reject. In general, attention mechanisms have been applied in various areas of research and industry, such as machine translation, image captioning, and video motion recognition. Song et al. [27] proposed an end-to-end spatiotemporal attention model for recognition and prediction of human actions in video frames. In addition, Chen et al. [28] proposed a model combining spatial and channel attention for image labeling with an additional convolutional neural network, obtaining good results on their dataset. Ding et al. [29] proposed a spatiotemporal LSTM to predict floods in three basins in China. Tao et al. [3] also proposed an LSTM with an attention mechanism to improve monthly rainfall prediction, which performed well at most spatial points. Inspired by the models above, we propose a multi-head attention LSTM to optimize monthly rainfall prediction with spatiotemporal data. STUDY AREA AND DATASET In this study, Kalimantan Timur was selected as the study area to evaluate and compare the performance of several LSTM models in forecasting monthly rainfall. East Kalimantan is located on the eastern side of the island of Borneo, Indonesia. The monthly rainfall data covering January 1980 to December 2020 are CHIRPS data accessible from https://data.chc.ucsb.edu/products/CHIRPS-2.0/. Forty years of these data were used as the dataset for this model, as shown in Figure 1 (December 2020 shown as a sample). The CHIRPS rainfall data are distributed as worldwide raster data, whereas this research focuses only on the Kalimantan Timur region, so the data need to be split. First, a boundary map of the Kalimantan Timur area is required from https://tanahair.indonesia.go.id/; because the available boundaries are city- and district-level data, combining them using the ArcGIS application is necessary.
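A minimal sketch of this regional clipping step, described here and in the next paragraph. The authors perform the split in ArcGIS/SAGA; the Python alternative below is purely illustrative, and the bounding-box coordinates are rough placeholder values, not the exact extent used in the study:

```python
import xarray as xr

ds = xr.open_dataset("chirps-v2.0.monthly.nc")       # as in the previous sketch
precip = ds["precip"].sel(time=slice("1981-01-01", "2020-12-31"))

# Approximate bounding box for East Kalimantan (illustrative values only).
lat_min, lat_max = -2.5, 2.0
lon_min, lon_max = 113.5, 119.0

# CHIRPS latitudes are stored south-to-north, so a plain slice works here.
kaltim = precip.sel(latitude=slice(lat_min, lat_max),
                    longitude=slice(lon_min, lon_max))

# Mask ocean / no-data cells (CHIRPS uses -9999 as its missing value).
kaltim = kaltim.where(kaltim >= 0)
```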
Furthermore, after the boundary for the East Kalimantan region is obtained, the worldwide rainfall raster must be split using the SAGA application. It should be noted that the split process requires the longitude and latitude extents and a grid size matched to the worldwide raster data, which is 0.05° × 0.05°; the result can be seen in Figure 2. As shown in Figure 2, the visualization uses black and white, where black represents the sea surface and white the island surface. Raster data are one of the best formats for representing a surface, since a raster can hold multiple bands of data to describe complex spatial conditions. CHIRPS contains a single band holding monthly precipitation values without additional variables. It can be seen in Figure 2 that the data have a three-dimensional structure. As shown in Figure 3, the data comprise an 89 × 89 spatial grid and 480 temporal steps, in this case monthly data. This three-dimensional structure makes the research more complex, since a specific method is needed so that neither the spatial nor the temporal dimension is biased or discarded. PROPOSED METHOD Rainfall is critical in supporting human life; besides, various policies often consider rainfall the main factor. Based on rainfall data, climate classification can be done according to the ratio between the average number of dry months and the average number of wet months. A dry month occurs when the monthly rainfall is less than 60 mm/month, while a wet month occurs when the monthly rainfall is above 100 mm/month. A humid month occurs between the dry and wet months, when the monthly rainfall is between 60 and 100 mm/month. Overview The variety of data and models keeps growing, offering many perspectives for understanding the data and building the best alternative model. The authors reviewed the literature to identify the newest and best-performing models and the strengths and weaknesses of each. Still, existing rainfall forecasting models struggle to predict rainfall accurately and precisely. Hence, the authors built the proposed Convolutional LSTM-AT model as an alternative solution to optimize monthly rainfall prediction with the spatiotemporal dataset. Data Preprocessing The data preprocessing stage is the data selection stage, which aims to obtain relevant data for use. In raw data one often finds missing values, unrecorded values (misrecording), inadequate sampling, and other issues. However, because this research uses secondary rather than raw data, preprocessing is devoted to handling the spatial and temporal structure. In addition, preprocessing focuses only on cells that contain values, so cells with no data are not used. FIGURE 4. Illustration of spatiotemporal data using the sliding window in a spatial perspective. In this study, focal operation theory is implemented, a spatial function that calculates the output value of each cell from neighborhood values, similar to the k-nearest neighbours (K-NN) machine learning algorithm, as shown in Figure 4 [30]. In addition, this theory is also commonly used for convolutions, kernels, and moving windows in deep learning algorithms such as CNNs or RNNs. A moving window can be imagined as an arrangement of square cells of a specific size, 3 × 3 in this study, that shifts its position by certain steps.
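A minimal sketch of this moving-window extraction, which is elaborated further in the next paragraph. It assumes the clipped data have already been loaded into a NumPy array of shape (480, 89, 89); the 3 × 3 spatial patch, 12-month input window and 1-month target follow the description in the text, while the function and variable names are illustrative:

```python
import numpy as np

def make_windows(cube, patch=3, steps_in=12):
    """Slice a (time, rows, cols) rainfall cube into samples of shape
    (steps_in, patch*patch) with the next month's centre value as target."""
    T, R, C = cube.shape
    half = patch // 2
    X, y = [], []
    for r in range(half, R - half):                  # slide over rows
        for c in range(half, C - half):              # then columns, left to right
            block = cube[:, r - half:r + half + 1, c - half:c + half + 1]
            if np.isnan(block).any():                # skip sea / no-data cells
                continue
            for t in range(T - steps_in):
                X.append(block[t:t + steps_in].reshape(steps_in, patch * patch))
                y.append(cube[t + steps_in, r, c])   # 13th month at the centre cell
    return np.asarray(X), np.asarray(y)

# cube = kaltim.values          # (480, 89, 89) array from the clipping step
# X, y = make_windows(cube)     # X: (n_samples, 12, 9), y: (n_samples,)
```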
As the operation is applied to each cell of the moving window, the values in the raster tend to become smoother; it was adopted in this study to smooth the predicted values in the spatial dimension. Spatiotemporal data are generally located in continuous space, while classical data sets such as images or video are usually defined on a discrete domain. Spatiotemporal data patterns usually present very complex spatial and temporal properties, and correlations between data points are challenging to explain with traditional methods. Finally, one of the standard statistical assumptions is that samples are obtained independently. However, this does not hold in spatiotemporal analysis, because spatiotemporal data tend to be highly correlated, so the observations cannot be studied separately. As explained earlier, the data at each time unit (temporal step) are 89 × 89 in extent, with a temporal length of 480, as shown in Figure 4. Hence, for modeling, the data are taken spatially in 3 × 3 patches over 13 months (temporal), and a sliding window slices these data along the temporal axis. The window moves to the right by a single step, and after the last window on the right it continues in the next row, again from left to right; this can be seen in the blue area in Fig. 4 and proceeds until the end of the spatial data at the bottom-right corner. Data Clustering Clustering is a data mining technique for finding groups of data with similar characteristics; it belongs to traditional machine learning and is an unsupervised algorithm, requiring only training data without target data [31]. In theory, cluster analysis is a tool for grouping data based on variables or features so as to maximize the resemblance of characteristics within each cluster and maximize the differences between clusters [32]. A popular algorithm is K-means clustering, which groups data based on the distance between the data points and the cluster centroids, obtained through an iterative process [33]. The analysis needs to determine the number of clusters K as an input to the algorithm. The objective function is given in Eq. (1), $$J = \sum_{j=1}^{K} \sum_{i=1}^{n} \left\lVert x_i^{(j)} - c_j \right\rVert^2 \tag{1}$$ where $J$ is the objective function, $K$ is the number of clusters, $n$ is the number of cases, $x_i^{(j)}$ is a case assigned to cluster $j$, and $c_j$ is the centroid of cluster $j$. In K-means clustering, this distance can be measured using several distance measures: Euclidean distance, Manhattan distance, squared Euclidean distance, and cosine distance. The choice of distance measure affects how the algorithm calculates similarity within a cluster and the cluster shape. Nevertheless, problems arise when determining the number of clusters K, because no theory states how to choose it well, even though the value of K is essential for finding the clusters. The researchers solve this problem using the Elbow Method, a visual assessment of a line graph in which the x-axis is the number of clusters K and the y-axis is the Within-Cluster Sum of Squares (WCSS). Convolutional LSTM-AT LSTM is derived from the RNN for sequential data and has three gates: an input, an output, and a forget gate. These gates allow the LSTM to store and access information or characteristics of the data over time, following Hochreiter and Schmidhuber [23], mitigating the vanishing gradient problem.
The model parameters include the input weights and the bias terms; $i_t$, $o_t$, $f_t$ and $\tilde{C}_t$ respectively represent the input gate, output gate, forget gate and candidate memory, $h_t$ denotes the hidden state and $\sigma$ the sigmoid activation function, which, depending on the data, can sometimes be replaced by the hyperbolic tangent or ReLU [29][34][35]. The candidate memory and cell state are updated as $$\tilde{C}_t = \tanh(W_C h_{t-1} + U_C x_t + b_C), \qquad C_t = f_t * C_{t-1} + i_t * \tilde{C}_t \tag{4}$$ The attention mechanism is often used to optimize sequence-handling models in deep learning. Hard attention refers to selecting a single input feature, which means the attention weight can only be 0 or 1; soft attention refers to a weight between 0 and 1, so the range of weight selection is more flexible [36]. Since these attention models were introduced by Bahdanau et al. [13] and, as Multi-Head Attention, by Vaswani et al. [14], additive attention has empirically been shown to improve model performance and make the weight given to each unit visible. FIGURE 5. Spatiotemporal Convolutional LSTM with an attention layer. Modifying the original LSTM with an attention mechanism is necessary to fully utilize the spatiotemporal input information. The authors take rainfall as the input feature, and the output of the model is the next n-step rainfall prediction. Spatial and temporal attention weights affect the input and output of the LSTM cells [37]. With the help of the spatiotemporal attention module, the authors were able to dynamically adjust attention weights and improve the performance of the LSTM cells [38]. This model uses the Adam optimizer [39] for training, as shown in Figure 5. Before training the model, the network architecture must be determined, such as how many layers are used, the number of neurons in each layer, the activation function, and other parameter values; these are listed in Table 1. For the input layer, 9 spatial features are used as input neurons; the number 9 comes from the 3 × 3 spatial patch. The temporal data then have 12 timesteps, with one further time step as the target. Experimental Design In total, five models are built so that the proposed model can be compared against the others; the whole architecture is explained in Table 1. Postprocessing aims to produce better rainfall predictions than "raw" (unprocessed) hydrological simulations. For this aim, it is important to evaluate the models' performance and compare them with each other to conclude which model is the best. Several metrics are used to evaluate predictions for different lead times. Since accurate and reliable predictions are crucial during rainfall events, the primary accuracy measure for a deterministic forecast is the root-mean-square error (RMSE) in equation (8): $$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2} \tag{8}$$ where $\hat{y}_i$ denotes the $i$-th predicted monthly rainfall, $y_i$ denotes the corresponding observation, and $n$ represents the total number of monthly rainfall predictions. Compared with the mean absolute error (MAE), RMSE penalizes significant errors [40], which is desirable for high-rainfall forecasts. Unlike RMSE, which gives relatively high weight to significant errors, the Mean Absolute Error (MAE), a linear statistical measure, is more applicable when the overall impact of errors is proportional to the increase in error; MAE can be formulated as [40] in equation (9): $$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right| \tag{9}$$ RESULTS AND DISCUSSION Seven models have been built to forecast rainfall over the Kalimantan Timur area, using a 12-month time step to predict one month ahead.
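To make the architecture concrete, below is a hedged Keras sketch of a convolutional LSTM with additive attention over the (12 timesteps × 9 spatial features) inputs described above. The layer sizes, the use of Conv1D plus AdditiveAttention, and TensorFlow itself are illustrative assumptions, since the exact configuration lives in the paper's Table 1:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_conv_lstm_at(steps_in=12, n_feat=9, units=64):
    """Illustrative Conv + LSTM + additive-attention regressor for one grid cell."""
    inp = layers.Input(shape=(steps_in, n_feat))                # 12 months x 3x3 patch
    x = layers.Conv1D(32, kernel_size=3, padding="same",
                      activation="relu")(inp)                   # convolution along time
    seq = layers.LSTM(units, return_sequences=True)(x)          # per-step hidden states
    query = layers.Reshape((1, units))(layers.LSTM(units)(x))   # summary state as query
    ctx = layers.AdditiveAttention()([query, seq])              # Bahdanau-style attention
    out = layers.Dense(1)(layers.Flatten()(ctx))                # next-month rainfall
    return Model(inp, out)

model = build_conv_lstm_at()
model.compile(optimizer="adam", loss="mse",                     # Adam, as in the paper
              metrics=[tf.keras.metrics.RootMeanSquaredError(), "mae"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val))   # windows from the sketch above
```

RMSE and MAE reported on held-out samples by this setup then correspond directly to equations (8) and (9).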
Those models include:
• RNN: a Recurrent Neural Network that allows previous outputs to be used as inputs while maintaining hidden states.
• GRU: a Gated Recurrent Unit, a gating mechanism implemented in recurrent neural networks.
• LSTM: a Long Short-Term Memory network, a well-known RNN variant with three gates.
• Convolutional LSTM-AT: a combination of convolution and LSTM with an attention layer, as shown in Figure 5.
Clustering Result Every spatial point has a different statistical distribution, so different models should be trained for different clusters of spatial points with similar characteristics. Because of that, we use K-means clustering to group the spatial points and the Elbow method to find the optimal number of clusters. In the Elbow method, the number of clusters K is varied from 1 to 10, and for each value of K the Within-Cluster Sum of Squares (WCSS), the sum of squared distances between each point and the centroid of its cluster, is calculated. When the WCSS is plotted against K, the curve looks like an elbow: the WCSS is largest at K = 1, decreases rapidly as the number of clusters increases, and then flattens, running almost parallel to the x-axis. The K value at this bend is taken as the optimal number of clusters. Figure 6 shows that the best number of clusters is 4, the largest number of clusters that still yields a significant reduction in within-cluster distance; the resulting groups are Cluster 0, Cluster 1, Cluster 2, and Cluster 3. This paper evaluates all clusters as input candidates for the proposed model, since every cluster has its own characteristics that determine which spatial points it contains; this clustering step may also serve as a starting point for exploring other clustering approaches. As shown in Table 2, the WCSS values match those plotted in Figure 6, and Table 3 lists the locations assigned to each of the four clusters. Convolutional LSTM-AT Result The first step of this experiment is building the proposed method, an LSTM with an additional attention layer. The main challenge in building the model is finding suitable hyperparameter values. As shown in Table 1, we use a fixed set of hyperparameters and build all models with the same hyperparameters but different architectures. The results show that the proposed method still performs best in terms of the average error over spatial points. The attention-based models are more accurate and robust than the original LSTM model and reduce the number of errors significantly, which indicates that the proposed method is also the most stable in reaching the minimum error over spatial points; with smaller outputs, this model should perform even better on the CHIRPS data. All model performances were satisfactory in terms of average MAE, since the average is taken over all spatial data, which have different characteristics. The RMSE and MAE of the predictions from the models in the experiments are shown in Table 4. On the CHIRPS dataset, the proposed Convolutional LSTM-AT model has the lowest error, even when using the maximum value over all spatial targets.
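For reference, the two error metrics reported in Table 4 can be computed as in the following minimal sketch; the array names and rainfall values are placeholders, not data from the experiments.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: penalizes large errors more heavily than MAE."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error: grows linearly with the size of the errors."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

# Placeholder monthly rainfall values (mm) for one spatial point.
observed  = np.array([120.0, 95.5, 60.2, 30.1, 10.4, 5.0])
predicted = np.array([110.3, 90.0, 72.5, 25.8, 12.1, 7.9])
print(f"RMSE = {rmse(observed, predicted):.2f} mm")
print(f"MAE  = {mae(observed, predicted):.2f} mm")
```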
However, we can infer from Table 5 that the dataset is already split accordingly. For future work, we will look for other ways to reduce the error of the model. Possible directions include improving the data preprocessing and training on 3-dimensional data without losing spatial information. Besides, we will investigate more architectures and further develop spatiotemporal approaches. We will also consider improving the performance of the model by utilizing graph information about the areas being predicted. Moreover, it would be valuable to add flood data augmentation and a physical interpretation of the model so that the predictions match the ground truth more closely.
5,507.6
2022-01-01T00:00:00.000
[ "Computer Science" ]
Conductivity Prediction Method of Carbon Nanotube Resin Composites Considering the Quantum Tunnelling Effect Understanding and predicting the conductivity of carbon nanotube resin composites are essential for structural health detection and monitoring applications. Due to the complexity in the composition of carbon nanotube resin composites, it is of practical significance to develop a method for predicting the conductivity with a view to design and making of the composite. In this paper, the influence of carbon nanotube tunnelling on the conductivity was investigated thoroughly, where the tunnelling conductivity effect is considered as an independent conductive phase. Then, the effective medium model and the Hashin–Shtrikman (H–S) boundary model are used to predict the conductivity of carbon nanotube resin composites. The results presented in this paper show that the developed method can reduce the prediction range of the H–S boundary model and improve the prediction accuracy of the lower bound of the H–S boundary model. The results also show that the tunnelling has little effect on conductivity prediction based on the effective medium model. Based on the results, the effects of nanotube conductivity, the aspect ratio and the barrier height on the prediction of the effective conductivity are discussed to provide a guidance for the design and making of the composites. Introduction Fiber-reinforced resin composites have the advantages of light weight, high strength, corrosion resistance, designability and easy construction [1,2]. These materials have attracted extensive attention in the field of civil engineering detection and reinforcement [2][3][4]. Compared with traditional reinforcement methods, the use of fiber-reinforced resin composites for structural reinforcement and repair has the advantages of fast construction speed, low cost, high efficiency and low later maintenance costs. The use of fiber-reinforced resin composite plates to strengthen and repair concrete structures has been widely used in bridges, tunnels, building structures and other projects [5,6]. However, since large-scale civil engineering facilities experience erosion due to various complex environmental factors, these structures will exhibit ageing, brittle fracture, cracking and other phenomena over time [7,8]. Therefore, it is necessary to carry out health detection and performance monitoring of the structures, especially the reinforced parts of the existing structure. The use of the change characteristics of the electrical properties of fiber-reinforced resin composites to monitor the change in stress state has become a research hotspot [7,9]. Particularly in the monitoring of structural damage or concrete crack propagation, the change in electrical properties of fiber-reinforced resin composites is used for testing, which is simple, intuitive, efficient and fast [10,11]. Compared with traditional macrofibers, carbon nanotubes have the advantages of high conductivity, large specific surface area, strong corrosion resistance and smaller scale, which can realize the micromodification of resin composites [10,12,13]. Using the change in electrical properties of carbon nanotube resin composites to monitor the stress state of the structure, first, composite resins with excellent properties should be prepared. 
Lin Shaofeng [13] discussed the preparation of carbon nanotube resin composites and found that when the content of carbon nanotubes is 0.5~2%, the electrical conductivity of the composites shows a rapid upwards trend. When the content of carbon nanotubes exceeds 2%, the electrical conductivity of the materials increases slowly. Moisala et al. [14] studied the effects of single-wall carbon nanotubes and multiwall carbon nanotubes on the electrical properties of epoxy resin. The results show that the multiwall carbon nanotube epoxy resin composite has a lower percolation threshold than the single-walled carbon nanotube epoxy resin composite [14]. After surface acid treatment and oxidation treatment, carbon nanotubes are more easily dispersed in epoxy resin. In addition, the introduction of functional groups into oxidized carbon nanotubes can further improve the conductivity of resin composites to prepare composites with greater conductivity [15,16]. Theoretically, the conductivity of carbon nanotube resin composites comes from the number of conductivity networks formed by the overlapping of carbon nanotubes and the tunnelling effect between carbon nanotubes. To measure this effect, it is particularly important to accurately predict the conductivity of composites. To date, there have been prediction models based on micromechanics [17,18]. However, the existing prediction models have an insufficient understanding of the tunnelling effect of carbon nanotubes, and the prediction conclusions have some limitations. In our previous work, we considered the influence of the tunnelling effect on conductivity and the piezoresistive effect and found that considering the influence of the tunnelling effect can improve the prediction accuracy of the Mori Tanaka method [19]. In order to further analyze the practicability of the method, expand the application scope, and determine the influence of relevant important parameters, we conducted further research. In this paper, we continue to extend this idea to the effective medium model and the Hashin-Shtrikman (H-S) boundary model and further analyze the influence of relevant parameters on conductivity prediction results. Conductivity Prediction Method The conductivity of resin materials mostly ranges from 10 −16 to 10 −12 S/m, which is generally considered insulating. Carbon nanotubes have extremely excellent electrical properties, and their conductivity is generally from 1 to 100,000 S/m. When carbon nanotubes are added into the resin matrix, the average spacing between carbon nanotubes gradually decreases with increasing volume fraction of carbon nanotubes. When the average spacing reaches a certain value, a tunnelling current is initiated, and the carbon nanotubes and the tunnelling current begin to form a conductive network in the matrix. In a carbon nanotube resin composite, when the tunnelling effect occurs [18][19][20][21], the interaction model is shown in Figure 1 below. The average spacing between adjacent carbon nanotubes in the resin composite can be recorded as d a , which conforms to the following exponential distribution [18]: In the Equation (1), f is the volume fraction of carbon nanotubes, f cp is the corresponding volume fraction when the percolation threshold is reached, and d cp is the tunnelling spacing between carbon nanotubes. In a previous study [18], a tunneling spacing of 1.8 nm is suggested. 
The volume fraction corresponding to the percolation threshold is related to the length-to-diameter ratio of carbon nanotubes, and the corresponding relationship is as follows [18]: where α is the aspect ratio of the CNT, L is the length of the carbon nanotube and r c is the inner diameter radius. related to the length-to-diameter ratio of carbon nanotubes, and the corresponding relationship is as follows [18]: where α is the aspect ratio of the CNT, is the length of the carbon nanotube and is the inner diameter radius. The tunnelling conductance between carbon nanotubes can be recorded as [18]: The specific values of parameters (m, e, γ , h, da) in the above Equation (5) can be found in [18]. The conductivity of the polymer is between 10 −16 and 10 −12 S/m. Therefore, in previous studies, the influence of tunnelling in the polymer is often ignored. Now, we take the tunnelling conductivity as the second phase to replace the conductivity of the polymer, the effective conductivity of the polymer can be obtained in a new way. The effective medium model and the H-S boundary model can be expressed as following [18,22]: The effective medium model: In the above Equation H-S boundary model: The tunnelling conductance between carbon nanotubes can be recorded as [18]: The specific values of parameters (m, e, γ, h, d a ) in the above Equation (5) can be found in [18]. The conductivity of the polymer is between 10 −16 and 10 −12 S/m. Therefore, in previous studies, the influence of tunnelling in the polymer is often ignored. Now, we take the tunnelling conductivity as the second phase to replace the conductivity of the polymer, the effective conductivity of the polymer can be obtained in a new way. The effective medium model and the H-S boundary model can be expressed as following [18,22]: The effective medium model: 2(n − n e ) n e + (n − n e )S 11 + n − n e n e + (n − n e )S 33 = 0 where n e = σ e /σ m , n = σ cnt /σ m , In the above Equation (6), S 33 = 1 − 2S 11 and σ cnt is the conductivity of the carbon nanotube. The σ m can be obtained by Equation (5), and α is the aspect ratio of the CNT. H-S boundary model: where the parameters ( f , σ m , σ cnt ) are the same to those of the Equation (6). Equations (8) and (9) are used to calculate the H-S upper-and lower-bound model conductivity, respectively. The solution process of the effective conductivity of carbon nanotube resin composites is as follows: First of all, to determine the relevant parameters of carbon nanotube resin composites, such as α, σ cnt , f , m, e, γ, h, d cp and σ 0 . Secondly, to calculate the percolation threshold according to Equation (2), and then calculate d a according to Equation (1). Validation and Comparison with Experiments In this Section, the developed model is verified, with experimental results which are shown in Figure 2 without tunnelling effect and Figure 3 with tunnelling effect. In both figures, σ cnt = 10 4 S/m, γ = 2.5 eV, α = 100 and d cp = 1.8 nm. An insulating bisphenaol-F epoxy resin (jER806, Japan Epoxy Resins, Co., Ltd.) and an amine Hardener (Tomaido 245-LP) were used at a ratio of 1:2. The preparation process of the polymer can be found in [21]. It can be seen from Figures 2 and 3 that the tunnelling effect of carbon nanotubes has little impact on the prediction results of the effective medium model but has significant impact on the results predicted by the boundary model, especially the lower boundary model. 
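To illustrate how the boundary model responds to the matrix conductivity, the following is a minimal numerical sketch using the textbook isotropic two-phase Hashin–Shtrikman bounds. The matrix conductivity values, and the idea of folding the tunnelling contribution into a single effective matrix conductivity, are illustrative assumptions rather than the exact Equations (8) and (9) of the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def hs_bounds(f, sigma_m, sigma_cnt):
    """Textbook Hashin-Shtrikman bounds for an isotropic two-phase composite.
    f         : volume fraction of the conductive phase (CNTs)
    sigma_m   : matrix conductivity (here: a tunnelling-modified matrix, S/m)
    sigma_cnt : CNT conductivity (S/m)
    """
    lower = sigma_m + f / (1.0 / (sigma_cnt - sigma_m) + (1.0 - f) / (3.0 * sigma_m))
    upper = sigma_cnt + (1.0 - f) / (1.0 / (sigma_m - sigma_cnt) + f / (3.0 * sigma_cnt))
    return lower, upper

# Illustrative sweep: CNT conductivity 1e4 S/m; two matrix values, without and
# with an (assumed) tunnelling contribution to the matrix conductivity.
f = np.linspace(0.001, 0.05, 200)
sigma_cnt = 1.0e4
for sigma_m, label in [(1.0e-13, "insulating matrix"),
                       (1.0e-6, "tunnelling-modified matrix")]:
    lo, up = hs_bounds(f, sigma_m, sigma_cnt)
    plt.plot(f, lo, label=f"H-S lower, {label}")
    plt.plot(f, up, "--", label=f"H-S upper, {label}")
plt.yscale("log")
plt.xlabel("CNT volume fraction f")
plt.ylabel("effective conductivity (S/m)")
plt.legend()
plt.show()
```

With an insulating matrix the lower bound stays near 10^-15 S/m, while raising the effective matrix conductivity lifts the lower bound substantially, which is consistent with the behaviour discussed above.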
When the tunnelling effect of carbon nanotubes is not considered, the results predicted by the H-S lower-bound model change little with the increase in the volume of carbon nanotubes. This means that carbon nanotubes resin composites are almost insulated with the conductivity of approximate 10 −15 S/m. When the tunnelling effect is considered, the results predicted by the H-S lower-bound model increase sharply and then gradually to more than 10 −2 S/m with the increase in the CNT content. At the same time, the predicted results show an obvious seepage phenomenon when the content of carbon nanotubes increases. When the volume fraction of carbon nanotubes is greater than 0.01, experimental results concentrate between those predicted from lower-and upper-bound models. Since the lower bound prediction is close to experimental results, accuracy of the prediction is significantly improved. When the tunnelling effect between carbon nanotubes is considered, the results predicted by the effective medium model are also close to the experimental results. This means that neglecting the influence of the tunnelling effect on matrix conductivity in Figure 3. Comparison between the prediction results and existing experimental data [21,[23][24][25][26][27][28] considering the tunnelling effect. When the tunnelling effect between carbon nanotubes is considered, the results predicted by the effective medium model are also close to the experimental results. This means that neglecting the influence of the tunnelling effect on matrix conductivity in previous studies [26][27][28] may lead to large errors in predicting the conductivity, This is consistent with our previous findings [19]. It should be noted that we use the literature data [21,[23][24][25][26][27][28] to verify the validity of the calculated model. Since the temperature of this composites changes slowly during engineering application, we do not consider the influence of temperature. Parametric Study The production processes of carbon nanotubes are complex and diverse which leads to great differences in their characteristics. As such the conductivity and the aspect ratios of carbon nanotubes produced by different processes are different. When carbon nanotubes are mixed into resin materials, the performance of the composites varies significantly. To investigate the effect of different conductivity and the aspect ratios of carbon nanotubes on the performance of composites, a sensitivity analysis is carried out, given that the barrier height γ is between 1.0 and 5.0 eV according to [24]; the aspect ratios of carbon nanotubes are 50, 100, and 200, and the conductivities of the carbon nanotubes are 100, 1000 and 10,000 S/m. The results are presented here. As can be seen from Figure 4, for the effective medium model, the increase in carbon nanotube conductivity has little effect on the performance of carbon nanotube resin composites before the percolation threshold but has a significant effect after exceeding the percolation threshold. When the conductivity of carbon nanotubes is 100, 1000 and 10,000 S/m, respectively, all three predicted conductivity curves contain an abrupt phase and a stable phase, with the abrupt phase basically identical to all three cases. It can also be seen that the composites with the lowest conductivity of carbon nanotubes reach the stable phase first, and the composites with the highest conductivity the last. The perco-lation threshold is near 0.003~0.008. 
At this point, the results predicted by the effective conductivity differ by approximately an order of magnitude, corresponding to the given conductivity of carbon nanotubes. This may be because after reaching the percolation threshold, the influence of the volume fraction of carbon nanotubes on its conductivity is less than that of its own conductivity, and the conductivity of carbon nanotubes plays a dominant role [27,28]. Figure 5 shows that when the tunnelling effect is considered, the greater the conductivity of CNTs, the greater the effective conductivity of the composite, and the magnitude of increase is close. As can be seen in Figure 5, the results predicted by the H-S lower-bound model hardly change with the increase o in f the intrinsic conductivity of carbon nanotubes. When the conductivity of carbon nanotubes increases from 100 to 10,000 S/m, the value predicted by the H-S lower bound is basically unchanged, with the three predicted curves almost identical. The change in conductivity of carbon nanotubes has a great influence on the results predicted from H-S upper bound theory, with the predicted conductivity close to the intrinsic conductivity of carbon nanotubes. When the conductivity of carbon nanotubes increases by an order of magnitude, the effective conductivity of composites predicted by the H-S upper bound theory increases by an order of magnitude correspondingly. From Figure 5, when the conductivity of carbon nanotubes increases to 10,000 S/m, most of the experimental results on conductivity obtained from the literature fall within the range predicted by the upper and lower limit models. This is because that this conductivity value is adopted by most literatures [23][24][25][26][27][28]. The influence of the length-to-diameter ratio on the conductivity of carbon nanotube resin composites is shown in Figure 6, given all other parameters unchanged. Considering the influence of the tunnelling effect, when the CNT length-to-diameter ratio is 50, the percolation threshold predicted by the conductivity of the effective medium model is approximately 0.005. When the length-to-diameter ratio of carbon nanotubes increases, the percolation threshold decreases. For the same volume fraction of carbon nanotubes, the larger the aspect ratio of carbon nanotubes, the larger the value predicted by the effective conductivity. [21,[23][24][25][26][27][28] with the variable CNT aspect ratio. Figure 6. Comparison between the prediction results of the effective medium model and existing experimental data [21,[23][24][25][26][27][28] with the variable CNT aspect ratio. From Figure 7, it can be seen that, when the volume fraction of carbon nanotubes is small, the change in its aspect ratio has a great impact on the effective conductivity of the composites. When the volume fraction exceeds a certain value, an increase in the aspect ratio leads to the gradual decrease in the predicted effective conductivity growth. From viewpoint of physics, when the conductive network in a substance is sparse, it is easier for carbon nanotubes with a larger aspect ratio to form a conductive network. Therefore, increasing the aspect ratio increases the probability of forming a conductive network, and hence the conductivity significantly. When the volume fraction of carbon nanotubes is larger than percolation, the influence of the change in the aspect ratio on the conductive network in the composite resin matrix is reduced. 
When the aspect ratio is increased, the increase in the effective conductivity becomes slower. Figure 7 shows that the results predicted by the upper bound of the H-S model are close to the conductivity of pure carbon nanotubes, and the change in the aspect ratio has little effect on the predicted results. On the other hand, the change in the aspect ratio has a great influence on the results predicted by the H-S lower-bound model. With the same carbon nanotube volume fraction, when the aspect ratio increases, the conductivity predicted by the H-S lower bound also increases and the percolation threshold decreases, which becomes closer to the H-S upper bound. From Figure 7, it can also be seen that, when the length-to-diameter ratio of CNTs is from 100 to 200, the predicted results by the lower-bound model are close to the test results obtained in the literature [23][24][25][26][27][28]. In this case, most of the test data are between the range of upper and lower bounds as predicted. . Comparison between the prediction results of the upper-and lower-bound models and existing experimental data [21,[23][24][25][26][27][28] with the variable CNT aspect ratio. Figure 7. Comparison between the prediction results of the upper-and lower-bound models and existing experimental data [21,[23][24][25][26][27][28] with the variable CNT aspect ratio. It can be seen from Figure 8 that different barrier heights have little effect on the prediction of the effective conductivity by the effective medium model. As the barrier height decreases, the effective conductivity in the percolation zone increases slightly, and the predicted effective conductivity also increases slightly when a certain volume fraction exceeded, but this effect is small. More specifically as shown in Figure 9, when the barrier height increases from 1.0 to 5.0 eV, the conductivity predicted by the upper bound of H-S hardly changes, and the conductivity predicted by the lower bound of H-S decreases. The possible reason for the decrease in conductivity is that the higher the barrier height, the more the energy required for electrons to move around adjacent carbon nanotubes. Therefore, the lower the barrier height, the easier the tunnelling effect is in adjacent carbon nanotubes. This is consistent with the conclusion from the literature [29]. Moreover, the lower the barrier height, the closer the predicted results are between the lower and upper bounds of the H-S model. According to Equation (8) of the upper-bound model, the conductivity of carbon nanotubes is the dominant factor affecting the upper bound conductivity. Thus, the change in matrix conductivity has little effect on conductivity of the composites. Since the effect of the barrier height on the effective conductivity is in the same order of magnitude as that of the matrix, which is much smaller than the conductivity of carbon nanotubes, it is the reason that the results predicted by upper bound seem hardly change. Figure 9. Comparison between the prediction results of the upper-and lower-bound models and existing experimental data [21,[23][24][25][26][27][28] with the variable barrier height. Conclusions In this paper, a novel method to predict the conductivity of carbon nanotube resin composites was developed. This method was applied to the effective medium model and the upper-and lower-bound models. The effectiveness of this method was analyzed and compared with existing experiment data. 
The effects of related parameters on the prediction results of the effective medium model and the H-S boundary model were compared. The conclusions are as follows: (1) This method has a relatively large impact on the H-S boundary model, where it improves the prediction accuracy, and a very small impact on the effective medium model. The lower-bound prediction of H-S shows an obvious percolation phenomenon with increasing carbon nanotube content. When the volume fraction of carbon nanotubes is greater than 0.01, the distance between the lower-bound and upper-bound predictions decreases, and the prediction accuracy of the lower bound is significantly improved. (2) In general, the conductivity of carbon nanotubes, the length-to-diameter ratio and the barrier height between carbon nanotubes have important effects on the two models, especially on the upper- and lower-bound models, and on the speed of reaching the percolation threshold. Specifically, with increasing conductivity of carbon nanotubes, the predicted value of the H-S upper-bound model increases, and the order of the increase is close to that of the carbon nanotubes, whereas the predictions of the H-S lower-bound model and the effective medium model are not affected by the change in conductivity of carbon nanotubes. Moreover, for a given volume fraction of carbon nanotubes, the larger the length-to-diameter ratio of carbon nanotubes, the greater the predicted conductivity of the carbon nanotube resin composite. This change is consistent with both models, but it is especially significant for the H-S lower-bound model.
Finally, when the barrier height increases, the conductivity predicted by the H-S lower-bound model decreases gradually, and the predicted value of the H-S upper-bound model changes little. Relatively speaking, the barrier height has little effect on the effective conductivity prediction of the effective medium model.
5,237.4
2022-08-30T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Modeling Second-Language Learning from a Psychological Perspective Psychological research on learning and memory has tended to emphasize small-scale laboratory studies. However, large datasets of people using educational software provide opportunities to explore these issues from a new perspective. In this paper we describe our approach to the Duolingo Second Language Acquisition Modeling (SLAM) competition which was run in early 2018. We used a well-known class of algorithms (gradient boosted decision trees), with features partially informed by theories from the psychological literature. After detailing our modeling approach and a number of supplementary simulations, we reflect on the degree to which psychological theory aided the model, and the potential for cognitive science and predictive modeling competitions to gain from each other. Introduction Educational software that aims to teach people new skills, languages, and academic subjects have become increasingly popular. The wide-spread deployment of these tools has created interesting opportunities to study the process of learning in large samples. The Duolingo shared task on Second Lanugage Acquisition Modeling (SLAM) was a competitive modeling challenge run in early 2018 . The challenge, organized by Duolingo 1 , a popular second language learning app, was to use log data from thousands of users completing millions of exercises to predict patterns of future translation mistakes in heldout data. The data was divided into three sets covering Spanish speakers learning English (en es), English speakers learning Spanish (es en), and English speakers learning French (fr en). This paper reports the approach used by our team, 1 http://duolingo.com which finished in third place for the en es data set, second place for es en, and third place for fr en. Learning and memory has been a core focus of psychological science for over 100 years. Most of this work has sought to build explanatory theories of human learning and memory using relatively small-scale laboratory studies. Such studies have identified a number of important and apparently robust phenomena in memory including the nature of the retention curve (Rubin and Wenzel, 1996), the advantage for spaced over massed practice (Ruth, 1928;Cepeda et al., 2006;Mozer et al., 2009), the testing effect (Roediger and Karpicke, 2006), and retrieval-induced forgetting (Anderson et al., 1994). The advent of large datasets such as the one provided in the Duolingo SLAM challenge may offer a new perspective and approach which may prove complementary to laboratory scale science (Griffiths, 2015;Goldstone and Lupyan, 2016). First, the much larger sample sizes may help to better identify parameters of psychological models. Second, datasets covering more naturalistic learning situations may allow us to test the predictive accuracy of psychological theories in a more generalizable fashion (Yarkoni and Westfall, 2017). Despite these promising opportunities, it remains unclear how much of current psychological theory might be important for tasks such as the Duolingo SLAM challenge. In the field of education data mining, researchers trying to build predictive models of student learning have typically relied on traditional, and interpretable, models and approaches that are rooted in cognitive science (e.g., Atkinson, 1972b,a;Corbett and Anderson, 1995;Pavlik and Anderson, 2008). 
However, a recent paper found that state-of-the-art results could be achieved using deep neural networks with little or no cognitive theory built in (so called "deep knowledge tracing", Piech et al., 2015). Khajah, Lindsey, & Mozer (2016) compared deep knowledge tracing (DKT) to more standard "Bayesian knowledge tracing" (BKT) models and showed that it was possible to equate the performance of the BKT model by additional features and parameters that represent core aspects of the psychology of learning and memory such as forgetting and individual abilities (Khajah et al., 2016). An ongoing debate remains in this community whether using flexible models with lots of data can improve over more heavily structured, theory-based models (Tang et al., 2016;Xiong et al., 2016;Zhang et al., 2017). For our approach to the SLAM competition, we decided to use a generic and fairly flexible model structure that we provided with hand-coded, psychologically inspired features. We therefore positioned our entry to SLAM somewhat in between the approaches mentioned above. Specifically, we used gradient boosting decision trees (GBDT, Ke et al., 2017) for the model structure, which is a powerful classification algorithm that is known to perform well across various kinds of data sets. Like deep learning, GBDT can extract complex interactions among features, but it has some advantages including faster training and easier integration of diverse inputs. We then created a number of new psychologically-grounded features for the SLAM dataset covering aspects such as user perseverance, learning processes, contextual factors, and cognate similarity. After finding a model that provided the best held-out performance on the test data set, we conducted a number of "lesioning" studies where we selectively removed features from the model and re-estimated the parameters in order to assess the contribution of particular types of features. We begin by describing our overall modeling approach, and then discuss some of the lessons learned from our analysis. Task Approach We approached the task as a binary classification problem over instances. Each instance was a single word within a sentence of a translation exercise and the classification problem was to predict whether a user would translate the word correctly or not. Our approach can be divided into two components-constructing a set of features that is informative about whether a user will answer an instance correctly, and designing a model that can achieve high performance using this feature set. Feature Engineering We used a variety of features, including features directly present in the training data, features constructed using the training data, and features that use information external to the training data. Except where otherwise specified, categorical variables were one-hot encoded. Exercise features We encoded the exercise number, client, session, format, and duration (i.e., number of seconds to complete the exercise), as well as the time since the user started using Duolingo for the first time. Word features Using spaCy 2 , we lemmatized each word to produce a root word. Both the root word token and the original token were used as categorical features. Due to their high cardinality, these features were not one-hot encoded but were preserved in single columns and handled in this form by the model (as described below). Along with the tokens themselves we encoded each instance word's part of speech, morphological features, and dependency edge label. 
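To illustrate how such token-level features can be derived, here is a minimal sketch using a spaCy 3-style pipeline; the model name, example sentence, and dictionary layout are illustrative assumptions and not the authors' exact pipeline.

```python
import spacy

# Hypothetical model name; any English pipeline with a parser would do.
nlp = spacy.load("en_core_web_sm")

def token_features(sentence):
    """Return one feature dict per word, mirroring the word-level features
    described above (lemma, part of speech, morphology, dependency label)."""
    doc = nlp(sentence)
    rows = []
    for tok in doc:
        if tok.is_punct:
            continue
        rows.append({
            "token": tok.text.lower(),
            "root_word": tok.lemma_.lower(),  # lemmatized form
            "pos": tok.pos_,                   # coarse part of speech
            "morph": str(tok.morph),           # morphological features
            "dep_label": tok.dep_,             # dependency edge label
            "word_length": len(tok.text),
        })
    return rows

for row in token_features("She is reading the letters"):
    print(row)
```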
We noticed that some words in the original dataset were paired with the wrong morphological features, particularly near where punctuation had been removed from the sentence. To fix this, we reprocessed the data using Google SyntaxNet 3 . We also encoded word length and several word characteristics gleaned from external data sources. Research in psychology has suggested certain word features that play a role in how difficult a word is to process, as measured by how long readers look at the word as well as people's performance in lexical-decision and word-identification tasks. Two such features that have somewhat independent effects are word frequency (i.e., how often does the word occur in natural language; Rayner, 1998) and age-of-acquisition (i.e., the age at which children typically exhibit the word in their vocabulary; Brysbaert and Cortese, 2011;Ferrand et al., 2011). We therefore included a feature that encoded the frequency of each word in the language being acquired, calculated from Speer et al. (2017), and a feature that encoded the mean ageof-acquisition (of the English word, in English native speakers), derived from published age-ofacquisition norms for 30,000 words (Kuperman et al., 2012), which covered many of the words present in the dataset. Additionally, words sharing a common linguistic derivation (also called "cognates"; e.g., "secretary" in English and "secretario" in Spanish), are easier to learn than words with dissimilar translations (De Groot and Keijzer, 2000). As an approximate measure of linguistic similarity, we used the Levenshtein edit distance between the word tokens and their translations scaled by the length of the longer word. We found translations using Google Translate 4 and calculated the Levenshtein distance to reflect the letter-by-letter similarity of the word and its translation (Hyyrö, 2001). User features Just as we did for word tokens, we encoded the user ID as a single-column, high-cardinality feature. We also calculated several other user-level features that related to the "learning type" of a user. In particular, we encoded features that might be related to psychological constructs such as the motivation and diligence of a user. These features could help predict how users interact with old and novel words they encounter. As a proxy for motivation, we speculated that more motivated users would complete more exercises every time they decide to use the app. To estimate this, we grouped each user's exercises into "bursts." Bursts were separated by at least an hour. We used three concrete features about these bursts, namely the mean and median number of exercises within bursts as well as the total number of bursts of a given user (to give the model a feature related to the uncertainty in the central tendency estimates). As a proxy for diligence, we speculated that a very diligent user might be using the app regularly at the same time of day, perhaps following a study schedule, compared to a less diligent user whose schedule might vary more. The data set did not provide a variable with the time of day, which would have been an interesting feature on its own. Instead, we were able to extract for each exercise the time of day relative to the first time a user had used the app, ranging from 0 to 1 (with 4 https://cloud.google.com/translate/ 0 indicating the same time, 0.25 indicating a relative shift by 6 hours, etc.). We then discretized this variable into 20-minute bins and computed the entropy of the empirical frequency distribution over these bins. 
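The following is a minimal sketch of the two user-level proxies described above: grouping a user's exercise times into bursts separated by at least an hour, and computing the entropy of relative start times discretized into 20-minute bins. The input format and helper names are assumptions for illustration.

```python
import numpy as np

def burst_features(times, gap=3600.0):
    """times: seconds since the user's first exercise, sorted ascending.
    A new burst starts whenever the gap to the previous exercise is >= 1 hour."""
    times = np.asarray(times, dtype=float)
    new_burst = np.concatenate(([True], np.diff(times) >= gap))
    burst_ids = np.cumsum(new_burst)
    sizes = np.bincount(burst_ids)[1:]        # exercises per burst
    return {"mean_burst": sizes.mean(),
            "median_burst": float(np.median(sizes)),
            "n_bursts": len(sizes)}

def time_of_day_entropy(times, n_bins=72):
    """Entropy of relative time-of-day (0..1, relative to first use),
    discretized into 20-minute bins (72 bins per day)."""
    rel = (np.asarray(times, dtype=float) % 86400.0) / 86400.0
    counts = np.histogram(rel, bins=n_bins, range=(0.0, 1.0))[0]
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Example: a user who studies at roughly the same time every day.
times = np.sort(np.concatenate([86400 * d + np.arange(5) * 120 for d in range(10)]))
print(burst_features(times))        # 10 bursts of 5 exercises
print(time_of_day_entropy(times))   # near-zero entropy: very regular schedule
```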
A lower entropy score indicated less variability in the times of day a user started their exercises. The entropy score might also give an indication for context effects on users' memory. A user practicing exercises more regularly is more likely to be in the same physical location when using the app, which might result in better memory of previously studied words (Godden and Baddeley, 1975). Positional features To account for the effects of surrounding words on the difficulty of an instance, we created several features related to the instance word's context in the exercise. These included the token of the previous word, the next word, and the instance word's root in the dependency tree, all stored in single columns as with the instance token itself. We also included the part of speech of each of these context words as additional features. When there was no previous word, next word, or dependency-tree root word, a special None token or None part of speech was used. Temporal features A user's probability of succeeding on an instance is likely related to their prior experience with that instance. To capture this, we calculated several features related to past experience. First, we encoded the number of times the current exercise's exact sentence had been seen before by the user. This is informed by psychological research showing memory and perceptual processing improvements for repeated contexts or "chunks" (e.g., Chun and Phelps, 1999). We also encoded a set of features recording past experience with the particular instance word. These features were encoded separately for the instance token and for the instance root word created by lemmatization. For each token (and root) we tracked user performance through four weighted error averages. At the user's first encounter of the token, each error term E starts at zero. After an encounter with an instance of the token with label L (0 for success, 1 for error), it is updated according to the equation: where α determines the speed of error updating. The four weighted error terms use α = {.3, .1, .03, .01}, allowing both short-run and long-run changes in a user's error rate with a token to be tracked. Note that in cases where a token appears multiple times in an exercise, a single update of the error features is conducted using the mean of the token labels. Along with the error tracking features, for each token we calculated the number of labeled, unlabeled, and total encounters; time since last labeled encounter and last encounter; and whether the instance is the first encounter with the token. In the training data, all instances are labeled as correct or incorrect, so the label for the previous encounter is always available. In the test data, labels are unavailable, so predictions must be made using a mix of labeled and unlabeled past encounters. In particular, for a user's test set with n exercises, each exercise will have between zero and n − 1 preceding unlabeled exercises. To generate training-set features that are comparable to test-set features, we selectively ignored some labels when encoding temporal features on the training set. Specifically, for each user we first calculated the number of exercises n in their true test set 5 . Then, when encoding the features for each training instance, we selected a random integer r in the range [0, n − 1], and ignored the labels in the prior r exercises. 
That is, we encoded features for the instance as though other instances in those prior exercises were unlabeled, and ignored updates to the error averages from those exercises. The result of this process is that each instance in the training set was encoded as though it were between one and n exercises into the test set. Modeling After generating all of the features for the training data, we trained GBDT models to minimize log loss. GBDT works by iteratively building regression trees, each of which seeks to minimize the residual loss from prior trees. This allows it to capture non-linear effects and high-order interactions among features. We used the LightGBM 6 implementation of GBDT (Ke et al., 2017). For continuous-valued features, GBDT can split a leaf at any point, creating different predicted val-5 If the size of the test set were not available, it could be estimated based on the fact that it is approximately 5% of each participant's data. 6 http://lightgbm.readthedocs.io/ ues above and below that threshold. For categories that are one-hot encoded, it can split a leaf on any of the category's features. This means that for a category with thousands of values, potentially thousands of tree splits would be needed to capture its relation to the target. Fortunately, LightGBM implements an algorithm for partitioning the values of a categorical feature into two groups based on their relevence to the current loss, and create a single split to divide those groups (Fisher, 1958). Thus, as alluded to above, high-cardinality features like token and user ID were encoded as single columns and handled as categories by Light-GBM. We trained a model for each of the three language tracks of en es, es en, and fr en, and also trained a model on the combined data from all three tracks, adding an additional "language" feature. Following model training, we averaged the predictions of each single-language model with that of the all-language model to form our final predictions. Informal experimentation showed that model averaging provided a modest performance boost, and that weighted averages did not clearly outperform a simple average. To tune model hyper-parameters and evaluate the usefulness of features, we first trained the models on the train data set and evaluated them on the dev data set. Details of the datasets and the actual files are provided on the Harvard Dataverse (Settles, 2018). Once the model structure was finalized, we trained on the combined train and dev data and produced predictions for the test data. The LightGBM hyperparameters used for each model are listed in Table 1. Performance The AUROC of our final predictions was .8585 on en es, .8350 on es en, and .8540 on fr en. For reference this placed us within .01 of the winning entry for each problem (.8613 on en es, .8383 on es en, and .8570 on fr en). Also note that the Duolingo-provided baseline model (L2-regularized regression trained with stochastic gradient descent weighted by frequency) obtains .7737 on en es, .7456 on es en, and .7707 on fr en. We did not attempt to optimize F1 score, the competition's secondary evaluation metric. Feature Removal Experiments To better understand which features or groups of features were most important to our model's predictions, we conducted a set of experiments in which we lesioned (i.e., removed) a group of features and re-trained the model on the train set, evaluating performance on the dev set. 
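Before turning to those lesion results, the snippet below is a minimal sketch of the modeling setup described above: a LightGBM classifier with high-cardinality columns declared as categorical. The feature names and hyperparameter values are placeholders, not the values from Table 1.

```python
import lightgbm as lgb
import numpy as np
import pandas as pd

# Placeholder training frame: categorical IDs plus a few numeric features.
rng = np.random.default_rng(0)
n = 1000
train = pd.DataFrame({
    "user_id": pd.Categorical(rng.integers(0, 50, n).astype(str)),
    "token": pd.Categorical(rng.integers(0, 200, n).astype(str)),
    "format": pd.Categorical(rng.choice(["fmt_a", "fmt_b", "fmt_c"], n)),
    "days": rng.uniform(0, 30, n),
    "word_length": rng.integers(2, 12, n),
})
y = rng.integers(0, 2, n)   # 1 = translation error, 0 = correct

model = lgb.LGBMClassifier(
    objective="binary",
    n_estimators=200,
    learning_rate=0.05,
    num_leaves=64,
)
# LightGBM splits pandas 'category' columns with its own grouping algorithm,
# so high-cardinality IDs do not need one-hot encoding.
model.fit(train, y, categorical_feature=["user_id", "token", "format"])
pred = model.predict_proba(train)[:, 1]   # probability of an error per instance
```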
For simplicity, we ran each of the lesioned models on all language data and report the average performance. We did not run individual-language models as we did for our primary model. The results of the lesion experiments are shown in Figure 1. The models are as follows. none: All features are included. temporal: Temporal information, including number and timing of past encounters with the word and error tracking information, is removed. Interestingly, we found that for both user-level and word-level features, the bulk of the model's predictive power could be achieved using ID's alone, represented as high-cardinality categorical features. Removing other word features, such as morphological features and part of speech, created only a small degradation of performance. In the case of users, removing features such as entropy and average exercise burst length led to a tiny increase of performance. In the case of both users and words, though, we find that in the absence of ID features the other features are helpful and lead to better performance than removing all features. We also found that removing all information about neighboring words and the dependency-parse root word degraded performance. This confirms that word context matters, and suggests that users commonly make errors in word order, subject-verb matching and other grammatical rules. Our external word features-Levenshtein distance to translation, frequency, and age of acquisition-provided a slight boost to model performance, showing the benefit of considering what makes a word hard to learn from a psychological and linguistic perspective. Adding temporal features about past encounters and errors helped the models, but not as much as we expected. While not included in the final model, we had also tried augmenting the temporal feature set with more features related to massing and spacing of encounters with a word, but found it did not improve performance. This is perhaps not surprising given how small the benefit of the existing temporal features are in our model. Though not plotted above, we also ran a model lesioning exercise-level features including client, session type, format, and exercise duration. This model achieved an AUROC of .787, far lower than any other lesion. This points to the fact that the manner in which memory is assessed often affects observed performance (e.g., the large literature in psychology on the difference between recall and recognition memory, Yonelinas, 2002). Discussion When approaching the Duolingo SLAM task, we hoped to leverage psychological insights in building our model. We found that in some cases, such as when using the word's age-of-acquisition, this was helpful. In general, though, our model gained its power not from hand-crafted features but from applying a powerful inference technique (gradient boosted decision trees) to raw input about user IDs, word IDs, and exercise features. There are multiple reasons for the limited applicability of psychology to this competition. First, computational psychological models are typically designed based on small laboratory data sets, which might limit their suitability for generating highly accurate predictions in big data settings. Because they are designed not for prediction but for explanation, they tend to use a small number of input variables and allow those variables to interact in limited ways. 
In contrast, gradient boosted decision trees, as well as other cutting-edge techniques like deep learning can extract high-level interactions among hundreds of features. While they are highly opaque, require a lot of data, and are not amenable to explanation, these models excel at prediction. Second, it is possible that our ability to use theories of learning, including ideas about massed and spaced practice, was disrupted by the fact that the data may have been adaptively created using these very principles (Settles and Meeder, 2016). If Duolingo adaptively sequenced the spacing of trials based on past errors, then the relationship between future errors and past spacing may have substantially differed from that found in the psychological literature (Cepeda et al., 2006). Finally, if the task had required broader generalization, psychologically inspired features might have performed more competitively. In the SLAM task, there is a large amount of labeled training data for every user and for most words. This allows simple ID-based features to work because the past history of a user will likely influence their future performance. However, with ID-based features there is no way to generalize to newlyencountered users or words, which have an ID that was not in the training set. The learned IDbased knowledge is useless here because there is no way to generalize from one unique ID to another. Theory-driven features, in contrast, can often generalize to new settings because they capture aspects that are shared across (subsets of) users, words, or situations of the learning task. For example, if we were asked to generalize to a completely new language such as German, many parts of our model would falter but word frequency, age of acquisition, and Levenshtein distance to firstlanguage translation would still likely prove to be features which have high predictive utility. In sum, we believe that the Duolingo SLAM dataset and challenge provide interesting oppor-tunities for cognitive science and psychology. Large-scale, predictive challenges like this one might be used to identify features or variables that are important for learning. Then, complementary laboratory-scale studies can be conducted which establish the causal status of such features through controlled experimentation. Conversely, insights from controlled experiments can be used to generate new features that aid predictive models on naturalistic datasets (Griffiths, 2015;Goldstone and Lupyan, 2016). This type of two-way interaction could lead to long-run improvements in both scientific explanation and real-world prediction.
5,131
2018-06-01T00:00:00.000
[ "Computer Science", "Psychology" ]
Blockchain-Based Secure Outsourcing of Polynomial Multiplication and Its Application in Fully Homomorphic Encryption (e efficiency of fully homomorphic encryption has always affected its practicality. With the dawn of Internet of things, the demand for computation and encryption on resource-constrained devices is increasing. Complex cryptographic computing is a major burden for those devices, while outsourcing can provide great convenience for them. In this paper, we firstly propose a generic blockchain-based framework for secure computation outsourcing and then propose an algorithm for secure outsourcing of polynomial multiplication into the blockchain. Our algorithm for polynomial multiplication can reduce the local computation cost to O(n). Previous work based on Fast Fourier Transform can only achieve O(nlog(n)) for the local cost. Finally, we integrate the two secure outsourcing schemes for polynomial multiplication and modular exponentiation into the fully homomorphic encryption using hidden ideal lattice and get an outsourcing scheme of fully homomorphic encryption.(rough security analysis, our schemes achieve the goals of privacy protection against passive attackers and cheating detection against active attackers. Experiments also demonstrate our schemes are more efficient in comparisons with the corresponding nonoutsourcing schemes. Introduction As the development of the big data era, there is an increasing demand for large-scale time-consuming computations. Fortunately, with the emergence of cloud computing, computation outsourcing brings convenience to resourceconstrained users. ey can outsource complex computing tasks into the cloud by paying a fee and avoiding buying expensive high-performance hardware. It not only improves the resource utilization in cloud but also brings economic benefits to resource-constrained users. Nevertheless, the attractive computing scheme also causes security issues. A passive attacker in the cloud may be only curious about the privacy contained in the user's outsourced data, while an active attacker may make malicious damage or forge the results to sabotage the computation. Even if there is no attacker, computing errors caused by cloud hardware failure and software errors, etc. should also be considered. Furthermore, it should not be a great burden for the user to check the correctness of the returned results from the cloud; otherwise, the efficiency benefit of outsourcing will be nullified. erefore, the secure and efficient outsourcing of computations that can not only protect the privacy of users but also ensure the correct results has become a hot research topic. Gentry proposed a homomorphic encryption algorithm based on ideal lattice [1] for the first time, providing us with a direction to solve the privacy issues in computation outsourcing. e direction is a secure computation outsourcing mode: encryption-outsourcing-decryption (EOD). Even if we use the common EOD model, the device should also undertake the computations of secret key generation, encryption, verification, decryption, and so on, locally. ese computations are also great burden for the resource-constrained devices (such as mobile phones and IoT nodes). Blockchain has attractive features such as transparency, traceability, decentralization, and immutability, which make it an optimal approach for applications intrinsically with untrusted natures, such as computation outsourcing. A central trusted entity is not required for the computation outsourcing based on blockchain. 
Information about the whole data exchange process, computations, users, and computational nodes is recorded and is traceable in blockchain. Besides, smart contract can be utilized to digitally facilitate the implementation of whole transaction, which greatly improves the speed of building applications on blockchain. However, privacy is still an issue in the computation outsourcing based on blockchain. Owing to the low efficiency of fully homomorphic encryption algorithms, the general computation outsourcing mode based on EOD is impractical on the resource-constrained devices. In this paper, we will outsource some complex computations in the fully homomorphic encryption using hidden ideal lattice (FHEHIL) [2] into a blockchain framework. e contributions of this paper can be summarized as follows: (1) We propose a framework of blockchain-based computation outsourcing, in which we can implement secure outsourcing for FHEHIL. e framework has a credit-based task allocation strategy, which will significantly reduce the probability of malicious nodes participating in computing. (2) We propose a secure outsourcing algorithm for polynomial multiplication, which reduces the local computation cost (including the cost on result verification) to O(n). Previous work based on the Fast Fourier Transform (FFT) can only achieve O(nlog(n)) for the local cost. Besides, the algorithm can not only detect cheating but also identify cheating nodes combining with blockchain. e result verification in the outsourcing algorithm does not cause extra burden. (3) We also extend the secure outsourcing algorithm of modular exponentiation, in [3], in our blockchainbased framework. e two algorithms for polynomial multiplication and modular exponentiation are employed in FHEHIL as basic operations, and the FHEHIL implementation on the blockchain-based framework can have higher efficiency compared with previous work. Related Work At present, research studies on secure outsourcing can be roughly divided into two directions. In one direction, a general outsourcing mechanism is studied. In this mechanism, a fully homomorphic encryption algorithm is designed and the EOD model is used to outsource any computations. After the work of Gennaro et al. [1], great progress has been made in the field of fully homomorphic encryption [4][5][6]. However, the fully homomorphic encryption algorithms have high computational complexity. Recently, there are many researches to reduce the computation cost of homomorphic encryption algorithm. For example, Su et al. accelerated the leveled Ring-LWE fully homomorphic encryption [7]. ese research studies mainly focus on the efficiency of hardware. However, secure outsourcing complex computations of fully homomorphic encryption is a better way to improve efficiency for resourceconstrained devices. In the other direction, specific outsourcing algorithms are designed for various scientific computations, e g., modular exponentiation, solution of large-scale linear equations [8], bilinear pairings, and extend Euclidean. e Wei pairing and Tate pairing in algebraic curves are commonly used in key establishment and signature schemes in the field of cryptography. However, the computation of bilinear pairings is time-consuming in resource-constrained devices. us, many outsourcing schemes have been proposed [9][10][11]. To our knowledge, the scheme in [9] is the most efficient and secure till now. 
Because of their wide application in cryptography, the study of modular exponentiation outsourcing is also a hot research topic. Hohenberger and Lysyanskaya [12] proposed a secure outsourcing scheme for modular exponentiation. Chen et al. [13] further improved its efficiency and verifiability. Ren et al. [14] proposed a scheme that only protects the privacy of the exponent. Recently, Fu et al. [3] proposed a secure outsourcing scheme of modular exponentiation with hidden exponent and base, which has stronger checkability. In cryptography, the extended Euclidean algorithm is usually used to calculate modular inverses, which are widely used in the RSA encryption algorithm. Similarly, the Euclidean algorithm can be used to find the greatest common divisor of two polynomials, which is commonly used in lattice-based encryption algorithms. Zhou et al. [15] proposed a secure outsourcing algorithm for the extended Euclidean algorithm. Polynomial multiplication is likewise a commonly used operation in cryptographic schemes, error-correcting codes, and computer algebra. The complexity of polynomial multiplication is still a major open problem. Using the FFT, the local computation of polynomial multiplication can achieve a complexity of O(n log(n)). Recently, some efficient polynomial multiplication methods based on the FFT have been proposed. Harvey et al. proposed a faster method over finite fields Z_p when the degree of the polynomials is less than p [16]. The efficiency was further improved in [17]. Regarding hardware utilization, Liu et al. designed a polynomial multiplication with high hardware efficiency on a field-programmable gate array (FPGA) platform [18]. Hsu and Shieh proposed a method with fewer additions and multiplications [19] in 2020. Besides, there are also some research studies on reducing the space complexity of polynomial multiplication [20,21]. However, the complexity of the approaches based on accelerating the FFT remains at O(n log(n)). Other methods using distributed computing to improve hardware utilization are not applicable to resource-constrained devices. Till now, there are few research studies on the secure outsourcing of polynomial multiplication. Due to the characteristics of blockchain and Bitcoin [22], there have been many research studies and applications of blockchain in recent years, including secure outsourcing. Lin et al. studied the secure outsourcing of bilinear pairings based on blockchain [23]. Zheng et al. [24] proposed a secure outsourcing scheme for attribute-based encryption on blockchain. There are also some schemes [25,26] for outsourced data integrity verification. The fairness problem in blockchain-based secure multiparty computation has also been addressed by multiple efforts. For example, Gao et al. [27] proposed a scheme that realizes fairness by maintaining an open reputation system. This type of general scheme for secure multiparty computation can be cumbersome for the problems in secure outsourcing computation. Andrychowicz et al. [28] utilized only scripts in the Bitcoin currency to construct a fair protocol for secure multiparty lotteries, without relying on a trusted third party. Zhang et al. proposed BCPay in [29] and BPay in [30] to achieve fair payment for blockchain-based outsourcing services, which are compatible with the Bitcoin and Ethereum platforms. However, these frameworks can only provide fairness between the client and a single server. They are applicable to outsourcing scenarios where the task is outsourced from the client to a single server.
In the problem of our work, the computation task needs to be outsourced to multiple computational nodes simultaneously. Therefore, we propose a new framework, considering the penalty on cheating nodes, compensation for honest nodes, and the application of a credit-based scheme.
Notations. We denote a vector by v = [v_1, . . . , v_n], where v_i is the i-th element of v. We denote polynomials by lower-case italics (e.g., f(x)). For a rational number r, round(r) represents the nearest integer to r. A rational vector v can also be rounded, round(v) = [round(v_1), . . . , round(v_n)]. We use v(x) for the polynomial form of the vector v. We use v_1 × v_2 for polynomial multiplication on the ring, v_1 × v_2 = (v_1(x) × v_2(x)) mod f(x). We use |v| for the norm of v and |S| for the cardinality of a set S. We use v ∘ w for the correlation (element-wise product), v ∘ w = [v_1 · w_1, . . . , v_n · w_n]. We use R(v, f) for the rotation matrix of v, whose i-th row contains the coefficients of v(x) × x^(i-1) mod f(x). We use xgcd(a(x), b(x)) for the extended Euclidean algorithm on a(x) and b(x). deg(f(x)) represents the degree of f(x). We use F_v to denote the coefficient set of the discrete Fourier transform (DFT) of v and F^-_v for the coefficient set of the inverse DFT of v.
Fully Homomorphic Encryption Using Hidden Ideal Lattice. The FHEHIL scheme [2] is described in Algorithm 1, including the components of key generation, encryption, and decryption. The related parameters are shown in Table 1. As shown in Algorithm 1, it is obvious that polynomial multiplication is the primary computation in encryption and decryption. Therefore, our proposed algorithm for the secure outsourcing of polynomial multiplication can be applied directly. As for the key generation, the computing burden comes from Steps 7, 9, and 11. The main computational cost of Step 7 is on computing the determinant of the matrix V. The most time-consuming operation in Step 9 is polynomial multiplication. Step 11 computes the inverse of a polynomial. Below, we analyze the detailed computations in Steps 7 and 11 and demonstrate that polynomial multiplication and modular exponentiation are the main types of computations, which are the two improvement targets of this paper.
3.2.1. The Method of Computing d in Algorithm 1. Because of the characteristics of V, computing the determinant of the matrix V needs only log(n) polynomial multiplications using the method in [31]. d is the free term of the corresponding expression, where p_i is a root of f(x) = 0 in the complex domain and the roots satisfy equation (2). Due to equation (2), the expression can be rewritten accordingly. Thus, the computation of d mainly involves polynomial multiplications and modular exponentiations.
The Method of Computing w Using the Extended Euclidean Algorithm. The specific procedures of secure outsourcing for the extended Euclidean algorithm [15] are summarized in Algorithm 2. It can be seen that the local computations of this algorithm consist mostly of modular exponentiations and polynomial multiplications. A detailed analysis of Algorithm 2 can be found in [15].
The Method of Computing w in Algorithm 1 When d Is Not a Prime. When d is not a prime, the fastest method at present to calculate the polynomial inverse is Gentry's method in [31]. The method is based on the fast Fourier transform and halves the number of terms in each step to offset the doubling of the bit length of the coefficients. This method relies on f(x) = x^n + 1, where n is a power of 2. The method is analyzed as follows. Firstly, the second coefficient can be computed, giving w_1 = g_1'/n. Finally, the other coefficients of w can be computed from these. Since n is a power of 2 in f(x), the roots satisfy equation (2).
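To make the ring notation above concrete, the following minimal Python sketch (our own illustration, not the authors' implementation; the language choice and function names are assumptions) multiplies two coefficient vectors modulo f(x) = x^n + 1, the polynomial form used by FHEHIL and by Gentry's inversion method:

import numpy as np

def ring_mult(v1, v2, n):
    # Compute v1(x) * v2(x) mod (x^n + 1); coefficients in ascending order.
    prod = np.convolve(v1, v2)           # ordinary polynomial product
    res = np.zeros(n, dtype=prod.dtype)
    for k, coeff in enumerate(prod):
        if k < n:
            res[k] += coeff
        else:
            res[k - n] -= coeff          # x^k = -x^(k-n) mod (x^n + 1)
    return res

# Example with n = 4: (1 + x^3) * x = x + x^4 = -1 + x (mod x^4 + 1)
print(ring_mult([1, 0, 0, 1], [0, 1, 0, 0], 4))   # [-1  1  0  0]

Because x^n is congruent to -1 on this ring, the reduction is a negacyclic wrap of the ordinary convolution, which is exactly the v_1 × v_2 operation defined above.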
In the process of computing the coefficients g_1 and g_1', the major computations are also polynomial multiplications and modular exponentiations.
The Framework of Blockchain-Based Computation Outsourcing
This section introduces a blockchain-based computation outsourcing framework. The overall system model is illustrated in Figure 1, and the related symbols are described in Table 2. In Figure 1, we assume that at least one trusted third party is available to implement the smart contract. This is a generic model on which a variety of computational tasks can be implemented, including the tasks of secure outsourcing of polynomial multiplication and modular exponentiation. The two specific tasks will be given in detail in the rest of this paper.
Registration. Users and computational nodes need to register before joining the network. They need to pay deposits in advance, the amount of which needs to be greater than a specified threshold, or they will be rejected. The smart contract initializes the same credit score for all new nodes and users. After registration, the information of users and computational nodes is written to the smart contract. The specific function is shown as Algorithm 3. In this algorithm, addr[p] is the deposit account of node p in the smart contract. Node p's asset privacy is protected because it is unnecessary for it to expose its total assets to the smart contract.
Computational Service. User p posts computing tasks, data, and the reward of each task to the smart contract. If the account balance of p is insufficient for the computations, the smart contract refuses this service and reduces the credit score of p, to prevent malicious users from attacking the smart contract by constantly sending tasks that they cannot afford. After the smart contract accepts the tasks of p, the user's computing tasks are stored in the task queue and wait for the selection of computational nodes. To achieve a high benefit, the computational nodes will be eager to undertake computing tasks. If multiple computational nodes select the same task at the same time, the node with the highest credit score wins the task. Nodes that undertake a computing task should submit the results after completing it. If any computational node cannot finish on time, it will be added to the dishonest set. If all computational nodes submit their results on time, the smart contract sends the results to the user and initiates the dispute-resolving period. During this period, the user needs to verify the results locally and notify the smart contract whether the results are accepted or not. If the dispute period ends and no feedback has been received from the user, the smart contract assumes that the computations succeeded and performs the reward and charge operations. If the user does not accept the results in the feedback, the smart contract will verify the results by itself. The specific function is shown as Algorithm 4.
Verification and Payment. If the user does not accept the computing results, the smart contract will perform verification operations to find dishonest nodes or users. In the verification, it will simulate all computations required by the user on the encrypted data uploaded by the latter. This means it will repeat exactly every step the computational nodes have carried out, so as to find which step is incorrect and who is cheating. Decryptions are not required in the simulation, and thus the data privacy of the user is protected.
To complete the verification, the smart contract should be equipped with the same function modules as those of the computational nodes. For example, in the secure outsourcing of polynomial multiplication, a function should be added into the smart contract to simulate the FFT/IFFT operations on the data uploaded by the user in case of disputes. The dishonest nodes and users are put into the dishonest set. If no one is put into the dishonest set, the user pays the reward to all participating nodes. Otherwise, cheating nodes are penalized and the honest nodes are compensated. The credit score of the participating nodes that have correctly completed their tasks increases, while the credit score of the malicious nodes decreases. When the account balance of a node is lower than the threshold value or its credit score is reduced to zero, the system removes it. The specific function is shown as Algorithm 5. A malicious user with enough balance may constantly initiate transactions, aiming to increase the burden on the smart contract. However, it cannot refuse to pay because its deposit account is managed by the smart contract. By setting up a suitable threshold in the registration, its balance will sooner or later be used up by this attack.
Security Analysis. Both malicious user nodes and malicious computational nodes can launch attacks on the framework, but their strategies are different. Malicious user nodes could launch a DDoS attack. A malicious computational node could destroy the computation by returning forged results or not returning any result. The user nodes could employ two ways to launch a DDoS attack. One is to continuously publish tasks that the user actually cannot afford; the other is to maliciously inform the smart contract that the results are not accepted during the dispute-resolving period. For the first attack, the smart contract refuses to add the computing tasks and data to the queue and reduces the credit score of the user. Moreover, when a node's credit score drops below 0, the node is removed. We can increase Δc in the function TASK UPLOAD to remove malicious users as soon as possible. For the second attack, the smart contract has to simulate the computations of all nodes participating in the outsourcing according to the data and records. This attack has a greater impact on the smart contract, but it brings more loss to the attackers (including the financial punishment). Similarly, we can increase Δb in Algorithm 5 to mitigate the impact on the smart contract. The computational nodes also have two ways to attack: returning forged results or not returning any result. The cost of both attacks is the same (in financial and credit punishment). Since forged results force the smart contract to simulate the computations of all nodes, rational computational nodes prefer to attack by returning forged results rather than returning nothing. Similarly, we can increase Δb and Δc in Algorithm 5 to mitigate the impact on the smart contract. The proposed framework adopts a task allocation strategy based on credit scores. When malicious nodes are found, their credit score is reduced, and their probability of obtaining computing tasks in the future is also reduced.
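As a purely illustrative sketch of the bookkeeping described in this section (registration with a deposit threshold, credit-based task allocation, and settlement with penalties and compensation), the following Python class mirrors the roles of addr[p] and the credit scores; all names and constants are placeholders, and this is not the paper's Solidity contract:

class OutsourcingLedger:
    # Toy in-memory stand-in for the smart contract state; it is not the
    # paper's Solidity implementation, and the constants are placeholders.

    def __init__(self, deposit_threshold=100, initial_credit=50):
        self.deposit_threshold = deposit_threshold
        self.initial_credit = initial_credit
        self.deposit = {}   # plays the role of addr[p]
        self.credit = {}    # plays the role of the credit score of p

    def register(self, node_id, deposit):
        # Registration: reject a node whose deposit is below the threshold.
        if deposit < self.deposit_threshold:
            return False
        self.deposit[node_id] = deposit
        self.credit[node_id] = self.initial_credit
        return True

    def assign_task(self, candidates):
        # Credit-based allocation: if several registered nodes request the
        # same task, the one with the highest credit score wins it.
        eligible = [n for n in candidates if n in self.credit]
        return max(eligible, key=lambda n: self.credit[n], default=None)

    def settle(self, node_id, honest, delta_credit=10, delta_deposit=20):
        # Reward honest nodes, penalise cheating ones, and remove a node
        # whose credit or balance falls below the limits.
        sign = 1 if honest else -1
        self.credit[node_id] += sign * delta_credit
        self.deposit[node_id] += sign * delta_deposit
        if self.credit[node_id] <= 0 or self.deposit[node_id] < self.deposit_threshold:
            self.deposit.pop(node_id)
            self.credit.pop(node_id)

ledger = OutsourcingLedger()
ledger.register("node_a", 150)
ledger.register("node_b", 200)
winner = ledger.assign_task(["node_a", "node_b"])
ledger.settle(winner, honest=True)

In the actual framework these updates are executed by the smart contract and recorded on chain, which is what makes them transparent and traceable.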
We assume there are enough computational nodes willing to return correct results to fulfil the requirements of outsourcing, under the incentive of financial and credit rewards.
Compatibility Analysis. We know that the Bitcoin script is not Turing-complete, while Ethereum provides a complete programming language on the blockchain to execute more complex smart contracts. It is easy to see that our framework is compatible with the opcodes allowed by the Ethereum blockchain. Since the function of Verification and Payment (Algorithm 5) involves loops, which are not allowed by the Bitcoin script, our framework is not compatible with the opcodes of the Bitcoin blockchain.
Polynomial Multiplication and Modular Exponentiation Secure Outsourcing Algorithms
Polynomial Multiplication Secure Outsourcing Algorithm. The computational complexity of traditional polynomial multiplication is O(n^2), which is reduced to O(n log(n)) by the FFT. In this section, we employ secure outsourcing to further reduce the local computational complexity to O(n). The outsourcing is implemented in our proposed framework of blockchain-based secure computation outsourcing. The main idea of our algorithm is as follows. Firstly, the Fourier transforms of the polynomial coefficients are securely outsourced. Secondly, the correlation operation on the results of the Fourier transforms is performed locally. Finally, the inverse Fourier transform on the result of the correlation operation is securely outsourced. The specific process of our algorithm is shown as Algorithm 6.
Description. In Algorithm 6, the input polynomials are f(x) = a_0 + a_1 x + ... + a_(n-1) x^(n-1) and g(x) = b_0 + b_1 x + ... + b_(n-1) x^(n-1). The output is t(x) = f(x) × g(x) = c_0 + c_1 x + ... + c_(2n-2) x^(2n-2). For convenience, polynomials are replaced with vectors of polynomial coefficients (a = [a_0, a_1, . . . , a_(n-1)] and b = [b_0, b_1, . . . , b_(n-1)]). (1) Six parameters are picked randomly, three of which are i, j, β, s.t. 0 ≤ i ≤ n − 1, 0 ≤ j ≤ n − 1, and 0 ≤ β ≤ 2n − 2. The other three are k_1, k_2, k_3 ∈_R Z. We define L(i, k, n): Z^3 → Z^n, which generates an n-dimensional vector whose i-th element is k and whose other elements are zeros. We define T(v, r): Z^(n×2) → Z^(n×p), which generates a random matrix from v and r. Then, the user generates r_1 = L(i, k_1, n), r_2 = L(j, k_2, n), r_3 = L(β, k_3, 2n), V = T(a, r_1), U = T(a, r_1), Z = T(b, r_2), and S = T(b, r_2). In this way, one must know r_1 to recover a from V or U and must know r_2 to recover b from Z or S. The computational nodes evaluate the discrete Fourier transform for a reserved vector (the function DFTRV, which takes a vector r and an index i as input) on the uploaded data, while the user computes equations (3) and (4) locally and then verifies equations (5)-(7). If they are valid, the computing succeeds; otherwise, the computing fails. The user sends a message to the smart contract. Then, the latter calls the function VERIFICATION AND PAYMENT. In Algorithm 6, if any verification fails, the user reports a cheating event and the algorithm comes to an end. Figure 2 demonstrates the procedures and data communications in the six steps of Algorithm 6.
Correctness and Complexity. Using the DFTRV/IDFTRV in Algorithm 6, computing the Fourier transforms of r_1 and r_2 and the inverse Fourier transform of r_3 needs 4n multiplications.
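Before continuing with the operation count, the high-level flow of Algorithm 6 (outsource the two forward transforms, perform the O(n) pointwise correlation locally, outsource the inverse transform) can be sketched as follows. The masking vectors r_1, r_2, r_3, the matrix splitting, and the verification steps are deliberately omitted, and the untrusted_* functions merely stand in for the computational nodes; this is an assumption-laden illustration, not the algorithm itself:

import numpy as np

def untrusted_dft(vec):
    # Stand-in for remote computational nodes; in the real scheme the
    # coefficients are first split and masked before being sent out.
    return np.fft.fft(vec)

def untrusted_idft(vec):
    return np.fft.ifft(vec)

def outsourced_poly_mult(a, b):
    # Multiply polynomials a(x) and b(x) given as ascending coefficient
    # lists. Locally the user only performs the O(n) pointwise product;
    # the O(n log n) transforms are delegated to the nodes.
    n = len(a) + len(b) - 1
    fa = untrusted_dft(np.pad(a, (0, n - len(a))))
    fb = untrusted_dft(np.pad(b, (0, n - len(b))))
    fc = fa * fb                              # local correlation, O(n)
    c = untrusted_idft(fc)
    return np.rint(c.real).astype(int)

# (1 + 2x)(3 + x) = 3 + 7x + 2x^2
print(outsourced_poly_mult([1, 2], [3, 1]))   # [3 7 2]

The point of the full algorithm is that the same O(n) local cost is retained even after the masking and the checks of equations (3)-(7) are added.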
Because of the characteristics of r_1, r_2, and r_3, only one multiplication is needed to compute each term in F_r1, F_r2, and F^-_r3. Computing F_a and F_b needs 2pn additions. Computing F_c needs 2n multiplications. The verification of equations (3)-(5) takes 6(p − 1)n additions. Another 6(p − 1)n additions are needed to compute c. The final verification (equations (6) and (7)) needs l + 2n multiplications. To sum up, we need l + 8n multiplications and 10pn − 6n additions. The local complexity of multiplication in this algorithm is O(n), and the local complexity of addition is O(n). Therefore, the local complexity of this algorithm is O(n).
Security against Passive Attackers. A participant may be a passive attacker. Passive attackers will follow the scripts of the algorithm while exploiting the intermediate information to breach the privacy of the polynomials. In the following, we analyze the security of our algorithm against passive attackers. The algorithm should protect the privacy of f(x), g(x), c, and F_c. As is well known, when all nodes collude, the passive attackers obtain the most information, and the security of the private data is lowest. Since the operations on f(x) and g(x) are the same, the risks of privacy leakage for them are the same. We analyze the security of f(x) in the worst case, i.e., collusion of all nodes. When all the computational nodes collude, they can guess a set of values [a_0', a_1', . . . , a_(n-1)'], in which n − 1 values are consistent with the true coefficients of f(x), while one value is not. Because of r_1, they do not even know the position of the false value. They still have to resort to brute-force guessing. If a_i ∈ D, where the cardinality of D is m, the attackers would have to traverse all possibilities by taking m different values for each coefficient. In this case, the attackers have to make m^n attempts to get f(x). However, a_i ∈ Z in FHEHIL. Then m → ∞, and the attackers cannot get f(x). F_c and c are also privacy-protected. For the security of F_c, when all the computational nodes collude, the passive attackers can guess a set of values F_1, in which 2n − 2 values are consistent with the true coefficients of F_c, while one value is not. However, the existence of r_3 means that passive attackers do not know the position of the false value. The only way to attack is by brute-force guessing. As with a and b, the domain of the coefficients of F_c is infinite. Therefore, the attackers cannot get F_c. For the security of c, on the one hand, the attackers cannot compute c using the inverse Fourier transform without knowing F_c. On the other hand, it is easy to see that, lacking F^-_ra, the attackers cannot obtain c from the returned values.
Security against Active Attackers. A participant may also be an active attacker. Active attackers will inject false computations into the algorithm to tamper with the whole process. In the following, we analyze the security of our algorithm against active attackers. Active attackers may return forged values to damage the computation. To damage the computation without being detected, attackers prefer to make minimal changes to the results. In Algorithm 6, it is easy to see that the lowest-risk way for computational nodes to cheat is to tamper with only one item of the results returned to the user, while the other items are correct. There is one way to cheat in the process of securely outsourcing the Fourier transform. For example, the nodes computing the DFT of f(x) perform honestly, while the nodes computing the DFT of g(x) do not.
One node n_j changes the i-th term in F_zj and another node n_k' also changes the i-th term in F_sk. This way of cheating can nullify the verification at equations (3) and (4), but the resulting value is not equal to c_l for any random l. The verification at equation (6) can certainly detect this cheating. There is another way to cheat in the process of securely outsourcing the inverse DFT of F_c. One node n_j changes the i-th term in F^-_dj and another node n_k' also changes the i-th term in F^-_ek. In this way, the attackers can nullify the verification at equation (9) and return a false item c_i. The false item c_i causes F_c'[m] = Σ_{i=0}^{2n−2} W_{2n−1}^{mi} c_i to also be a false value in equation (7). F_c'[m] will not be equal to F_c[m] for all 0 ≤ m ≤ 2n − 2. The verification at equation (7) can certainly detect this way of cheating.
Secure Outsourcing of Modular Exponentiation. For the secure outsourcing of modular exponentiation, we extend the algorithm in [3] and apply it to the blockchain. In our extension, six modular exponentiation pairs are outsourced to six computational nodes, instead of a single cloud, aiming to protect against possible attacks on small discrete logarithms. The process is shown as Algorithm 7: the input is two integers and the output is u^d, where u is the base and d is the exponent; (1) the user generates random parameters g_1, g_2, e, k_1, k_2 ∈ Z, (2) and computes v_1 ← g_1^e, v_2 ← g_2^e, w_1 ← u/g_1, w_2 ← u/g_2; (4) the user uploads (k_1, v_1), (k_1, v_2), (l_1, w_1), (k_2, w_1), (l_1, w_2), (k_2, w_2) to the smart contract; (5) the smart contract distributes these pairs to six computational nodes; (6) the computational nodes compute b^a after receiving a pair (a, b) and return the results to the smart contract. In a similar way to Figure 2, Algorithm 7 can be implemented in the framework of blockchain-based computation outsourcing, but we omit the figure due to space limitations.
Correctness and Complexity. It is easy to prove that the algorithm is correct from equations (8) and (9). In the process of parameter generation, there are two exponentiations, two divisions, and two multiplications. Two exponentiations and six multiplications are involved during the verification. Compared with exponentiation, the complexity of multiplication and division can be ignored. Moreover, in the algorithm, the exponents are e and t_1, which are much smaller than the original exponent d through the transformation t_1 = d − k_1 e. Therefore, the local complexity is greatly reduced.
Security. The only way to pass the verification is that the six computational nodes perform correctly. The forged results of active attackers cannot pass the verification in Step 8. The user only needs to know whether the results are correct or not, and the smart contract can detect the cheating nodes according to the records. We analyze the security against passive attackers in the worst case, i.e., a conspiracy of the six computational nodes. The exponents k_1, l_1, and k_2 are visible to attackers, while the other exponents t_1, d, and e are not. The bases v_1, v_2, w_1, and w_2 are visible to attackers, while g_1, g_2, and e are not. We discovered that the privacy of u may leak in [3], which sends the six pairs to a single node in the cloud. In [3], the base and exponent are about 1000 bits long, while the parameters including g_1, g_2, e, k_1, and k_2 are only 64 bits long, to reduce the overhead of local computation. The shorter bit length of these parameters may enable an easier attack on the small discrete logarithms. In this kind of attack, an attacker in the cloud can exhaustively search for an exponent x satisfying the corresponding relation among the visible values; then e is breached. The attacker then exhausts g satisfying g^e = v_1. Finally, the cloud can obtain u by w_1 * g. We solve this attack by distributing the six modular exponentiation pairs to six computational nodes, which increases the difficulty of the above attack.
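As a toy illustration of why the exponent transformation t_1 = d − k_1·e reduces the local work, the sketch below offloads the single large exponentiation and keeps only small exponents locally. It is a simplification for intuition only: it does not provide the base blinding, the six-pair distribution, or the checkability of Algorithm 7, and all names are ours:

import random

def outsourced_modexp(u, d, p, node):
    # Compute u^d mod p with only small local exponents.
    # Split d = k*e + t with a small random e; the node performs the large
    # exponentiation (exponent k), the user keeps exponents e and t.
    # Toy version: it offloads work but does not hide u or d as the full
    # scheme of [3] does.
    e = random.randrange(2, 2**16)
    t = d % e
    k = d // e                    # d = k*e + t
    base = pow(u, e, p)           # small local exponentiation
    remote = node(base, k, p)     # node returns base^k mod p
    return (remote * pow(u, t, p)) % p

honest_node = lambda b, k, p: pow(b, k, p)
p = 2**127 - 1
u, d = 123456789, 2**100 + 12345
assert outsourced_modexp(u, d, p, honest_node) == pow(u, d, p)

In Algorithm 7 the base is additionally blinded via w_i = u/g_i and the six returned values are cross-checked, which is what yields the privacy and verifiability analyzed above.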
Results and Discussion
In this section, we conduct three types of experiments. Firstly, we evaluate the efficiency of the secure outsourcing of polynomial multiplication for various numbers of polynomial multiplications and compare it with the traditional nonoutsourcing method using the FFT. Secondly, we evaluate the efficiency of the secure outsourcing of polynomial multiplication by varying the number of terms and the bit length of the coefficients, and again compare it with the nonoutsourcing method. Finally, we implement the secure outsourcing of FHEHIL on the blockchain, analyze the time consumption of each step, and compare it with the nonoutsourcing method. The experiments are simulated on two machines: one with an Intel Core i7 processor running at 2.90 GHz and 16 GB memory as the cloud server, and one with an Intel Core i5 processor running at 1.80 GHz and 8 GB memory as the local user. The communication bandwidth is 20 Mbps.
The Evaluation of Secure Outsourcing of Polynomial Multiplication. We conduct experiments to evaluate the efficiency of the secure outsourcing algorithm for polynomial multiplication. We implement this experiment in Python3. We compare the secure outsourcing of polynomial multiplication with the nonoutsourcing algorithm in terms of time consumption for different numbers of polynomial multiplications, with n = 1024, p = 3, and all coefficients 512 bits long. As demonstrated in Figure 3, when the number of polynomial multiplications is less than 60, the efficiency of the outsourcing scheme is lower than that of the nonoutsourcing scheme due to the communication time. However, when the number of polynomial multiplications increases above 60, the efficiency of the outsourcing scheme becomes higher than that of the nonoutsourcing scheme. When the number of polynomial multiplications is less than 300, the bottleneck of the outsourcing scheme is the time consumed by the nodes' computations and interactions. When the number of polynomial multiplications becomes larger, the bottleneck is the time consumed by local computation. We conduct another type of experiment to analyze the influence of the number of terms and the bit length of the coefficients on the efficiency of the secure outsourcing of polynomial multiplication. We count the time consumption of 400 random polynomial multiplications, with the number of polynomial terms varying from 50 to 1000 and the coefficient bit length varying from 50 to 1000. Figure 4(a) demonstrates that the time consumption increases with the number of polynomial terms and the bit length of the coefficients. Moreover, the number of terms has a more pronounced effect on time consumption. Besides, compared with nonoutsourcing polynomial multiplication, our method always has higher efficiency at all data scales, as shown in Figure 4(b).
The Evaluation of the Blockchain-Based Secure Outsourcing Scheme of Fully Homomorphic Encryption Using Hidden Ideal Lattice. We employ the relevant security parameters recommended in [2], i.e., n = 1024, t = 310, and p = 3.
Our outsourcing scheme consists of the local user's program and the computational nodes' program, and it is compared with the nonoutsourcing scheme. The programs are written in Python3, and the smart contract based on the Ethereum platform is written in Solidity. The smart contract interacts with the computational nodes' program and the local program through the interface provided by Web3. Figure 5 demonstrates the running time at all stages of the two schemes. This figure does not display the time consumed by generating parameters in FHEHIL because that is not what we are improving. In the process of computing w, the efficiency is slightly improved. Compared with the nonoutsourcing scheme, our scheme saves about 2.6 s. The overall time consumption is improved by about 40.7% (the unmarked areas in Figure 5 are the communication time for interacting with the blockchain). Table 3 shows the detailed time consumption of the different entities (user, smart contract, and computational nodes) in the different stages (verification, communication, DFTRV/IDFTRV, FFT/IFFT, and other computations) for key generation. Table 4 shows the detailed time consumption of the different entities in the different stages for encryption. The time consumption of decryption is not shown in Figure 5. Since there is only one polynomial multiplication, the communication time is dominant in the process of decryption, as illustrated in Figure 3. Therefore, the time consumption of outsourced decryption (0.379 s) is larger than that of nonoutsourced decryption (0.103 s).
Conclusions
In this paper, we propose a secure outsourcing algorithm for polynomial multiplication that reduces the local complexity to O(n). According to the security analysis, our algorithm is secure against passive and active attackers. We also propose a framework for blockchain-based computation outsourcing. It has a credit-based task allocation strategy, which significantly reduces the probability of failed computations. Using this framework, we implement the secure outsourcing of FHEHIL, in which the basic computations, including polynomial multiplication and modular exponentiation, can be securely outsourced by our proposed algorithms. The security analysis and experimental results show that our proposed outsourcing schemes are secure and efficient. In the future, we will apply the secure outsourcing of FHEHIL to some practical secure computation problems, such as the millionaire problem and set operation problems.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Disclosure
The conference version of this paper has been published in the 21st International Conference on Parallel and Distributed Computing, Applications, and Technologies (PDCAT 2020).
8,825.6
2021-01-01T00:00:00.000
[ "Computer Science" ]
Slepton pair production at the LHC in NLO+NLL with resummation-improved parton densities
Novel PDFs taking into account resummation-improved matrix elements, albeit only in the fit of a reduced data set, allow for consistent NLO+NLL calculations of slepton pair production at the LHC. We apply a factorisation method to this process that minimises the effect of the data set reduction, avoids the problem of outlier replicas in the NNPDF method for PDF uncertainties and preserves the reduction of the scale uncertainty. For Run II of the LHC and for left-handed selectron/smuon, right-handed and maximally mixed stau production, we confirm that the consistent use of threshold-improved PDFs partially compensates the resummation contributions in the matrix elements. Together with the reduction of the scale uncertainty at NLO+NLL, the described method further increases the reliability of slepton pair production cross sections at the LHC.
For consistency, hadronic cross sections should be computed with the same level of accuracy in both the partonic cross section and the PDFs. With the publication of the NNPDF30 nll disdytop PDFs [41], this has now become possible at the NLO+NLL level. However, since only a reduced number of partonic cross sections that usually enter global fits are available with NLO+NLL precision, only a subset of the corresponding data sets, i.e. on DIS, DY and top pair production, could be used in the determination of these threshold-resummation improved PDFs. The consequence is that the PDF uncertainty band, produced with the NNPDF replica method, is actually larger, not smaller, than with globally fitted NLO PDFs. One of the examples used in ref. [41] is in fact slepton pair production, calculated with RESUMMINO. However, only one invariant-mass distribution for left-handed selectrons of mass 564 GeV is studied there. The NNPDF30 nll disdytop PDFs have subsequently been employed to investigate the effect of threshold-resummation improved PDFs on squark and gluino production [42]. The authors showed that the total central cross sections were modified in both a qualitative and a quantitative way, illustrating the relevance and impact of using threshold-resummation improved PDFs. They introduced in particular a factorisation (K-factor) method, described in detail below, that allows one to combine the impact of NLO+NLL threshold resummation on the reduced central PDF fit with the smaller uncertainty of the global NLO PDF fit, leading to approximately consistent NLO+NLL hadronic cross sections. This method also allows one to avoid the problem of exceedingly large, small or even negative cross sections induced by outlier replicas and the lack of positivity constraints on the NNPDF PDFs. The replica method is known to be particularly problematic for resummation calculations, which rely on a transformation of PDFs to Mellin space from all regions of x, including those where they are not well constrained [31]. The purpose of this paper is therefore threefold. First, our NLO+NLL predictions for slepton pair production are updated to the current LHC collision energy of 13 TeV. Second, we employ with NNPDF3.0 an up-to-date global set of PDFs, which is now also based on ATLAS and CMS data from jet, vector-boson and top-quark production [43]. This allows us to reach our third and central goal of studying the impact of threshold-resummation improved PDFs not only on differential, but also on total slepton pair production cross sections and for a variety of SUSY scenarios.
The remainder of this paper is organised as follows: in section 2 we describe our theoretical approach using the K-factor method and how we combine NLO+NLL resummation effects with global PDF and also scale uncertainties. In section 3, we present numerical results for differential and total cross sections of left-handed first- and second-generation sleptons in graphical and tabular form. We do this for pp collisions of 13 TeV centre-of-mass energy and various slepton masses relevant for Run II of the LHC. Similar results are presented in section 4 for third-generation sleptons, i.e. right-handed or maximally mixed staus. Our conclusions are given in section 5.
Theoretical method
Apart from updating our NLO+NLL predictions for slepton pair production to the current experimental conditions at Run II of the LHC, i.e. to proton-proton collisions with 13 TeV centre-of-mass energy, and with recent PDF sets from the global NNPDF3.0 fits, which are now also based on ATLAS and CMS data from jet, vector-boson and top-quark production [43], the central goal of this work is to quantify the impact of threshold-resummation improved PDFs on our predictions. These PDFs, called NNPDF30 nll disdytop, have only recently been made available by the NNPDF collaboration [41]. They are based on a similar setup as those at leading order (LO) and NLO, but use partonic matrix elements at NLO+NLL, albeit so far only for a smaller set of processes (Deep-Inelastic Scattering, Drell-Yan and top-quark pair production), for which these matrix elements at NLO+NLL are available. Together with this NNPDF30 nll disdytop fit, an NLO fit (NNPDF30 nlo disdytop) based on the same subsample of processes and data sets has been provided. Unfortunately, the reduction of the input data set reduces the precision of these fits, and consequently they currently still have larger uncertainties than the global sets. Threshold-resummation improved PDFs have previously been applied to squark and gluino production cross sections at NLO+NLL. The result there was that their effect cannot be neglected, as it modifies both the qualitative and the quantitative behaviour of the sparticle pair production cross sections [42]. In order to eliminate the impact of the reduction of the fitted data set, the authors introduced a K-factor, defined in eq. (2.1). Varying the NLO global PDFs in the NLO cross section computed with the global fit, with their reliable spread, then produces a reliable (approximate) NLO+NLL global PDF error. In eq. (2.1), the first ratio takes into account the effect of the resummation in the partonic matrix elements using the NLO PDF fit of the global data set, whereas the second ratio parameterises the impact of the threshold-resummation improved partonic matrix elements on the NLO+NLL PDF fit of the reduced data set. As explained above, we use the NLO global NNPDF3.0 PDF set [43] for the computation of the first ratio related to the global part, while the NLO+NLL and NLO PDF fits based on the reduced data set [41] enter the evaluation of the second ratio. A definition equivalent to eq. (2.2) is adopted for invariant-mass distributions by simply replacing the total integrated cross section σ with the differential cross section in the slepton-pair invariant mass, dσ/dM. With the method described above, we also bypass a known issue with the NNPDF approach to PDF uncertainties.
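One schematic way to read eqs. (2.1) and (2.2), as described in the text above, is sketched below in Python; all numerical values are hypothetical placeholders, and in practice they come from RESUMMINO runs with the corresponding PDF sets:

import math

# Hypothetical cross sections in fb, for illustration only.
sig_nlo_global      = 1.00   # NLO matrix elements, global NLO PDFs
sig_nll_global      = 1.02   # NLO+NLL matrix elements, global NLO PDFs
sig_nll_reduced_nll = 0.97   # NLO+NLL matrix elements, NLO+NLL PDFs (reduced fit)
sig_nll_reduced_nlo = 1.01   # NLO+NLL matrix elements, NLO PDFs (reduced fit)

# Eq. (2.1): resummation effect with the global fit times the PDF-fit
# effect evaluated on the reduced data set.
K = (sig_nll_global / sig_nlo_global) * (sig_nll_reduced_nll / sig_nll_reduced_nlo)

# Eq. (2.2): approximate NLO+NLL prediction with global PDFs.
sig_nll_approx = K * sig_nlo_global

# Total theoretical uncertainty: relative PDF (NLO, global replicas) and
# scale (NLO+NLL, seven-point) errors added in quadrature.
rel_pdf, rel_scale = 0.03, 0.01
rel_total = math.hypot(rel_pdf, rel_scale)

print(f"K = {K:.3f}, sigma(NLO+NLL) ~ {sig_nll_approx:.3f} fb +/- {100*rel_total:.1f}%")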
In order to compute this systematic error, the NNPDF collaboration fit a large number of Gaussian-distributed replicas of the experimental data without imposing ad-hoc conditions on the shape or positivity of the PDFs to avoid theoretical bias or underestimation of the resulting PDF uncertainty. The PDF uncertainty on any observable results from applying to it the ensemble of the replica fits. However, positivity is, and in practice can only be, checked for a subset of observables. The complication encountered in resummation calculations is that the PDF replicas have to be transformed to Mellin space, i.e. be integrated over regions in x where they are not well constrained, which can then lead to unphysically large variations of the resummed cross sections [31]. At NLO and in x-space, one can directly eliminate the replicas that feature such misbehaviour [43]. Alternatively, one can consider only the 68% CL interval by eliminating the replicas leading to the lowest and highest 16 cross sections, respectively, and use the midpoint of this interval as the central prediction [44]. Note that this prescription leads to central predictions that differ substantially from those obtained with the central PDF fit. The K-factor method described above provides a more elegant solution to the problem of outlier replicas entering resummation predictions by avoiding their transformation to Mellin space altogether. According to eq. (2.1) only the NLO global, NLO reduced and NLO+NLL reduced central fits have to be transformed, and according to eq. (2.2) the global replicas have to be applied only at NLO and in x-space. The K-factor method conceals one important benefit of the resummation calculation, which is the sizeable reduction of scale uncertainties. This reduction from NLO to NLO+NLL would be lost if the scales were varied only in the NLO cross section in eq. (2.2) with respect to the central scale µ_R = µ_F set equal to the slepton mass. It is therefore evaluated directly in the NLO+NLL (or NLO) cross sections by applying the usual seven-point method of relative factors of two, but not four, among the two types of scales. The total theoretical uncertainty is then obtained by adding the relative PDF and scale uncertainties in quadrature.
Left-handed selectron/smuon pair production
In this section, we study the effects of the threshold-improved NLO+NLL PDFs as implemented in LHAPDF6 [45] on invariant-mass distributions and total cross sections for left-handed first- and second-generation slepton pair production using RESUMMINO [25]. If we assume as usual universality of the corresponding soft SUSY-breaking masses and do not take into account branching ratios and experimental efficiencies, selectron and smuon production cross sections are identical. We set all SM parameters to their current PDG values [46] and use α_s(M_Z) = 0.118 with Λ_MSbar(n_f = 5) = 0.239 GeV as appropriate for NNPDF3.0. In the upper panel of figure 1, we show the invariant-mass distribution of left-handed selectron/smuon pairs with a fixed slepton mass of 564 GeV. This mass is identical to the one chosen in ref. [41] in order to facilitate a straightforward comparison of our results. The invariant mass distributions, computed at LO (dotted green), NLO (dashed blue) and NLO+NLL (full red line) in the matrix elements, but always with global NLO NNPDF3.0 PDFs, exhibit the typical rise above the pair production threshold to an invariant mass of about 1.4 TeV and a subsequent fall-off.
At the maximum, an increase of about 16% is visible from LO to NLO with an increase of another 2% from NLO to NLO+NLL, which then rises to 3% at an invariant mass of 3 TeV, as expected. The lower panel of figure 1 shows the K-factor as defined in eq. (2.1) (full red) as well as its second part (dashed blue line) that comes from the change of PDFs alone. The latter amounts to a decrease of more than 10% at high invariant mass, which is partially compensated by the NLL corrections in the matrix elements. At low invariant mass, one observes even an overcompensation, such that the total K-factor is slightly larger than unity. This effect was also observed in figure 17 of ref. [41]. Our results agree quite well with theirs despite the fact that they show a slightly different factor. This alternative K-factor thus also includes the impact of the reduction of the data set in the PDF fits from NLO to NLO+NLL, which we preferred to remove from our analysis and which mostly impacts the region of large invariant mass or parton momentum fraction, where the PDFs are not well constrained. A second difference in our figure is our (yellow) PDF uncertainty band, which is based on the more reliable global NLO fit, while the (dashed red) band in ref. [41] is based on the reduced data set and thus considerably larger. Furthermore, we also show the (green) scale uncertainty obtained with the usual seven-point method, which, as expected from the reduction due to NLL resummation contributions, contributes only little to the total (dashed red) uncertainty (added in quadrature). Remember that the scale uncertainty has been computed directly at NLO+NLL using the global NLO PDFs and has then been rescaled appropriately (cf. section 2). To estimate the size of (N)NNLL over NLL threshold resummation effects in both the matrix elements and the PDFs, it is instructive to compare the invariant mass distributions for slepton pairs in the lower right plot of figure 7 of ref. [28] and of the quark-antiquark luminosities in the upper left plots of figures 13 and 14 of ref. [41]. While the former lie at the upper end of the NLL uncertainty band, the latter are reduced by 1-2% with respect to NLL, confirming again further increased perturbative stability and additional partial compensation of resummation effects in the PDFs. In figure 2 we show similar results, but now for the total cross section as a function of the selectron/smuon mass. As can be seen in the upper panel, left-handed sleptons should have been produced in significant numbers already at Run II of the LHC, with luminosities recorded in 2016-2017 by ATLAS and CMS of 35-50 fb^-1 each, over most of the mass region shown. Indeed, current left- (right-) handed slepton mass limits reach values of 400 (290) GeV, but they depend strongly on the mass splitting with the lightest SUSY particle, usually assumed to be the lightest neutralino χ_1^0 [34,37]. The central K-factor (full red line) in the lower panel shows that the NLO+NLL PDFs reduce not only the invariant-mass distribution, but also the total cross section by up to 4% for large slepton masses, where their effect partially compensates again the impact of the NLL corrections in the matrix elements. This result agrees with the one for squark pair production in figure 8 and with the result for the underlying quark-quark luminosity in figure 7 of ref. [42]. The difference with the quark-antiquark luminosity is of minor importance in the sea-quark region.
For large slepton masses, the total K-factor from NLO to NLO+NLL is larger not only than the scale uncertainty but also than the PDF uncertainty. In contrast, the impact of the NLO+NLL matrix elements on the PDF fit alone falls within this uncertainty. We conclude this section, for future use, e.g. by the LHC experiments, with explicit results in table 1 on the total cross sections at LO, NLO and NLO+NLL, which have been obtained with consistent PDF choices using eq. (2.2), and on the corresponding theoretical uncertainties. The central NLO+NLL results have been obtained with the K-factor method, while the NLO+NLL (asymmetric) scale uncertainties have been computed directly, and the PDF (symmetric) uncertainties at NLO. The latter are therefore identical in the last two columns.
Table 1. Total cross section for first-generation slepton pair production at the LHC with √s = 13 TeV as a function of the slepton mass at LO, NLO and NLO+NLL with consistent PDF choices. The central NLO+NLL results are obtained with the K-factor method, whereas the NLO+NLL (asymmetric) scale uncertainty has been computed directly, and the PDF (symmetric) uncertainty at NLO (identical in the last two columns).
Right-handed and mixed stau pair production
In this section we repeat the analysis of section 3 for right-handed and maximally mixed stau pair production. Since the off-diagonal elements of the sfermion mixing matrices are proportional to the SM fermion mass, mixing of chirality superpartners is only important for third-generation sfermions, i.e. in our case for tau sleptons. At the cross section level, left-handed stau cross sections are identical to those for selectrons and smuons, so that we do not show the corresponding results again. Experimentally, the analysis for staus is, of course, very different, since their decay products are unstable tau leptons, which are not directly measured in the tracking systems, electromagnetic calorimeters or muon chambers, but which have to be reconstructed themselves from hadronic [47,48] and/or leptonic [36] decay products. In the upper panels of figure 3 we show the total cross sections for right-handed (left) and maximally mixed (right) stau pairs computed at LO (dotted green), NLO (dashed blue) and NLO+NLL (full red line) with global NLO PDFs. We follow here the experimental analysis in ref. [48] and extend it from masses of 400 GeV to 600 GeV. The total cross sections for right-handed staus are clearly smaller than those for left-handed sleptons in figure 2, with those for maximally mixed staus lying between the two extremes. Compared to the cross sections in Run I of the LHC at √s = 7 TeV (8 TeV), as listed in table 1 (2) of ref. [31], they are increased by a factor of 2.5 (2) at low slepton masses and up to a factor of 10 (5) at high slepton masses. The PDF update from the NLO fit of CT10 [38] used in ref. [31] to the global NLO fit of NNPDF3.0 used here changes the NLO+NLL cross sections insignificantly at low slepton masses and by up to 5% at high slepton masses, which fell well into the CT10 PDF uncertainty, but exceeds the current NNPDF3.0 PDF uncertainty. Although the total cross sections are relatively large over the full mass range shown and should have led to the production of staus in significant numbers at the LHC, only upper limits on the cross sections could so far be derived by ATLAS in Run I [47] and by CMS in Run II [48] in the purely hadronic decay channel, and by CMS, for left-handed staus, in the (semi-)leptonic decay channel(s) [36].
The lower panels of figure 3 show the corresponding K-factors according to the full expression of eq. (2.1) (full red) and to its second, PDF-dependent part only (dashed blue line), together with the PDF (yellow), scale (green) and total theoretical uncertainty (dashed red). The QCD corrections turn out to be largely independent of the weak coupling structure of the underlying partonic cross section, so that the dependence on the weak couplings cancels in the ratios and no differences are visible in the K-factors with respect to those of first-generation left-handed sleptons in the lower panel of figure 2. For better readability and future use, we end this section by listing consistent total cross sections for right-handed and maximally mixed stau pair production at LO, NLO and NLO+NLL in tables 2 and 3. The central NLO+NLL results have again been obtained with the K-factor method, while the NLO+NLL (asymmetric) scale uncertainties have been computed directly, and the PDF (symmetric) uncertainties at NLO using the global NNPDF3.0 fits. The latter are therefore again identical in the last two columns.
Table 3. Same as table 1, but for the pair production of maximally mixed staus.
Conclusion
To summarise, we have studied in this paper the effect of modern NLO+NLL PDFs on consistent NLO+NLL predictions for slepton pair production at Run II of the LHC. Compared to previous work by us and other authors, we have updated the analysis of left-handed selectron or smuon as well as right-handed and maximally mixed stau pair production to the current LHC centre-of-mass energy of 13 TeV. Also, cross sections at LO, NLO and NLO+NLL have been computed with NNPDF3.0 PDFs from a global fit at NLO and their uncertainties as estimated with the replica method, as well as from a fit based on threshold-resummation improved NLO+NLL matrix elements of a reduced set of observables (DIS, DY and top pair production). We applied a factorisation method proposed previously that minimises the effect of the data set reduction in the PDF fits, avoids the known problem of outlier replicas, and preserves the reduction of the scale uncertainty in our resummation calculation. Apart from the generally known fact that hadronic cross sections increase significantly at higher collision energies, we also observed slightly larger cross sections, in particular for large slepton masses, due to the NLO PDF update. We confirmed that the consistent use of threshold-improved PDFs partially compensates resummation contributions in the matrix elements. Together with the reduced scale uncertainty at NLO+NLL, the described method further increases the reliability of slepton pair production cross sections at the LHC. The new method has been implemented for sleptons in the public code RESUMMINO.
4,555.6
2018-03-01T00:00:00.000
[ "Physics" ]
Objective: Cerebral blood flow (CBF) plays a critical role in the maintenance of neuronal integrity, and CBF alterations have been linked to deleterious white matter changes. Although both CBF and white matter microstructural alterations have been observed within the context of traumatic brain injury (TBI), the degree to which these pathological changes relate to one another and whether this association is altered by time since injury have not been examined. The current study therefore sought to clarify associations between resting CBF and white matter microstructure post-TBI. Methods: 37 veterans with history of mild or moderate TBI (mmTBI) underwent neuroimaging and completed health and psychiatric symptom questionnaires. Resting CBF was measured with multiphase pseudocontinuous arterial spin labeling (MPPCASL), and white matter microstructural integrity was measured with diffusion tensor imaging (DTI). The cingulate cortex and cingulum bundle were selected as a priori regions of interest for the ASL and DTI data, respectively, given the known vulnerability of these regions to TBI. Results: Regression analyses controlling for age, sex, and posttraumatic stress disorder (PTSD) symptoms revealed a significant time since injury × resting CBF interaction for the left cingulum (p < 0.005). Decreased CBF was significantly associated with reduced cingulum fractional anisotropy (FA) in the chronic phase; however, no such association was observed for participants with less remote TBI. Conclusions: Our results showed that reduced CBF was associated with poorer white matter integrity in those who were further removed from their brain injury. Findings provide preliminary evidence of a possible dynamic association between CBF and white matter microstructure that warrants additional consideration within the context of the negative long-term clinical outcomes frequently observed in those with history of TBI. Additional cross-disciplinary studies integrating multiple imaging modalities (e.g., DTI, ASL) and refined neuropsychiatric assessment are needed to better understand the nature, temporal course, and dynamic association between brain changes and clinical outcomes post-injury.
Introduction
Traumatic brain injury (TBI) has come to be known as the predominant injury of U.S. Veterans returning from the recent wars in Iraq and Afghanistan (Hoge et al., 2008). Of the nearly two million military service members that have been deployed since the beginning of these wars, estimates suggest that an astounding 15-25% of these individuals have sustained at least one TBI during deployment (Fortier et al., 2014; Hoge et al., 2008; Terrio et al., 2011; Warden, 2006). The vast majority of these injuries can be classified as either mild or moderate (Defense and Veterans Brain Injury Center, 2016), and are often the direct result of either blunt-force (i.e., direct blow to the head) or blast-related (i.e., pressure wave from an explosive device) trauma. While most Veterans who experience mild neurotrauma do not require immediate or emergency medical care at the time of injury, a host of troubling cognitive (e.g., executive dysfunction, attention and memory deficits) (Combs et al., 2015; Vanderploeg, Curtiss, & Belanger, 2005), post-concussive (e.g., headaches, dizziness, fatigue) (King et al., 2012; Lippa, Pastorek, Benge, & Thornton, 2010), and psychiatric symptoms (e.g., anxiety, depression) (Brenner, 2011; Yurgil et al., 2014) frequently emerge post-injury.
Collectively, these enduring neurobehavioral symptoms contribute to considerable health care costs (Stroupe et al., 2013; Tanielian & Jaycox, 2008), and they play a fundamental role in the frequently reported decreased quality of life (Schiehser et al., 2015) and increased rates of disability and unemployment observed in Veterans with history of head injury (Lippa et al., 2015). Importantly, although most individuals with mild TBI appear to fully recover within about one year post-injury, a subset of individuals-oftentimes referred to as the "miserable minority"-continue to experience long-term cognitive, psychiatric, and behavioral difficulties (Bigler, 2013a, 2013b; Ruff, Camenzuli, & Mueller, 1996; Vanderploeg, Curtiss, Luis, & Salazar, 2007). Unfortunately, the exact neuropathological mechanisms underlying the persistent sequelae of mild neurotrauma remain poorly understood, since traditional neuroimaging techniques are generally insensitive to the subtle neuropathological changes associated with mTBI, as conventional computed tomography (CT) and magnetic resonance imaging (MRI) scans have largely yielded normal results (Bigler, 2013a, 2013b; Brenner, 2011; McAllister, Sparling, Flashman, & Saykin, 2001). The inconsistent nature of neuroimaging findings following TBI may be partially explained by the heterogeneous nature of injury, or alternatively, differences in sample characteristics, scanning parameters, and analytic techniques utilized. However, oftentimes unconsidered are (1) the dynamic relationship between brain variables of interest and (2) how time since injury may factor into brain changes. With respect to the former, studies of normal and pathological aging have consistently demonstrated that cerebral blood flow (CBF) plays a pivotal role in the maintenance of white matter (WM) tissue integrity (Burzynska et al., 2015; Chen, Rosas, & Salat, 2013; O'Sullivan et al., 2002; Salat, 2014; Steketee et al., 2016). Reduced CBF has been demonstrated to not only precede, but also directly contribute to negative WM micro- and macro-structural changes in older adults (Bernbaum et al., 2015; Brickman et al., 2009; Promjunyakul et al., 2015; Promjunyakul et al., 2016; ten Dam et al., 2007). Importantly, while both CBF and WM changes have been independently examined within TBI (Delano-Wood et al., 2015; Ponto et al., 2016; Vas et al., 2016), few studies have explored relationships between CBF and WM within this population. This is especially critical given CBF reductions could serve to exacerbate or contribute to any trauma-induced WM alterations well beyond the time of initial injury. Unfortunately, the temporal course of the neuropathological consequences of TBI remains poorly understood (Greve & Zink, 2009; Povlishock & Katz, 2005). However, there is some evidence to suggest that both CBF and WM changes may differ depending upon phase of injury (Eierud et al., 2014; Niogi & Mukherjee, 2010). For example, though findings are mixed, fractional anisotropy (FA)-a marker of WM microstructural integrity derived from diffusion tensor imaging (DTI)-has been observed to be both elevated and decreased in various studies examining those with history of TBI in the acute phase of injury relative to those without history of head trauma (Croall et al., 2014; Ling et al., 2012; Mayer et al., 2012). On the other hand, decreased FA is more commonly reported in individuals with history of TBI in the chronic phase of injury (Miller et al., 2016; Wada, Asano, & Shinoda, 2012).
Similarly, while studies vary in reporting either elevated or decreased CBF in the acute phase of injury (Doshi et al., 2015; Meier et al., 2015), decreased CBF is most commonly observed in those with history of TBI who are further removed from their initial injury when compared to controls (Fridley, Robertson, & Gopinath, 2015; Ge et al., 2009). While there is no general consensus as to what constitutes acute versus chronic phases of injury, most Veterans are many months to years removed from their initial injury (i.e., in the chronic phase) during assessment; however, there is considerable inter-subject variability in the time between injury and assessment within and across previous Veteran TBI studies (Delano-Wood et al., 2015; Jorge et al., 2012; Mac Donald et al., 2011; Miller et al., 2016). It is especially critical to take into account time since injury when relating CBF and WM integrity given there is some evidence, at least in the aging literature, to suggest that CBF changes may persist for some time before negative WM alterations are subsequently observed (Brickman et al., 2009; Promjunyakul et al., 2015; ten Dam et al., 2007). Therefore, there is a critical need to consider not only how CBF and WM relate to one another, but also how this relationship may depend on time since a TBI event. The current study sought to examine the link between resting CBF of the cingulate cortex and WM integrity of the cingulum bundle, two largely overlapping neuroanatomical regions that are known to be especially vulnerable to TBI effects (Bigler, 2007; Wu et al., 2010). Clarification of such relationships may assist in providing insight into factors influencing disparate brain findings in the TBI literature and elucidate findings that show WM degeneration may evolve over time during the chronic phase of injury (Bendlin et al., 2008; Yeh et al., 2017). We hypothesize that (1) decreased CBF of the cingulate cortex will be associated with reduced WM integrity of the cingulum bundle and (2) that this association will become more pronounced the further removed individuals are from their injuries. Importantly, findings may assist in clarifying mechanisms underlying the poor long-term outcomes and increased risk for stroke and dementia observed in those with history of TBI (Barnes et al., 2014; Burke et al., 2013; Chen, Kang, & Lin, 2011; Lee et al., 2013).
Methods
Study participants were 37 Operation Enduring Freedom, Operation Iraqi Freedom, and Operation New Dawn (OEF/OIF/OND) Veterans with history of mild or moderate TBI (mmTBI) recruited from outpatient clinics and via posted recruitment flyers at the VA San Diego Hospital (VASDH) in La Jolla, California. The institutional review boards (IRBs) at the VA San Diego Healthcare System (VASDHS) and University of California, San Diego (UCSD) approved the study, and all study participants provided written informed consent. Neuropsychological testing, TBI history interviews, and completion of questionnaires occurred at the Veterans Medical Research Foundation building located on the VASDHS campus. All MRI scanning took place at the UCSD Center for Functional MRI.
TBI diagnostic procedure
The Department of Defense (DoD)/VA TBI Task Force criteria (2009) were used for diagnosis of mild or moderate TBI. The criteria for mild TBI include loss of consciousness (LOC) < 30 min, or alteration of consciousness (AOC) or post-traumatic amnesia (PTA) < 24 h, while the criteria for moderate TBI were LOC > 30 min but < 24 h, or AOC > 24 h, or PTA > 1 day but < 7 days. Per Clark et al.
(2016) trained graduate level and post-baccalaureate research assistants completed TBI history interviews. Each study participant was assessed for both military (i.e., during enlistment in the US armed services) and non-military (i.e., prior to or after discharge from the military) related head injuries. All reported military-related injuries also include assessment of whether the mechanism of injury was blunt or blast-related. For any injury that met diagnostic criteria for mild or moderate TBI, the date of occurrence was recorded and time since the most recent TBI and date of evaluation was calculated for use in subsequent analyses. The following exclusionary criteria were applied to the study sample overall: (1) (2) prior history of major medical illnesses (e.g., myocardial infarction) or neurological conditions (e.g., multiple sclerosis, stroke); (3) current active suicidal and/or homicidal ideation, intent, or plan requiring crisis intervention; (4) current or past history of DSM-IV diagnosis of bipolar disorder, schizophrenia, other psychotic disorder, or cognitive disorder due to a general medical condition other than TBI; (5) DSM-IV diagnosis of current substance/alcohol dependence or abuse; (6) a positive toxicology screen as measured by the Rapid Response 10-drug Test Panel; and (7) any contraindications that prevented MRI scanning. Participants were included in the study if they were OEF/OIF/OND Veterans between the ages of 18-65, completed neuropsychological testing, and received both DTI and MPPCASL sequences. Health status, combat exposure, & symptom rating scales All study participants completed a background health questionnaire and height, weight, and blood pressure was collected at the time of their study visit. Exposure to wartime stressors and combat situations while on deployment was assessed using the Combat Exposure Scale (CES; Keane et al., 1989). Symptom rating scales that quantified current levels of posttraumatic stress (PTSD Checklist [PCL-M]; (Weathers et al., 1993), depression (Beck-Depression Inventory-II [BDI-II]; (Beck et al., 1996), and neurological symptoms (Neurobehavioral Symptom Inventory [NSI]; King et al., 2012) were also completed. DTI: DTI images were collected via dual spin echo EPI acquisition (Reese, Heid, Weisskoff, & Wedeen, 2003) with the following parameters: FOV = 24 cm, slice thickness = 3 mm, matrix size 128 × 128, inplane resolution = 1.875 × 1.875 mm, TR = 8000 ms, TE = 88 ms, scan time: 12 min. Forty-three slices were acquired with 61 diffusion directions distributed on the surface of a sphere in conjunction with the electrostatic repulsion model (Jones, Horsfield, & Simmons, 1999) and a b value of 1500 s/mm 2 . Collection also included one T2 weighted image with no diffusion (b = 0). Distortions due to a lack of magnetic field homogeneity were reduced via field map corrections. Resting CBF: Time-of-flight angiogram was collected with a threedimensional spoiled gradient echo sequence (FOV = 22 × 16.5 cm, slice thickness = 1 mm, 0.57 × 0.74 × 1 mm 3 resolution, TE = 2.7 ms, TR = 20 ms, flip angle 15°) in order to define the location for PCASL labeling. The imaging volume was prescribed to visualize arteries above the vertebral crossing, but below the basilar artery. Axial images were used to select the slice most perpendicular to bilateral vertebral and carotid arteries and this location was then set as the labeling plane in an effort to achieve optimal tagging efficiency for the whole brain PCASL scan. 
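Stepping back to the DoD/VA severity thresholds described in the TBI diagnostic procedure above, the short sketch below encodes them as a small Python function so the rules can be read at a glance. The function name, argument units, and the treatment of boundary or more severe cases are illustrative assumptions and are not part of the study's actual scoring procedure.

```python
# Hedged sketch of the DoD/VA severity thresholds quoted above; the function
# name, argument units (LOC in minutes, AOC/PTA in hours), and handling of
# boundary or more severe injuries are assumptions for illustration only.
def classify_tbi(loc_min=0.0, aoc_hr=0.0, pta_hr=0.0):
    """Return 'mild', 'moderate', or 'unclassified' for a single head injury."""
    # Moderate: LOC > 30 min but < 24 h, or AOC > 24 h, or PTA > 1 day but < 7 days.
    if (30 < loc_min < 24 * 60) or (aoc_hr > 24) or (24 < pta_hr < 7 * 24):
        return "moderate"
    # Mild: LOC < 30 min, or AOC < 24 h, or PTA < 24 h (with some alteration present).
    if (0 < loc_min < 30) or (0 < aoc_hr < 24) or (0 < pta_hr < 24):
        return "mild"
    return "unclassified"  # e.g., no reportable alteration, or a more severe injury

print(classify_tbi(loc_min=5))    # 'mild'
print(classify_tbi(loc_min=90))   # 'moderate'
```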
Whole-brain ASL data was acquired during a resting state using an MPPCASL sequence. Importantly, MPPCASL mitigates the adverse effects of off-resonance fields and gradient imperfections on the inversion efficiency in traditional PCASL techniques (Jung, Wong, & Liu, 2010). In MPPCASL, the blood magnetization is modulated with multiple RF phase offsets, and the resulting signal is then fit to a model function to generate a CBF estimate. Parameters included 20 5 mm thick axial slices (1 mm gap), FOV = 24 cm, matrix 64 × 64, PCASL labeling duration = 2000 ms, post-labeling delay = 1600 ms, TR = 4200 ms, TE = minimum, volumes = 60, scan time = 5 min. To achieve CBF quantification in physiological units (mL/100 g-min), a 36-s cerebrospinal fluid (CSF) reference scan was obtained to estimate of the magnetization of CSF (TR = 4000 ms, TE = 3.3 ms, NEX = 9 90°excitation pulse which is turned off for first 8 repetitions to create PDW image contrast; Chalela et al., 2000). A 32-s minimum contrast scan was also acquired to adjust for coil inhomogeneities (TR = 2000 ms, TE = 11 ms, NEX = 2) during the CBF quantification step. Finally, a field map was acquired using a spoiled gradient echo sequence to correct for field inhomogeneities (TR = 500 ms, TE1 = 6.5 ms, TE2 = 8.5 ms, flip angle 45°, scan time = 1:10 min). 2.4. Neuroimaging data processing 2.4.1. T1-weighted anatomical image processing T1 anatomical images were reconstructed and parcellated into regions of interest using FreeSurfer software (Dale, Fischl, & Sereno, 1999). Manual edits were performed to ensure proper region of interest (ROI) segmentation and gray and white matter differentiation. DTI processing DTI preprocessing utilized the Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB) Software Library (FSL) (Smith et al., 2004). Two field maps were utilized to unwarp EPI acquisitions, and all images were motion corrected and visually inspected occurred for quality control purposes. The FSL program dtifit was used for voxel-by-voxel calculation of the diffusion eigenvalues and to provide fractional anisotropy (FA), a directional measure of diffusion ranging from 0 (isotropic diffusion) and 1 (perfectly anisotropic diffusion) that is reflective of fiber integrity. Tractography TrackVis (Wang et al., 2007), using the fiber assignment by continuous tracking (FACT) algorithm, was used to generate the left and right cingulum bundle for each participant. First, a color-coded map, seen by loading the principle eigenvector image in FSL, was generated to display each voxel's main orientation of diffusion. This information, in conjunction with a non-diffusion weighted map, allowed the rater to place seed points for fiber tracking. An initial seed was placed inferior to the cingulum gyrus and superior to the corpus callosum in the coronal plane. Next, three additional seeds in the anterior portion, the middle, and the posterior portion were placed following the description of Concha, Gross, & Beaulieu (2005) to generate the entire cingulum bundle for each hemisphere. Finally, mean FA was extracted from the length of each generated tract for use in statistical analyses. See Fig. 1 for depiction of left cingulum bundle ROI used in analyses. Resting CBF Each subject's raw ASL data, field map, and anatomical data were uploaded for processing to the Cerebral Blood Flow Biomedical Informatics Research Network (CBFBIRN; cbfbirn.ucsd.edu; Shin, Ozyurt, & Liu, 2013) established at the UCSD Center for Functional Imaging. 
Field map and motion correction, skull-stripping, tissue segmentation, and conversion to absolute physiological units of CBF (mL/100 g tissue/min) were completed through CBFBIRN. Quantified CBF maps for each participant were downloaded to a local server where they were blurred to 4 mm full-width at half maximum. Next, T1 images and partial volume segmentations were registered to ASL space and down-sampled to the resolution of the ASL images using the Analysis of Functional NeuroImages (AFNI) package (Cox, 1996). A threshold was applied that removed values outside of the expected physiological range of CBF (<10 or >150; Bangen et al., 2014); then whole brain gray matter CBF and regional gray matter CBF values from the Desikan et al. (2006) atlas were extracted. Mean perfusion of the cingulate cortex was calculated as the average of the following gray matter ROIs for each hemisphere, with each region's contribution to the average weighted by the volume of the region: rostral and caudal anterior cingulate, posterior cingulate, and isthmus of the cingulate. See Fig. 1 for a lateralized depiction of the left cingulate cortex utilized in this study.
Statistical analyses
Multiple linear regressions were performed to determine (1) whether there was an association between CBF of the cingulate cortex and WM microstructural integrity of the cingulum bundle and (2) whether this association was modified by time since injury. A median split for time since injury was conducted to dichotomize TBI participants into two groups. Chi-squared analyses were utilized to compare the groups in terms of categorical variables, and analysis of variance (ANOVA) was used for continuous variables. All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) version 21 (SPSS IBM, New York, USA).
Results
Participant demographics, TBI characteristics, and symptom rating scales for the sample are presented in Table 1. Participants were predominantly young (Mean age = 33.38 years) male (89%) Veterans who were blast-exposed (49%) and experienced moderate levels of combat exposure (Mean total score of 16.23 on the Combat Exposure Scale) while on deployment. With respect to TBI injury characteristics, a greater proportion of Veterans experienced loss (65%) versus alteration of consciousness during their most significant TBI, these injuries were predominantly mild in severity (81%), and on average many months had passed since their most recent head injury (Mean time = 69.05 months). Symptom rating scales revealed participants endorsed subthreshold levels of posttraumatic stress symptoms (Mean PCL-M total score = 47.57), and depressive symptoms were moderate in severity (Mean BDI-II total score = 21.14).
Resting CBF and WM associations
A set of multiple linear regressions was performed for each hemisphere in an effort to determine if there was an association between resting CBF of the cingulate cortex and white matter microstructural integrity of the cingulum bundle. In each model, age, sex, PCL-M total score, and resting CBF of the cingulate cortex were entered as predictors. Results revealed that neither the left (β = 0.08, p = 0.67) nor the right (β = 0.03, p = 0.88) cingulate cortex CBF predicted left or right cingulum bundle FA, respectively.
Resting CBF, time since injury, and WM integrity
A second set of multiple linear regressions was performed for each hemisphere to determine whether time since injury moderated the association between resting CBF of the cingulate cortex and cingulum bundle FA.
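For readers who prefer to see the moderation model spelled out, a minimal sketch of how the regression just described might be specified is given below. The study itself used SPSS; this Python/statsmodels version, the input file name, and the DataFrame column names are illustrative assumptions only.

```python
# Illustrative sketch of the CBF x time-since-injury moderation model described
# above (the study itself used SPSS); file and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tbi_sample.csv")  # hypothetical table: one row per Veteran

# Median split on time since injury, as in the Statistical analyses section.
df["tsi_group"] = (df["months_since_injury"]
                   >= df["months_since_injury"].median()).astype(int)

# Left-hemisphere model: covariates plus the resting CBF x group interaction.
# The '*' operator expands to both main effects and their interaction term.
model = smf.ols(
    "left_cingulum_fa ~ age + sex + pclm_total + left_cingulate_cbf * tsi_group",
    data=df,
).fit()
print(model.summary())  # the interaction coefficient is the term of interest
```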
In the first model, FA of the left cingulum bundle was entered as the dependent variable; age, sex, PCL-M total score, resting CBF of the left cingulate cortex, time since injury, and the resting CBF × time since injury interaction term were entered as predictors. A significant resting CBF × time since injury interaction emerged for the left cingulum bundle (p < 0.005; see Fig. 2). (Table abbreviations: WRAT-4 = Wide Range Achievement Test, 4th edition; PCL-M = posttraumatic stress disorder checklist; BDI-II = Beck Depression Inventory, 2nd edition; NSI = neurobehavioral symptom inventory; APOE-ε4 = apolipoprotein-ε4 carrier.) Examination of simple main effects revealed that there was a significant positive correlation between resting CBF of the left cingulate cortex and left cingulum bundle FA (r = 0.48, p = 0.04, n = 19) in Veterans furthest removed from their injury (≥62 months). However, for Veterans whose injuries were more recent (<62 months), there was no significant association between resting CBF of the left cingulate cortex and left cingulum bundle FA (r = −0.19, p = 0.46, n = 18). Results did not differ when total number of TBIs was included as a covariate in a secondary set of analyses, and total number of TBIs (β = 0.13, t = 0.67, p = 0.51) was not a significant predictor of FA of the left cingulum bundle in the model. When this set of analyses was performed for the right hemisphere, there was no significant resting CBF of right cingulate cortex × time since injury interaction on FA of the right cingulum bundle (β = −0.99, t = 0.44, p = 0.66).
Group comparisons for phase of injury
In an effort to further understand the significant association between resting CBF of the left cingulate cortex and FA of the left cingulum bundle, participant demographics, TBI injury characteristics, and symptom rating scales were compared for participants who were closer to versus further removed from their injury (see Table 3). Results revealed the groups were comparable on all comparisons except for time since injury. Although not statistically significant, there was a greater proportion of individuals whose injuries were moderate (rather than mild) in severity in those further removed from their TBI. However, when a secondary set of analyses was performed in which TBI injury severity was included in the original regression model, results remained the same, and TBI injury severity was not a significant predictor of FA of the left cingulum bundle (β = 0.04, t = 0.25, p = 0.80). Moreover, sensitivity analyses revealed that when those with moderate TBIs (n = 7) were excluded from this analysis entirely, the significant interaction of resting CBF × time since injury on FA of the left cingulum bundle remained (β = 6.84, t = 3.58, p = 0.002).
Discussion
The current study explored (1) the association between neuroimaging biomarkers of CBF and WM, and (2) the potential influence of time since injury on this relationship in Veterans with history of mmTBI. Results showed an interaction between time since injury and CBF of the left cingulate cortex on WM integrity of the left cingulum bundle. Specifically, in Veterans who were furthest removed from their injury, decreased CBF was significantly associated with reduced FA of the cingulum region. These findings provide preliminary evidence for a dynamic association between CBF and WM that may also play a pivotal role in increased risk for negative health outcomes (e.g., stroke, dementia) commonly observed in individuals with history of TBI. It is possible that this dynamic association observed between CBF and WM may partially explain the mixed findings in the neuroimaging literature, particularly since the vast majority of existing studies have focused on a single neuroimaging modality.
While our understanding of the pathophysiological consequences of TBI has improved, the time course of brain changes post-injury remains less well understood. Recent evidence suggests that TBI-related brain changes are not static, but may continue to evolve many months to years following the initial insult. For example, Venkatesan et al. (2015) utilized resting state functional MRI (rs-fMRI) to explore the trajectory of connectivity patterns between the acute and chronic phase of injury in individuals with history of moderate-to-severe TBI. Results revealed that, relative to controls, the TBI group not only demonstrated altered connectivity patterns, but that these differences intensified from the acute to chronic phase of injury. In the present study, CBF and WM associations were only evident in those furthest removed from injury. It may be that this association is a manifestation of pathological processes that are characteristic of more chronic injury phases. Alternatively, as the aging literature has shown, CBF reductions may need to persist for some time before WM alterations arise (ten Dam et al., 2007;Brickman et al., 2009;Promjunyakul et al., 2015). Importantly, we cannot ascribe our results to exact causal or directional etiologies given the cross-sectional nature of this study and future studies are needed to further elucidate the time course of these dynamic relationships. Moreover, given this sample reflects mild TBI, there is also a critical need to clarify to what extent the observed findings may apply to samples comprising primarily moderate or severe injuries. Our finding of an association between CBF and WM in those most remote from their injury aligns well with existing literature demonstrating a pronounced co-variation between WM integrity and vascular function in both healthy and pathological aging samples (Burzynska et al., 2015;Chen et al., 2013;O′Sullivan et al., 2002;Steketee et al., 2016). Within the context of TBI, CBF may play an important role in identifying those at risk for secondary WM changes following injury. For example, in an emergency room sample with mild TBI, decreased CBF at baseline assessment (within hours of injury) was tightly linked with reduced WM integrity at follow-up (on average 5 months post-injury; Metting, Cerliani, Rodiger, & van der Naalt, 2013). The establishment of a relationship between CBF and WM within the context of head injury is critical, as maintenance of vascular health may be a critical point of intervention in the prevention of additional brain damage in those with history of TBI. Indeed, population-based studies have demonstrated that history of TBI is associated with increased risk for stroke (Burke et al., 2013;Chen et al., 2011), which reportedly persists for many years following the initial trauma (Chen et al., 2011). Capturing brain changes in mild TBI is difficult, and it is possible that other neuroimaging metrics not directly examined here (e.g., cortical thickness) may also influence the CBF-WM associations observed in the current study. For example, a study by Duering et al. (2012) used longitudinal MRI methods to study how subcortical infarcts influence cortical morphology post-stroke. They found that damage to subcortical white matter initiated a secondary neurodegenerative process within cortical gray matter. Moreover, structural changes in the form of cerebral atrophy have also been linked to CBF reductions in other clinical populations (Appelman et al., 2008;Wirth et al., 2016). 
Unfortunately, work teasing apart primary and secondary injury processes within the context of TBI is still in its infancy, and prospective and longitudinal study designs with well-characterized samples are needed to tease apart how brain variables may interact with one another, especially over time, and ultimately influence behavioral outcomes. Our secondary analyses revealed that the observed CBF and WM relationship in those furthest removed from their injury was not driven by fundamental differences in psychological, post-concussive, health, or injury characteristics relative to those whom were closer in time to their injury. Interestingly, both CBF and WM alterations have also been observed in those with elevated vascular risk in mid-to-late life (Beason-Held et al., 2012;Bangen et al., 2014;Maillard et al., 2015;Wang et al., 2007); however, it is unclear to what extent TBI may increase the prevalence of vascular risk factors and whether individuals with elevated vascular risk are uniquely vulnerable to negative brain changes post-TBI. Future studies that include more comprehensive assessment of vascular risk are needed to understand how history of TBI and vascular risk factors may interact to affect the brain, cognition, and functional outcomes post-injury. To our knowledge, this is the first study to investigate both ASL and DTI in the context of military TBI. However, there are several limitations that warrant discussion. Given the cross-sectional nature of this study, we cannot determine causal relationships between reduced CBF and reduced FA. Secondly, we were unable to explore whether these associations differ across mechanism of injury (i.e., blast versus blunt) or with blast-exposure given sample size restrictions. As is common with military studies of TBI, diagnosis of mild or moderate TBI was based entirely upon retrospective self-report of injuries and may therefore be subject to recall bias. We chose to examine CBF and WM of two closely linked neuroanatomical regions that are known to be vulnerable to the effects of neurotrauma; however, future studies will need to examine these effects across the brain and with other DTI metrics (i.e., axial and radial diffusivity) to further elucidate CBF and WM relationships. Replication with larger sample sizes and longitudinal designs are also needed to provide more insight into the complex associations between WM integrity and CBF at different stages post-TBI. Conclusion Taken together, results indicate that, even after adjusting for psychiatric symptomatology, an association between CBF and WM exists in those with history of mmTBI. Although the exact nature and timeline of brain changes post-TBI is unclear, CBF and WM alterations may play a pivotal role in the increased risk for negative health outcomes (e.g. stroke, dementia) that are observed in individuals with history of TBI. Currently, there is an ever-pressing need to consider how brain changes may differ with time and what might mediate or moderate these changes following injury. These findings contribute to our understanding of the possible dynamic relationship between CBF and white matter integrity, and they enhance our understanding of potential pathophysiological mechanisms that exist in the post-acute phase of injury. Compliance with ethical standards & disclosures All procedures involved in this study were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975. 
Informed consent was obtained from all patients included in the study. Alexandra Clark, Katherine Bangen, Lisa Delano-Wood, Scott Sorg, Dawn Schiehser, Nicole Evangelista, and Thomas Liu declare no conflicts of interest.
6,749.4
2020-02-07T00:00:00.000
[ "Mathematics" ]
Light from Schwarzschild black holes in de Sitter expanding universe A new method is applied for deriving simultaneously the redshift and shadow of a Schwarzschild black hole moving freely in the de Sitter expanding universe as recorded by a remote co-moving observer. This method is mainly algebraic, focusing on the transformation of the conserved quantities under the de Sitter isometry relating the black hole co-moving frame to observer’s one. Hereby one extracts the general expressions of the redshifts and shadows of the black holes having peculiar velocities but their expressions are too extended to be written down here. Therefore, only some particular cases and intuitive expansions are presented while the complete results are given in an algebraic code (Cotăescu in Maple code BH01, https://physics.uvt.ro/~cota/CCFT/codes, 2020). Introduction The light emitted by cosmic objects is one of the principal sources of empirical data in astrophysics. An important accessible observable is the redshift which encapsulates information about the cosmic expansion and possible peculiar velocity of the observed object. For separating these two contributions one combined so far the Lemaître rule [2,3] of Hubble's law [4], governing the cosmological effect, [5][6][7] with the usual theory of the Doppler effect of special relativity [8] even though there is evidence that our universe is expanding. Recently, we proposed an improvement of this approach replacing special relativity with our de Sitter relativity [9,10] where local charts of the same type, playing the role of inertial frames, are related among themselves through de Sitter isometries. In the case of the longitudinal Doppler effect, when a point-like source is moving along the axis observersource, we obtained a redshift formula having a new term a e-mail<EMAIL_ADDRESS>(corresponding author) combining the cosmological and kinetic contributions in a non-trivial manner [11]. The next step is to extend this method to the black holes which, in general, are no point-like sources. The light emitted by a black hole comes from an apparent source situated on a sphere, surrounding the black hole, which is observed as the black hole shadow. When this is not negligible, as in the case of the object M87 [12,13], and the black hole may have a peculiar velocity, the Doppler effect is no longer longitudinal such that the transverse contributions due to the black hole shadow must be evaluated. This can be done only by studying simultaneously the Doppler effect and black hole shadow in the same theoretical framework. As in this paper we would like to study the relation between the redshifts and shadows of the Schwarzschild black holes having peculiar velocities in de Sitter expanding universe we must abandon the geometric method adopting the algebraic approach of Ref. [11] which is suitable for deriving the redshift. This is based on the de Sitter relativity where we may relate the moving black hole proper frames to those of remote co-moving observers freely falling in the de Sitter expanding universe. We start supposing that our expanding universe is satisfactory described by the expanding portion of a (1 + 3)dimensional de Sitter manifold. As the actual observations show with reasonable accuracy that this universe is spatially flat, we consider only local charts with Painlevé coordinates [44] since these have flat space sections. 
These local charts are de Sitter co-moving frames [45] where the coordinates are the cosmic time and Cartesian or spherical space coordinates. These frames may carry observers related among themselves through the de Sitter isometries which transform simultaneously the coordinates and the conserved quantities [9,10]. The Schwarzschild-de Sitter black holes are usually considered in proper frames with static coordinates and Kottler metric [46]. However, here we prefer the corresponding comoving frames with Painlevé coordinates whose metrics have the same asymptotic behavior as the metric of the observer co-moving frame. Then for a remote observer the black hole co-moving frame appears as an empty de Sitter one which can be related to observer's co-moving frame through a de Sitter isometry, in accordance with the relative motion of the black hole with respect to observer. We assume that at the initial moment when the black hole emits the photon this is translated and has a relative velocity with respect to the remote observer known as peculiar velocity. For avoiding extremely complicated calculations we assume that this velocity is longitudinal, in the black holeobserver direction. Thus we start with precise initial conditions determining the suitable isometry relating the black hole and observer proper frames. This will give the conserved quantities measured in the observer's frame we need for extracting physical results without resorting to geodesics or other geometric objects. For this reason we say that our method is algebraic, observing that there are some advantages among them the principal one is of a coherent framework offered by the de Sitter relativity which prevent us of using supplemental hypotheses. In this approach we derive the related redshift and shadow of a Schwarzschild black hole freely moving in the de Sitter expanding universe. Our principal new results are a closed formula of the black hole shadow depending on its peculiar velocity and the corrections to our new redshift formula derived in Ref. [11] due to the dimension of the black hole shadow. These corrections are too complicated to be written down here but can be derived with the help of an algebraic code on computer [1]. We start in the second section with a brief review of the metrics of the black hole and observer co-moving frames with Cartesian or spherical coordinates, revisiting the equation giving the geodesic shapes in the black hole co-moving frames. In the next section we present the solutions of this equation representing the null geodesics around the black hole. These are the circular geodesics on the photon sphere and the Darwin [47,48] spiral geodesics which determine the black hole shadow. The fourth section is devoted to the de Sitter isometries relating the conserved quantities measured in different co-moving frames. In the next section we obtain our new results assuming that a remote observer sees that the light is emitted by an apparent source on a null de Sitter geodesic whose conserved quantities can be determined. Furthermore, by using an isometry formed by a translation followed by a Lorentzian isometry we obtain the conserved quantities in the observer's co-moving frame from which we extract the observed redshift and shadow of the moving black hole. More specific, the redshift results from the observed energy while the angular radius of the black hole shadow is derived by using the components of the photon momentum in the observer's origin, where the photon angular momentum must vanish. 
These results are elementary but with a large number of terms that cannot be written here in the general case of a moving black hole. Consequently, we restrict ourselves to presenting here only their series expansions with respect to a common small parameter, the particular case when the relative velocity vanishes, and the flat limit. As mentioned, the complete results which cannot be written here are given in an algebraic code on computer [1]. In order to convince oneself that the flat limit is correct, we derive in Appendix B the results that can be obtained by applying our algebraic method to a Schwarzschild black hole in Minkowski flat space-time. Finally we present some concluding remarks. As our approach may be applied even in quantum theory, we introduce a special notation, denoting by ω_H = √(Λ/3) c the de Sitter Hubble constant (frequency), since H is reserved for the Hamiltonian operator [49]. Moreover, the Hubble time t_H = 1/ω_H and the Hubble length l_H = c/ω_H will have the same form in the natural Planck units with c = ħ = G = 1 we use here. Here we focus on a Schwarzschild black hole of mass M embedded in the de Sitter expanding universe, for which the metric (2) of its static frame has the Kottler [46] (or Schwarzschild-de Sitter) form, where, as mentioned before, ω_H is the de Sitter Hubble constant in our notation. The corresponding frames with Painlevé coordinates have the asymptotic behavior of the de Sitter co-moving frames. For this reason we say that the black hole frames with Painlevé coordinates, denoted by {t, x}_BH and {t, r, θ, φ}_BH, are the co-moving frames of the Schwarzschild black hole in the de Sitter expanding universe. The frames of the remote observers, {t, x} and {t, r, θ, φ}, located in the asymptotic zone, are genuine de Sitter co-moving frames where the astronomical observations are performed and recorded. The observers stay at rest in the origins of their own frames, evolving along the unique time-like Killing vector field of the de Sitter geometry, which is not time-like everywhere but has this property just inside the null cone where the observations are allowed [49]. Here we use simultaneously Cartesian and spherical coordinates, since the Cartesian coordinates are suitable for studying the conserved quantities and the transformation rules under isometries, while the spherical coordinates help one to integrate the geodesic equations. For example, the spherical symmetry is obvious in Cartesian coordinates, where the metrics (1) and (3) are invariant under the global rotations x^i → R^i_j x^j, such that we can use the vector notation. On the other hand, only in spherical coordinates can one integrate the geodesic equations in the black hole co-moving frame, which we revisit briefly next. In the frame {t, r, θ, φ}_BH with the line element (4), the conserved quantities along geodesics are the energy E and the angular momentum L. These give rise to the prime integrals of a geodesic of a particle of mass m moving in the equatorial plane of the black hole (with fixed θ = π/2), where 'dot' denotes the derivatives with respect to the affine parameter λ, which satisfies ds = m dλ. The third prime integral comes from the line element in the equatorial plane, as it results from Eq. (4). Hereby one may derive the corresponding radial function. After a little calculation, combining the above prime integrals, one obtains the well-known equation giving the geodesic shapes, which is however not enough for finding the time behavior of the functions r(t) and φ(t), for which one must apply special methods [50].
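Since the displayed equations are not reproduced above, the following is a hedged, textbook-style sketch of the prime integrals and the shape equation for the Kottler function f(r); the notation is standard and may differ from the paper's own numbered equations, but it is consistent with the quantities E, L, m, and ω_H used in the text.

```latex
% Hedged textbook-style reconstruction (notation may differ from the paper's
% own elided equations). Kottler function and prime integrals in the static
% frame, for equatorial motion (theta = pi/2):
\[
  f(r) = 1 - \frac{2M}{r} - \omega_H^{2} r^{2}, \qquad
  E = f(r)\,\dot t, \qquad L = r^{2}\dot\varphi ,
\]
% which combine into the shape equation for a particle of mass m:
\[
  \left(\frac{dr}{d\varphi}\right)^{2}
  = \frac{r^{4}}{L^{2}}\left[E^{2} - f(r)\left(m^{2} + \frac{L^{2}}{r^{2}}\right)\right].
\]
% For photons (m = 0), writing u = 1/r gives d^2u/dphi^2 = 3Mu^2 - u, which is
% independent of \omega_H; its constant solution is the photon sphere r = 3M,
% and in the flat limit the critical impact parameter is b_c = 3\sqrt{3}\,M.
```

The ω_H-independence of the photon shape equation in this sketch matches the remark below that the shapes of the trajectories are the same as in Minkowski space-time.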
Note that Eq. (11), derived in the co-moving frame, is the same as that of the static frame since this equation is static, giving only the shape of the trajectory in the same space coordinates. In fact, the time evolution on geodesics is quite different in the static and co-moving frames.
Light around black holes
The problem of gravitational lensing, which has a long history [51], was studied in general relativity first by Einstein and Eddington [52] but was solved by Darwin [47,48], who derived the null geodesics around the photon sphere of a Schwarzschild black hole in the flat Minkowski space-time. Applying the same commonly used method [53-58], we may briefly inspect the null geodesics in the co-moving frame {t, r, θ, φ}_BH of the Schwarzschild-de Sitter system. The shapes of the photon geodesics are given by the functions r(φ) which satisfy Eq. (11) with m = 0, which now reads as Eq. (12). This equation has two types of solutions, namely circular geodesics on the photon sphere and associated spiral geodesics [47]. The circular geodesics satisfy simultaneously the conditions giving the radius of the photon sphere r_ph = 3M and the mandatory condition derived in Ref. [15]. (Fig. 1: the functions r^(±)(φ) of the spiral photon geodesics closest to the photon sphere of radius 3M.) Thus the photons with circular geodesics are trapped on the photon sphere without escaping outside. Furthermore, by substituting the condition (14) in Eq. (12), we obtain an equation which is independent of the de Sitter Hubble constant ω_H. Apart from the circular geodesics, this equation allows the solutions known as the spiral geodesics [47]. These are determined up to a rotation, φ → φ − φ_0, fixing the origin of this angular coordinate. For example, if we translate the arguments of the functions r^(±) as φ → φ_± = φ ± ln 6M, then we recover the elegant Darwin form [47] of the spiral geodesics. Thus we may conclude that the presence of de Sitter gravity is encapsulated only in Eq. (14), while the photon sphere and the shapes of the spiral geodesics remain the same as in Minkowski flat space-time (when ω_H = 0). The spiral geodesics are symmetric. In Fig. 1 we see that the functions r^(±)(φ) have a restricted physical domain, since between the vertical asymptotes their values are negative, having no physical meaning. It is interesting that this opaque window is independent of the black hole mass. On the physical domain these trajectories remain outside the photon sphere, r^(±)(φ) > 3M, but approach it for large |φ|. This gives us the image of the spiral geodesics rolled out around the photon sphere (as in Fig. 2), escaping outside only when φ approaches the values (18), where the functions r^(±) can take larger values near the singularities. The geodesics r^(±)(φ) are the closest trajectories to the photon sphere of the first photons that can be observed at the limit of the black hole shadow. Therefore, for studying this shadow and the associated redshift we have to consider only these photons.
de Sitter isometries
The de Sitter co-moving frames play the role of inertial frames, being related among themselves through de Sitter isometries as in our de Sitter relativity [9,10]. Moreover, the black hole frames have the asymptotic de Sitter symmetry which governs the relative motion of the black hole with respect to remote observers, such that we may use these isometries for relating the observer co-moving frames to the black hole one.
The de Sitter isometries can be studied easily since this manifold is a hyperboloid of radius 1/ω_H embedded in the five-dimensional flat space-time (M^5, η^5) of coordinates z^A (labelled by the indices A, B, ... = 0, 1, 2, 3, 4) and metric η^5 = diag(1, −1, −1, −1, −1). The local charts can be introduced by giving sets of functions z^A(x) which solve the hyperboloid equation and determine the line element; our Painlevé coordinates are introduced by such a set of functions. The de Sitter isometry group is just the stable group SO(1, 4) of the embedding manifold (M^5, η^5) that leaves invariant its metric and, implicitly, Eq. (21). Therefore, given a system of coordinates defined by the functions z = z(x), each transformation g ∈ SO(1, 4) gives rise to the isometry x = φ_g(x′) derived from the system (24). The local charts related through these isometries play the same role as the inertial frames of special relativity. The classical conserved quantities under de Sitter isometries are given by the Killing vectors k_(AB) of the de Sitter manifold [49], which are related to those of (M^5, η^5), allowing us to derive the covariant components of the Killing vectors in an arbitrary chart {x} of the de Sitter space-time, where z_A = η_AB z^B. The conserved quantities along the time-like geodesic of a particle of mass m have the general form K_(AB)(x, P) = ω_H k_(AB)μ ẋ^μ. The conserved quantities with physical meaning are the energy E, momentum P, angular momentum L and a specific vector Q that we call the adjoint momentum [49]. A geodesic in the co-moving frame {t, x} [59] depends only on the momentum P (P = |P|) and the initial condition x(t_0) = x_0 fixed at the time t_0. The conserved quantities at an arbitrary point (t, x(t)) of this geodesic read as in Refs. [9,59], satisfying the obvious identity (31) corresponding to the first Casimir invariant of the SO(1, 4) algebra [49]. In the flat limit, when ω_H → 0, we have Q → P, such that this identity becomes just the usual mass-shell condition, E² − P² = m², of special relativity. The conserved quantities E, P and the new ones form a skew-symmetric tensor on M^5 whose components transform under the isometries x = φ_g(x′) defined by Eq. (24) as in Eq. (34), which involves the matrix η^5 g η^5 [9]. Summarizing, we can say that the de Sitter isometries are generated globally by the SO(1, 4) transformations which determine the transformations of the coordinates and conserved quantities. We have thus a specific relativity on the de Sitter space-time allowing us to study different relativistic effects in the presence of the de Sitter gravity. In what follows we use the Lorentzian isometries defined in Ref. [9] and the translations presented in Appendix A.
Observing light from black holes
Let us consider now a mobile black hole in its proper co-moving frame {t′, x′}_BH with the origin in O_BH and a fixed remote observer in his own co-moving frame {t, x} having the origin in O. We consider that the space Cartesian axes of these frames remain parallel, with the basis of unit vectors (e_1, e_2, e_3), such that the geodesic of the emitted photon is in the plane (e_1, e_2). In this geometry we assume that the photon is emitted at the initial moment t = t′ = 0 when the origin O_BH is translated by d and has the relative velocity V = e_1 V with respect to O. Note that the velocity V = P/M is conserved, depending on the conserved momentum P of the black hole geodesic observed by O.
Related conserved quantities
A remote observer sees the photon of momentum k = n_k k, energy E_ph = |k| and angular momentum fixed by the condition (14), as emitted from an apparent source S of position vector n_S r_S on the sphere of radius r_S, which is just the apparent radius of the black hole shadow. Therefore, r_S is the radius of the sphere hosting the different photon sources S that can be observed nearest to the black hole shadow (as in Fig. 3). When ω_H → 0 this becomes just the shadow radius 3√3 M derived in special relativity [47]. The apparent trajectory of the emitted photon is a de Sitter null geodesic of momentum k that, according to Eq. (27), complies with the initial condition x_ph(0) = n_S r_S. This geodesic depends on the orthogonal unit vectors, which can be represented as
n_k = −e_1 cos α − e_2 sin α,   (37)
n_S = −e_1 sin α + e_2 cos α,   (38)
where the angle α, giving the apparent direction of the photon, will depend on the observer's position. We have thus the opportunity of defining the de Sitter conserved quantities on this geodesic as in an apparent empty de Sitter co-moving frame {t′, x′} associated to {t′, x′}_BH. Here one vector is missing, namely the adjoint momentum Q, which can be derived simply at the time t = 0 according to Eq. (30). We thus complete the set of conserved quantities, which satisfy the condition (31). Now we can deduce how these conserved quantities are measured by the fixed observer O, since the observer frame {t, x} and the apparent black hole one, {t′, x′}, are related through an isometry, x = φ_g(x′), of the de Sitter relativity [9]. According to our hypotheses, this is generated by the SO(1,4) transformation formed by a translation (A.1) of parameter a = e_1 d = (d, 0, 0), having the form g(a) given in Refs. [9,60], followed by the Lorentz boost of the particular Lorentzian isometry we need here [9]. Applying then the transformation (34) with g given by Eq. (43), we obtain the conserved quantities observed by O. This calculation is elementary but complicated, involving many terms that can be manipulated only by using suitable algebraic codes on computer. For presenting the final result it is convenient to introduce a compact notation which allows us to write down the conserved quantities in the observer's frame, given by Eqs. (47)-(53), while L_1 = L_2 = P_3 = Q_3 = 0. As expected, these quantities satisfy the invariant identity (31). In other respects, we observe that all the vector components we derived above can take any real values, in contrast with the energy, which must remain positive definite. This condition is fulfilled only if the relative velocity does not exceed a limit V_lim; this is in fact the mandatory condition for observing the photon in O at finite time. When the relative velocity V exceeds this limit, the photon cannot arrive in O at finite time because of the background expansion. Thus V_lim defines a new velocity horizon restricting the velocities, such that for very far sources with α = 0 and δ = ω_H d ∼ 1 this limit vanishes.
Shadow and redshift
The angle α, which depends on the relative position between the black hole and the observer, can be found simply by imposing the condition L_3 = 0 when the photon is passing through the point O. Solving this equation for L_3 given by Eq. (52), we find the angle α; substituting it in Eqs. (47)-(53), we obtain all the conserved quantities measured by O and the maximal velocity V_lim.
Hereby we can extract the quantities of general interest, namely sin α, measured in the black hole proper frame, the angular radius of the shadow measured in the observer's frame,
sin α_obs = |P_2|/P,   P = √(P_1² + P_2²),   (55)
and the redshift z. Unfortunately, the exact expressions of these quantities have a huge number of terms that cannot be written here but can be manipulated on computer [1] for extracting significant particular cases or intuitive approximations. The simplest particular case is when the black hole does not have an initial relative velocity with respect to O. Then, by setting V = 0, we obtain simple formulas showing how the observations of the shadow and redshift are related to each other. In the general case of V ≠ 0, we observe that the expansions around ξ = 0 are useful, since this is the only parameter which remains very small in astronomical observations, as long as the other ones have larger ranges, 0 < δ = ω_H d < 1 and 0 < V < V_lim. Computing these series, we write down here only the lowest terms, which are still comprehensible and can be interpreted; similarly, for the velocity limit we have a corresponding expansion. The principal novelty here is that sin α_obs depends on the relative velocity V, as in Eq. (60), just from the first order of the expansion, which may be observationally accessible. In contrast, the expansion (61) has a first term independent of ξ, recovering just the redshift formula we derived recently for a point-like source moving along the e_1 axis [11]. Therefore, the influence of the black hole dimensions could be observed only when the second term, of the order O(ξ²), could be measured with a satisfactory accuracy.
Flat limit
The limit of the de Sitter relativity when the de Sitter-Hubble constant ω_H vanishes is just the usual version of special relativity. Then all the co-moving frames of the de Sitter relativity become inertial frames in Minkowski space-time, without affecting the black hole geometry in its proper frame. Now a remote observer sees a photon emitted in S as having an apparent rectilinear trajectory with momentum k and energy E_ph = k. Then all the measured quantities can be obtained from the de Sitter ones in the limit ω_H → 0. In this limit our principal parameters become simpler, while the other quantities have well-defined limits. As expected, hereby we recover the usual aberration formulas. In addition, we obtain a limit velocity which does not make sense, since this can exceed the speed of light. Thus the limitation of velocities disappears together with the de Sitter event horizon. For interpreting these results a good choice is Δ ≫ 1, as the parameter ξ remains very small. Then Eq. (66) is just the redshift due to the Doppler effect in special relativity. Moreover, we may convince ourselves that all the limits derived above are just the results that may be obtained by applying our method in special relativity, as presented briefly in Appendix B.
Concluding remarks
We studied how a co-moving observer measures simultaneously the shadow and redshift of a Schwarzschild black hole freely falling in the de Sitter expanding universe. For this purpose we used a new algebraic method offered by our de Sitter relativity, which provides us with suitable isometries transforming the conserved quantities of the emitted photon into those recorded by a remote co-moving observer.
In this manner we obtained the closed formula (60) of the shadow, depending on the peculiar velocity, and the corrections to the new redshift formula (61) that can be calculated on computer by using the code [1]. Another advantage of our method is that it is somewhat independent of the coordinates, which are involved only in imposing the initial conditions. For example, the choice of the co-moving frames with Painlevé coordinates simplifies the calculations, since then we use the translation (2), which does not affect the time. In contrast, in static coordinates, defined by Eq. (4), the same translation gives the transformation (6), which affects the time, such that it is more difficult to synchronise the clocks by setting common initial conditions when V ≠ 0. However, this is not a real impediment as long as we know how the coordinates transform among themselves. This relative independence of coordinates can be tested in the case of V = 0 by applying our method to a static black hole. Our preliminary calculations indicate that the shadow of the static black hole is given by Eq. (57), just as in the case of the co-moving frames with Painlevé coordinates. This stability comes from the fact that in both cases we start with the same conserved quantities (39)-(42) transformed by the same translation (44), whose parameter d is the physical distance between the black hole and the observer. Under such circumstances we may compare how the algebraic and geometric methods work in determining the black hole shadow, at least in the case of V = 0. The shadow formula derived in Ref. [15] by using the geometric method can be written in our notation as sin α_obs = (r_S/d) f(d), where d is now the radial coordinate of the fixed observer. Thus we see that in our approach the factor f(d) is missing. This means that between these two methods there are some minor differences, which may come from the approximation of remote observers on which the algebraic method is based and from the fact that in the static frame the radial coordinate does not coincide with the physical distance d. However, it is now premature to say more about this relationship before analyzing many examples in de Sitter relativity. In other respects, we must specify that the study of the conserved quantities is not enough for understanding the entire information carried by the light emitted by moving black holes. There are important observable quantities resulting from the coordinate transformations under isometries as, for example, the photon propagation time or the real distance between observer and black hole at the time when the photon is measured. In Ref. [11] we derived such quantities in the longitudinal case of a point-like source moving along the observer-black hole direction. Therefore, when we apply the algebraic method, the coordinate transformations under isometries or other geometric tools may complete our investigation. We hope that the algebraic method proposed here will improve the general geometric approach for getting over the difficulties in analyzing the light emitted by various cosmic objects moving in the de Sitter expanding universe.
Data Availability Statement: This manuscript has no associated data or the data will not be deposited. [Authors' comment: There are no experimental or numerical data. The code [1] is exclusively algebraic.]
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3.
Solving again Eq. (24) for these functions and the transformation (A.1), we find the transformation rules under translations, which we present here for the first time.
Appendix B: Moving black holes in special relativity
Let us consider now a Schwarzschild black hole embedded in the flat space-time, where the remote observers stay in inertial frames. We apply our algebraic method assuming the initial conditions at the initial time t = 0 as in Eq. (45). Performing this isometry we obtain the transformed four-vectors x and p, whose components give the energy and angular momentum observed in O. The condition L_3 = 0 of the photon passing through O gives
7,002.8
2021-01-01T00:00:00.000
[ "Physics" ]
DESIGN OF DISTRIBUTION OPTIMIZATION APPLICATION USING FIREFLY ALGORITHM - The goal of this research was to optimize the distribution of goods and to computerize it. The method consisted of problem identification, analysis, implementation, and evaluation. The firefly algorithm was used as the method for optimizing the distribution of goods. The results achieved are the shortest distribution routes of goods in accordance with the existing constraints and a low cost of distribution. It can be concluded that the research can be used to optimize the distribution of goods and to minimize distribution cost.
I. INTRODUCTION
In this modern age, economic activities cannot be separated from the distribution of goods. Distribution of goods can be supported by various modes of transport such as land transportation, water transportation, and air transportation. If the distribution of goods is not done properly, it can increase the cost of goods distribution and also the prices of the goods distributed (Onwubiko, 2000; Yang, 2010). To optimize the distribution of goods, different methods such as particle swarm optimization, genetic algorithm, or firefly algorithm can be used. The method used to solve the goods distribution optimization problem in this research is the firefly algorithm. The firefly algorithm used is a development of the discrete version that updates the firefly positions, known as the discrete firefly algorithm with edge-based movement (Fister et al., 2013; Jati et al., 2013; Johari et al., 2013; Sayadi et al., 2013). The discrete firefly algorithm with edge-based movement is used to handle the distribution of goods by determining the shortest route. The movement in this algorithm also produces better moves than random movement. The algorithm is implemented in the form of a desktop application to solve the problem of optimizing goods distribution to the stores. It is expected to make distribution to customers easier and to produce more optimal distribution costs (Pan et al., 2013; Pornsing, 2014; Yesodha, 2015).
II. METHODS
The methodology of this research consists of several steps as follows. First, the main issues to be discussed in the distribution cost optimization are identified, such as how to minimize the cost of goods distribution. Problem identification is done for the distribution in Christian Store through interviews and brainstorming. From these problems, the researchers determine the problem boundaries that will be the scope of the research. Second, a literature review is conducted. The researchers collect references that can help in the research. The literature study is conducted on books, journals, the Internet, and other literature. From the literature, it is expected to find the theoretical basis for data processing, model-making, selection of methods, and making the program so that it can solve the problems of goods distribution in Christian Store. The method used is the discrete firefly algorithm with edge-based movement, and the program is a desktop-based application written in the Python programming language. Third, a cost model of the goods distribution is created, based on the problem identification and the literature study, in the form of a mathematical model that can represent the real system. The mathematical model is adjusted to the problem in this research. The distribution cost model is as follows.
Total Cost of Distribution = min (∑(fixed cost + variable cost)) (1)
Fixed cost is the cost of vehicle maintenance and the salary of the driver and helper. Meanwhile, variable cost is the total cost of gasoline used.
Fourth, there is data collection. The researchers collect data in accordance with the scope of the problem to solve the goods distribution problem. The relevant data in this research are the number, types, and volume of the vehicles used for distribution; the name and address of the depot and stores; the operational distribution expenses such as gasoline, vehicle maintenance, and the salary of the driver and helper; the distance between the depot and each store and the distances between stores; and other supporting data. Fifth is the application of the discrete firefly algorithm with edge-based movement to the distribution cost model. Once the required data are obtained, the simulation of the calculations can be done. The formulated mathematical model is implemented using the discrete firefly algorithm with edge-based movement (Fister et al., 2013). Sixth is designing and building a desktop-based application to optimize the distribution of goods with the discrete firefly algorithm with edge-based movement, using the Python programming language. The program is designed with a prototyping development method. The program stores the data with a file-based approach because the data volume is small and this speeds up development. Seventh, the application program is tested to verify that it is bug-free and in accordance with the objectives of the identified problems. Eighth, the evaluation determines whether the program produces optimal results according to the calculations made, as well as its suitability to the needs of Christian Store and its ease of operation. If the results are optimal, it enters the implementation phase and the reports are prepared. If not, the implementation of the discrete firefly algorithm with edge-based movement is checked again to find optimal results. Ninth, once the research objective is achieved, the program is implemented and the process is complete, followed by drafting the report. Next is the mathematical model. The objective function is given in Equation 2. The constraint functions require that each customer node is visited only once by one vehicle (Equation 3) and define the number of vehicles entering and exiting the same depot (Equation 4). If a vehicle enters a customer node, the same vehicle must also leave it (Equation 5). Then, the number of vehicles can be calculated using Equation 6. The objective function of the model is to minimize the total cost of distribution. One way is by expressing the total of the distribution costs. The notation of the equations is as follows: i, j = index of customers, i = 1…n, j = 1…n, with 0 as the depot; k = index of the vehicles, k = 1…m; c_ij = distance between customers i and j; d_i = order from customer i; Q_k = capacity of vehicle k; p_k = the price of fuel for vehicle k; r_k = the fuel consumption ratio of vehicle k; f_k = maintenance costs as well as driver and helper salaries of vehicle k. Then, the decision variables are defined. Then, the researchers can idealize some of the flashing characteristics of fireflies to develop firefly-inspired algorithms. For simplicity in describing the firefly algorithm, the researchers use three idealized rules. First, all fireflies are unisex, so one firefly will be attracted to other fireflies regardless of their sex. Second, attractiveness is related to the brightness. Thus, the less bright firefly will move towards the brighter one.
The brightness decreases as the distance increases. If there is no firefly brighter than a particular firefly, it will move randomly. Third, the brightness of a firefly is affected or determined by the landscape of the objective function. For a maximization problem, the brightness can be proportional to the value of the objective function. The brightness can also be defined in a similar way to the fitness function in genetic algorithms. Based on these three rules, the basic steps of the firefly algorithm can be summarized as the pseudo code shown in the algorithm below. In a certain sense, there is some conceptual similarity between the firefly algorithm and the Bacterial Foraging Algorithm (BFA). In BFA, the attraction among bacteria is based on their fitness and distance. Meanwhile, in the firefly algorithm, the attractiveness is linked to the objective function and a monotonic decay of the attractiveness with distance. However, the agents in the firefly algorithm have adjustable visibility and are more versatile in attractiveness variations, which usually leads to higher mobility. Thus, the search space can be explored more efficiently (Yang, 2009). III. RESULTS AND DISCUSSIONS In designing this program, the researchers use four UML diagrams to define the structure of the program. Those are the use case diagram, activity diagram, class diagram, and sequence diagram. The use case diagram of the application to be developed is shown in Figure 1. Figure 1 Use case diagram. An activity diagram is another important diagram in UML to describe the dynamic aspects of the system. It is a flowchart representing the flow from one activity to another. An activity can be described as an operation of the system. Then, the control flow is drawn from one operation to another. This flow can be sequential, branched, or concurrent. The activity diagrams designed in this research consist of six diagrams. Those are the home activity diagram, store activity diagram, vehicle activity diagram, product activity diagram, order activity diagram, and calculate activity diagram. The home activity diagram describes the activity that occurs when the user runs the application and chooses the Home menu. When the user presses the Home menu, the system displays the Home page on the screen. Meanwhile, the store activity diagram describes the activity when the user chooses the Store menu. When the user presses the Store menu, the system shows the Store page on the screen. On the Store page, the user needs to enter the name and address of the depot, the name and address of the customer shops, and the distance data between each shop and the depot before pressing the Save button to store the data. Similarly, the vehicle activity diagram describes the activity when the user chooses the Vehicle menu. When the user presses the Vehicle menu, the system shows the Vehicle page on the screen. On the Vehicle page, the user needs to enter the name, volume, fuel ratio (km/L), fuel price (Rp/L), maintenance cost (Rp) per trip of the vehicle, and salary of the driver and helper per trip (Rp) before pushing the Save button to save the data. Then, the product activity diagram applies when the user selects the Product menu. On pressing the Product menu, the user can see the Product page on the screen. On the Product page, the user needs to enter the name and volume of the goods before pressing the Save button to save the data. Meanwhile, the order activity diagram describes the activity when the user selects the Order menu.
When the user presses the Order menu, the system checks whether there are stored data of vehicles and goods before displaying the Order page on the screen. On the Order page, the user needs to select a store and enter the order amount of each item. Then, the user can press OK and continue with the next store until all orders are entered. The calculate activity diagram describes the activity when the user chooses to press Calculate on the Order menu. The system will display the calculation result on the screen. Next, the class diagram models the static structure of the system. It shows the relationships between classes, objects, attributes, and operations. A brief description of each class is shown in Figure 2. Sequence diagrams describe the interactions among classes in terms of an exchange of messages over time. They are also called event diagrams. A sequence diagram is a good way to visualize and validate various runtime scenarios. It can help to predict how a system will function and to discover responsibilities a class may need to have in the process of modeling a new system. Like the activity diagrams, the sequence diagrams designed consist of six diagrams. Those are the home sequence diagram, store sequence diagram, vehicle sequence diagram, product sequence diagram, order sequence diagram, and calculate sequence diagram. An example is shown in Figure 3. Figure 4 shows the sequence when the user runs setupUi via User_Interface. User_Interface invokes show_store and displays the store page along with the data from Store. The user enters store data through User_Interface, after which User_Interface runs set store in Store. The user also enters store distances through User_Interface, and User_Interface runs distance between stores in Store. There are also options for delete, edit, delete all, and save that can be performed by the user on User_Interface. Figure 5 shows the sequence when the user runs setupUi via User_Interface. User_Interface invokes show_vehicle and displays the vehicle page along with the data from Vehicle. The user enters the vehicle data through User_Interface, which then inputs the vehicle set on Vehicle. There are also options for delete, edit, delete all, and save that can be performed by the user on User_Interface. Figure 6 shows the sequence when the user runs setupUi through User_Interface. Then, User_Interface invokes show_List_Store and displays the order page along with the data from the store. The user selects the stored data on User_Interface, after which User_Interface invokes show_order product and displays the detailed order along with the data from the product. Last, the user enters order data through User_Interface by clicking the order set on Order. The Order delivers the data to User_Interface, which then displays the returned order. Figure 7 shows the sequence when the user runs show_result to see the calculation result through User_Interface. User_Interface invokes calculate on firefly. Then, firefly runs distance between store on Store. The Store gives data to firefly, followed by firefly getting all vehicles. The Vehicle delivers the data to firefly. The processed data is sent from firefly to User_Interface, which shows the result to the user. There is also an option to print and save as PDF that can be used on User_Interface. The application is created with five main menus. Home shows the initial page containing how to use the program. Then, Store is to set the store and depot data.
The Vehicle menu is to set the vehicle data and Product is to organize product data. Last, Order is for users to enter the order data, which is followed by calculations in the program that generate the distribution costs; the calculation selects the routes and distances. For the simulation, the result uses a case solved with the firefly algorithm. There are five randomly selected customers from ten customer records. Table 1 gives the name of each customer and the order quantities of each type of goods. Then, Table 2 gives the distance from one store to another. Table 3 gives the volume of goods ordered by every store. After the data in the simulation are calculated, the cost of the vehicle and the volume of goods are known. Those are shown in Table 4 and Table 5. The total volume of the delivered goods does not exceed the capacity of the vehicle (14,34 < 17,3). Thus, it can be sent. Using route optimization with the edge-based-movement firefly algorithm, the results are as follows. To determine whether a route is legitimate or illegitimate, the volume of each store's goods is checked against the capacity of each vehicle. The vehicle with the smaller capacity is chosen first, and so on. The volume of goods is added to the first vehicle until no more can be added because it would exceed the capacity of the vehicle. Then, the goods are added to the next vehicle. The following example uses iteration 0 and firefly A: 1>5>2>3>4. The first selected vehicle is the Toyota Dyna with a capacity of 14,4 m³. Tk. Aulia has goods of about 2,86 m³, so the vehicle still has a capacity of 11,54 m³ when it moves to Tk. Hamim. Next, it takes on goods of about 3,18 m³ from Tk. Hamim and continues to Tk. H. Endin; the available capacity is 8,36 m³. Then, it carries goods of about 2,44 m³ to Tk. H. Mahmud; the available capacity is 5,92 m³, and it takes on a further 2,68 m³ of goods there. The last stop is Tk. Sinar Priangan with 3,18 m³. Thus, the remaining capacity is 0,06 m³ and only the Toyota Dyna is used. To determine the light intensity, the objective function is used (Equations 1 and 2). After the iterations are complete, it can be seen that the optimal route is Christian Store > . The case study shows that the result of the manual calculation is Rp423.000,00 (assumed as Rp352.000,00 + Rp71.000,00), while the calculation with the firefly algorithm gives Rp366.420,00. Christian Store can therefore save about Rp56.580,00. An example of the calculation in the program can be seen in Figure 8. Total Cost = f_k + average gasoline use in the manual calculation (9) IV. CONCLUSIONS Using the discrete firefly algorithm with edge-based movement on the vehicle routing problem solves the problem faster than the manual process, and an optimal route for the distribution of goods can be obtained. This application can help shops handle variations in the number of customers, the requested items of each customer, the difference in volume, the fuel ratio, the price of fuel, the maintenance cost of each vehicle, the salaries of the driver and helper, and the distance between the customers and the depot. This application can help the store determine the vehicles and routes that give the minimum total distribution cost. Based on the theories that have been discussed and the test results of this application, some suggestions can be made for further development of the application. It is recommended to add features to the program so that distance data can be entered more easily, the estimated travel time and the route can be displayed in detail, and maps can be used.
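The capacity check and route evaluation walked through above can be expressed in code. The following Python sketch is illustrative only and is not the authors' discrete edge-based firefly implementation: it splits a candidate visiting order (a firefly's permutation such as 1>5>2>3>4) into capacity-feasible trips, scores the solution with the cost model of Equation 1, and uses a simple random-restart loop as a stand-in for the firefly movement. All names and data structures are assumptions.

```python
# Illustrative sketch: split a candidate visiting order into capacity-feasible trips
# and score it with the Equation 1 cost model. The random-restart search below is a
# stand-in for the authors' discrete firefly movement, which is not reproduced here.
import random

def split_into_trips(order, volumes, capacities):
    """Greedily fill vehicles (in the given order) until each one's capacity is reached."""
    trips, trip, load, v = [], [], 0.0, 0
    for store in order:
        while load + volumes[store] > capacities[v]:
            trips.append(trip)
            trip, load, v = [], 0.0, v + 1
            if v >= len(capacities):
                return None  # infeasible: demand exceeds total fleet capacity
        trip.append(store)
        load += volumes[store]
    trips.append(trip)
    return trips

def route_distance(trip, dist, depot=0):
    """Depot -> stores in order -> depot."""
    stops = [depot] + trip + [depot]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def solution_cost(order, volumes, capacities, dist, fixed, fuel_price, fuel_ratio):
    trips = split_into_trips(order, volumes, capacities)
    if trips is None:
        return float("inf")
    cost = 0.0
    for v, trip in enumerate(trips):
        if trip:
            d = route_distance(trip, dist)
            cost += fixed[v] + d / fuel_ratio[v] * fuel_price[v]  # Equation 1 per vehicle
    return cost

def search(stores, *cost_args, iterations=200):
    """Brightness here is simply 1/cost: brighter (cheaper) orders replace dimmer ones."""
    best, best_cost = None, float("inf")
    order = list(stores)
    for _ in range(iterations):
        random.shuffle(order)
        c = solution_cost(order, *cost_args)
        if c < best_cost:
            best, best_cost = list(order), c
    return best, best_cost
```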
3,988.6
2017-09-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Effect of Three Different Dentifrices on Enamel by Automated Brushing Simulator - In vitro Profilometric Study. Bacterial plaque control is critical in the maintenance of oral health because dental plaque is the primary etiological factor for both caries and periodontal disease. Toothbrush and dentifrices play an integral part in accomplishing plaque removal. The aim of the study was to assess the enamel surface abrasion caused by three different dentifrices using an automated brushing simulator and profilometer. A total of 24 samples (N=24), extracted for orthodontic purposes, were divided into three groups based on the dentifrices used. They are Group 1 - Colgate Swarnavedsakthi (n=8), Group 2 - Dabur Herbal (n=8), Group 3 - Ayush (n=8). Samples were subjected to pre-brushing profilometric readings and brushing was done by an automated brushing simulator. A laser 3D profilometer was used to detect the wear on the enamel surface. Pre and post profilometric readings were compared. Statistically significant differences (p<0.05) were observed in the values of enamel abrasion between Group 1 (Colgate Swarnavedsakthi) and Group 3 (Ayush). After analysing the profilometric values, significant differences were found for the Ayush group when compared with the other groups, Colgate Swarnavedsakthi and Dabur Herbal. This indicates higher enamel surface abrasion in the Ayush group. INTRODUCTION As dental plaque is the primary etiological factor in the initiation and development of both dental caries and periodontal disease, effective plaque control is critical in the maintenance of oral health [1]. The combination of the toothbrush and dentifrices has been a great cleansing tool for plaque removal as a necessary part of disease control [2]. They aid in mechanical plaque control because of their positive chemical effects and delivery of various therapeutic agents. The ideal dentifrice should provide the greatest possible cleaning action on tooth surfaces with the lowest possible abrasion rates on the tooth surfaces [3,4]. Dentifrices (toothpastes and tooth powders) are complex formulations, and it is necessary to achieve a fine balance to provide cosmetic and oral health benefits, while limiting chemical and/or physical damage to teeth [5,6]. Toothpowder is the most common form of oral hygiene practice in semi-urban and rural areas of India for economic reasons as well as due to the misconception that these indigenous herbal products may be beneficial for dental and gingival health [7,8]. Abrasives are the insoluble components added to dentifrices to aid the physical removal of stains, plaque, and food debris. The most commonly used abrasives are silica and calcium carbonate. A high-quality dentifrice contains silica, but its use increases the cost and hence low-quality calcium carbonate, iron oxide, etc., are used to bring down the cost [9,10]. Toothpowders, in general, are known to be five times more abrasive than toothpastes due to the quantity of abrasives used (95%) and their particle size. Hence, concern has been expressed about their detrimental effects on tooth substance, which pose an important oral health problem. The chemical composition of most of the tooth powders is not known, but they may contain chemicals of low pH, which could cause softening of the dental hard tissues [11,12].
Tooth wear is a complex process which is dependent on the interaction between the wearing agent and the sinuous surface of teeth. Tooth wear is classified as erosion (due to acids), abrasion (due to external mechanical factors, such as toothbrushing) and attrition (due to tooth-to-tooth contact), but tooth wear is rarely attributed to a single cause as the concomitant effects of erosion from dietary acids and toothbrush abrasion result in worse wear lesions [13,14]. Erosion is not only a surface but also a sub-surface process, penetrating up to 5 µm into the sub-surface of the enamel, and this sub-surface layer can be removed easily by further toothbrush abrasion. This effect can be made worse depending on the duration of the erosive challenge, as the depth of the sub-surface effect is greater with longer acidic exposures [15,16]. Toothbrush abrasion is classified into two types: two-body abrasion, between the bristles of the toothbrush and the teeth, and three-body abrasion, where the toothpaste slurry containing abrasive particles and loose enamel or dentine chips acts as the third body of wear. The increasing prevalence of tooth wear across all age groups highlights the importance of prevention to avoid the long-term detrimental effects of wear and the difficulties in restoring worn dentitions. Buccal surfaces of teeth are more prone to abrasion due to adverse brushing. Abrasion is most commonly associated with toothbrushing on the cervical margins of teeth. An upper limit of 250 for relative dentin abrasivity (RDA) or 40 for relative enamel abrasivity for a toothpaste is considered safe for everyday use in adults by the International Organization for Standardization (ISO) [17,18]. To evaluate toothpaste abrasivity, many different techniques have been used, for example, the RDA method and weight and volume loss techniques, which are quantitative techniques measuring the amount of abraded material removed, as well as profilometer and light reflection techniques, which are qualitative techniques measuring the roughness of the abraded material. The aim of this study was to evaluate the enamel surface abrasion using three different dentifrices and a customized automated brushing machine under a profilometer. Infection Control Protocol Immediately after extraction, the soft tissue attached to the tooth surface was carefully removed with wet cotton. Occupational Safety and Health Administration (OSHA) and Centers for Disease Control and Prevention (CDC) recommendations and guidelines were followed. After collection, the samples were transferred to 100 ml of 5.25% sodium hypochlorite solution (Prime Dental Products Pvt. Ltd, Thane, India) stored in an amber-colored bottle. The solution was discarded after 30 min, and the teeth were transferred into separate jars containing artificial saliva (Wet Mouth, ICPA Health Products Ltd) to simulate the oral environment. The samples were removed with cotton pliers and rinsed in tap water. The samples were dried by placing them over paper towels and blotted for a few minutes before using them for the study. Study Criteria Natural teeth which were extracted for orthodontic purposes are included in this study. Teeth extracted due to caries or periodontal problems were excluded from this study. Groups A total of 24 samples were selected for this study (N=24).
The specimens were allotted to three groups based on the dentifrices used and they are, Specimen Collection and Preparation A total of 24 samples (N=24) which are extracted for orthodontic purposes and were divided into three groups based on the dentifrices used. After infection control of natural teeth, each one was poured in rubber mould with dental stone. The mould is round in shape and it is checked to fit in both brushing simulator and profilometer. Dental stone is preferred for pouring the mould as it sustains prolonged duration of forces by brushing simulator. The specimens were subjected to take pre profilometric readings. They were noted in two and three dimensional views and noted. Brushing Simulator The toothbrushing station (DentTest, Germany) was developed for the simulation of the tooth cleaning process using both power and manual toothbrushes. The tooth brushing machine included eight holders for toothbrushes. Each toothbrush worked on up to three specimens. The specimens were mounted with standardized key lock fixations. The tooth brushes with soft bristles were used in brushing simulator. The bristles of the toothbrush were aligned without pressure contacting the specimen surface in perpendicular fashion [19]. Fig. 2. Specimens in Brushing Simulator A linear cleaning movement of 3 cm length and zig zag movement was selected for the experiments with power and manual toothbrushes. The movement length was sufficient to cover the specimens surfaces. A force of 2 N was chosen for brushing. Specimens were randomly allocated to three groups. 8 specimens were assigned to each toothbrush. The total brushing strokes were calculated to be equivalent to 10 years of brushing, based on a brushing time of 160 seconds twice-daily of all teeth. Based on this estimation, the maximum contact time for one tooth surface per day is 8 seconds. The total brushing time was calculated to be 320 min. The brush head should be replaced after 45 days (a typical time period to replace the brush). This represents 270 minutes of cumulative use for 24 teeth with 8 s brushing per day. The movement of the power toothbrushes differs from brushing with a manual toothbrush. With oscillatingrotating technology, the brush head oscillates from a center point but does not rotate in a full circle. Considering these differences in brushing movement, each sample was submitted to 42,200 brushing strokes at a rate of 150 strokes per minute for manual toothbrushes. Brushing movements were executed with the slurry applied to the surface of the specimens. The flow rate of the slurry was set at 10 ml/minute. Specimens were rinsed with tap water for 30 seconds and received new slurry automatically every 2 minutes. After the final cleaning run all samples were stored in saline to avoid sample disintegration due to dehydration. Profilometric Analysis A noncontact type optical three-dimensional (3D) profilometer (R Tech Universal 3D profilometer, Nipkow confocal technologies, Japan) was used for taking profilometric readout for each group subjected to brushing and also for the control group. Before each measurement, the sample's surface was covered with distilled water for 30 s. Excess of water was blotted with absorbent tissue without touching the specimen surface and checked for any remnants macroscopically. 
One-way analysis of variance (ANOVA) was done to compare the mean values across the three groups for numerical data (using the F distribution), followed by a post hoc Tukey's test, which was performed with the help of the critical difference or least significant difference at 5% and 1% levels of significance to compare the mean values. Abrasivity should be sufficient to remove surface deposits including dental plaque, but it should not damage enamel. Typically, this requires that the particle size and shape of abrasive agents should be in a desirable range (i.e., 1-20 µm or 5-15 µm) and should not be sharp or angular [19,20]. Crude red ochre, which typically contains clay minerals and/or other impurities in addition to the red iron oxide, may not be suitable for the purpose [21,22]. The samples were also tested by adding them to artificial saliva to simulate the oral conditions during brushing, as saliva contains a buffer to resist changes in pH and also provides a constant supply of ions to the tooth surface. It also favors lubrication and hence may control the abrasive wear to some extent. Similarly, many factors favour caries progression and the endodontic and conservative procedures in treating carious lesions [23-41]. The accuracy of the scanner was measured by repeated scanning of a calibrated gauge of known dimensions and was found to be accurate to 3.1 µm. Therefore, samples were scanned by oversampling with a step-over distance of 50 µm in order to increase the accuracy of the scan. The chosen stylus would have affected the accuracy of horizontal measurements to a greater degree than vertical measurements, so this effect would not affect our results greatly [42,43]. The brushing protocols simulated accumulated wear over 1 year, and a force of 2 N was chosen as it simulates normal toothbrushing habits and has previously been used by other authors [44]. Profilometry has offered an opportunity to clinical academics to characterize dental disease and its progression and to attempt correlation with social, pathological, environmental and, ultimately, genetic factors. Accurate quantification of changes in dental tissues may guide diagnosis and aid in treatment planning, and dental phenomics will have a larger role in clinical practice [45,46]. Techniques such as quantitative light fluorescence are playing a larger role in accurate quantification of the carious process, whereas two-dimensional and three-dimensional imaging can shed more information on craniofacial development or pathological processes such as tooth wear, or the phenotyping of genetic disorders such as hypodontia. Contacting and non-contacting profilometry have been widely used to measure tooth wear in vitro, in situ and in vivo with the use of surface metrology or surface matching software [47,48]. Conventionally, enamel samples are polished flat and a protected section of the sample serves as a reference area to measure wear. This two-dimensional method gives rise to bias as the profile selection used to measure wear is reliant on the operator. Further to this, interpretations of wear from a whole sample are made from a finite number of profiles, which disregard wear from the rest of the sample. Finally, the polishing procedure removes surface irregularities and the aprismatic layer of enamel, which has been shown to offer resistance to wear episodes [49,50], so wear measurements may be overestimated as the protective effect of this layer cannot be assessed.
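The one-way ANOVA with Tukey's post hoc comparison described in the statistical analysis above can be reproduced with standard tools. The snippet below is a generic sketch using made-up enamel-loss values, not the study's measurements; group names and numbers are placeholders.

```python
# Generic sketch of the one-way ANOVA + Tukey HSD workflow described above.
# The three arrays are placeholder enamel-loss readings, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

group1 = np.array([1.2, 1.4, 1.1, 1.3, 1.2, 1.5, 1.3, 1.4])   # e.g. Colgate group (n=8)
group2 = np.array([1.3, 1.2, 1.4, 1.3, 1.5, 1.2, 1.4, 1.3])   # e.g. Dabur group (n=8)
group3 = np.array([2.1, 2.4, 2.2, 2.5, 2.3, 2.2, 2.6, 2.4])   # e.g. Ayush group (n=8)

# Omnibus test across the groups using the F distribution
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post hoc comparison at the 5% level
values = np.concatenate([group1, group2, group3])
labels = ["Group1"] * 8 + ["Group2"] * 8 + ["Group3"] * 8
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```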
Measurements of wear using native enamel may be desirable for a better understanding of the complex interactions between the wearing agents and the tooth surface. To this effect, surface metrology software allows automatic superimposition and measurements of wear of samples, thus removing operator bias and giving better indication of the complex wear process over a sample with an intact surface [51]. The three-dimensional method described on this article using surface matching software with external datum (ball bearings) allowed measurements of wear over the whole surface and this technique can be used to measure wear from areas of interest such as cervical wear or can be used to measure the effect of the aprismatic layer on resistance to wear. Furthermore, our method removes operator bias as wear is measured from whole samples rather than individual profiles. The difference in wear between enamel and dentine may be attributed to the difference in abrasivity in the dentifrices and the structural differences of enamel versus dentine. Although this was not investigated in this study, it is hypothesised that the silica particles in Ayush may have caused dislodgement of eroded dentine and these particles could have acted as pumice, thus worsening wear. Enamel is a harder substrate compared to dentine so it is more resistant to dislodgement of particles by the abrasive agents in CT. Previous research has shown that dentifrices with medium and high relative enamel abrasion values wear enamel to similar levels. Dentine is a softer substrate which is more susceptible than enamel to erosive/abrasive wear and dentine loss appears to correlate with increased toothpaste abrasivity [52]. The particle size and the type of abrasive in each of the dentifrices might have caused the difference in the results among the experimental groups. All dentifrices selected were aqueous based to keep the vehicle constant to avoid any discrepancy in the result obtained. Among the constituents mentioned by the manufacturer, silica particles present in Ayush might have resulted in the highest enamel abrasion which is statistically significan t(P < 0.05), followed by Colgate Vedshakti and Dabur. CONCLUSION After analysing the pre and post readings of the profilometer, significant differences in values were found among the Ayush toothpaste while comparing with other groups such as Colgate Swarnavedsakthi and Dabur herbal as abrasive content is higher in Ayush toothpaste. This indicates the higher enamel surface abrasion in the ayush toothpaste. CLINICAL SIGNIFICANCE Abrasivity in dentifrices should be sufficient to remove the surface deposits including dental plaque, but it should not damage the enamel. Thus, the choice of dentifrices should be made with utmost care as it influences oral health. LIMITATIONS Natural teeth obtained from different age groups might give differences in abrasion results while using different dentifrices and this study involved smaller sample sizes. FUTURE SCOPE A larger sample size could be taken into consideration in future studies and preferably natural teeth of the same age group can be considered for experimentation. DISCLAIMER The products used for this research are commonly and predominantly use products in our area of research and country. There is absolutely no conflict of interest between the authors and producers of the products because we do not intend to use these products as an avenue for any litigation but for the advancement of knowledge. 
Also, the research was not funded by the producing company rather it was funded by personal efforts of the authors. CONSENT It is not applicable. ETHICAL APPROVAL It is not applicable. ACKNOWLEDGEMENT With Sincere gratitude, we acknowledge the staff members of the department of Conservative Dentistry and Endodontics and Saveetha Dental College and study participants for their extended support towards the completion of research.
3,774.4
2020-08-26T00:00:00.000
[ "Medicine", "Materials Science" ]
Ethanol-Induced Transcriptional Activation of Programmed Cell Death 4 (Pdcd4) Is Mediated by GSK-3β Signaling in Rat Cortical Neuroblasts Ingestion of ethanol (ETOH) during pregnancy induces grave abnormalities in developing fetal brain. We have previously reported that ETOH induces programmed cell death 4 (PDCD4), a critical regulator of cell growth, in cultured fetal cerebral cortical neurons (PCNs) and in the cerebral cortex in vivo and affect protein synthesis as observed in Fetal Alcohol Spectrum Disorder (FASD). However, the mechanism which activates PDCD4 in neuronal systems is unclear and understanding this regulation may provide a counteractive strategy to correct the protein synthesis associated developmental changes seen in FASD. The present study investigates the molecular mechanism by which ethanol regulates PDCD4 in cortical neuroblasts, the immediate precursor of neurons. ETOH treatment significantly increased PDCD4 protein and transcript expression in spontaneously immortalized rat brain neuroblasts. Since PDCD4 is regulated at both the post-translational and post-transcriptional level, we assessed ETOH’s effect on PDCD4 protein and mRNA stability. Chase experiments demonstrated that ETOH does not significantly impact either PDCD4 protein or mRNA stabilization. PDCD4 promoter-reporter assays confirmed that PDCD4 is transcriptionally regulated by ETOH in neuroblasts. Given a critical role of glycogen synthase kinase 3β (GSK-3β) signaling in regulating protein synthesis and neurotoxic mechanisms, we investigated the involvement of GSK-3β and showed that multifunctional GSK-3β was significantly activated in response to ETOH in neuroblasts. In addition, we found that ETOH-induced activation of PDCD4 was inhibited by pharmacologic blockade of GSK-3β using inhibitors, lithium chloride (LiCl) and SB-216763 or siRNA mediated silencing of GSK-3β. These results suggest that ethanol transcriptionally upregulates PDCD4 by enhancing GSK-3β signaling in cortical neuroblasts. Further, we demonstrate that canonical Wnt-3a/GSK-3β signaling is involved in regulating PDCD4 protein expression. Altogether, we provide evidence that GSK-3β/PDCD4 network may represent a critical modulatory point to manage the protein synthetic anomalies and growth aberrations of neural cells seen in FASD. Introduction Fetal alcohol spectrum disorder (FASD) is a global health problem. FASD encompasses a gamut of permanent birth defects caused by maternal alcohol consumption during pregnancy affecting 1 in every 100 live births in United States and Europe [1,2]. The most severe scale of FASD is symbolized by fetal alcohol syndrome (FAS) which is exemplified by facial dysmorphology, aberrations in growth and central nervous system (CNS) impairment. Regardless of widely disseminated knowledge about potential adverse effects of alcohol, a large number of women consume alcohol during pregnancy. Especially important, is that 18% of pregnant women abuse alcohol during their first trimester of pregnancy [3]. The effects of FAS are serious and irreparable and survivors may have to endure life-long disabilities including but not limited to developmental and birth defects as well as behavioral disorders [4,5]. The CNS is a major target for alcohol's actions and neurological/functional abnormalities include microencephaly, reduced frontal cortex, mental retardation and attention-deficits [6][7][8][9][10]. Several mechanisms apparently contribute to the alcohol-induced disruption of fetal brain development. 
Among these mechanisms are suppression of protein and DNA synthesis [11,12], inhibition of cell adhesion molecules [13] interference with cell cycle progression [14], alteration in receptor function [15][16][17], increased oxidative stress [18][19][20] altered glucose metabolism [21,22] disruption of endoplasmic reticulum [23], altered activity of growth factors [24] or other cell-signaling pathways [25] and abberant developmental regulation of gene expression [26]. PDCD4 is a tumor suppressor, known to control critical cellular growth events predominantly by suppressing cap dependent translation via its inhibition of eukaryotic initiation factor (eIF4A) [27] and blocking of transcriptional activity of pro-survival transcription factors, AP-1 and Twist by physical interaction [28,29]. Besides, its role in protein synthesis, PDCD4 also controls numerous genes that are implicated in cell cycle and differentiation. Studies demonstrate that PDCD4 has an inhibitory effect on cell proliferation and arrests cell cycle progression [30]. Recent studies suggest that expression of PDCD4 contributes to differentiation of skin (epidermal and hair follicles) which originates from ectoderm, which is also the origin of CNS [31]. Additionally, recent findings in Drosophila melanogaster germ stem cells uncovered the role for PDCD4 in stem cell maintenance and differentiation [32,33]. Narasimhan et al., (2013) from our laboratory has demonstrated that PDCD4 is robustly expressed in rat brain cerebral cortex, cortical neurons. Importantly, developmental ethanol exposure up-regulates the expression of PDCD4 in fetal cerebral cortical neurons which mediates the inhibitory effect of the drug on protein synthesis [11]. However, the molecular mechanism underlying ethanol-induced regulation of PDCD4 is currently not clear. Glycogen synthase kinase 3 (GSK-3) signaling pathway has been elegantly investigated with respect to embryonic brain development, regulating several downstream targets controlling diverse neural functions such as neurogenesis, neural polarization and outgrowth, synaptogenesis and neuronal migration (reviewed in [34,35]). Activation of GSK-3b signaling in response to ETOH has been documented in cerebellar neurons [36], CNS derived PNET2 cells (primitive neuroectodermal tumor 2) [37] and neuroblastoma cells [38]. The fact that GSK-3b is imperative during neural development and that ethanol exposure modulates its activity led us to hypothesize that alcohol-induced PDCD4 is regulated via alterations of GSK-3b signaling pathway. To test this, we utilized cortical neuronal progenitors (neuroblasts)possessing inherent characteristics of proliferation ultimately differentiating into post-mitotic neurons. Gene expression, stability, promoter based transcriptional studies showed that PDCD4 is transcriptionally upregulated by alcohol. Further using loss-offunction and pharmacological inhibition of GSK-3b, we have provided the first evidence that alcohol-enhanced PDCD4 is GSK-3b dependent. This study provides additional insights into the mechanism underlying alcohol-induced brain abnormalities occurring during early phases of fetal brain development. Cell Culture Rat brain cortical neuroblasts. We utilized spontaneously immortalized rat brain neuroblasts obtained from cerebral cortices of 18-day fetal rats (E18 neuroblasts). These cells were generously provided by Dr. 
Alberto Muñoz (Instituto de Investigaciones Biomédicas, CSIC, Madrid, Spain) and have been previously characterized as expressing the primitive neuronal markers nestin and NF-68 and not expressing the astrocyte marker glial fibrillary acidic protein (GFAP). They express neuron markers such as NF-145, NF-220 and neuron-specific enolase after differentiation induction with dibutyryl-cAMP [39]. Cells were cultured in Ham's F-12 media enriched with 10% FBS, L-glutamine (2 mM), streptomycin (100 mg/ml), penicillin (100 units/ml) and plasmocin (5 mg/ml). Cells were kept in an incubator maintained at 37°C under an atmosphere of 95% air and 5% CO2. All experiments were conducted within passages 2-8. SH-SY5Y culture. SH-SY5Y cells were sub-cultured using an equal mixture of minimum essential medium and F-12 HAM nutrient mixture supplemented with 10% FBS, antibiotic/antimycotic and plasmocin. Cells were maintained at 37°C in a 5% CO2 incubator. Passages 26-31 were used. Ethanol (ETOH) Treatment The majority of the experiments were performed using an ETOH concentration of 4 mg/ml (~86 mM). Dose-dependent experiments were carried out using three different concentrations of 1 mg/ml (~21 mM), 2.5 mg/ml (~54 mM) and 4 mg/ml (~86 mM) ETOH. To maintain ethanol concentrations in the media, we kept ETOH-treated cells in an incubator previously saturated with 100% ethanol (200 proof), and the media concentration was measured using an Analox AM1 alcohol analyzer (Analox Instruments, MA, USA) [11]. Control cells were maintained in an ethanol-free incubator. The ETOH dosage used in the study is within the physiological range and is also achieved by chronic alcoholics [40]. Cycloheximide (CHX) and Actinomycin D (Act D) Treatment For assessment of protein stability, neuroblasts were treated with 4 mg/ml ETOH for 12 h, and at the 12th hour CHX (20 mM) was added to inhibit protein synthesis. CHX treatment was for 1, 2 and 4 h. After treatment, cells were harvested for Western blotting analysis. To establish the protein degradation (turnover) rate, the densitometrically quantified PDCD4 protein levels, normalized to tubulin, were analyzed relative to levels at the beginning of CHX treatment. Likewise, for the evaluation of mRNA stability, neuroblasts were incubated with or without ETOH (4 mg/ml) for 12 h followed by Act D (1 µg/ml) treatment for 4, 8, 12 and 24 h and processed for Western and qRT-PCR analyses. No adverse effects of Act D and CHX were observed with the concentrations and time points used in the present study. RNA Isolation, cDNA Synthesis and Quantitative Real-time PCR (qRT-PCR) Total RNA was extracted from neuroblasts using TRIzol reagent according to the manufacturer's instructions (Invitrogen). 1-1.5 mg of total RNA was subjected to genomic DNA elimination and used for cDNA synthesis with the QuantiTect reverse transcription kit, following the manufacturer's instructions (Qiagen, Valencia, CA). For quantitative RT-PCR, one-tenth of the cDNA was used for amplification using predesigned TaqMan gene expression assays for rat PDCD4 (Rn00573954_m1) and GAPDH (Rn01775763_g1). The cycling conditions were as follows: 50°C for 2 min, 95°C for 10 min, 95°C for 15 sec and 60°C for 1 min; steps 3-4 were repeated for 39 cycles. Data were collected using CFX Manager Software and analyzed by the 2^-ΔΔCt method to calculate the relative fold change in mRNA expression.
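The 2^-ΔΔCt calculation named above is a simple arithmetic step. A minimal Python sketch follows; the Ct values in the example are illustrative placeholders, not the study's measurements.

```python
# Minimal sketch of the 2^-ΔΔCt relative quantification described above.
# Ct values below are illustrative placeholders, not the study's data.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated   # normalize target to GAPDH
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: PDCD4 vs GAPDH in ETOH-treated vs control samples (made-up Ct values)
print(fold_change(24.1, 17.0, 25.1, 17.0))   # 2.0, i.e. a two-fold increase
```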
Western Blotting Briefly, cells were washed in 1X PBS and lysed in radioimmunoprecipitation assay (RIPA) buffer supplemented with 1 X protease inhibitor cocktail (Sigma), sonicated (Sonics, vibra-cell ultrasonic processor) for 5 sec at an amplitude of 25% and centrifuged at 14000 rpm for 20 min at 4uC. Clarified supernatants was estimated for protein concentration and 30 mg protein was electrophoretically separated using sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and electrotransferred to polyvinylidene difluoride (PVDF) membrane (Bio-Rad, CA). Non-specific binding to membrane was blocked by 5% nonfat dry milk powder in PBST. Membranes were then incubated with primary antibodies against PDCD4, phospho-p70S6Kinase, p70S6Kinase, phospho-mTOR, mTOR, phospho-GSK-3b, GSK-3b, GAPDH and Tubulin (1:1000 or 1:500 concentration) for 3 h or overnight. After 3 PBST washes, membranes were incubated with anti-rabbit or anti-mouse IgG secondary antibody conjugated with horseradish peroxidase (1:10000) for 1 h. Blots were extensively washed in PBST and were developed with ECL chemiluminescence Western blot kit (Thermo scientific, IL, USA) and the signals were quantitated using Scion Image software (Scion Corporation, Frederick, Maryland, USA). The relative intensity of bands was normalized to the loading control, GAPDH or tubulin. Identification and Cloning of rat Pdcd4 Promoter The University of California, Santa Cruz (UCSC) Rat Genome Browser (Nov 2004 assembly) maps and Rat PDCD4 mRNA sequence from Genbank were used as references for the PDCD4 gene structure analysis (BC167751) (http://genome.ucsc.edu). We analyzed the genomic sequence 1046 bp upstream of the 59 terminus of first exon of PDCD4 mRNA (BC167751) corresponding to Rattus norvegicus chromosome 1 assembly. 1046 bp upstream from the transcriptional start site was amplified by PCR using bacterial artificial chromosome (BAC) clone (Assembly: RGSC_v3.4; Chr: 1; Begin-End: 259983790-260217550; Library selection: CH230). PCR was performed with Prime STAR Max premix using primers containing the cushion bases followed by flanking enzymes (bold and underlined) (forward: 59-aataatggtaccgagccgtgagctgtcctagt-39; reverse: 59-atataagctagccgctcgctctgtttgttttt-39). PCR was performed for 32 cycles under the following conditions: 98uC for 10 sec and 60uC for 10 sec and 72uC for 30 sec. The resulting PCR promoter fragment was purified and digested with KpnI and NheI restriction enzymes and ligated into the promoterless pGL4.16 firefly luciferase reporter plasmid to generate the PD PROM luc promoter construct. After verifying the fragment by restriction digestion and DNA sequence analysis (GENEWIZ Inc, South Plainfield, NJ), the plasmid was transformed in a NEB 10-beta competent cell E.Coli and were purified using the Plasmid Maxi Kit (Qiagen). Transient Transfection and Luciferase Assays Cells were transfected using Fugene HD or XtremeGENE HP DNA transfection reagent (Roche Applied Science, IN). 200 ng/ well of DNA construct (pGL4.16 or PD PROM), 3 ng of pRL-TK (Renilla luciferase for transfection efficiency), were transfected using 0.5 ml of Fugene HD or XtremeGENE HP DNA transfection reagent. Transfection was performed in Opti-MEM 1-reduced serum medium, according to the manufacturer's protocol. 24 h post-transfection of pGL4.16 and PD PROM constructs, cells were treated with or without ETOH (4 mg/ml) for 12 h or 24 h and were lysed using reporter lysis buffer (Promega). 
The lysates were clarified at 14,000 rpm for 10 min and the supernatants were used for dual luciferase assay using the Dual Luciferase Reporter Assay Protocol (Promega) in Glomax 20/20 Luminometer (Promega). For analysis, Firefly luciferase enzyme activity was normalized to corresponding Renilla luciferase enzyme activity. Lithium Chloride (LiCl) and SB-216763 Treatment Neuroblasts were seeded in 6 well plates at a density of 3610 5 cells/well. Following day, cells were treated with or without LiCl (10 mM) or SB-216763 (20 mM) for 1 h prior to 12 or 24 h ETOH (4 mg/ml) treatment. In the case of luciferase assay the inhibitor and ETOH treatment was performed 24 h posttransfection of PD PROM constructs as described above. On completion of experiments, cells were processed for downstream applications (luciferase assay, Western and quantitative real time RT-PCR). Small Interfering RNA (siRNA) Transfection Cells were seeded in 6 well plate at a density of 3610 5 cells/ well. Following day, cells were transfected with either nontargeting siRNA (scr siRNA, 100 nM) or siRNA targeting GSK-3b (si GSK-3b, 100 nM). Prior to transfection, cells were replaced with 800 ml of fresh media and transfected with 200 ml of transfection complex. 24 h later, cells were treated with or without 4 mg/ml ETOH for additional 24 h and processed for either Western blotting or RT-PCR analysis. For experiments involving siRNA followed by luciferase assays, cells were seeded in 12 well plates at a density of 1.5610 5 cells/ well. Cells were pre-transfected with scr siRNA or si GSK-3b for 24 h and followed by reporter construct transfection. Posttransfection of the constructs, cells were exposed to ETOH (4 mg/ml) for 24 h. Lysates were then processed for dual luciferase assays as described above. Statistical Analysis All results are expressed as mean 6 SEM. For comparing more than two groups, one way analysis of variance (ANOVA) followed by Student-Newman-Keul's post hoc analysis was used to determine statistical significance. For some experiments two way analysis of variance followed by Bonferroni post hoc tests were used. Student's t-test was used for experiments involving only two groups. p,0.05 was considered as statistically significant. All statistical analysis was conducted using GraphPad Prism software. The ''n'' number given in the figure legend is common to individual panels within a figure. Ethanol Induces PDCD4 Protein Expression in Cortical Neuroblasts Neuron generation during brain development involves neuroblast lineage progression that encompasses assymetrical division of neural progenitors, cell cycle exit and differentiation [41,42]. Our laboratory has previously shown that PDCD4 plays a critical role in ethanol-induced dysregulation of protein synthesis in PCNs and in utero binge alcohol model [11]. Since neuron development is progeny dependent, in the current study we investigated whether ethanol-induced PDCD4 changes are conserved in mitotic neuroblasts, the immediate precursor of neurons. This would facilitate understanding to what extent and how early the damage to the developing brain is inflicted by ethanol. We used spontaneously immortalized rat brain neuroblasts, an established in vitro model to study developmental brain signaling events [43]. We utilized a range of ETOH concentrations (21 mM to 86 mM) that has been demonstrated to produce in vitro changes that are comparable to alcohol-exposed animals with blood alcohol levels of ,150 mg/dl and also achieved by binge alcohol consumption [44]. 
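For the promoter-reporter readout described in the luciferase assay section above, the firefly signal is normalized to the Renilla signal and then expressed relative to the untreated control. The sketch below is illustrative only; the raw luminescence values are placeholders, not the study's data.

```python
# Minimal sketch of the firefly/Renilla normalization used for the promoter assays above.
# The raw luminescence values are placeholders, not the study's measurements.
def normalized_activity(firefly, renilla):
    return firefly / renilla            # correct for transfection efficiency

def fold_vs_control(sample, control):
    return normalized_activity(*sample) / normalized_activity(*control)

control = (120_000, 40_000)             # (firefly, Renilla) for untreated PD PROM
etoh    = (260_000, 42_000)             # (firefly, Renilla) after ETOH exposure
print(f"{fold_vs_control(etoh, control):.2f}-fold PD PROM activation")
```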
Treatment of neuroblasts with 4 mg/ml of ETOH for different periods resulted in increased PDCD4 protein levels. ETOH treatment resulted in a statistically significant (p,0.05) increase by ,2.0 and ,4.0 fold at 12 and 24 h respectively ( Figure 1A). We next used three different concentrations of ETOH (1.0, 2.5 and 4 mg/ml) to determine the dose dependent effects on PDCD4 protein expression. Figure 1B demonstrates that ETOH treatment dose dependently increased PDCD4 protein expression in neuroblasts (p,0.05 vs. control, compare lanes -2,3,4 vs 1). These results indicates that ETOH time and dose-dependently upregulates PDCD4 protein in proliferating neuroblasts, similar to our prior findings in postmitotic neurons. This further suggests that in developmental ethanol toxicity, PDCD4 could play a critical role and its regulation is likely conserved. Ethanol Up-regulation of PDCD4 Expression is not caused by Increased Protein Stability The two major events that could control gene expression are transcription (mRNA synthesis) [45,46] and translation (protein synthesis) [47]. Post-transcriptional mechanisms involving decreased mRNA degradation or increased mRNA stability and those involving decreased protein degradation or increased protein stability, could also influence gene expression [48,49]. Therefore, we examined how PDCD4 could be regulated by ethanol in cortical neuroblasts. mTOR/p70S6Kinase-mediated phosphorylation of PDCD4 results in degradation of PDCD4 by the ubiquitin ligase bTRCP [50]. Since PDCD4 protein is a target for degradation, we first investigated whether ETOH-induced increase in PDCD4 protein levels is due to protein stability using cycloheximide (CHX) experiments. Densitometric quantification of Western blotting analysis showed a significant increase in the expression of PDCD4 with ETOH treatment compared to control (p,0.05, compare lanes 8-11 vs lanes 1-4; Figure 2 A). However, a delayed trend was noted in the rate of PDCD4 decay (ETOH+ CHX-t1/2 , 2.52 h vs untreated+CHX-,2.25 h; t1/2 in Figure 2A, lower panel) following the addition of CHX. To further confirm that stability based mechanisms are not primarily involved in PDCD4 upregulation by ETOH, immunoblotting analysis for activation of mTOR and p70S6Kinase was performed using phosphorylation specific antibodies for mTOR (S2448) and p70S6Kinase (Thr389) and we observed no statistical changes in their phosphorylation ( Figure 2B and 2C). This excludes a role for the mTOR/p70S6K pathway in controlling ETOH-induced PDCD4 regulation. If stability based control were in play, one would have expected the increased PDCD4 protein to be maintained in the presence of CHX+ETOH (indicated by dotted PLOS ONE | www.plosone.org line). Altogether, these data suggests that ETOH mediated PDCD4 protein increase is not due to PDCD4 protein stability (Figure 2A, B and C). Ethanol Induces PDCD4 mRNA and does not Affect mRNA Stability In view of the fact that PDCD4 is regulated by miR-21 [51,52], we next determined if ethanol regulates PDCD4 at the posttranscriptional level. To assess this, we first tested whether ETOH and mTOR from untreated and ETOH treated cell lysates. The signals for the bands were quantitated densitometrically and the intensity of phospho-mTOR relative to the levels of mTOR protein expression was calculated (ns not significant when compared to control as determined by student's t-test) (lower panel). 
(C) Western blot analysis of phospho-p70S6Kinase (Thr 389) and p70S6Kinase on untreated and ETOH treated whole cell lysates (top panel). Bottom panel illustrate the densitometric quantification of phospho-p70S6Kinase to total p70S6Kinase (ns, not significant compared with untreated control as analyzed by student's t-test). n = 3. doi:10.1371/journal.pone.0098080.g002 induces PDCD4 message using real time PCR analysis. ETOH time dependently effected a significant (p,0.05) increase in PDCD4 mRNA ( Figure 3A). The effect was observed only beyond 4 h of ETOH treatment (4 h data not shown). To further elucidate whether ETOH-dependent increase in PDCD4 mRNA levels is due to an increased half-life of the transcripts, mRNA stability experiments were performed using Act D to arrest de novo mRNA synthesis. Real time-PCR analysis demonstrated that PDCD4 mRNA levels decreased with Act D treatment irrespective of ETOH exposure in neuroblasts ( Figure 3B). This suggests that ETOH mediated PDCD4 mRNA increase is not due to PDCD4 mRNA stability. Further analysis of the Act D exposed samples for PDCD4 protein expression revealed a delayed decay in response to ETOH (,80% decay in control, 4 h vs ,15% in ETOH, 4 h) ( Figure 3C). This delay in PDCD4 protein decay appears to be maintained until 12 h (control ,95% vs ,60%). Interestingly, this trend is seen when PDCD4 mRNA is decreased pointing out a compensation based sustenance of PDCD4 protein (when mRNA is blocked) during combined stress (ACT D+ETOH) ( Figure 3C). These results also highlight regardless of co-existence of any other cellular stress along with ETOH (Act D/CHX), PDCD4 changes are sensitive to ETOH. To note, Lu et al. [52] has reported that miR-21 translationally represses PDCD4 as against the normal post-transcriptional based silencing. At this point, though the mechanism is not clear, we speculate that during a global inhibition of de novo mRNA synthesis using Act D, miR-21 biogenesis could also be affected. Given these facts, in this context, we hypothesize an Act-D induced miR-21 reduction might relieve the translation check that it had on PDCD4. Overall, these data suggests that ETOH-induced PDCD4 changes are not dependent on mRNA stability. Ethanol Transcriptionally Activates PDCD4 Expression in Cortical Neuroblasts We have shown that ETOH induced PDCD4 transcript is not influenced by mRNA stability which suggests that the regulation could be at the transcriptional level. As described above in the experimental section, UCSC genome browser and Genbank (Accession No. BC167751) were used as references for the prediction of putative Pdcd4 promoter. The rat Pdcd4 gene is located on chromosome assembly 1q55 and contains 12 exons (E1-E12) where E1 is a non-coding exon which forms the 59 untranslated region ( Figure 4A) and so far, two splice variants of rat Pdcd4 is known [53]. The 59-most nucleotide of the largest cDNA clone available (Genbank accession # BC167751.1) was designated as transcriptional start site. Analysis of this region indicated the presence of a CpG island which is represented by the horizontal bar above E1 of rat Pdcd4 gene. In general, CpG islands are typically found near transcription start sites (TSS, Fig. 4A), and are considered to be one of the most reliable predictors of promoter in the mammalian genome other than TATA box and initiator region (Inr) [54]. Subsequently, the putative promoter chosen by us for this study was also validated using two bioinformatics based tool provided by RIKEN and http://rulai. 
cshl.edu (Accession No.86712). A 1046 bp segment upstream of the 59 flanking region of E1 of Pdcd4 gene representing putative rat Pdcd4 promoter (PD PROM) was PCR amplified using BAC clone ( Figure S1) which was subsequently cloned into a reporter construct and sequence verified. Transient transfection of PD PROM demonstrated increased luciferase activity compared to pGL4.16 ( Figure 4B) confirming that indeed the genomic fragment exhibited transcriptional activity. Next, we examined the effect of ETOH on PD PROM activity. As shown in Figure 4C, PD PROM reporter activity was significantly increased (p,0.001) by , 2-fold when exposed to ETOH. These results indicate that ETOH induced Pdcd4 expression occurs at the level of gene transcription. Ethanol Activates GSK-3b in Cortical Neuroblasts Having established that ETOH transcriptionally activates PDCD4, we next explored for the possible regulator involved in this control. Recent studies have documented that ETOH promotes glycogen synthase kinase 3b (GSK-3b) signaling in CNS and modifies critical neurogenetic processes by regulating downstream targets. Therefore, we tested whether ETOH's induction of PDCD4 is mediated by GSK-3b activation. To test this assertion, we performed Western blotting on untreated and ETOH-treated cell lysates to determine GSK-3b kinase phosphorylation. Phosphorylation at Ser 9 negatively regulates the activity of GSK-3b whereas phosphorylation at Tyr 216 positively regulates its activity [55]. Using GSK-3b Ser 9 specific phospho antibody, we demonstrated that ETOH-treatment significantly decreased the inhibitory phosphorylation starting from 2 to 24 h compared with the control (Figure 5A and 5B). The later time points demonstrated a remarkable reduction in Ser 9 phopshorylation indicating enhanced activity of GSK-3b (compare lanes 5, 6 vs 1; Figure 5B). While no changes in GAPDH normalized total GSK-3b levels were observed ( Figure 5C). In addition, Tyr 216 phosphorylation of GSK-3b was found to be unchanged in response to ETOH treatment ( Figure S2). As a GSK-3b functional assay, phosphorylation of one of its substrates, b-catenin at Ser33/ Ser37/Thr41, was assessed using phospho-specific (Ser33/Ser37/ Thr41) antibody. Phosphorylation at these sites by GSK-3b destabilizes and degrades b-catenin [56]. Evidently we observe a significant decrease in b-catenin protein expression on alcohol treatment ( Figure S3). This was paralleled by an increase in GSK-3b specific phosphorylation of b-catenin at Ser33/Ser37/Thr41 ( Figure S3). This suggests that Tyr 216 phosphorylation does not contribute to the activity of GSK-3b and in fact, the decrease in ser 9 inhibitory phosphorylation (Fig. 5) is sufficient to keep GSK-3b active. It has been suggested that activation of GSK-3b could occur independent of changes observed in Tyr 216 or Ser 9 involving several post-translational mechanisms [57,58]. As Wnt-3a is a negative regulator of GSK-3b, we next assessed the role for Wnt-3a in GSK-3b-mediated PDCD4 regulation utilizing recombinant Wnt-3a experiments. We noted that Wnt-3a treatment decreased resting PDCD4 expression suggesting a Wnt3a/GSK-3 signaling in PDCD4 regulation under basal conditions (lane 1 vs 3; Figure S4). Further, Wnt-3a pretreatment significantly decreased ETOH-induced PDCD4 protein expression (lane 2 vs 4; Figure S4). In support to this finding, Vangipuram and Lyman (2012) have documented that ethanol has a negative impact on Wnt/GSK-3b/b-catenin signaling pathway in human neural stem cells [59]. 
Our future study will address how GSK-3/catenin signaling downstream of Wnt-3 regulates PDCD4 expression. Altogether, these results suggest that the Wnt-3/GSK-3β/catenin pathway might control PDCD4 regulation and that a decrease in GSK-3β Ser 9 phosphorylation, along with unknown post-translational modifications, might have a predominant influence on activating GSK-3β in response to ETOH in neuroblasts.

Pharmacological Inhibition of GSK-3β Blocks ETOH-induced PDCD4

So far we have shown that ETOH upregulates PDCD4 beyond 8 h while activating GSK-3β as early as 2 h. Therefore, we next tested whether this GSK-3β activation regulates PDCD4 using the pharmacological inhibitors lithium chloride (LiCl) and SB-216763. GSK-3β activity was evaluated using an antibody specific for inhibitory phosphorylation at Ser 9, which was observed to be enhanced on chemical inhibition using LiCl (Figure S5; Figure 6C and F). Altogether, these data suggest that ETOH-induced PDCD4 upregulation is under the control of GSK-3β. Since we did not observe a total blockade of PDCD4 expression by GSK-3β inhibition, we do not exclude the interplay of other mechanisms in regulating PDCD4. Specificity concerns in the use of pharmacological inhibitors led us to adopt a gene silencing-based strategy to elucidate the molecular involvement of GSK-3β in ETOH-induced PDCD4 regulation. First, the efficiency of GSK-3β-specific siRNA in knocking down GSK-3β expression was tested using immunoblotting. A clear decrease in GSK-3β levels by ~50% was observed in cells transfected with GSK-3β siRNA when compared to scrambled non-targeting siRNA (Figure 7A). Using this loss-of-function strategy, we next demonstrated that GSK-3β siRNA transfection by itself significantly decreased the expression of PDCD4 (lane 3 vs lane 1; Figure 7B) (p<0.05), suggesting a role for GSK-3β in basal PDCD4 expression. Though a significant downregulation of PDCD4 is achieved by blockade of GSK-3β, a notable residual level of PDCD4 is still observed, which may be attributed to incomplete GSK-3β silencing (lane 2 vs lane 1; Figure 7A). Further, ETOH-induced PDCD4 was also significantly (p<0.05) blocked by the downregulation of GSK-3β (Figure 7B; lane 4 vs lane 2). In a similar manner, downregulation of GSK-3β resulted in a significant reduction of both basal (lane 1 vs lane 3; Figure 7C) and ETOH-induced PDCD4 mRNA levels (p<0.05) (lane 2 vs lane 4; Figure 7C). Additionally, we determined whether the above GSK-3β-dependent changes in the Pdcd4 message are influenced at the level of gene transcription using promoter assays. Figure 7D shows a significant decrease in PD PROM luciferase activity in cells transfected with GSK-3β siRNA when compared to scrambled siRNA (p<0.05) (lane 1 vs lane 3). Over and above this, ETOH-induced PD PROM activity was remarkably blocked in GSK-3β-silenced cells (lane 2 vs lane 4; Figure 7D). Altogether, these data strongly indicate a molecular involvement of GSK-3β in regulating Pdcd4 gene expression in the resting as well as the ETOH-inducible state in cortical neuroblasts.

Discussion

Consumption of alcohol during the sensitive periods of neurogenesis can lead to developmental brain disabilities associated with FAS [60]. In post-mitotic neurons and in the cerebral cortex of a rat FAS model, we previously reported a novel molecular mechanism involving PDCD4 by which ETOH suppresses protein synthesis, a process that is critical for brain development.
PDCD4, endowed with a capability of suppressing translation, is known to play a pivotal role in cell proliferation, differentiation, migration, invasion, inflammation, apoptosis, drug sensitivity, tumorigenesis and progression [61].

Figure 4. (A) The rat Pdcd4 gene consists of 12 exons (E1-E12), of which E2-E12 are coding exons and E1 is a non-coding exon which forms the 5′ untranslated region. Sequences 1046 bp upstream of the transcriptional start site (TSS), regarded as the Pdcd4 promoter, were used in the present study. The transcriptional start site is designated as +1 and the CpG island is represented by the horizontal bar above E1 of the rat Pdcd4 gene. (B) Cells were transfected with either pGL4.16 or PD PROM constructs and luciferase activity was evaluated 24 h post-transfection using the dual luciferase reporter assay system. Data was analyzed using Student's t-test. *p<0.001. (C) Neuroblasts were transfected with either pGL4.16, promoterless, or PD PROM (−1046) for 24 h. Following transfection, cells were treated with or without 4 mg/ml of ETOH for 12 h and were processed for luciferase activity. Data was analyzed using one-way ANOVA and Newman-Keuls post hoc test. *p<0.001. n = 6. doi:10.1371/journal.pone.0098080.g004

Alcohol has been shown to affect essential processes such as neuronal migration, neurogenesis and gliogenesis during the early phases of brain development [62]. Given this premise, along with the scant information available, it is essential to uncover the potential mechanisms underlying PDCD4 regulation during ethanol neurotoxicity. To this end, we identified, for the first time, a role for GSK-3β in the transcriptional control of PDCD4 during normal as well as ETOH-stressed neurogenetic processes using an immediate neuronal precursor cellular system. Our current study reinforces the notion that ETOH disrupts PDCD4 expression in neuroblasts (Figure 1), akin to our previous findings in primary cortical neurons, suggesting that PDCD4 could be an alcohol-sensitive and developmentally regulated gene in the brain. Therefore, any distortion of the PDCD4 regulatory network by ETOH during the vulnerable period of cortex development is expected to strongly impact fetal cortical architecture. Our study provides a crucial link between ethanol and a critical molecule that is involved in cellular differentiation. PDCD4 has been widely reported to be regulated post-translationally by mTOR/p70S6kinase/β-TrCP-dependent proteasomal degradation and post-transcriptionally by miR-21 [50,63]. However, the current study using CHX and Act D excludes a role for the above mechanisms in ETOH-induced upregulation of PDCD4 in neuroblasts (Figures 2 & 3). On the contrary, we observe an increase in PDCD4 levels even when the existing transcripts are rapidly eliminated (ETOH + Act D: t1/2 ~8 h vs untreated + Act D: t1/2 ~15 h). These observations point to the existence of an inductive phenomenon for ETOH-specific PDCD4 regulation, in addition to supporting the claim that transcripts with faster decay undergo larger induction [64]. Such a relationship has already been documented as a fundamental principle for a category of mRNAs involved in transcription, signal transduction and stress-response [64,65]. Transcriptional induction is the first level of gene regulatory control. In line with this, our studies with the PDCD4 promoter demonstrated that ETOH transcriptionally upregulates PDCD4 gene expression in brain neuroblasts (Figure 4) and point to the involvement of the ~1 kb proximal promoter fragment upstream of the transcription start site.
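The half-life comparison cited above (t1/2 of roughly 8 h with ETOH + Act D versus roughly 15 h untreated + Act D) is the kind of estimate that typically comes from fitting an actinomycin D chase to first-order decay. The sketch below is purely illustrative and not taken from the paper: the time points and remaining-mRNA fractions are hypothetical, chosen only to show how a log-linear fit of ln(remaining) versus time yields a decay constant k and a half-life t1/2 = ln 2 / k.

# Illustrative only: estimating a transcript half-life from an Act D chase
# by fitting first-order decay, ln(remaining) = -k * t, so t1/2 = ln(2) / k.
import numpy as np

def half_life(hours, fraction_remaining):
    # log-linear least-squares fit of the decay curve
    slope, _ = np.polyfit(hours, np.log(fraction_remaining), 1)
    k = -slope                      # first-order decay constant (per hour)
    return np.log(2) / k

t = np.array([0.0, 4.0, 8.0, 12.0])
untreated_frac = np.array([1.00, 0.85, 0.70, 0.58])   # hypothetical remaining mRNA
etoh_frac = np.array([1.00, 0.72, 0.50, 0.35])        # hypothetical remaining mRNA

print(f"t1/2 (untreated + Act D) ~ {half_life(t, untreated_frac):.1f} h")
print(f"t1/2 (ETOH + Act D)      ~ {half_life(t, etoh_frac):.1f} h")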
PDCD4 has already been reported to be transcriptionally controlled by v-myb, Sp1, ZBP-89, Smad3 and RAR-α [66][67][68][69] in non-neuronal systems. Our experiments with neuroblasts indicate that GSK-3β-specific phosphorylation of β-catenin is increased and, subsequently, the expression of β-catenin is reduced in response to ETOH treatment (Figure S3). It has been reported that Wnt/β-catenin signaling is appropriately regulated by the protein level of β-catenin, which in turn is modulated by its phosphorylation [70]. This suggests that the phosphorylation-dependent decrease in β-catenin level could likely derepress the effect that β-catenin had on the Pdcd4 promoter, resulting in an increase in Pdcd4 expression. Several studies have shown that β-catenin can repress genes such as Tcf3, NF-κB and 15-PGDH, which are derepressed and activated upon knockdown or transcriptional inactivation of β-catenin [71][72][73]. Taking these studies together with our data, we speculate that PDCD4 could be yet another derepressed target of β-catenin signaling, a possibility which is under investigation. In general, stimuli from the extracellular milieu are conveyed via signaling intermediates involving several protein kinases to regulate gene expression. In this context, exploring possible regulators in the form of stress kinases led to the discovery of GSK-3β in controlling the ethanol-induced regulation of PDCD4 in neuroblasts. Interestingly, pharmacological inhibition and molecular loss-of-function approaches showed a role for GSK-3β in the baseline regulation of PDCD4 in rat cortical neuroblasts. GSK-3β-dependent basal regulation of PDCD4 was also noted in the human dopaminergic SH-SY5Y neuronal cell model, signifying the conservation of PDCD4 regulation across species (rat and human) and across developmental stages of the neuronal lineage (neuro-precursor mitotic neuroblasts and differentiated neurons) (Figure S6 & Figure S7).

Figure 6. Chemical inhibition of the GSK-3β signaling pathway decreases PDCD4 expression. Cells were pre-treated with LiCl (10 mM) or SB-216763 (20 mM) for 1 h and then exposed to ETOH for 12 h. Panel A shows Western blot and densitometric scanning analysis of PDCD4 and tubulin with LiCl and ETOH treatment. (B) At the end of LiCl and ETOH treatment, cells were processed for qRT-PCR analysis to determine PDCD4 mRNA expression. Results are expressed as fold change in mRNA relative to the GAPDH control. (C) Cells were transfected with either the promoterless pGL4.16 plasmid or the 1046 bp PD PROM construct for 24 h. Following transfection, cells were pretreated with LiCl for 1 h and then exposed to ETOH for 12 h. After treatment, cells were processed for determining luciferase activity. Data are expressed as fold change relative to control. D, E and F depict the Western blot analysis, RT-PCR analysis (expressed as fold change) and relative luciferase activity (expressed as fold change) when treated with SB-216763, a known GSK-3β inhibitor. In A-F, statistical analysis was performed using one-way ANOVA followed by Newman-Keuls post hoc correction. * is statistically significant at p<0.05. n = 6. doi:10.1371/journal.pone.0098080.g006

GSK-3β is a key constituent of the canonical Wnt/β-catenin signaling pathway in vertebrates and of the wingless signaling pathway in Drosophila, and has long been known to play a significant role in embryonic brain development (reviewed in [74,75]).
Alterations in GSK-3β activity produce abnormalities in neurogenesis, synaptogenesis, cell polarity, neuronal migration, axon growth and neuronal plasticity, which are also pertinent to ethanol insult (reviewed in [34,35,76,77]). Though a positive role for GSK-3β in PDCD4 protein regulation has recently been reported in lung cancer cells [78], our studies provide the first evidence for transcriptional regulation of PDCD4 by GSK-3β in a developmental neuronal setting. Furthermore, our results suggest that abnormal expression of PDCD4, induced by ETOH via the GSK-3β signaling pathway, may underlie the neurogenetic abnormalities seen with FAS. Of note, the importance of PDCD4 in brain cell proliferation, maintenance or differentiation is being investigated in our laboratory. Taken together, we propose that ablation of the GSK-3β/PDCD4 network and/or identification of vital regulatory motifs of the PDCD4 promoter that are responsive to ETOH will enable the generation of better molecular checkpoints for mitigating the neurotoxic effects of ETOH during development.

Figure S1. Amplification of the −1046 bp rat Pdcd4 promoter fragment. Agarose gel electrophoresis showing the amplification of the putative Pdcd4 promoter fragment of 1046 bp (PD PROM) using the primers described in the methods section. The PCR product was resolved on a 1% agarose gel and visualized by staining with ethidium bromide. Lane 1 and lane 2 depict the 1 kb ladder and the PCR amplification product, respectively. (TIF)

Figure S2. Effect of ethanol on GSK-3β Tyr 216 phosphorylation. Neuroblasts were treated with ETOH (4 mg/ml) for 2, 4, 8, 12 and 24 h. Tyrosine phosphorylation of GSK-3β was determined in control and ETOH-treated cells by Western blot analysis using a p-GSK-3β Tyr 216-specific antibody (top). Statistical significance was evaluated by normalizing with GSK-3β and tubulin (bottom). Statistical analysis was performed using one-way ANOVA followed by Newman-Keuls post hoc test. Data points were not significant when compared to untreated control, n = 3. (TIF)

Figure S3. Ethanol enhances phosphorylation and degradation of β-catenin. Neuroblasts were treated with ETOH (4 mg/ml) for the indicated time points. The extent of phosphorylation of β-catenin was determined in control and ETOH-treated cells by Western blot analysis (top) using a phospho-specific antibody against β-catenin (Ser33/Ser37/Thr41). Phosphorylation of β-catenin was evaluated by normalizing with β-catenin, and β-catenin expression levels were normalized using tubulin (bottom). Statistical analysis was performed using one-way ANOVA followed by Newman-Keuls post hoc test. * denotes p<0.05 when compared with untreated control, n = 3. (TIF)

Figure S4. Wnt-3a inhibits basal and ETOH-induced PDCD4 protein expression. Neuroblasts were pre-treated with recombinant Wnt-3a (25 ng/ml) for 1 h followed by treatment with ETOH (4 mg/ml) for 12 h. At the end of the experiment, lysates were immunoblotted for PDCD4 and tubulin expression. PDCD4 expression was evaluated by normalizing with tubulin. Statistical analysis was performed using one-way ANOVA followed by Newman-Keuls post hoc test. * denotes p<0.05, n = 3. (TIF)

Figure S5. Effect of LiCl on GSK-3β Ser 9 phosphorylation. SH-SY5Y cells were pretreated with or without 10 mM LiCl for 1 h followed by ETOH treatment for 12 h and were probed with anti-phospho-GSK-3β (Ser 9) and GAPDH. (TIF)

Figure S6. Chemical inhibition of GSK-3β with SB-216763 decreases PDCD4 protein expression in SH-SY5Y.
Cells were treated with or without SB-216763 (20 mM) for 12 h and were immunoblotted for PDCD4 and GAPDH.
8,568
2014-05-16T00:00:00.000
[ "Biology", "Medicine" ]
My body and other objects: The internal limits of self-ownership Eur J Philos. 2019;1–18. Abstract Common practices such as donating blood or selling hair assume rights of disposal over oneself that are similar to, if not indistinguishable from, property rights. However, a simple view of self‐ownership fails to capture relevant moral differences between parts of a person and other objects. In light of this, we require some account of the continuity in the form of ownership rights a person has over herself and other objects, which also acknowledges the normative differences between constitutive parts of a person, on the one hand, and external objects, on the other. This paper provides such an account by arguing that there are reasons internal to a general justification of property rights to limit the extent of powers included in ownership of different kinds of object, depending on how the person is situated in relation to them. Rejecting a typical Hohfeldian view of property as a univocal, gradable concept allows us to make space for a new approach to property and self‐ ownership: one which can make sense of various uses of the body as property without entailing that our relation to those parts is exhaustively characterised by an ordinary property right. | INTRODUCTION The standard view of property rights among legal scholars and philosophers alike is a Hohfeldian one. According to this view, property consists of rights, not things, and ownership consists in a bundle of claims, privileges, powers, and immunities held against others with respect to some thing. 1 This bundle theory of property is conducive to thinking of ownership as a gradable concept, whereby to have "full" ownership of a given object entails having the logically strongest set of possible incidents over that thing. According to this assumption, any bundle of property rights that holds less than this maximum set of incidents counts as something less than "full" ownership. This understanding of the structure of property rights has strong implications for theorising self-ownership as a principle of autonomy. For if the principle of self-ownership is supposed to be constitutive or explanatory of autonomy, then it would seem that anything less than "full" self-ownership would equate to something less than "full" autonomy. 2 Any denial or restriction of any logically possible stick in the bundle of property rights that could make up self-ownership would need special justification. This view treats the idea of property as a univocal concept which takes the same form no matter what kind of object property rights are taken to range over. In the literature on self-ownership, this leads to a symmetry being assumed between the power of selfownership and the power of ownership over ordinary objects-that in both cases, the more incidents of property rights one holds over the thing that is owned, the "fuller" the right of ownership. 3 It is this assumption that underlies some of the most controversial conclusions about self-ownership, such as Nozick's permissive stance on voluntary slavery, or his argument that taxation amounts to a form of slavery. This jars with critics such as Alan Ryan (1992): It is just because we take a relaxed view about people's rights over their cars, bicycles, books and the rest that Nozick can suggest that if these are my lungs, I can do what I like with them. 
Conversely, it is just because we don't take a relaxed view about people's rights over their bodies that Nozick can suggest that we have no right to tax people against their will, just as we have no right to force them to marry against their will. The crux of Ryan's point is that property rights, coming as they do with powers to alienate, are specifically applicable to ordinary, disposable objects precisely because they have relatively little moral importance. Our bodies, on the other hand, require a different set of rights to adequately reflect their different moral nature. To assert a symmetry between "full" property rights and "full" self-ownership as Nozick does is to try to play these conflicting intuitions both ways. Ryan and others take this as a point of departure from the idea of self-ownership, satisfied that this tension undermines the case for using the concept of property as a framework with which to express a fundamental principle of autonomy. This paper proposes an alternative response to Ryan's criticism, by suggesting that there is room within the Hohfeldian conception to resist the univocal view of property sketched above. Instead of thinking of "full selfownership" as consisting in the logically maximal set of Hohfeldian incidents each person could hold over herself, I argue that we can identify reasons internal to a general justification of property rights that justify limiting the extent of powers included in ownership of different kinds of object (including parts of one's body). This approach paves the way for an alternative view of ownership rights, one which provides the tools to explain differential limits to powers of ownership and alienation with respect to different kinds of object, depending on the way in which we relate to the object in question, and variable across different contexts. 4 The term "object" here is meant in a broad sense to include intangible goods which may become objects of intellectual property through trademarks, copyrights, or patents. Rejecting the homogeneity of the structure of property in this way allows us to make sense of various uses of the body as property in some contexts, without committing to the view that our relation to those parts is exhaustively characterised by a fixed "full" bundle of property rights. The central proposal is as follows. Property rights are justified on the basis of protecting a basic normative power of control for persons and of enlarging the scope of objects over which a person is able to exert that control. 5 In order to determine the limits of property rights, we need to understand the way in which we relate to various kinds of object (including tangible and intangible objects, as well as parts of one's body) and how different uses of those objects impact that basic power. This pushes back against the assumption that a "full" property right in anything must entail having the maximum set of incidents of ownership that is logically conceivable 6 and that allowing any welfare-based considerations to bear on restricting that set is to invoke utilitarian trade-offs. Instead, it provides an alternative way of understanding what constitutes "full" ownership of some object, by explaining how object-and context-dependent limits on powers of ownership arise from reasons that are internal to a general justification of property rights. 
The internal limits in question can be determined by the extent to which a given framework of rights serves to protect or enhance this basic power when deployed as a way of regulating our use of different kinds of object. Part of this project involves disentangling the notions of "property" and "ownership" by examining more closely the structure of relations of ownership-that is, the relations in which an owner stands to the objects she calls property, as well as to other people. I suggest that in order to understand property as a relational framework, it is necessary to explore further the nature of the relation between the person, the body, and other objects and to explain the significance of how the person is situated in this relation. And it is the very nature of these relations in different contexts which places internal limits on the way in which ownership can be construed. This allows us to conceive of the structure of ownership rights over oneself as continuous with the structure of ownership rights over ordinary objects, without assuming that the concept of ownership applies univocally in all cases. We can still recognise that there may be other values which come into play in the wider debate about how to shape and limit property rights. But understanding the way in which systems of property may be limited on their own terms allows us to get a grip on complex debates surrounding how to regulate uses of the body and other objects of personal importance, without having to draw on controversial essentialist claims about the value of certain things. It also provides an effective riposte to the libertarian who would staunchly oppose any limits on self-ownership on the basis of the primacy of the value of autonomy. | PERSONS VERSUS PROPERTY Some initial work is required to further motivate this approach. Looking back to Ryan's critique, one plausible way to understand the mistake attributed to Nozick is that it stems from a failure to recognise that there is a sharp distinction between persons and objects. And it is a mistake to treat the body as one among the many objects we can have property over, because when we are talking about rights, the body falls within, or perhaps constitutes, the boundary of the concept "person." Once we recognise this, it might seem obvious that the kinds of rights we attribute to persons differ substantially to the kinds of rights that persons can have over objects. Persons are the subjects of rights, whereas things are the objects of rights held by persons. In Ryan's words, we do not take a relaxed view about people's rights over themselves. As such, it would be appropriate to draw a sharp distinction between property (rights over objects) and rights over one's own person. This interpretation ignores a crucial complicating factor, which is that people's bodies are objects too, and there are many respects in which we treat them as such. Notably, one thing that does distinguish one's body from other objects is that it is not something one must acquire in order to have rights in it. It is a constitutive part of one's person. This pretheoretic difference may be sufficient to establish natural rights of exclusive control for each individual over her own body. However, it leaves open the question of whether and to what extent those rights may be alienable, as property rights in external objects are ordinarily taken to be. 
Moreover, many of the ways in which we treat our bodies or body parts as objects do not involve any intellectual or moral mistake but rather arise from a reasonable interest in doing so. Examples of such cases bring out the way in which the supposedly distinct spheres of person and object are much more closely interwoven than is often acknowledged. Although the link between the structure of rights over persons and those over objects may not be as continuous as assumed in the traditional literature on self-ownership, there is a need to be able to bridge the gap between the two spheres. I suggest that there is some basis for thinking of the structure of rights over the person as continuous with the structure of property rights over objects, while distinguishing differential limits to the scope of those rights as they pertain to different objects. That is, certain rights of the person are conceived as inalienable, some rights over objects (including aspects of the body) as potentially alienable, and others as straightforwardly so. We can do this without assuming that any limitation on the scope of an ownership right must arise from a trade-off between values external to the justification of property and the interests it centrally protects. An example that illustrates the problem of assimilating the body under the abstract concept "person" and distinguishing this sharply from mere objects is the case of blood transfusions. Jean-Pierre Baud (2007) explains that this distinction came under real scrutiny in French law towards the middle of the 20th century, as technological advances changed how blood transfusions were carried out. Under French civil law, the body had been protected under the umbrella of the abstract legal notion of the person. As such, the body and its parts were not considered alienable, except in cases of slavery, where such alienation was just a consequence of the total alienation of the person. In cases where body parts had been severed, these were treated as minor cadavers, with funeral rites accordingly imposed for "notable parts of the body." When blood transfusions became possible at the beginning of the 20th century, they were initially carried out arm to arm and were therefore not construed as donations. Instead, they were characterised as a medical procedure carried out by a doctor and made possible by the act of a "donor." 6 By the 1950s, however, procedures in which blood was collected and stored outside the human body before being transfused into the patient had become commonplace. Reluctance to recognise the blood as an object under the law meant that sale of human blood and blood products was not viewed as a transfer of property but instead labelled as "deliverance against payment" ("délivrance à titre onéreux"). Pharmacies did not buy blood in order to sell it on but rather accepted products to be "deposited in dispensaries." All this equivocating stemmed from an overarching concern to protect the dignity of the person and the sanctity of the body. The legal status of blood was thus largely set out in negative terms, on the insistence that blood was not a commodity and should not be considered to be a medicine-blood was not a thing. The result was that human blood and blood products were distributed for 40 odd years without having a determinate legal status. As Baud explains, from a legal point of view, it was as though the blood did not exist. 
This was made evident in a most serious manner in the wake of the contaminated blood scandal, in which it was revealed that the Centre National de Transfusion Sanguine had knowingly provided transfusions of blood contaminated with HIV to haemophilia patients in 1984 and 1985. Representatives of the transfusion centre argued on the basis of the legal non-existence of blood that the plaintiffs had no recourse against having been provided contaminated material. In a perverse twist of argument, they challenged as follows: How dare the court sanction such an affront to the dignity of the person by deigning to recognise blood as having the status of a dangerous product. To do so would entail considering the body to be a thing (Baud, 2007, p. 774). In response, the Paris Court of Appeal and a judgement of the High Court of Toulouse finally ruled in 1991 and 1992 that blood was a thing that could be bought and sold, categorising it somewhat inappropriately as a "substance hazardous to human health." 6 Donating parts of one's body amounts to the kind of act of disposal that we usually take to be the prerogative of a property owner. Baud explains that some theorists would like us to distinguish between "property ownership" and "belonging" ("la propriété" and "l'appartenance"), such that someone's hair can be said to belong to her, yet only becomes her property once cut from her head. Baud suggests that this is mere word play. If the rights entailed by my hair belonging to me include the right to cut it off and sell or donate it, then I must have had the right to alienate my hair as property all along. Any form of "belonging" which includes an exclusive right to dispose of the body part in question already includes those features of alienation which come with property ownership. The critics' point is perhaps more clearly pressed as the suggestion that I could not transfer rights of ownership over my hair until it had become detached from my head. However, simply appealing to the fact that my hair, while still on my head, remains a constitutive part of me does not yet provide us with a principled reason to draw the distinction between belonging and property ownership in this way. Consider, for example, a person who promised to donate her hair to someone or contracted to do so. We can think of those as cases where the person alienates various claims, privileges, powers, and immunities with respect to her hair before it is cut from her head. This kind of alienation is functionally similar to transferring ownership before the hair is detached. And yet, simply appealing to the fact that the hair is still part of the person in this case, I suggest, provides no compelling reason to object to such practices. Some further explanation is required to account for the continuity between ownership/alienability of the detached body parts and ownership/alienability of those same parts while still integrated within the body. The picture cannot be as simple as always treating bodily materials as objects like any others, however. As a case in point, take a judgement under French criminal law which prosecuted cases of voluntary contamination of HIV through sex under the title "administration of a harmful substance." Something has gone awry if we start to define sexual interactions as a transaction of substances. To insist on this description of the wrongful action narrows the scope of the wrong in such a way as to separate the act of sexual intercourse from the "administration of the substance." 
This surely obscures something about the nature of the wrong, which is that the sex itself in such cases is a distinct kind of attack, regardless of whether the harmful pathogen in question is administered or not. Baud offers a possible explanation for this, which is that in cases where parts of the body such as blood are characterised as objects, a legal fiction is required to confer that status on them. Unless something has happened to cause a certain part of the body to be recognised as an object under law (as separate from the person), exchanges of certain bodily fluids are simply biological processes which occur in the background and which fall outside the domain of legal regulation. This seems right: when we regulate interactions between persons, the proper objects of regulation are the actions of the people involved. The biological processes which occur within the body during those actions are most often incidental to the description of the action done by the person, though the intention to transmit a pathogen might factor in the assessment of distinguishing a sexual assault from a consensual encounter. The point in question for my present purposes is that there are cases which compel us to recognise some aspect of the human body as an object under law-one which could be traded and exchanged under the same regulatory framework as property. And the reasons for doing so arise precisely from the importance of protecting something like the dignity of the person. But these reasons do not reach as far as recommending that we always treat those aspects of the body all simply as objects of property which are exchanged between people. This brings out the puzzle that needs to be addressed. On the one hand, there are some uses of bodies and their parts which require us to treat them as transferable things, recognising their status as objects. In these cases, it is not clear that we can simply draw a sharp line to say that ownership only begins once the substance is separated from the body. Otherwise, we will be at a loss to explain why it is that I should be able to decide to donate some blood before it is extracted from me. On the other hand, there are many uses of the body which seem to belong squarely in the sphere of actions undertaken by persons and which it strikes us as a mistake (an intellectual, as much as a legal or moral mistake) to understand in terms of a transaction of materials akin to the gift or sale of a piece of property. A question arises here about the nature of the link between persons and property. If it seems reasonable that a certain part of the body can be brought into the realm of property, we need an account of property rights that can explain when and why this is warranted. We need to be able to explain the continuity between the rights of disposal a person has over herself and the way in which she is able to alienate certain parts of herself as property. But we also need to be able to explain where and why we ought to place limits on that kind of alienability. The way in which rights of disposal over oneself and property rights become interwoven in the ways described above no doubt helps to explain why many find theories of self-ownership so appealing, as they purport to provide perfect continuity between property and the person. 
However, as Ryan points out, this supposedly neat link appears to play on conflicting sets of intuitions and leads us to question why the standard framework of object property as we know it is assumed as paradigmatic for understanding self-ownership. 9 A tempting alternative might be to opt for an approach which distinguishes more sharply between the normative status of persons and external objects. Kantian theory, for instance, draws a distinction between rights to your own person (the innate right of humanity) and acquired rights (rights to external objects of choice). Though the latter are not reducible to the rights to your own person, the normative basis of acquired rights is taken to depend on the right to one's own person. 10 However, this approach is not well placed to explain the continuity between a person's rights over various body parts and the right to alienate those parts as property, as in the examples discussed above. For example, Pallikkathayil (2017) argues that there is some room in the Kantian account to explain how a detached body part could make the transition from part of a person to an object of property rights. However, the only legitimate way for this to happen would be for the body part in question to be in principle open to anybody to acquire as property after initial detachment from the originator's body. In this way, the Kantian approach is unable to account for the idea that I should have any privileged claim, above anybody else, to sell the hair that came from my own head. I could only do so if I happened to be the first person to perform the institutionally recognised act of acquisition to lay claim to the hair after it was cut. Moreover, the Kantian approach still ends up drawing the normative link between property and the person too closely in its treatment of the significance of property violations, which on Arthur Ripstein's account end up amounting to coercion of the person. Ripstein (2010) explains how property rights depend on the innate right of humanity as follows: For Kant, property in an external thing-something other than your own person-is simply the right to have that thing at your disposal with which to set and pursue your own ends. Secure title in things is prerequisite to the capacity to use an object to set and pursue ends. (p. 67) Property rights, then, depend on purposiveness and provide a necessary framework within which people can rightfully enact their purposiveness in the world. The basic idea that the point of property is to enable us to make use of things for our own purposes, by creating and protecting a sphere of exclusive use, chimes with most conceptions of property. So does the claim that the structure of property rights parallels one's rights with respect to one's own person. But the extent to which Ripstein draws the link between purposiveness and property proves problematic. For Ripstein, the link between property and purposiveness is such that any unauthorised interference with my property amounts to an interference with my purposiveness, and so counts as a form of coercion against me.

In a perhaps surprising streak of similarity with Nozickian theories of self-ownership, this Kantian line of thinking draws the parallel between the person and her property too strongly. If a colleague drinks out of my mug while I am out of the office, washes it up afterwards, and returns it to my desk so I am never any the wiser, she might have done something wrong insofar as she has violated some property right of mine but has she really coerced me?
Just as Nozick's self-owner becomes partially enslaved by taxation, Ripstein's sovereign individual is made a slave to the mug thief. By presenting artificial property rights as the institutional solution that is required to give content to a fundamental natural right, the Kantian account imbues any legitimate system of property rights with the full normative significance of that fundamental right. I agree that we can think of institutional systems of property as mapping spheres of exclusive choice for persons with respect to their property. The point I am pressing is that it's a mistake to think that those institutional rights, once established, instantiate a sphere of freedom of such normative significance that any infringements on that sphere carry the same moral weight as interferences with my person. This is what is implied by bringing interferences with property under the umbrella of coercion. Allowing for continuous transition between personal rights and property rights, on the one hand, while being able to distinguish an order of normative significance between personal interferences and property interferences, on the other, is key to making sense of transitional cases where we have reason to treat something straightforwardly as property in some contexts, but not in others. For example, consider prosthetic limbs-we have reason to allow a person to sell her prosthetic limb as a commodity, and yet also treat an attack on that object as assault, if it happens while she is wearing it. 11 A key feature of such cases is that certain kinds of interference with those objects constitute a wrong against the person which would not adequately be captured by describing it as a violation of a property right. I suggest that a different kind of institutional account is better placed to deal with such cases. Rather than justifying systems of property on the basis of some fundamental natural right, we should think of property rights as justified on the basis that they serve some important interests of ours. The shape of any given framework of property rights will be determined by the extent it serves those interests and constrained by the need to protect people from being wronged in various serious ways. This approach requires some account of the structure of the underlying moral landscape, in order to anchor and constrain the justification of institutional rules of property. T. M. Scanlon has suggested a way of doing this using his contractualist principle of reasonable rejection. 12 The test of reasonable rejection is able to give us something like a concept of moral property rights, by allowing us to ask whether a principle allowing various kinds of interference with objects in which a person stands in some kind of relation could reasonably be rejected by the victim of that interference. This allows us to deal with typical "state of nature" scenarios, where, for example, a person has worked some land to cultivate crops, leaving plenty of land and resources around for others to do the same. If another person were to come and reap all her crops at harvest time for himself, the test of reasonable rejection could tell us she had been wronged by that interference. When it comes to justifying general institutional frameworks for formalising property rights, the picture becomes more complex. On the one hand, we have to weigh the general reasons in favour of choosing certain forms of property rights against the reasons which count against them. 
But we also have to recognise that whether or not the rejection of a given proposal would be reasonable will be shaped to some extent by conventional systems already in place, including certain contingent sociological facts about the ways in which we value certain objects. Moreover, the scope of a formal system of property rights will reach beyond the fundamental moral requirements of non-interference in the state of nature. This happens where some rule is needed to solve something like a coordination problem and where there is an acceptable range of forms that solution could take. 13 Though we can start with a moral basis of property using the test of reasonable rejection to furnish us with principles protecting against interference with some objects under certain circumstances, there will be other cases where the choice of principles is not so clear cut, and a number of different rules could reasonably be adopted. For example, if two people set upon working a piece of land at the same time, one of them trying to lay foundations for a building and the other clearing stones to plant crops, there would appear to be no principled moral way to determine which has a legitimate claim to the land. Such cases may be solved by the creation of conventions to determine which kinds of activities take precedence over others in terms of conferring ownership claims. There may not be a moral basis for choosing one particular convention over another, but enforcement of the convention could nevertheless be justified to the extent that it serves our interests in solving a coordination problem. Once a convention has arisen, it can then feature as a reason relevant to the judgement of what kinds of intrusions or interferences may be reasonably rejected in that domain, based on the expectations they engender. So we can recognise that some problems in the state of nature might give rise to a range of justifiable institutional solutions, rather than one determinate blueprint for property rights. We can then notice that whichever solution is chosen and implemented from the range of possible policies for a given society, that chosen policy itself may have bearing on which interferences are those under that specific framework of rights which it would be reasonable to reject. A proper understanding of the institutional justification of property thus has to take into account three distinct layers of analysis in the assessment of whether a given set of property rights serves our interests while protecting people from interferences which they could reasonably reject: (a) the function of the institutional structures which enforce rules for how we interact with one another; (b) the social conventions which provide informal regulation of our behaviours and influence cultural attitudes to certain objects and activities; and (c) the basic moral structure of our interactions with one another. Justifications for regulating certain activities through institutional structures such as property cannot therefore be made solely on the basis of a presocial understanding of rational beings in the state of nature. Rejecting the standard Kantian and self-ownership accounts of property in favour of the approach sketched above suggests that the ways in which we relate to various kinds of objects matters to the arguments we give in favour of or against uses of property rights in different contexts. 
Moreover, the way in which this relational aspect figures in those arguments will be internal to the general justification of property rights as an institutional framework, rather than relying on appeals to competing values. A key aspect of this approach is the claim that institutional property rights do not map perfectly onto the underlying moral structure of our interactions. While there is a way of understanding ownership as a moral concept, the way in which general institutional rules of property are mapped out goes beyond a fundamental moral principle. Moreover, the way in which the institutional framework is constrained by the basic moral structure of our interactions with one another gives us reason to posit that the extent of property rights may be more or less limited depending on the kind of object in question. This approach involves rejecting the view that property is a univocal concept, whereby any limitation on the number of "sticks in the bundle" of a property right entails less than full ownership. An important question arises at this point, which is to what extent this would require a revision of the standard Hohfeldian conception of the structure of property rights. The next section shows how the institutional account can be developed within a Hohfeldian understanding of the structure of property rights.

PROPERTY VERSUS OWNERSHIP

Hohfeld famously argued that the correct conception of property is as rights, not things (Hohfeld, 1919). Thus, any interference with a person's property rights would constitute an interference with her property, regardless of whether any physical interference had occurred to the object over which she held those rights. A property right on the Hohfeldian view is not a single homogenous right, but rather consists of several distinct incidents, sorted into four basic components:

(1) Privileges: A has a privilege to φ if and only if A has no duty not to φ.
(2) Claims: A has a claim that B φ if and only if B has a duty to A to φ.
(3) Powers: A has a power if and only if A has the ability to alter her own or another's Hohfeldian incidents.
(4) Immunities: A has an immunity if and only if B lacks the ability to alter A's Hohfeldian incidents. (Wenar, 2015)

Though the four main incidents can be said to be determinate, the way in which they apply in different contexts to establish different elements that make up a given right is flexible or indeterminate. Leif Wenar (2015) helpfully categorises these four basic components into first- and second-order rights. The first-order rights (the privileges and claims) are those which hold directly over the object, while the second-order rights (the powers and immunities) are those which concern the alteration of the first-order rights. The second-order rights help to make sense of the idea that property is exclusionary while taking account of the fact that this exclusion of others is not static but that the owner can determine where, when, how, and in respect to whom that exclusion applies. The Hohfeldian analysis explains the complex normative structure and function of the concept of ownership, and the theoretical merits of this approach have led to it becoming the established theory of property within contemporary philosophy and legal theory. Despite its merits, it is not without problems. One of the implications of the view of property as rights, not things, is that any interference with any of the incidents that make up a given property right counts as an interference with property.
Among other things, this leads to problematically conservative interpretations of the Takings Clause of the Constitution of the United States of America (Wenar, 1997). Wenar points out that on the Hohfeldian view, any alteration or annulling of any of the incidents that make up a given property cluster would count as a taking of property, in the sense that this right would have been "taken" from the bundle held by the original owner. 14 In response to this problem, Wenar proposes that we can retain the complex Hohfeldian characterisation of property rights without abandoning completely the idea that property can be things, too. He suggests that we make the simple distinction between rights and the object of those rights-between property rights and property: private property is all those things over which private property rights are held. And private property rights are just those rights in that two-leveled structure of Hohfeldian rights (…) Property is what property rights are rights over. (Wenar, 1997, p. 1944) The distinction between property rights and property allows us to stay firmly within the Hohfeldian framework of property rights, while retaining an interpretation of what constitutes a "taking" of property that is still grounded in the common-sense concept of removing some object from someone's possession. This separation of the ideas of ownership (or property rights) and property (things) allows us to make an important first step towards dismissing the univocal view of property. It makes it possible to challenge the notion that "full ownership" must entail the largest conceivable bundle of Hohfeldian incidents. To recap, the view that property is rights implies a univocal understanding of ownership. To compare two different sets of ownership rights in order to determine which is fuller, one need only compare the two bundles of incidents to see which contains the longest list of privileges, powers, claims, and immunities. The kind of object owned would not come into question, because the comparison of property just is a comparison of rights. The object only figures in a secondary sense-in that the right is held against other people, in reference to some object. Once we have acknowledged that ownership rights may be altered without this entailing a taking of the property in question, we can also suggest that having less than the full abstract list of possible Hohfeldian incidents in a property right bundle need not entail that one has less than full ownership over a given piece of property. In particular, we can start to press the idea that different kinds of object by their nature impose different conceptual restrictions on what can be considered the maximum number of incidents in the bundle of ownership rights for that property. While it is in one sense conceivable to think of self-ownership in the maximising way described above, there is a separate question as to whether this is coherently conceivable within a given theory of the justification of property. It is this latter question which I pursue here. I have suggested above that an institutional approach to property based on a contractualist principle of reasonable rejection is best placed to tackle questions about the limits of ownership of different objects on a case-by-case basis by reference to the way in which enforcing property rights serves our interests in different contexts.
Before addressing how it helps us with specific cases, it is necessary to provide an explanation of the general form of justification one might give for a system of private property to underpin the institutional account. That is, we need some explanation of which basic interests can be posited as justification for having a system of property in the first place and how those interests are taken to ground such a system. The kind of general justification I have in mind is one which identifies a certain problem arising from some basic interests of individuals and posits institutional frameworks of property as a pragmatic solution to that problem. It starts from a familiar assumption that individuals have an interest in having secure use of some objects but that unless a person is physically holding a given object, her claim to possession of it is both indeterminate and precarious. 15 Having conventions to recognise certain rules for relations of possession can be seen as a solution to this problem. Hume suggests that we can view such conventions as arising in much the same way as languages develop to facilitate communication. They simply provide a pragmatic solution to a coordination problem. So far, that sketch of the basic interest in secure use of objects and the problem of how to secure possession against others gives no determinate indication of how those interests may best be advanced. I suggested above, however, that there is a way of thinking about such conventions as having a moral basis, without entailing that justifiable property rights instantiate a boundary the crossing of which violates a basic moral principle. A contractualist approach to understanding the moral constraints on the possible sets of rules could narrow down a range of justifiable options, without entailing that any solution is morally required. To understand how the institutional solutions may be justified by reference to some moral considerations, while not being directly reducible to natural rights, we can note that the way in which principles of possession of objects are established is closely aligned with principles of non-interference against the person. In particular, the cases where there will most clearly be a direct moral basis for claims to exclusive possession of some object will be those in which a person relies on that object to provide essential nourishment or shelter. 16 The reasons we have to protect ourselves against unwanted interference thus provide the strongest basis for justifying a system of private property in the first place. In this sense, we can think of the way in which we recognise artificial relations of possession over objects as both approximating and protecting the kind of natural control we have over our own bodies. 17 With that basis established, it can be extended by considering the reasons we have to value some system of rules for establishing secure possession of items beyond what is necessary for basic nourishment and shelter. Again, in the assessment of the way in which such general rules serve our interests, the bar for justification may be that they are ones which it is (all things considered) reasonable to adopt. However, it does not follow from the general justification of a rule in this way that any particular action which breaks that rule must be one which could itself be reasonably rejected. 
Consider Joel Feinberg's example of a backpacker who is caught in a life-threatening blizzard and breaks into a nearby cabin to weather the storm, helping himself to food from the cupboards and burning furniture for warmth. 18 Although the owner's property right might feature in the assessment of the permissibility of the backpacker's act, that reason, I take it, would not be sufficient to outweigh the reason he had to save his life. Despite the owner's property right in the furniture, it would not be reasonable for him to reject a moral principle allowing the backpacker to burn it to save his life. Feinberg explains this case using Judith Jarvis Thomson's distinction between infringing and violating a right: while there may be cases in which it is justifiable to infringe a right, one violates someone's right if one infringes it without justification. 19 On Feinberg and Thomson's view, although rights infringements are justified, they leave a moral residue which requires the rights infringer to compensate the person whose rights were infringed. However, one might well question why compensation in this case would be morally required. If our backpacker was destitute, and the furniture he burned a priceless antique, one might think it reasonable for him to reject a principle requiring him to take on the significant debt of replacing the antique in order to save his life. Conversely, one might well think it unreasonable for the property owner to reject a principle requiring him to sacrifice his antique chair in order to save someone's life unless that principle included a compensation clause. A contractualist could thus make the case that the cabin owner has a moral duty of aid to the backpacker that includes no claim to compensation. However, it is compatible to hold that the owner has a moral duty to sacrifice his property in aid of the backpacker's life, while questioning whether this duty is enforceable. Thomson puts the question this way: why should the property owner be forced to pay the cost of saving the backpacker's life? After all, the point of property rights is to secure a sphere of exclusive control so that I can choose to exclude or include at my own discretion, whether or not my choices are perfectly morally motivated. The proposed approach to property rights allows the possibility of an institutional justification for the owner's claim to compensation, even if compensation is not required by a basic moral principle. In order to meet people's interests in stable and secure possession of things like holiday cabins and antique furniture, it may be justifiable for a system of property rights to include compensation claims for owners in situations like the one described above. There could, however, be a range of justifiable ways of structuring these claims. Instead of requiring the backpacker to compensate the cabin owner directly, a system could collect tax to support a compensation fund for such eventualities. Another option might be to require all individuals to carry public liability insurance to ensure that they be able to pay compensation without incurring significant amounts of debt. These would be ways of building in compensation claims for property infringements, without imposing unreasonable burdens on those who are morally justified in infringing someone's property rights. On this view, we could say that the backpacker infringed the cabin owner's property right but did not wrong him in doing so. 
The examples explored in Section 2 of this paper suggested that there may be cases where it makes sense to treat the body under the framework of property, given our interest in having access to things like blood banks. Clearly, the relation between a person and her body is not a relation between two distinct objects, insofar as my body is (a constitutive part of) me. And yet, the link is a somewhat complicated one-bodily stuff can become a separate "thing," and ordinary "things" can be amalgamated to the body. So there is a certain amount of physical detachability that makes at least parts of our bodies more akin to mere objects, and thus perhaps appropriately subject to the framework of property, on the model described above. Beyond this, a parallel problem to the precariousness of the possession of objects arises in the realm of persons and their actions, too. Namely, it is useful to be able to treat certain undetachable aspects of our person as alienable-to create an artificial relation of alienability, where no actual detachability exists. The structural frameworks which allow us to securely trade with others stretch beyond the realm of mere objects, so a similar story can be told about how conventions arise to secure individuals' claims against each other for the provision of services. The way in which these arise can be construed as addressing a parallel problem to that of securing possession over objects. Namely, that the control a person has over her own body or actions is too secure and that we need to be able to construct artificial relations of alienation in order to be able to give others secure and determinate claims over one's actions or services. The mechanisms for this can be fleshed out using the Hohfeldian rights framework. Take the example of a person, Alice, who agrees to work on Bert's farm for a fee. Alice has agreed to work under Bert's orders for the day on general farmyard tasks. The Hohfeldian framework allows us to track the changes in the normative relation between Alice and Bert with respect to some of Alice's actions. Namely, before Alice agreed to work for Bert that day, Alice would have had a privilege to go fishing if she chose to do so. Now, however, Alice has effectively undertaken a duty to Bert not to go fishing and so has waived that privilege. Alice has also transferred to Bert a claim that Alice labour on the farm that day, because Alice has undertaken a duty to do so. In entering the agreement, Alice was utilising her powers to alter her Hohfeldian incidents with respect to Bert. In doing so, we can see that Alice has also transferred certain powers to Bert and thereby lost certain corresponding immunities. For example, Bert now has the power to order Alice to muck out the stables, or to harvest the corn, or to shear the sheep. At the moment that Bert gives any of these orders (providing they fall under the remit of the original agreement), this imposes on Alice a duty to perform the action that has been asked of her and gives Bert a new claim against her. Alice thereby lacks the immunity from Bert altering her Hohfeldian incidents within these parameters. Again, we could conceive of the object of the agreement in this case as being Alice's consent to transfer certain normative relations holding between her and her actions to Bert and the agreement not to renege on the transference of these incidents. The Hohfeldian structure thus allows us to conceive of a certain artificial relation holding between Alice and her mental and physical capacities. 
This relation allows us to make sense of exchanges in services and actions by conceiving of Alice engaging in a transfer of the relation of possession of her actions along the same lines as she can transfer her property in external objects. This mirrors the way in which the artificial relation of possession was justified as serving our interests in securing exclusive use of external objects. Just as the relation between person and object can be thought of as modelled on the fixed and constant control a person has over her mind and body, so in turn can an artificial relation of alienability emerge between a person and her mental and physical capacities, one that is modelled on the artificial relation of possession. The way in which these two relations mirror each other might be put in simple terms as follows: external objects are too easily transferable, so our control of them needs to be made as stable as possible by approximating the stability of our control over our bodies and minds. The exclusivity of a person's control over her body and mind, on the other hand, is too stable, so we need a way of giving other people claims over our actions and services in a similar way to the way in which we can easily hand over control of external objects to them. This allows us to square the circle as to whether the shape of property over external objects is derived from the structure of bodily rights, or vice versa. But it allows us to do so in a way that takes account of the fact that such property relations arise as an artificial feature within the account of the justification of formal regulative structures. We construct these relations because they are a useful way to formalise our interactions with others in a way that can be publicly recognised and enforced by a third party to provide the security and stability to protect our interests. It formalises our interactions as transactions. This picture allows us to establish context-dependent limits to property rights. These emerge from the way that the account provides a justification of property based on the assumption of certain fundamental interests in non-interference and the way in which formal structures attributing relations of possession protect this interest. The Hohfeldian framework is helpful in this regard. If we take another look at the Hohfeldian incidents, we can make a further distinction over and above the one between first- and second-order rights. Namely, incidents (1), (2), and (4) can be considered static ones. 20 That is not to say that there can be no change in these incidents; rather, these incidents only change when somebody has wielded a power to change them. We might think, then, that the flexibility of an ownership right is really characterised by the powers included in that ownership right (incident 3). These are what allow you to make various uses of your property in cooperation or trade with other individuals, by altering the specific Hohfeldian incidents that hold against others. I suggest that differential limits on powers of ownership can be grounded in a certain understanding of the importance of Hohfeldian powers from the point of view of an institutional approach to property rights. One of the questions that Ryan prompts us to ask is: can I really do whatever I want with my lungs? This is a question of alienation: ought I have the right to destroy my lungs, or to give or sell them to somebody else? 
Before addressing this question about the legitimacy of alienating specific parts of one's body, it will be easier to address a more general question: if we conceive of a person as having ownership rights over herself, ought she be able to alienate those rights entirely? We should distinguish here between alienating incidents within specific parameters (e.g., Alice transferring to Bert the power to give Alice work orders for the day) and alienating all one's incidents by alienating one's powers over them all. For ordinary objects, full alienation of the second sort usually involves actual transference of the object. This is where Wenar's distinction between property rights and property becomes helpful. With this distinction, we can say that alienation of property rights usually also involves alienation of the property itself. In simple terms, if I transfer to you all property rights over a bottle of wine, you will most probably also take that bottle of wine home with you (and away from me). So while transference of the relation of property might happen in theoretical terms while I am still holding the wine bottle in my hands, the object then can be, and usually is, physically removed from me. In contrast, while it is conceivable to think that a person could alienate all her Hohfeldian incidents to someone else, there is clearly no corresponding physical detachment of the person from her body or mind that can take place. 21 In that case, we might think that alienation of the ownership rights over the person would be possible, but not alienation of the property. It might seem trivial to labour this point but viewed in the context of the account of property we have been exploring, it helps us to make an important distinction. Some theorists have suggested that what distinguishes property rights from personal rights is that the former are in principle alienable, while the latter are not. 22 Philosophical disagreement over the question of voluntary slavery, however, shows that personal rights are at least conceptually alienable, in principle. 23 The challenge is to provide an answer as to why there might be limits to the ways in which we can alienate certain rights over ourselves. The account of property rights I have proposed provides a way of doing this by drawing on reasons that are internal to the general justification of systems of ownership rights. Certain ways of alienating control of parts of a person that cannot be separated from herself, although conceptually and practically possible, will undermine the basic interest which a system of ownership rights is predicated upon protecting. At the heart of the account is the assumption that individuals have a basic interest in having secure use of certain external objects, modelled on the kind of security of control we enjoy over our own physical and mental capacities. This is no coincidence, because secure use of our physical and mental capacities is also the precondition of us being able to establish secure use of external objects. 24 Another way to characterise this basic interest, then, is as an interest in having the basic power to exclusively control various things (starting with our own bodies/minds and extending out to external objects). This can be seen as the basic power that it is assumed must be attributed to the individual as the political atom in this account. 
It is by attributing this power to individuals that frameworks of rights provide the security from unwanted interference which is taken in many liberal accounts as the starting point for providing a general justification of the enforcement of regulative structures by the state. The fact that external objects are physically detachable from the person means that when one alienates a piece of property to another person, this has no impact on that basic power. It straightforwardly serves our interests to be able to discard some items of property completely. This lack of impact on the basic power is precisely because there is no real connection between the person and the external object. But now we can contrast this with what it would be to completely alienate all incidents over one's body in the same way. This would be to demand of the state that it no longer attribute to that person the very same power that was posited as requiring institutional protection in order to secure a fundamental interest of the individual. In order to understand exactly what kind of mistake this involves, we can turn to an approach suggested by Scanlon on how to assess the concept of self-ownership. Scanlon examines the argument that self-ownership entails that taxation is akin to slavery because an individual is entitled to the full amount of what another is willing to pay her for her labour: To assess this argument we need to ask what makes the idea of self-ownership appealing, and whether the reasons that lie behind its appeal support the idea that individuals are entitled to the full amount of what others are willing to pay for their services. (p. 114) This is a method which Scanlon applies to various ideas, including the ideas of equality, liberty, and coercion. His approach is "to try to identify the reasons that give these concepts their importance and to ask when these reasons apply." The considerations which give legitimacy to the idea of self-ownership in one's labour are "the reasons people have to choose their occupation, to be able to quit a job if they wish, and so on." 25 Taking these to be the reasons which support the appeal of the concept of self-ownership in one's labour, Scanlon suggests that these very same reasons support the regulation of markets and some level of redistribution through taxation. This is because unfettered markets and lack of taxation will lead to high levels of inequality, which in turn create negative externalities. The crucial point is that the negative externalities created are ones which threaten people's abilities to choose their occupation and to be able to quit a job if they wish to-the very considerations which Scanlon takes to be central to the appeal of the concept of ownership of one's labour. He concludes from this that the reasons that support the concept of self-ownership support a system which has inbuilt limits on the claims which certain people can make on the total pot of resources. Taxes are not viewed as deductions from a person's income, in the sense that some of his money is taken away from him: "Rather, these taxes reflect the limits on the claims to resources that he can come to have within a legitimate system of property and market exchange." 26 In other words, the reasons that support the basic concept of self-ownership serve to shape the structure of institutional frameworks which are justified on the basis of that concept. And those reasons might place certain internal limits on the claims which an individual can reasonably make within that system. 
We can take the same approach here, with the question framed in a slightly different way: what makes the existence of frameworks of property appealing, and what are the reasons behind attributing powers of ownership to individuals? The first reasons identified above were based in a fundamental interest of individuals in establishing security of control over themselves and other objects. I suggested that systems of property can be thought of as creating artificial relations of possession over external objects to approximate the security of a person's control of her mental and physical capacities. Furthermore, the powers of alienation that come with property rights can be seen as responding to reasons that emerge after security of possession has been established, namely, the interest in trading with others. It is only once there is some system to protect secure, exclusive possession that the notion of being able to trade possessions makes sense. The reasons that support the possibility of alienation are in this regard dependent on the reasons that support establishing security of possession. This hierarchical way of conceiving of the reasons supporting systems of property allows us to think more clearly about the kind of structure of property rights which could legitimately be supported on the basis of these reasons. In particular, we can suggest that these reasons support internal limits on a system of property rights to ensure that powers of alienation do not go so far as to undermine the basic security of the individual's possession of her mental and physical capacities on which the whole thing was predicated. From this point of view, we can say that there is nothing in the reasons that make the existence of property frameworks appealing that would support enabling the kind of wholesale alienation of a person's body explored above. This gives us reason to posit a sliding scale of differing limits for the ownership powers that we can conceive of the individual exerting over her own mind and body and the kind that we can conceive of her holding over external objects. Namely, on the terms of this kind of justification of property rights, there is no reason to think that if we allow individuals to subject aspects of their body to the framework of property, this would have to include attributing to the individual the full power to alienate in the same way that ownership rights over external objects include this power. Nevertheless, the analysis showed a mirroring between the power held over one's body and mind and that held over objects. The way that these are interlinked, I suggest, paired with the challenges posed by the examples discussed in Section 2, motivate the proposal that it could be useful to make provision for individuals to subject their bodies to the framework of property in certain cases, even if the structure of the power of ownership that is attributed to an individual over her body is not symmetrical to ownership of other objects. Furthermore, this kind of provision would be compatible with the Scanlonian analysis of the basic reasons which support the existence of frameworks of property discussed above. One might object that one could well accept the Hohfeldian characterisation of artificial relations of alienability being established in the way described above but still resist bringing this all under the umbrella of property. 
Instead, we could simply accept that there are different clusters of Hohfeldian rights for different objects, and for persons and their actions, perhaps even for different parts of people in different circumstances, but we needn't talk in terms of property rights in order to understand this. In response to this point, I needn't commit to the strong claim that these rights over our person are property rights in some fundamental sense. What the account above brings out is the fact that this basic Hohfeldian structure allows one to treat one's body as though it is property, by engaging in these artificial relations of alienation. Or, in the cases where parts of the body are physically detached and transferred from one person to another, by engaging in actual alienation of the thing as property. Once we are engaging in these kinds of transactions, we can say that we are operating within the framework of property, broadly construed. This allows for a transition between bodily rights and property rights via the notion of ownership, without committing to a direct symmetry between the two. CONCLUSION. It is perhaps a curious upshot of this picture that it leaves us with little reason to think that there is any particular moral significance to labelling something as "property." 27 This, I suggest, is a strength of the approach. We saw above that the reluctance to recognise certain bodily materials as property was based in a concern that doing so would be an affront to the dignity of the human body as constitutive of the person. This kind of worry, I take it, has often led to a framing of debates around the treatment of various bodily materials in terms of what ought to count as property or not. These debates are often conducted as though to label something as "property" in any context is to confer on it a specific set of values which must be exhaustive of its identity. This leads to a kind of essentialism about parts of the body which makes it very difficult, if not impossible, to make sense of the changing ways we make use of various parts of ourselves in different contexts. That approach gets things precisely the wrong way around. We should instead think of Hohfeldian rights as offering a certain transactional framework, the limits of which can be understood as context dependent in the way explained above. We start first with an understanding of the way in which we value parts of ourselves in different contexts and how that figures in the assessment of reasons in favour of various regulative frameworks. The limits to the way we transact with various things will vary with respect to different objects and contexts, but the basic form of rights as understood with respect to both people and objects is continuous. 28 By avoiding the kind of essentialism described above, this paves the way for a pragmatic approach to clarifying debates around how far persons ought to be able to subject themselves and their bodies to various markets or contracts. We might ask, for instance: how far do certain activities such as prostitution or contract surrogacy involve an alienation of powers that comes close to threatening the basic capacity for control that was identified as a fundamental basis of the justification of ownership? 
Rather than providing a yes/no answer to the question of whether such activities could be legitimate, the approach would allow an examination of the conditions under which it might be possible for such contracts to be regulated in specific ways which would protect, rather than threaten, that power. It also provides a way of explaining why certain transactions may be more problematic than others. For example, if I sell my kidney to somebody else while it is still in me, this will have implications not only for my powers over my kidney but also for my normative power of control over the rest of my body. If the transaction gives the new owner a claim to take possession of my kidney, that affects claims I hold over the rest of my body against physical interference. The importance of protecting those claims against threats from this kind of alienation will be proportionate to the danger posed to the basic power of control which we established was inalienable. This approach requires us to take into consideration structural aspects of certain markets and the power relations within them. For example, following Scanlon, if we think it important that a person have the exclusive right to decide what happens to her body, this may give us reason to think that nobody else, including the state, may interfere with her decision to sell her kidney. On the other hand, there may be structural features of such markets that make us think again. We may think that the high price offered for a kidney exerts undue force on the decisions of those who are very poor, effectively threatening their power to control their own bodies. 29 These are suggestions of how the account provides a more nuanced approach to important practical questions. These are of course complex matters and will need more work to fine-tune the details. The purpose of this paper has been to lay the groundwork for proceeding with a view of ownership based on an institutional account of property that can begin to make sense of the continuity between the structure of rights of disposal over one's person and rights over objects, without ignoring the normatively significant points of differentiation between the two. 5 The normative power of control I have in mind is similar to that proposed by Christopher Essert (2016). Essert argues that we ought to think of systems of property rights as justified on the basis of their necessity for protecting a person's normative control, lack of which would leave her open to domination.
15,755.6
2019-02-28T00:00:00.000
[ "Philosophy" ]
From “Hello, World!” to Fourier transformations: Teaching linguistics undergraduates to code in ten weeks or less Reed Blaylock* Abstract. I used Backward Design to scaffold ten weeks of assignments that taught students how to perform sine wave vowel synthesis and a Fourier transformation approximation using just a few fundamental programming concepts. This strategy gave all students, regardless of their previous programming experience, the opportunity to implement algorithms related to core concepts in phonetics and speech technology. Reflecting on the course, it seems that the coding assignments were generally well-received by students and contributed to students programming something complex and meaningful. Introduction. As a graduate student, I served as teaching assistant for a general education undergraduate Speech Technology course taught in a Linguistics department for six terms and three different instructors of record. Students in this course learned the linguistic underpinnings, social impact, and commonly-used algorithms (conceptually) of speech synthesis and recognition technology, as well as how to program parts of speech technology algorithms. In my time as a teaching assistant, however, I noticed-though I am certainly not the first (e.g., Jenkins & Davy, 2002)-that students without previous programming experience were at a real disadvantage in the course. Even though we started teaching code from the fundamentals, students had relatively few chances to practice coding; and we never spent enough time on coding for students to be able to program an actual piece of speech technology (i.e., vowel synthesis or recognition). In Spring 2020 I became instructor of record for a ten-week junior-level course on Speech Technology. The course had originally been scheduled as face-to-face, but ended up being a completely asynchronous style of emergency remote teaching due to the COVID-19 pandemic. The students in the course were 17 Linguistics majors, all of whom had taken at least one introductory phonetics class as a prerequisite. They entered the class with a variety of programming experience: some had none, some had plenty, and some were concurrently enrolled in a Computational Linguistics class. I used this opportunity to try to create a progression of coding assignments that would be simple enough to be accessible to students with no programming experience, meaningful enough that everyone felt they had learned new and valuable skills in the domain of phonetics and speech technology, and practical enough that students could try to get jobs in the speech technology industry (if they wanted). I used Backward Design (Wiggins & McTighe, 1998; Cho & Allan, 2005) to scaffold weekly programming assignments that culminated in a final summative project of implementing a speech synthesis algorithm and a speech recognition algorithm. For this course, speech synthesis was vowel synthesis-having a computer create a list of numbers that, when played as audio by a computer, sounded to the human ear like a steady-state vowel. Speech recognition was a Fourier transform approximation-having a computer recognize a vowel by identifying which simple frequencies in the complex vowel signal are most prominent. Strategy. 2.1. SCAFFOLDING AND BACKWARD DESIGN. Scaffolding (e.g., Wood et al. 
1976; Verenikina 2008) requires a distal "end goal" learning objective that the teacher and learner are aiming for as well as more proximal "next step" learning objectives that are challenging but accomplishable for the learner. The teacher initially assists the learner as they become more familiar with each new skill; as the learner's competence with a skill increases, the teacher shifts their assistance toward the next skill on the path toward expertise. A scaffolding strategy therefore pairs well with the Backward Design technique of identifying learning objectives and developing appropriate lessons and assessments for those objectives. Backward Design begins with determining the "end goal" learning objectives of a course, then deconstructing those objectives to identify what skills and knowledge students would need to accomplish those objectives. These intermediate learning objectives can be further deconstructed recursively down to the skills that students are expected to already know when they start the course. Scaffolded teaching guides the student forward along the path of learning objectives outlined with Backward Design. Scaffolding is typically described as a collaboration between learner and teacher. Teachers have to pay attention to each learner's skill level to determine when it is appropriate to remove scaffolding on one skill and add scaffolding to another. That way, learners are faced with challenges they are ready for without feeling that the pace of learning is too fast or too slow. However, given the asynchronous format of the course and difficulties with course preparation in the COVID-19 pandemic (see Blaylock et al. 2021), I felt unable to offer an appropriately flexible scaffolding experience to my students. Instead, I aimed to introduce scaffolded coding skills at a pace that I hoped would be appropriately challenging for novice programmers. I did this with a set of weekly programming assignments to lead students from the very basics of programming-by tradition, getting a computer to output the statement "Hello, World!"-to implementing algorithms for synthesizing and identifying vowels. Each assignment had a "Read, write, reflect" structure: students read and explained samples of code, wrote algorithms using code they had read, and reflected on their work (Selby 2011; Zavala 2016). Each assignment's algorithm was useful in the scope of phonetics (e.g., synthesize one sine wave) and a component of the final assessment (e.g., synthesize a vowel). 2.2. VOWEL SYNTHESIS AND FOURIER TRANSFORMATION APPROXIMATION. The course featured two major sections: speech synthesis and speech recognition. The summative assessment featured a synthesis task and a recognition task that could both be performed using just a few relatively simple coding concepts (i.e., lists and matrices, for loops, maybe a built-in function or two). Detailed descriptions of each assignment are given in the Appendix; the assignments themselves are available as supplemental material. In this class, vowel synthesis was performed by adding simple sine waves (representing harmonics) together into a complex wave (the vowel). Simple sine waves were synthesized from scratch using mass-spring dynamical systems (Cromer 1981). 
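As a concrete illustration, a minimal MATLAB sketch of this kind of mass-spring sine wave synthesis and harmonic superposition might look as follows; the sampling rate, duration, and harmonic frequencies and amplitudes here are illustrative assumptions of mine rather than values from the actual assignments.

fs  = 10000;                       % sampling rate in Hz (illustrative value)
dur = 0.5;                         % duration in seconds
nSamples = round(dur*fs);
f = 200;                           % target frequency in Hz for one sine wave
k = (2*pi*f/fs)^2;                 % spring stiffness that yields roughly f Hz
x = 1;  v = 0;                     % initial displacement (amplitude) and velocity
wave = zeros(1, nSamples);         % list (vector) that will hold the samples
for t = 1:nSamples
    v = v - k*x;                   % spring pulls the mass back toward rest
    x = x + v;                     % update position with the new velocity
    wave(t) = x;                   % store the position as the next audio sample
end
% A vowel is then the superposition of one such sine wave per harmonic:
harmonicFreqs = [120 240 360];     % made-up harmonic frequencies in Hz
harmonicAmps  = [1.0 0.6 0.3];     % made-up relative amplitudes
vowel = zeros(1, nSamples);
for h = 1:length(harmonicFreqs)
    kh = (2*pi*harmonicFreqs(h)/fs)^2;
    xh = harmonicAmps(h);  vh = 0;
    for t = 1:nSamples
        vh = vh - kh*xh;
        xh = xh + vh;
        vowel(t) = vowel(t) + xh;  % add this harmonic's sample to the vowel
    end
end
% sound(vowel, fs)                 % uncomment to listen to the result in MATLAB

Only vectors, for loops, and basic arithmetic are involved, which is part of what keeps the task within reach of novice programmers.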
Dynamical systems can be challenging for learners unused to the math, so I introduced them early in class and in code because they would show up thematically throughout the class-starting with sine waves, then later algorithms like dynamic programming (for concatenative synthesis) and stochastic gradient descent (for neural network speech recognition). In code, the core skills for this vowel synthesis included constructing for loops, creating lists, retrieving an element from a list, appending an element to a list, and simple arithmetic-all of which are crucial skills for any novice programmer. The speech recognition built on the coding techniques learned for vowel synthesis. I chose an approximation of a Fourier transformation that identifies which simple frequencies are most present in a complex wave-or in other words, an algorithm that identifies the harmonics of a vowel. This technique required three pieces: a vowel, a set of simple sine waves to compare against the vowel, and a comparison computation like the inner product. Students by this point had already learned how to create sine waves and vowels, so the only major new coding tasks were 2-dimensional matrices for storing the sine waves-which are essentially just the lists we used in vowel synthesis-and a summing function required for the inner product. A major coding skill emphasized throughout the whole course was commenting code. Modern computer programs are often created with two types of information: commands in a programming language meant for a computer to interpret and execute, and human-oriented text called "comments" that explain what the code is supposed to do. Consistent and informative comments are considered good coding practice (e.g., Drevik 1996). I demonstrated commenting technique with line-by-line instructions in the "Write" part of each assignment so students could focus on coding one step at a time instead of trying to assemble programs from scratch (a skill I was not teaching them). In the "Read" parts of assignments, students were usually asked to write their own comments to practice explaining code. Sine waves and Fourier transformations are foundational components of phonetics and speech technology, and the coding techniques used for these algorithms (i.e., nested for loops and iteratively manipulating lists and matrices) are quite common in programming at all levels. The aim of the course was that by the time of the summative assessment, students would feel as though they had learned to code something useful. 2.3. SUMMATIVE ASSESSMENT. As I had planned through Backward Design, the summative assessment was an extension of the previous coding assignments. The task was to do some simple speech recognition by approximating a Fourier transformation-specifically, to compare the complex wave of a vowel against many simple signals and to use the inner product to identify which simple signals were most prominent in the vowel. In addition to coding the Fourier transform approximation, students were also responsible for synthesizing the vowels that would be recognized and the simple signals that each vowel would be compared against. Every part of the summative assessment had been introduced by a previous coding assignment (see the Appendix), but also required combining some coding skills in new ways. 
For example, students had to synthesize not one but three different vowels using the vowel synthesis technique from Assignment 6, storing the vowel vectors in a 2-dimensional matrix as they had learned in Assignment 7. Students had not coded a Fourier transform approximation by this time (though they had learned about the strategy in short lecture videos), but the steps to do it were taken from Assignments 5, 7, and 9. In this way, students were assessed on their ability to apply coding techniques they had learned to new but familiar tasks. In short, over the course of nine weeks, students encountered all the skills they needed to implement a reasonably complicated speech synthesis and recognition program. (This was unlike the previous courses I had been a teaching assistant for in which students were never expected to be able to program this way.) Result. Students generally responded positively to the coding assignments in a graded, open-ended reflection activity that was assigned at the end of the term. I did not ask about the coding assignments specifically in this assignment, but instead invited students to write about whatever impacted them the most in the course. Reflections from the 13 students who submitted them indicated that students were grateful for the coding experience and the pleasure of implementing course content as code. Between these generally positive reflections and overall good performance on the code assignments, I believe the learning was successful. Given the complexity of the code for vowel synthesis and Fourier transform approximation, I believe students learned an astonishing amount in this short time. Specific feedback included the following: • Several students appreciated having a new programming skill. • Three students appreciated thinking about code differently; one of them was an experienced programmer who appreciated the emphasis on extensively commenting code, and wished they'd learned that skill earlier. • A few students enjoyed learning the path-finding algorithms (coded in Assignment 8). • Some experienced programmers found the early assignments too trivial. • Two students appreciated the assignments with "tangible" results, like synthesizing an [i] vowel and coding a greedy search algorithm. • One experienced programmer enjoyed making vowels from scratch, and looked forward to applying path-finding algorithms to games they make. • One student appreciated the hands-on connection to topics they had learned about in an introductory phonetics class. • One student enjoyed learning the equations and then implementing them in code. Without prompting, two students mentioned the scaffolding of the assignments in their reflections: a student with little to no coding background reported surprise at how successfully they had assembled code from previous assessments they had found challenging; and, a student concurrently taking a computational linguistics course was grateful for how the extensive scaffolding in each assignment made it easier to understand the algorithm they were asked to implement. Telling the students what to write line by line may sound to some like too much "handholding", but students still had to work hard and ask questions-especially for the more complex algorithms. Successful code was built on an understanding of MATLAB syntax and algorithms we had discussed in the abstract; students struggled when they misunderstood part of the algorithm being coded, or when they didn't connect previous code assignments to the one they were working on. 
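For readers who want to see what the recognition half of the summative assessment amounts to, here is a minimal MATLAB sketch of the Fourier transform approximation. It assumes the vowel vector and sampling rate fs from the synthesis sketch above; the candidate frequencies are my own illustrative choice, and pairing each sine with a cosine (so that phase differences do not hide a genuine match) is a simplification of mine rather than part of the assignment, which instead stored a bank of simple sine waves in a 2-dimensional matrix.

nSamples  = length(vowel);                 % vowel vector from the synthesis sketch
tAxis     = (0:nSamples-1)/fs;             % time axis in seconds
testFreqs = 50:50:5000;                    % candidate simple frequencies in Hz
strength  = zeros(1, length(testFreqs));
for i = 1:length(testFreqs)
    s = sin(2*pi*testFreqs(i)*tAxis);      % simple sine at this frequency
    c = cos(2*pi*testFreqs(i)*tAxis);      % cosine partner handles phase offsets
    strength(i) = sqrt(sum(s .* vowel)^2 + sum(c .* vowel)^2);  % inner products
end
% The largest entries of strength mark the vowel's most prominent harmonics,
% which is the information used to tell the synthesized vowels apart.

The same loop-and-sum pattern is all that is needed, which is why the earlier assignments on lists, for loops, and the sum() function feed so directly into this task.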
Instructor's reflection. Although a main motivation for this scaffolded approach was to make coding more accessible for students without programming experience or in precarious learning situations (pandemic-induced or otherwise), it seemed that these students were still more likely to struggle with code assignments than those who had coding experience and supportive learning environments. At the same time, some students who had coded before found the initial assignments annoyingly trivial. A future version of this class could benefit from letting students self-select into different "streams" that suit their level of experience, interest, and work bandwidth as suggested by Jenkins & Davy (2002): students would all learn together the same linguistic underpinnings, social impact, and commonly-used algorithms (conceptually) of speech synthesis and recognition technology; but what they would learn to code and how quickly they would be expected to learn it would differ from stream to stream. For example, the less intensive stream for this course might lead students to sine wave vowel synthesis over the course of all ten weeks instead of being compressed into six; the more advanced stream could move through code faster and encourage students to implement more complicated algorithms like dynamic programming and their own neural networks. Students in both streams would be expected to learn how the complicated algorithms work conceptually, but only students with previous programming experience would be expected to implement them. With multiple streams, students would face coding challenges more suitable to their current abilities (instead of a one-size-fits-nobody approach); this would help to ensure that students learn to code something meaningful that is also appropriate for their level. In sum, thoughtful course design helped my students learn a practical set of coding skills in an impressively short time. But through this experience I was reminded that thoughtfully flexible course design would serve students even better (e.g., Rose & Meyer 2002; see also the discussion in Blaylock et al. 2021). Appendix. Unit 4 (Assignment 9) used the loops from Unit 1 and the matrices from Unit 3 to compare vectors from different matrices, a technique used for spectrogram comparison and a Fourier transform approximation. In Unit 1 (Assignments 1 through 4), students were introduced to the coding environment (MATLAB Online) and common code techniques including the use of comments, lists, and for loops. These were foundational skills that would be used in every assignment afterward. Students were also introduced in these assignments to the practice of reading a chunk of code and analyzing it by attempting to predict its output, skills which are important both for understanding new code and debugging their own. Assignment 1 tasks included: • Open the coding environment (MATLAB Online) • Open, edit, and save a MATLAB file • Upload a MATLAB file to the course learning management system Assignment 2 tasks included: • Read single lines of code that involve basic list operations (creating lists, appending list elements, retrieving a list element with numerical and variable indices, replacing a list element) and arithmetic operators. 
Their previous work with lists and for loops evolved into modeling spring and mass dynamical systems separately, including modeling formant transitions with the spring systems; then students synthesized a single sine wave with a mass-spring system, and finally synthesized a whole vowel with the superposition of several mass-spring systems. As planned through Backward Design, Assignments 1-5 were tailored to give students everything they needed to do vowel synthesis in Assignment 6. Their ability to do vowel synthesis in this way was later re-assessed (with minor changes) in the summative assessment. Assignment 4 tasks: • Read code implementations of dynamical systems and identify whether they are mass or spring systems, what the goal of the system is (if it's a spring system), and describe the updating function in words • Write a sequence of dynamical spring systems (using for loops) that roughly approximate formant transitions in the phrase "I owe you" Assignment 5 tasks: • Read nested for loops, describe the code in line-by-line comments, and predict the output of the loop • Create simple sine waves by coding mass-spring dynamical systems Assignment 6 tasks: • Record your voice (an [i] vowel) in Praat • Take a spectral slice of your [i] vowel • Manually measure the frequency and amplitude of every harmonic in your spectral slice between 0-5000 Hz • Synthesize your own [i] vowel by creating a simple sine wave (from a mass-spring system) for each harmonic and adding the sine waves together Assignments 7-8 (Unit 3) introduced students to multidimensional data in the form of 2-dimensional, 3-dimensional, and 4-dimensional matrices, as well as the min() and sum() functions. The 4-dimensional matrices represent the costs of paths in a network used for concatenative synthesis, which students traverse by coding a greedy search algorithm (in which path choices are made by the min() function and the final path cost is calculated with the sum() function). Path-finding algorithms like the greedy search algorithm are crucial for understanding how concatenative synthesis works. Assignment 7 tasks: • Read, write, and predict the outputs of 2-dimensional, 3-dimensional, and 4-dimensional matrices • Create a 4-dimensional matrix to represent a network that could be used for the best-path problem Assignment 8 tasks: • Read and predict the output of code that uses the min() and sum() functions for lists and 2-dimensional, 3-dimensional, and 4-dimensional matrices • Code the greedy search algorithm for the best-path problem using a for loop, the min() and sum() functions, and a 4-dimensional matrix that represents path segment costs Assignment 9 (Unit 4) was the last homework assignment before the summative assessment. It introduced the inner product computation (which involved the sum() function learned earlier) for calculating similarity between vectors; the inner product was then used to create a tiny speech recognition system that identified which of two spectrograms (represented as different 2-dimensional matrices) was more similar to a third by comparing the spectrograms moment-by-moment. The strategy of using the inner product to compare vectors of matrices was then a crucial component of the Fourier transform approximation in the summative assessment. 
Assignment 9 tasks: • Calculate the inner product between two lists using the max() and sum() functions • Represent a spectrogram as a 2-dimensional matrix • Evaluate the similarity of two spectrograms using the inner product in a for loop I did not assign a code assignment in the tenth (and last) week of class in the hopes that students would take the time to catch up on earlier assignments, ask questions about the things they didn't yet understand, get an early start on the summative assessment, and spend more of their mental bandwidth on the end-of-term reflection assignment. If I were teaching the course again, I would shift Assignment 9 a week earlier and get rid of Assignment 8 altogether. This would give students an opportunity to implement part of the Fourier transform weeks before they have to try it for the summative assessment. I found that students had conceptual issues with the Fourier transform approximation that were unrelated to the structure of the code and didn't emerge until students were already working on the summative assessment; if they had had a chance to practice earlier, their questions could have been resolved earlier. Although Assignment 8 was relevant to the course content overall, it contributed the least to the scaffolding leading up to the summative assessment.
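For completeness, the greedy best-path search that Assignment 8 asks for can be sketched in a few lines of MATLAB. This is a minimal illustration of mine rather than the assignment's actual code: the cost matrix here is a made-up 3-dimensional one (unit-to-unit costs at each step), whereas the assignment used a 4-dimensional matrix.

costs = rand(3, 3, 4);              % made-up network: costs(i, j, s) is the cost of
                                    % moving from unit i to unit j at step s
current   = 1;                      % start the path at unit 1
stepCosts = zeros(1, size(costs, 3));
for s = 1:size(costs, 3)
    [stepCosts(s), nextUnit] = min(costs(current, :, s));   % cheapest next hop
    current = nextUnit;             % greedily move to that unit
end
totalCost = sum(stepCosts);         % total cost of the greedily chosen path

Because each choice looks only one step ahead, the greedy path is cheap to compute but is not guaranteed to be the overall cheapest path.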
4,597.6
2021-11-04T00:00:00.000
[ "Computer Science" ]
Moral controversies and academic public health: Notes on navigating and surviving academic freedom challenges Schools of public health often serve both as public health advocacy organizations and as academic units within a university. These two roles, however, can sometimes come into conflict. I experienced this conflict directly at the Harvard T. H. Chan School of Public Health in holding and expressing unpopular minority viewpoints on certain moral controversies. In this essay I describe my experiences and their relation to questions of academic freedom, population health promotion, and efforts at working together across differing moral systems. Introduction. The Harvard T. H. Chan School of Public Health (HSPH) has been my academic home for eighteen years, first for four years as a doctoral student, and then later these past fourteen years as a faculty member. Over those years, the School has provided a stimulating and supportive environment. However, during the past months, various events have altered my experience and understanding of the School. In March of 2023, a series of Twitter posts were published by public health academics, principally concerning an amicus curiae brief I had signed in 2015 [1] in the Obergefell vs. Hodges case in the Supreme Court. The Brief argued that (i) there were two competing views of marriage at play, one more grounded in procreation and providing a stable family environment for children, and another more focused on the bond and personal fulfillment of partners; that (ii) the Constitution itself did not specify a view of marriage, and thus that (iii) it would be better if the matter were taken up by the states and their people rather than by the courts. My signing of the brief was linked in the Twitter posts to a commentary that I had published in JAMA Psychiatry on abortion and mental health [2]. That commentary had argued that the abortion and mental health literature had been weaponized by both sides of the abortion policy debate; that the moral contours of the policy debate lay elsewhere concerning the moral status of a fetus on the one hand versus autonomy, control, privacy and the rights of women on the other; and that the abortion and mental health literature should thus be more oriented towards providing for the mental health needs of women regardless of their views. The Twitter posts led to turmoil at HSPH including calls for my tenure to be revoked and for me to be fired, along with public condemnations of my views by prominent academic administrators. 
In this essay, I would like to describe the course of events; consider whether the positions that were the source of controversy should be admissible within academic public health; and take up the issues of academic freedom, viewpoint diversity, and their relation to broader society and public health efforts. While the events described here are of course very specific, they bring up issues that are more general [3][4][5][6]. They give rise to questions concerning the extent to which a research university is able to facilitate a free exchange of potentially opposing ideas within the context of intellectual diversity and civil discourse, and the extent to which university administrators are willing to publicly support the university in this role. This seems especially important within public health, in a context in which there appears to be growing alienation between many academics and billions of others throughout the world who hold differing views on a number of important issues. I will explicitly take up these matters in the second and third sections of this essay, but will first give a personal account of the events, as I experienced them, that gave rise to these reflections and concerns. Events at HSPH. The Twitter posts began to be published on March 11th, 2023, and although they centered on the amicus brief from 2015, there was also considerable slander towards me, innuendo, and general disparagement. Most of the activity died down within a couple of days, but it reached over 40,000 viewers. The posts were accompanied by e-mails to my colleagues asking if they knew that I had signed the brief, and modifications made to my Wikipedia page, highlighting my signing. Given the extent of the social media posts, it also reached a number of HSPH students. The brief and the JAMA Psychiatry commentary were then further linked, as being considered problematic writing, to one of my blog posts in Psychology Today about the decline in well-being among youth [7], which had upset some students the prior fall. Two sentences in that blog post raised the question of whether introducing issues of gender identity in the general curriculum as early as kindergarten was conducive to well-being. A number of students at HSPH were very upset by these writings and some seemed to view them as threats to their identity. Within a short timeframe, some students were calling for my tenure to be revoked and for me to be fired; or for me to be removed from my teaching position in a required quantitative methods course; or for the School to take positions on these various issues. Some students indicated that if they had known my views, then they would have refused to attend my quantitative methods class, and, instead, would have organized to protest. On March 17th, in my first conversation with the HSPH administration on these matters, the chair of the Department of Epidemiology indicated that what I had written and signed was within the bounds of academic freedom and that the HSPH Academic Dean had affirmed the same. 
During the week of March 20-24, the Population Health Sciences (PHS) PhD Program, in which I teach, hosted a listening session for the students, as did the Department of Epidemiology. At the second of those listening sessions, some students stated that my signing should not be protected by academic freedom. The Dean of Education and Chief Diversity, Inclusion, and Belonging Officer requested a meeting, and during that meeting, asked for my participation in a restorative practices process structured around six questions for all participants: What happened? What were you thinking at the time? What have you thought about since? What impact has this incident had on you? What has been the hardest thing for you? What do you think can make things right? The idea was that after separate moderated dialogues with various parties addressing these questions, there would eventually be a moderated conference discussion between the parties. Acknowledging the pain and distress within the community, and the need for clarification of my actual views, I agreed to participate. During the week of March 27-31, the HSPH administration sent out a series of e-mails. The three pieces described above were collectively referred to as "VanderWeele's writings on gender identity, marriage, and abortion." My "writings" consisted of about 1300 words (I was not an author of the amicus brief, but rather one of 47 signatories; the brief itself was one of 149 such briefs in the case, 77 supporting the petitioners, 67 supporting the respondents, and 5 supporting neither). Emails went out from the Dean and Academic Dean to the Department Chairs and the School's Academic Council; from the Dean of Education and the Directors of the PHS PhD Program to the students; and from the Chief Diversity, Inclusion, and Belonging Officer and Dean of Education to Epidemiology students. The e-mails noted that some students were feeling harm and betrayal, and that eight 2-hour circle dialogue listening sessions would be held as part of the restorative practices (meetings that were to take place without me, so as to hear the concerns of students, staff, and faculty). The e-mail to the department chairs indicated that the chairs should meet with their faculty to discuss the matter. Following these e-mails, I asked to meet with our Dean and Academic Dean. During that meeting and through e-mail correspondence I indicated that I was indeed prepared to participate in the restorative practices. I also requested that students be reminded of Harvard's commitment to the principles of academic freedom (e.g., [8]) and of the absence of any academic misconduct on my part. I also sent the letter that the Stanford Law School Dean had written on principles of academic freedom [9], following the turmoil that had just occurred there. The HSPH Academic Dean at least found the letter "very compelling." I proposed various other approaches to promote civil discourse and intellectual diversity within HSPH. The Deans indicated they would consider the proposals. 
Several further e-mails were sent out by Department chairs and Program Directors. Some of these e-mails referred to my views as "reprehensible", as being such as to "cause deep hurt, undermine the culture of belonging, and make other members of the community feel less free and less safe," as having been "condemned" by "many students, faculty and staff," and as "in conflict with our Department's and the School's stated goals of advancing Equity, Diversity, Inclusion, and Belonging as well as our commitment to sound public health policy," with the incident itself described as an "extremely corrosive situation," and the restorative practices as "redress" and "reparative justice." These e-mails were sent to large listservs of students, faculty, and staff. Following two of the more strongly worded e-mails, I wrote to the respective department chair and PhD program director. In one instance I received a kind and apologetic note, followed by a public apology to the entire department. In the other case, the language that was used was defended. None of these various e-mails made any reference to the absence of academic misconduct, or to the fact that my writings were protected by Harvard's policies on freedom of expression. Perhaps in part because of this lack of clarity, the situation continued to escalate. Students raised concerns with faculty in courses unconnected to these issues. It was clear that there was deep pain or a feeling of being offended among many within the HSPH community, sometimes over perceived threats to identity. Some of this seemed to be that the incident and subsequent discussion had allowed a long series of past hurts and harms within the LGBTQ+ community to resurface. Some of it seemed to be a sense from members of the LGBTQ+ community that what they had thought they had come to as a "safe environment" was in fact not so. There was perhaps also a sense that my writings had violated HSPH community norms and values. This is a complex matter, which I will return to later, insofar as the values, norms, and systems of moral understanding present within HSPH are somewhat less uniform than the School often projects, though the majority positions on these issues are indeed quite clear. Throughout this time, I agreed to every request from faculty, staff, or students to meet, either individually or in small groups, to talk through the various issues, or for them to share their pain. Faculty who defended me or the principles of academic freedom, including the Epidemiology Department chair, sometimes themselves came under criticism. Several faculty and students, including some who strongly disagreed with my views, nevertheless wrote to affirm support, and the importance of the free exchange of ideas. It also became apparent that a number of students and some faculty agreed with my views, but felt silenced by, and concerned about, what was taking place. Students also expressed concern that the way the circle dialogues were being handled suppressed alternative viewpoints. 
Some students and faculty expressed the view that even if I did have the right to academic freedom, it was nevertheless problematic that I had signed the amicus brief with my academic affiliation. I tried to clarify with faculty and students that (i) the brief itself stated that affiliations were for purposes of identification only; (ii) this was in line with Harvard policy ([35], Section II.2); (iii) this was standard practice for academics signing briefs. Some faculty (including those involved in the national movement to protect academic freedom) expressed concern about the restorative practices, and advised me not to participate, especially in light of the fact that the administration had not clarified that there had been no misconduct, had not affirmed principles of academic freedom to students, and that words like "redress" and "reparative justice" had been used in some of the e-mails. I defended the restorative practices process, provided that proper clarification was given, on the grounds that its framing in terms of the six questions above was reasonable, and that it was important for all parties to seek the restoration of relationships and trust. At the Epidemiology departmental faculty meeting on April 5th, a central agenda item was "Discussion on matters related to Tyler VanderWeele's views." The Department chair allowed me to make some remarks prior to this discussion. I commented that I had real sorrow over the pain and distress in the community; that my view at the time of my signing was similar to that which President Obama held upon his election and until 2012; that I had been sent the amicus brief, asked if it corresponded to my views and, if so, if I were willing to sign. Since it did correspond to my views, as a member of our democracy and as a matter of conscience, I thought it was important to sign. However, I further noted that, as a member of that democracy, I had also accepted that a different view of marriage had prevailed in law, and that I had not addressed the matter since. I noted that I worked hard to treat all students respectfully. I also said that my experience of the events made me feel that HSPH, as a community, was not particularly strong on dealing with matters of academic freedom, intellectual diversity, and civil discourse. After my remarks, I departed from the meeting to allow for freer discussion among the faculty. During the week of April 10-14, six more 2-hour circle dialogue listening sessions were scheduled. I again met with the Dean who affirmed my academic freedom, but defended a decentralized approach to the incident so as not to upset the students and so as to let the situation quiet down. There was also to be a transition of Deans in the new 
The University's Vice-Provost wrote to me stating, "I know I speak for all in the University's administration when I write that we respect you and your opinions, and your rights to free expression." I subsequently met with her and she affirmed the same. On April 14th, I requested that the Deans communicate to the Department chairs both the University-Wide Statement on Rights and Responsibilities concerning free speech [8], and also that, as per the comments above, my signing with my academic affiliation was within the bounds of Harvard policy ([35], Section II.2), and that the chairs then distribute this material to the faculty, who could then clarify matters with the students. I argued that this would be in keeping with her proposed decentralized approach.

A week later, on April 21st, I was notified by the Dean that there was to be a cessation of scheduling additional circle dialogues, after the fourteen 2-hour sessions that had occurred, and that the University-wide statement on academic freedom, and University policies on using one's affiliation, had been distributed to the Academic Council and Department chairs; the e-mail also expressed an expectation that in the new academic year there would be additional teaching and learning modules on academic freedom. To the best of my knowledge, however, no departmental-level communication and clarification was made to either faculty or students during the week that followed, or thereafter.

In May, a faculty colleague mentioned that she had sent one of my statistical methodology papers to a collaborator, and that it had been dismissed because of my signing the amicus brief. It seems there were similar dismissals of my methodological work on such grounds on Twitter, and by some HSPH students. Throughout this time, I felt uncomfortable entering the HSPH buildings or using my office there, and only in early May did I return. I had previously spoken with several people who had said that they were uncertain whether, if I entered, there would be organized attempts to surround me. In the final week of the semester, I attended a major departmental event to move towards greater re-integration, and also participated in discussions with PHS doctoral students, facilitated by the Associate Director of the PHS program, to better understand different experiences and viewpoints concerning marriage, rights, and other moral questions. On May 8th, nine days before the date set for the formal concluding joint restorative practices conference, the moderator of that conference informed me that the final two of the framing questions, "What has been the hardest thing for you? What do you think can make things right?"
would only be asked of the other participants, not of me. This created an asymmetry in the process, effectively considering the feelings of hurt, and placing blame for the situation, in only one direction, thereby arguably reinforcing the concerns of my faculty colleagues who had suggested that I withdraw. Nevertheless, in hope of some relational restoration, I decided to see the process through. I certainly do not think faculty with unpopular minority viewpoints should be subjected to this, nor do I feel I was forced to participate, but out of a desire to engage with those who felt most affected by what had taken place I went ahead. The spring semester of the academic year concluded without any formal clarification from the administration to students along the lines I had requested.

The message that, to my mind, was implicitly conveyed by the administration to the HSPH community, often by way of innuendo and what was not said, was that my views either perhaps were not, or perhaps ought not to be, protected by academic freedom.

An analysis of the response

Throughout March and April, I was often spending six or more hours per day dealing with the matters above. However, little of that time was spent in clarification of, or discussion of, my views. Almost all of it was devoted to managing the situation. I learned later that some thought my views constituted a threat to human rights. I have, in the Appendix of this essay, tried to provide greater clarification of my viewpoints on each of the three written pieces. This seems important in considering the question of what people and viewpoints should be considered admissible in academic public health, which I will turn to in the next section. In this section, I would like to address aspects of the response to the events that I think were not conducive to academic life within a university context. I will offer my interpretation of the events as I experienced them, though I certainly acknowledge that others may well interpret them differently and that there are undoubtedly numerous details of the events themselves of which I am not aware.

In almost all cases, I do not think the actions of the HSPH administrators were ill-intentioned. Essentially all of my interactions with the Deans, my Department Chair, and the Chief Diversity, Equity, and Inclusion Officer were interpersonally positive and supportive, and I am grateful for this. I would also speculate that these events would likely have played out similarly at, perhaps not all, but most, other schools of public health in the United States. I certainly do believe that students, staff, and faculty have every right, as part of their own freedom of expression, to be upset about, and to criticize, my published writings. However, I believe the way that this was handled by the administration, or in some cases by faculty or students, has detracted from academic life within the community.
First, although the administration acknowledged that my opinions were protected by freedom of expression and that I had not committed any academic misconduct, they seemed unwilling to formally communicate this to students, staff, or faculty. I proposed several different ways the clarification could be made - a letter from the Deans, communications from Department chairs, clarification from the Chief Diversity, Inclusion, and Belonging Officer - but these points were never publicly made to the HSPH community. Without having the necessary clarification, students requested that my tenure be revoked and that I be fired, or that I be removed from my teaching position, requests that would be unlawful for the School to carry out.

Second, and relatedly, I was told that some of the students stated that my signing the amicus brief should not be protected at all. This is tantamount to denying my right to participate in our democracy. The School's failure to affirm that my signing was protected perpetuates such beliefs.

Third, the two instances of which I am aware in which a department chair or PhD Program Director publicly condemned my views to an entire department or program, in their role as University administrators, constitute violations of Harvard's University-wide Statement on Rights and Responsibilities [8].

Fourth, the lack of clarification also had a chilling effect on the freedom of expression of others. Some students expressed concern about there being a double standard on freedom of speech. They felt that members of the HSPH community were only free to hold and express opinions so long as they aligned with those of the vocal members of the academy. If this was how a tenured professor was being treated for occasionally writing about his views, what would happen to an untenured professor, or a postdoc, or a student?

Fifth, in many cases, there seemed to be condemnation of my views before inquiry and understanding, not only from students but also from a department chair. Certainly not in all, but in some cases, the logic seemed to be that since the political case was won, the intellectual case must also be considered settled, and that one could thus condemn. This likewise does not facilitate an environment conducive to the free exchange of ideas.
Sixth, for a number of people, there seemed to be a reliance on information from social media posts, rather than a reading of the actual documents. The Twitter posts reached over 40,000 individuals within a few days. While the conversations I have had have, I think, been helpful in clarifying viewpoints, and sometimes in the restoration of relationships, I simply cannot meet individually with 400, or 40,000, persons. I do not think anything of the scale of what took place would have been possible without Twitter, which now seems to exert an undue and unhealthy influence on academic discourse. The Twitter posts suggested that I was homophobic, racist, and unfit to study flourishing. (For what it is worth, I am open to, and have, friendships with people across a diverse range of ideological viewpoints and identities, and value these friendships both in and of themselves and with regard to what I learn from them; it is also the case that there are a range of perspectives on the above controversial issues within the Human Flourishing Program at Harvard that I direct, and I welcome that diversity.) Sadly, these ad hominem accusations were perpetuated by academics, my colleagues in public health. Unfortunately, the abortion and mental health commentary was behind a paywall, as are all JAMA Network commentaries, and this allowed those who published the Twitter posts to make various innuendos as to the content of the commentary, without the commentary being read or easily accessible. The HSPH administration likewise did not distribute the actual article, but instead linked to the website behind the paywall.

Seventh, even when the pieces were read, the judgements that were made were often on my implied or assumed views, rather than on what was actually written. It was surprising how quickly people would sometimes jump to conclusions about my beliefs concerning positions that I simply do not hold. Such practices of guessing, assuming, scrutinizing, and condemning someone's unstated views seem inappropriate. Such practices, I think, also tend to lead to a greater propensity to attack the person, rather than the ideas.

Eighth, there was effectively no attempt to properly attribute responsibility for the pain within the HSPH community. Certainly, I signed the amicus brief and for that I take full responsibility. However, I was not myself seeking to disseminate either the brief, or my views on this matter. With regard to the law, I had accepted the outcome of the Obergefell v. Hodges Supreme Court case. Others may see things differently, but from my perspective, for some group, possibly involving members of HSPH, there was an attempt to use my signing of the brief to harm my professional reputation, which was apparently more important to them than the well-being of the HSPH LGBTQ+ students, my own well-being, and the fabric of the School community.
Ninth, a lack of public affirmation of academic freedom will inhibit empirical research being carried out by those with diverse viewpoints. Beyond the concerns noted over my signing the amicus brief with my academic affiliation, there were additional concerns over my signing without having carried out original empirical research on the topic. While the topics of the amicus brief were not, and are not, my primary topic of research, I had read through much of the related empirical literature through 2015 (though I have not closely followed developments since), and I had provided critical feedback on a review of that literature, mostly concerning methodological critique of study designs. This work, along with my carrying out related conceptual, philosophical, and cultural readings on the topic, which put me in a position to reasonably sign the brief, was an exercise of academic freedom, though the signing itself was an act of freedom of expression, arguably protected by academic freedom [13,14]. I have in fact additionally been invited to participate in original empirical research on each of the three controversial topics mentioned above. I have declined all of those invitations. In an academic environment more supportive of free inquiry I may well have pursued one or more of the aforementioned requests. While I think high-quality research is important on these topics, and should be protected by academic freedom, and I believe I could have contributed through my methodological expertise as I do on many other topics, the professional hazard in the current environment seemed too high. My experience with the remarks that I have made on these issues seems to have validated my prior concerns.

Finally, the way this was handled consumed an enormous amount of time, not only for me, but for many in the HSPH community. I accept that actions have consequences, but those consequences depend also on the response of the community, and the academic health of that community. An alternative for those who disagreed with me would have been to respond with, "I believe his viewpoint is wrong, and I am glad that the position that he was defending lost," and then either move on, or otherwise engage with me individually, or in groups, on the viewpoints themselves. That my positions were not simply treated as minority, though potentially intellectually defensible, viewpoints, but were instead effectively broadcast as being problematic (with the types of expressions described above) to what I believe was nine departments and perhaps over a thousand students, faculty, and staff, very much altered, for me and others, the amount of time required to move forward, the nature of the exchanges that took place, and the understanding of the institution. As noted above, in spite of all of the time spent, for nearly two months there was remarkably little exchange of ideas, or trying to understand diverse viewpoints, and learn from one another.

To my mind, these aspects of the response to my writings do not constitute, nor do they foster, a healthy academic community. Until the leadership and administration publicly and actively affirm and defend academic freedom and freedom of expression, incidents of the sort that I have experienced will inhibit the free exchange of ideas, understanding, and the pursuit of knowledge.
Academic public health

The challenges to academic freedom in this case were somewhat convoluted. The Vice-Provost, the Dean, and my Department Chair all affirmed my freedom of expression, but there was a reluctance on the part of the School's leadership to publicly acknowledge this. The message that I felt was often being conveyed to me was that my views, while perhaps formally protected, should not in fact be present within academic public health. Fundamentally, I think there is a lack of respect for the intellectual diversity within our public health community. In this section, I will argue that academic freedom and freedom of expression need to be supported to create a respect for viewpoint diversity, and that this diversity, when engaged with through rational civil discourse, has tremendous value for knowledge and understanding, for societal engagement, and for population health.

I believe much of what occurred took place because of different systems of moral understanding within the School. The majority positions at HSPH on abortion, marriage, and gender identity are relatively clear. My views, occasional writings, and signing the amicus brief were seen by some as violating the norms and values of the School. It is the case that I am a Catholic, and the positions that I hold follow the teachings of the Catholic Church [17]. I have not in any way hidden my Catholic faith; indeed my being received within the Catholic Church was described in the HSPH Magazine [40]. I assume Harvard faculty members are allowed to be Catholic, and that Harvard is not supposed to discriminate in the treatment of its members on the basis of religion. However, in the legal counsel I have received, there were questions as to whether, in the administration's behavior towards me, Harvard is meeting those obligations not to thus discriminate. With respect to the three issues that caused controversy, although my views follow the teachings of the Catholic Church [17], it is also the case that Catholic teaching holds that various moral positions can also be derived on the grounds of reason [43]. I believe it is good to uncover and make use of those grounds so as to present more generally accessible arguments. In my engagement with these issues, I have put forward arguments that are accessible in a secular context. Although I do not think it is necessary to do so, I tend to think that a democracy functions best when the grounds of the arguments put forward are accessible to as broad a group as possible. Similar viewpoints to mine are also held by many others, on, or apart from, the grounds of faith.

A couple of years ago, a student asked me how I could survive as a committed Catholic at an institution like HSPH. My response then was that this had not been an issue for me; that I realized that the vast majority of the faculty disagreed with a number of my views, but this had not inhibited my work, and, on the whole, the School had been a supportive environment. My perspective on this question is now rather different. The student's question, and my experience these past months, raise further concerns regarding who is welcome to participate in academic public health, and in what manner.
A university ought to be a place in which a broad range of viewpoints are welcome, even those which may be strongly at odds with one another [6,67]. The members of an academic community have a responsibility to put forward reasoned arguments, but we come to various topics with different starting points and presuppositions. The process of rational discourse is in part meant to uncover those presuppositions, and to evaluate the extent to which logic and evidence support a given conclusion. All research and scholarship, my own and others', is influenced by a person's commitments, identities, and positions. By interacting with those of other viewpoints we are made better aware of those influences and are together better able to try to discern truth. We should conduct our discussions and arguments respectfully, with the recognition that others will often disagree with us and may do so passionately. Through civil discussion, our understanding of alternative viewpoints becomes stronger. Our understanding of our own views can often also be sharpened; and we can sometimes find common ground. Civil discourse and viewpoint diversity are the means; knowledge and understanding are the ends.

Not everyone may agree with these ideas. Some may view the notion of rational discourse as one of many attempts to seize power. Others may disparage the notion of respectful civil discourse. One thread of the Twitter posts put forward the notion that my being "fastidiously interpersonally kind" was itself potentially problematic in that "fastidiously interpersonally kind oppression" is common in spaces of privilege. The reaction of some people to my abortion and mental health commentary was to reject the notion that there is potential common ground for those on different sides.

From the perspective of public health advocacy with a particular agenda, these alternative viewpoints are themselves perhaps understandable. An approach which rejects reasoned engagement, civil discourse, and finding common ground may sometimes be the fastest way to one's end. But, as discussed further below, I do not think it provides much hope for the future of a pluralistic democracy. Moreover, such an approach detracts from a university's purpose to create, preserve, and disseminate knowledge; it instead alters that purpose for different political ends.

These issues raise the question as to whether a school of public health, situated in a university, is, or should be, more akin to a public health advocacy organization, or to a university of the nature described above. The answer to that question is what is at stake with regard to the events at HSPH, and how the situation was handled by the administration. If HSPH is viewed principally as a public health advocacy organization with a particular agenda, then I have violated community norms and values and some form of redress seems necessary. If HSPH is viewed principally as an academic institution, then my views, provided I can defend them, should be a welcome part of dialogue, allowing for deeper understanding of one another's viewpoints.
There has been, and likely always will be, a dual nature to academic schools of public health. They function both as academic units and as public health advocacy organizations. But decisions need to be made as to how to treat their members. Can the viewpoints and convictions of a Catholic who is faithful to the teachings of the Church, or an Orthodox Jew, or a devout Muslim in an analogous situation, or a conservative, or others, be openly expressed, and that person still be treated civilly? If the answer is no, then there is a real loss with respect to the School's academic nature. If the answer is yes, then I think we have a long way to go.

There needs to be greater clarity as to what views are admissible in public health discussions, and which are to be considered unacceptable. Should it be permissible, for example, to silence or exclude minority viewpoints that are held by 10% or 30% or nearly 50% of the American population? Clarity on such issues would help address the question of the extent to which the university considers it acceptable for me to share my viewpoints on moral controversies, or to carry out related empirical research, or both, or neither. The answers to these and related questions are not at present entirely clear at Harvard [3]. Beyond the question of what views are admissible, there is an additional question as to whether diverse viewpoints should in fact be sought out. The research at many schools of public health is predominantly supported by federal grants, publicly funded by taxpayers. To what extent should the diversity of viewpoints within the general public be not only permitted, but even actively represented, within academic public health?

These concerns are not merely academic or theoretical. Schools of public health train and shape our nation's future leaders. On the various controversial issues noted above, roughly 30% to 50% (or more) of the United States' population hold positions similar to my own [29,51]. Such groups thus constitute 100 million or more people, just in the United States. The distribution of viewpoints within this country (or the rest of the world) is not the same as one finds at HSPH. To what extent are we equipping future public health leaders and academics to deal with this diversity of viewpoints? To what extent are we providing an environment in which to even understand different viewpoints?

Encounter with diverse viewpoints can be challenging and threatening; and indeed there were claims that students did not feel safe. That all students are safe is critical; that all students feel safe seems beyond the capacity of any institution, and making such feelings a central goal is likely to compromise learning. Excessive protection from ideas and people with whom one disagrees can make a person weaker emotionally and psychologically [4], weaker in understanding and knowledge, less able to find common ground, and less able to serve the entirety of one's country and world. If public health becomes, and is viewed as, overly partisan, as not even capable of understanding the concerns of others, then trust in public health institutions will likely continue to erode. This, I believe, will often gravely compromise the capacity of these institutions to promote population health.
Freedom of expression can be abused and there are risks to granting these freedoms [71], but by treating one another civilly and respectfully we can and should try to prevent those abuses. There are also complexities around differentials in power within the university, and the reach that the speech of a particular person is able to have, though I believe that most faculty are genuinely motivated to try to empower students so that they too, as their careers develop, have the capacity to communicate their work also to the general public. However, without taking the risk of guaranteeing freedom of expression for everyone, as best we can, there is potentially a severe loss with regard to our own capacity to seek truth, and also a severe danger with regard to our capacity to work together towards promoting well-being. The loss is well characterized by John Stuart Mill, in his work On Liberty [54]: "He who knows only his own side of the case knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion…"

The danger is perhaps well characterized in an address of Frederick Douglass [25], delivered in Boston, not far from HSPH: "Liberty is meaningless where the right to utter one's thoughts and opinions has ceased to exist. That, of all rights, is the dread of tyrants. It is the right which they first of all strike down."

The only way that we can have true inclusion and belonging for everyone is a radical openness to the free exchange of ideas, carried out respectfully and civilly, accepting that others will disagree with us, accepting that we have different moral understandings about right and wrong, and accepting that we may find some ideas painful and hurtful. Moral understandings are diverse, and most nontrivial ideas about policy will likely be hurtful or offensive to at least some. Many students found my signing the amicus brief hurtful, and many likely view my signing as morally wrong. Conversely, I likewise view advocacy aimed at intentionally increasing abortions as hurtful and morally wrong. However, both of these actions are protected within our constitutional order, and are also within the bounds of academic freedom. Our democracy and universities should be able to sustain such diversity and disagreement. This does not mean that various moral views, or values, or identities shouldn't come under scrutiny. On the contrary, I think there should be open disclosure and debate of moral systems, values, identities, and their grounds, including religious grounds. This again allows for a better understanding of others' and our own perspectives, and also opportunities both for reasoned persuasion and for finding common ground.
The alternative for academic public health to a more radical openness to a free exchange of ideas is to exclude, or silence, or suppress, alternative viewpoints. One might take the position that Christians, Jews, Muslims, conservatives, and others are welcome so long as they in fact agree with the majority viewpoint, or remain silent on certain issues. That may work, and perhaps to some extent has worked, at schools of public health. However, it is not similarly an option for our society. Within society, it seems that we are faced with only the options of increasingly vitriolic fighting, or alternatively of attempting greater civil discourse, attempting to find common ground among our pluralistic perspectives, and accepting that the democratic process will sometimes not turn out as we like. Without that acceptance, polarization and hatred are likely to continue to increase. The question then arguably arises as to which of these two approaches to society schools of public health will ultimately contribute. The relative balance of their contributions could make a great deal of difference for the future and well-being of our democracy.

We need a robust free exchange of diverse viewpoints so that we can engage civilly and thoughtfully in society. Civil discourse need not exclude the expression of anger over offense. However, anger and hurt do not in general entail a right to silence the speech of others [6,27]; nor do they constitute a refutation of rationally grounded arguments; nor do experiences of anger and hurt necessarily entail evidence of wrongdoing or injustice. While we can acknowledge anger and hurt, we also need to make these principles of discourse clear to one another, and to train ourselves to be able to engage with those with whom we disagree even amidst anger. Moreover, it also needs to be acknowledged that, due to differing values and presuppositions, there will often be anger and hurt that extend in both directions. Such recognition can again foster a freer exchange of ideas, even amidst passionately held views. Among the proposals I made to our Deans were the following: (i) implementation of further training on the positive value of academic freedom and the free exchange of ideas; (ii) regular data collection on whether students, staff, and faculty feel comfortable sharing what they really think about controversial issues, both inside and outside of the classroom; and (iii) the introduction of a new seminar series on understanding diverse intellectual viewpoints, which would bring together two speakers on different sides of an issue to model civil discourse, to help us uncover differing presuppositions and values, and to hopefully find common ground. These practical steps, among others, would help foster a healthier academic community, and one more respectful of intellectual diversity. I believe there would be benefit from adopting such practices throughout all schools of public health.
Different communities, whether LGBTQ+ communities, or different religious communities, or different political communities, will have different values, and different understandings of what is good. Questions concerning means and the efficacy of policies can, to a certain extent, be addressed by empirical research. But questions concerning values, and the nature of well-being, cannot. Within a pluralistic society, we can try to structure life and policies so that each of our distinctive communities is empowered to try to pursue the values and ends that they deem most important. These distinctive values will, however, inevitably sometimes come into conflict. Our democratic system provides a way to adjudicate between differing viewpoints. However, there also needs to be realism as to what political action will, and will not, accomplish. A policy or change in law can of course grant new freedoms, and rights, and responsibilities, and can restrain or enable action and behavior in various ways. However, its effects on beliefs and values are more complex. Policy and law will influence beliefs and values, but law cannot force such change, and it will often not alter the beliefs and values of a particular community. Shame has sometimes been used to try to bring about such alterations, and this can sometimes be effective in altering more loosely held values and beliefs. But it can also be resented, and it sometimes only alters what people are willing to say they believe, rather than what they actually believe. Moreover, shame is less likely to alter values and beliefs that are firmly held and rationally grounded, or values embedded within a community's life. For those to change, rational discourse and persuasion, as well as consideration of a community's lived experience, are ultimately needed.
An overemphasis and focus on our disagreements, which to my mind is what much of the culture wars have brought us, will lead to greater conflict. It is not that these disagreements do not matter (they do matter), but there is a question as to how much emphasis they are given. Are they the central focus of our political energies, or are these important but auxiliary topics with respect to our interactions with others, and a source of genuine mutual respectful acknowledgement that we do not agree on all things? Through civil discourse and a free exchange of ideas we can understand each other's values and notions of well-being more fully. We can come to understand that reasonable people of goodwill can disagree on important matters. We can also see where there might be common values. I have argued elsewhere that such common values extend to a number of aspects of flourishing, including happiness, health, meaning, character strengths, relationships, and financial stability, and that we can meaningfully work together to pursue policies that promote various aspects of flourishing held in common [7,76,81]. This is what much of the work of the Human Flourishing Program at Harvard is trying to accomplish (and is also where I try to focus my own energies; though when controversial issues are presented to me, I will continue to speak my mind, as I hope will others, regardless of their viewpoints). I truly believe that a healthier, more robust free exchange of ideas, values, and viewpoints, carried out civilly, has the capacity to highlight our agreements and common pursuits, and to respectfully acknowledge and try to navigate our disagreements. Academic institutions should view the advancement of skills to work together, across differences in moral systems, values, and identities, as a critical part of preparing leaders and academics to promote the common good. My colleagues certainly might see a number of these issues differently, and I would welcome them to share their alternative perspectives.

I conclude with a discussion of the relation of these issues to a few other specific aspects of my own life and work: past, present, and future. During the 2004-2005 academic year, when I was a doctoral student at HSPH and serving as President of the Student Christian Fellowship at the School, a member of that fellowship indicated a desire to start a pro-life group. With some fear and trepidation, I agreed to help her. The School administration was in fact supportive. When posters advertising events were pulled down, the School put them behind glass. In the end, we hosted joint events with the Student Reproductive Rights group at HSPH, both to better understand each other's perspectives and also to try to find common ground. It is not clear to what extent we are positioned to hold similar joint undertakings today.
One can, in one's discussions, at least still point towards examples of partnerships navigating disagreements. In the elective course I teach at HSPH on religion and public health [82], I discuss the partnership between Brazil's National AIDS Program and the Catholic Church [57]. That partnership persisted, in spite of deep and irreconcilable disagreements, including highly pertinent ones regarding advertisements for contraception, because both groups believed they could better advance their shared goals by working together than by working separately. That sort of difficult partnership could be taken as a model for how to move forward towards common ends, even when there is deep disagreement over values. There are of course numerous other such examples [16,33,41,42,48]. However, without this sort of difficult work together, I think progress towards societal well-being will be impeded. There are approximately 2.4 billion Christians world-wide, 1.9 billion Muslims, and billions of people of other faiths [61]. Their viewpoints are diverse, but many hold positions similar to the positions I hold that were found problematic at HSPH. Schools of public health have the option of working to oppose, suppress, and silence those views; or they may hope to change or convert those views; or they may acknowledge the disagreements and nevertheless find ways to work together in our various societies across the globe. The distribution of views of academics within schools of public health on the three issues above is not representative of the diversity one finds worldwide, and it is not clear that this is likely to change. Some projections suggest that the proportion globally who identify as religious will increase over the coming decades [62]. It seems worthwhile to have a conversation on what might be the best way forward.

I will, next year, be publishing a book entitled A Theology of Health [81], written from a distinctively Catholic perspective. Had it not been for the events that were ignited by the Twitter posts, it is possible that the book would have mostly slipped under the radar of the public health community. With the events that have taken place these past months, the book may now come under much greater scrutiny. While the book lays out a distinctively Catholic understanding of health, it also engages with the empirical literature, and it concludes with a "non-theological postscript" which attempts to bring some of the insights of a Catholic or Christian understanding of health to a more pluralistic context. I am sure that there will be critique, and in fact, I welcome it. But I hope that the criticism will be productive, that it will allow me to understand the views of others, and challenge mine in helpful ways. I likewise hope the book provides similar opportunities to others to have a better understanding of my views, and to have their views challenged and sharpened; and that ultimately it may help us work together.
The events recounted above at HSPH also happened to coincide with two other significant undertakings. First, in the fall of 2022, long before I had any idea these issues would arise so personally for me, the Associate Director of the Human Flourishing Program at Harvard, which I direct, began to help organize a faculty-led Council on Academic Freedom. He had my full participation and support, though he began this work on his own initiative. That Council was formally constituted in March of 2023 [63], not long after the Twitter posts began. Part of the mission of the Council is to support faculty attacked for speech. Given my early involvement in the Council's formation, I did not, however, particularly want to be the Council's first case. I have received helpful advice from the Council's co-Presidents, but asked them not to act collectively until the beginning of the new academic year. My hope is that over the long term, the Council will be able to help strengthen the School with regard to dealing with intellectual diversity, civil discourse, and freedom of expression. I would invite other faculty at HSPH, and within the broader Harvard community, to become members and join in these efforts [22].

Second, in April of 2023, the Human Flourishing Program, in collaboration with Harvard's Memorial Church and other organizations, hosted a conference on forgiveness that had been over a year in planning. I see forgiveness as replacing ill-will towards someone you believe has harmed or wronged you with goodwill [38,77]. There is perhaps a sense of moral injury, both for me, and for those who feel hurt by my writings and action. For them, it may be partially constituted by my holding a view of traditional marriage that they feel threatens identity and human rights, and by HSPH not being the community for which they had hoped. For me, it is partially the attitude my colleagues in public health now seem to have towards me, and partially being within an institution, and perhaps a discipline, in which my views seem not to be welcome, in what I had previously understood as a principally academic context. My Psychology Today blog post on the topics of forgiveness [80], the conference, and our randomized forgiveness workbook trial [37,88], was strongly shaped by what had been taking place at HSPH. I recognize that there is real pain on both sides, and that we each view the others' actions as having been harmful. I recognize the challenge of forgiveness when one or both sides believe they have not done anything wrong. While I hold my views with conviction, I have genuine sorrow for the pain and distress within the HSPH community, and I hope that the various discussions that have taken place eventually bring greater restoration of relationships and trust. Forgiveness is not sufficient for healing, restoration, and rebuilding; one needs also understanding, accountability, mourning over loss, and new ways forward; however, I do think that forgiveness, replacing ill-will with goodwill, helps move in this direction. I have been working towards forgiveness of those who have, either unintentionally or intentionally, hurt me and my family. While, from my perspective, I believe I have done no wrong in holding or acting upon my views, I acknowledge that others see things differently, and I hope, over time, they too, from their perspective, might see forgiveness as an appropriate response. I believe that through civil discourse, and through forgiveness, we can bring some healing to our HSPH community and I hope also, in the long run, to our world.
Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix

their interconnection. It serves dual related purposes. I believe that there are at least two ways in which this definition can be modified while still retaining something meaningful. One might alter the "vow of permanent union" or alter its being "between a man and a woman." I believe these two alterations are in principle independent of one another. The first alteration gained greater cultural prominence with no-fault divorce; the second with growing acceptance of same-sex marriage. With both alterations in place, the definition of marriage becomes something along the lines of "an expression of intention of long-term union so long as the partners involved desire it." However, once again, the two alterations are in principle independent; same-sex partners may make vows of permanent union; conversely, many opposite-sex weddings today do not involve vows of permanent union. Even with both alterations in place, this is still a meaningful understanding of marriage, but it places more exclusive emphasis on the personal fulfillment of spouses, and it severs the direct connection of marriage with reproduction and children. It does not exclude this connection, but it becomes a secondary, rather than a co-primary, purpose of marriage.

The argument of the amicus brief was that since what was at stake was the definition of marriage, it seemed better to allow the American people to deliberate about the definition through a democratic process within each state than to entrust the decision to nine unelected Supreme Court justices. There is certainly no guarantee that the democratic process within states will necessarily be representative either, but it does in general seem more feasible to attain a more substantial consensus on many issues at the state level than at the federal level. In any case, with the Supreme Court decision, the latter definition effectively prevailed in law. With marriage thus redefined, I can affirm a "right to marry," with marriage thus understood, but once again, my view was that with respect to overall societal well-being, it would have been better to use the term "civil union" for the latter definition and to retain the traditional definition of "marriage" itself. However, I respect that others disagree with that position. In any case, in whatever way marriage is defined, each person has a right to enter marriage, though with the traditional definition, it would be unusual, though not inconceivable, for a same-sex attracted person to marry under that understanding. Understandings and definitions of marriage change over time [36]. Our dominant cultural understanding has considerably evolved, but I do not believe that the redefinition of marriage is without impact on the welfare of children. Marriage has not been the only institution oriented towards the welfare of children (education is as well), but marriage has been a primary one. The redefinition does not completely sever the link with children, but it subordinates it. That the redefinition takes one of the primary institutions oriented towards the welfare of children and reorients it more exclusively towards the fulfillment of spouses I believe has a number of consequences.
First, as argued in the brief, the traditional definition of marriage best protects Article 7 of the United Nations' Convention on the Rights of the Child [74], that the child shall have, "as far as possible, the right to know and be cared for by his or her parents." I believe this proposed right is directly connected to childhood well-being [21,45,46,52,53,55,60,64,84,85], as are other parts of the Convention's statement. This point also in some sense subsumes the point in the brief concerning sex-complementarity within marriage, since children living with their biological parents would naturally entail such complementarity (though, of course, even independent of the question of same-sex marriage, this will not always be attained in the raising of children as, for instance, with single parents, divorce, widows/widowers, or some cases of adoption).

Second, a revised definition of marriage, more disconnected from children, also alters the understanding of opposite-sex marriage as well. If marriage is principally about the fulfillment of spouses, this will then alter a person's willingness to stay in a marriage for the sake of the children. For a number of people, it will no longer make sense to continue with a marriage if the marriage itself is not personally fulfilling. However, the impact of this on children is not inconsiderable.

With an altered understanding of marriage, focused on the fulfillment of spouses, and allowing for no-fault divorce, divorce rates sky-rocketed [60,83]. Research suggests negative average effects of divorce on the health and well-being of spouses [15,20,70,76,89] and perhaps even more notable negative effects on children [10-12,18,24,34,87]. This does not mean that divorce is never the right answer; there are faults of unfaithfulness or abuse, for example, which may render divorce the best option; and single parents and their children should be strongly supported, like all others, and can be enabled to flourish, especially with such strong support. However, an altered understanding of marriage, focused more exclusively on the bond between spouses, makes it far more common for divorce to seem like the right way to proceed. The redefinition thereby effectively alters opposite-sex spouses' understanding of marriage as well. A person's willingness to stay in a marriage may become more focused upon personal fulfillment, and less focused upon children. Moreover, with a conception of marriage more detached from children, it may become difficult for some to even understand the point of getting married, and the rate of opposite-sex marriage may itself then decline, and openness to having children outside of the context of marriage may increase [68]. It is not clear, if marriage is just about love between partners, why it is important to regulate marriage at all [31]. As a complex illustration of several of these points, and their interplay: some time ago, a colleague of mine was engaged to her partner, who was the father of their two children; during the course of their engagement, he told her that he was no longer in love and was leaving. If marriage is principally about the fulfillment of spouses, his response is understandable; his response was not necessitated by such an understanding, but it coheres with it.
Finally, the effective removal of reproduction and care for children from being embedded within the definition of marriage itself I think also shifts societal focus away from children's welfare and towards adult fulfillment. This shift arguably affects a host of issues ranging from teacher pay, to tax policies, parental leave and childcare policies, policies concerning children's use of social media, and decisions concerning various COVID response options and how these affect different age groups [32,49,50,56,59,73]. These shifts are not inevitable, and can be resisted, and there are undoubtedly numerous other societal forces at play in shaping these policies, but I believe an altered definition of marriage focused more exclusively on spouses naturally shifts societal focus away from children, and the priority given to the welfare of children. The question is not so much how marriage can or might function in any given relationship (same-sex couples can certainly lovingly care for children), but rather how marriage is functioning as an institution.

I do not see how these three points above could be completely irrelevant to childhood well-being, but it is nevertheless still conceivable that their corresponding effect sizes might be negligible. Some of the views above are thus at least partially open to empirical refutation. To gain insight as to how these three considerations above may relate to childhood well-being, one might attempt to collect data over time oriented towards questions like: What proportion of children are living in marriages with both biological parents? To what extent are spouses committed to remaining in a marriage for the sake of children? To what extent is society giving priority to the well-being of children within its policies? Data collection may give insight into how these matters are changing over time. However, as I have commented elsewhere, traditional cohort-based methods for causal inference are not particularly well-suited to provide evidence for the effects of cultural movements, and other methods of cultural or historical analysis may be more appropriate [75].

I do not believe that the question of same-sex marriage is completely responsible for the various cultural changes indicated above, nor do I think that these cultural changes were intended by many of those advocating for same-sex marriage. Moreover, as noted above, I do think each of the two alterations to the definition of marriage is in principle independent. However, I also think that each independently weakens the link of marriage with reproduction and the care of children. In many ways, same-sex marriage was simply the conclusion of alterations, beginning with no-fault divorce, towards a definition more exclusively focused on the fulfillment of spouses. Nevertheless, I still do not see how the traditional definition of marriage can be modified without, in one way or another, re-orienting it away from children. And I do believe that this re-orientation has consequences for children's well-being.
The amicus brief did, however, also acknowledge that a revised definition would provide various practical and social benefits to those in same-sex relationships, that these could be weighed against the alleged claims concerning the welfare of children, and that there was room for genuine disagreement. The brief further acknowledged the long and tragic history of cruelty towards same-sex attracted persons; and I certainly think such cruelty is wrong. The brief further discussed the need to affirm the equal dignity of all people, and the possibilities of ensuring civil rights, and a respect for the importance of the relationships of same-sex attracted people, regardless of the ultimate societal definition of marriage. However, given the various trade-offs at play, it was again argued that the weighing of these considerations ought to be left to the states, so as to adjudicate between the various matters under discussion. I do not expect readers to agree with my positions, but I hope that the exposition above at least helps with understanding how I can hold my position without having an animosity towards same-sex attracted people. I believe LGBTQ+ people should be treated civilly, and should be supported and cared for, as should all people; their human rights and personhood should be respected.

I will now, far more briefly, consider the positions put forward in the Psychology Today blog post [79] and the JAMA Psychiatry commentary [2]. The Psychology Today blog post raised the question of whether introducing matters of gender identity in kindergarten was conducive to well-being. When my wife and I were touring kindergarten and pre-kindergarten classrooms in the Cambridge public schools, we were surprised to observe a teacher reading the book, "Who Are You? A Kid's Guide to Gender Identity," to pre-kindergarten students at circle time, suggesting to the students that their parents may have wrongly guessed the gender of their child, and providing a gender wheel in the back of the book to help the students explore a number of different options. Personal distress over gender identity is a real and difficult issue, and I do not pretend to know the right care approaches or interventions for those suffering from this at various life stages. I do, however, have concerns about the age at which these matters are being addressed in the general curriculum. It is not clear to me that including this book in the pre-kindergarten curriculum, at the age of 4, is more conducive to societal well-being than dealing with questions of gender identity on an individual, or classroom, as-needed basis, if or when required. I have concerns about whether pre-kindergarten curricula on gender identity might be creating gender dysphoria, rather than alleviating it, and whether similar phenomena might be at play in other contexts as well [47]. It does not seem unreasonable to raise these concerns, and that is what the blog post did. I think that there needs to be more open discussion in academia, and in society, about these matters. Most people, even those who are deeply concerned, seem very uneasy discussing these issues, for fear of being attacked for simply raising them. Colleagues at Harvard, ranging from an expert in child development to a clinician providing mental health care for teenage girls, have told me that they are uncomfortable sharing their concerns on these matters in many or most settings at Harvard. An evolutionary biologist at Harvard likewise recently came under attack because she explicitly stated that sex was biological and binary [3], even though she also noted that we can nevertheless respect a person's gender identity.
The attack was sufficiently severe, and the administration's response sufficiently weak, that she eventually felt she had no choice but to resign. Rather than open discussion, it seems we are often now relying on anonymous articles [26], or on brave, and subsequently vilified, authors [69,72] and whistleblowers [66], to raise alternative viewpoints. One may strongly disagree with their positions, but it is not unreasonable to raise the questions. I think that there are real and reasonable concerns about the welfare of children embedded in these questions.

In the JAMA Psychiatry commentary [2], I argued that the abortion and mental health literature had been weaponized by both sides of the abortion policy debate; that the moral contours of the policy debate lay elsewhere; and that the abortion and mental health literature should thus be more oriented towards providing for the mental health needs of women regardless of their views. I had pro-choice colleagues write to me indicating, "I find nothing even to disagree with," or to say that the commentary was "thoughtful, beautifully written, and very well balanced." The commentary was sufficiently centrist that the Harvard Gazette solicited and ran an interview article with me on it [65]. It may well still come, but no one at HSPH, or within the rest of Harvard, has to date noted to me anything in the commentary with which they disagree. I had correspondence with an epidemiologist at a different institution who took issue with the sentence, "The one meta-analysis on abortion and depression has come under reasonable critique; yet critics have not produced an alternate meta-analysis and the 10-year-old study may still be the best quantitative-synthesis estimate available." To the best of my knowledge, this is still the only meta-analysis, but the argument in the commentary in no way hinges upon this point. Nevertheless, in spite of trying to find common ground in the commentary, and, with some colleagues at least, evidently succeeding at this, the article was regularly referenced in the Twitter posts; and it was among the writings that were deemed problematic by some of the HSPH students, faculty, and administrators. As best as I can tell, the reasoning was that because I did not explicitly affirm a pro-choice position, or because one can read between the lines to infer my position on abortion, this, rather than my actual words in the commentary, was problematic. It is true that I believe that abortion typically involves someone acting, as an individual, to end human life as the intended result, and thus constitutes action that is wrong, and a violation of human rights, namely the right to life of the fetus [44]. This not infrequently occurs because women's financial, relational, and emotional needs are not met, and so I believe there is a societal culpability for this as well. With regard to policy, while I certainly do not think the laws are irrelevant, I believe more work should be oriented towards creating a positive culture of life that welcomes and sees the value of all children and all life, and also towards structuring our societal life so as to better provide for the economic, emotional, and social needs of women so that pregnancies are less often unwanted [28,81,86].
My hope in this section was to explicate my positions in slightly greater detail and give some of the reasons for those positions.I do not expect that the explications above will necessarily persuade, but I hope they will at least help colleagues understand why I hold the positions I do.Ultimately, I hold each of these positions on account of the welfare of children: what I see as unborn children, children in schools, and children within families.These are of course not the only issues that threaten child welfare.The Catholic Church is itself sadly culpable for a long history of abuse.I strongly believe that addressing this too should also be a public health concern [39,78], and that while considerable progress has been made in prevention [30], greater accountability for those who perpetrated and covered up these incidents is still needed.In any case, likely due to a myriad of causes, the data indicate that the well-being of young people has been in notable decline [19,23,58].Children are among the most vulnerable in society and I do not think their well-being has received adequate attention in public health.During the twenty years since I began as a graduate student at HSPH, the Department of Maternal and Child Health was dissolved (and its faculty mostly subsumed into what is now the Department of Social and Behavioral Science) and during this time the Sexual Orientation and Gender Identity and Expression Health Equity Research Collaborative was established.Research on both of these broad topics should arguably be well represented at a school of public health.However, the decreasing prominence of one, along with the increasing prominence of the other, I think is indicative of the shifting priorities in academic public health.I believe
16,171.4
2023-08-01T00:00:00.000
[ "Education", "Sociology", "Philosophy" ]
Social Entrepreneurship Policy: Evidences from the Italian Reform

Social entrepreneurship (third sector) is an increasingly important global economic phenomenon that is squarely under the academic lens. Social entrepreneurship represents an interesting opportunity for policy makers to explore new frontiers of economic growth and implement innovation in a potentially growing services sector, with possible job opportunities coming from new job creation in the upcoming decades. Based on evidence from Italy, this paper considers the broader picture of this phenomenon. Addressing the need to better understand the drivers of social entrepreneurship policy, we propose a model for interpreting the impact of the recent Italian reform of the third sector at various levels of the ecosystem, which favors innovation and technology adaptation.

Historically, social enterprises have always created economic and social value in under-developed countries and in situations of economic and social hardship. In recent years, the general conditions of welfare systems in economically advanced countries and the development of new affordable technologies have increased the number of social enterprises, giving birth to new forms of enterprises that not only promote useful services for the community, but also represent interesting new forms of employment. Social entrepreneurship represents an interesting opportunity for policy makers to explore new frontiers of economic growth and implement innovation in a sector with great growth potential and possible job opportunities coming from job creation in the upcoming decades. For this reason, the authors provide an in-depth case study of the Italian reform of the third sector, which was introduced in 2017, to demonstrate how entrepreneurial policies can be implemented to favor the development of a field with tremendous growth potential.

The main purpose of this study is to explore the main drivers of social entrepreneurship policy in order to innovate an established field, favor technological adaptation, and provide greater employability. This paper is structured as follows: the first section offers a background on the definition of social entrepreneurship and related concepts in the academic literature; the second section describes the methodology and research; the third introduces the results of the study; and the final section presents the contributions of the study to academic literature and further research opportunities. 
Literature Review on Social Entrepreneurship Social entrepreneurship, generally defined as ''entrepreneurial activity with an embedded social purpose'' [Austin et al., 2012], has become an important global economic phenomenon [Dacin et al., 2010;Mair, Marti, 2006;Santos, 2012;Zahra et al., 2008].Without reproducing a comprehensive analysis of the literature on the definition of social entrepreneurship and its attendant terms, social enterprise and social entrepreneur, we propose a review of the major contributors to this endeavor, which evidences both the areas of consensus and the areas where different definitions might coexist.Although social entrepreneurship has been squarely under the academic lens for several decades, many researchers find that the field still lacks a comprehensive, universal definition of what social entrepreneurship is [Weerawardena, Mort, 2006;Short et al., 2009;Hoogendoorn et al., 2010;Nicholls, 2010;Bacq, Janssen, 2011;Abu-Saifan, 2012].This is, in part, due to the fact that many definitions were driven by practice rather than theory [Mair, Marti, 2006;Santos, 2012], and in part due to the wide range of interpretations of what both "social" and "entrepreneurship" mean, marked by the differing emphases on the prominence of social goals or the salient features of entrepreneurship [Martin, Osberg, 2007;Peredo, McLean, 2006].However, despite the differences in interpretations and approaches, the variety of definitions associated with social entrepreneurship in the literature point to a focus on four key factors: the characteristics of social entrepreneurs, the sector in which they operate, the processes and resources used, and the primary mission and outcomes associated with social entrepreneurship [Dacin et al., 2010].Seen through this lens, despite differences in focus, a consensus does emerge.Social entrepreneurship can be thought of as an activity that: (a) addresses social problems as its primary objective; (b) uses market mechanisms (e.g.sale of goods and services) to generate the resources needed to accomplish a social goal [Dees, 2001;Johnson, 2003], even if the goods or services are paid for by a third party [Thompson, Doherty, 2006]; and (c) there is an element of innovation in the way resources are combined and social issues are addressed [Mair, Marti, 2006;Nicholls, 2010].Within these very broad definitions, there is a multiplicity of views on how these terms are interpreted, depending on the researchers' different perspectives.Hoogendoorn et al look at these differences by organizing them along the lines of four distinct schools of thought (Table 1).The authors compare and contrast differences in approaches with regards to: what the unit of observation is in the literature (the individual or the enterprise); the centrality of the link between the mission and goods and services sold; the type of legal structure; the degree to which innovation is a defining feature; the presence of constraints on the distribution of profits; the importance of raising commercial income; and the extent of involvement in the governance of direct and indirect stakeholders [Hoogendoorn et al., 2010].Some of the differences observed in defining social entrepreneurship spill over to the definition of social enterprise.Again, central to most definitions is the notion that social enterprises seek to solve social problems.However, the national differences in welfare, labor markets, and ideology together with researchers' own worldviews, have led to the creation of many different 
kinds of enterprise [Zahra et al., 2009;Chell et al., 2010].While acknowledging the 'untidiness' of social entrepreneurship, Peredo and McLean offer an interesting insight into the loci of social entrepreneurship depending on the place of social goals and the role of commercial exchange in different perspectives [Peredo, McLean, 2006].The authors delineate a continuum in which, at one end, one finds the social goal as the exclusive aim of a social entrepreneur, locating social entrepreneurship firmly within the nonprofit domain; at the other end, however, the authors are open to the possibility of including even primarily for profit organizations with some social component to their mission, citing the well-known case of Ben & Jerry's, and concluding that "Indeed, one thing that emerges from a look at the range of uses given to "social entrepreneurship" is the clear suggestion that the distinctions among public, private, and NFP sectors become attenuated" [Peredo, McLean, 2006, p. 64].More recently, Abu-Saifan has attempted to put some boundaries around this continuum, which he contains between the confines of non-profit organizations with earned-income strategies to for-profit organizations with mission-driven strategies [Abu-Saifan, 2012].Saebi et al. 's typology of social entrepreneurship is another attempt at bracketing the continuum, focusing on the recipients of both the social and economic missions; the authors see these two dimensions in terms of differentiated/integrated strategy (cross-subsidization or beneficiaries as the paying customers) and in terms of the beneficiaries being passive recipients or active participants in the process [Saebi et al., 2019].Moreover, several authors have stressed the relationship between context and entrepreneurship [Shane, Venkataraman, 2000;Atamer, Torres, 2008].This relationship is further elaborated upon by Mair, who views social entrepreneurship as a context-specific, socially constructed phenomenon [Mair, 2010].For Mair, the purpose of social entrepreneurship is to bring about social change, modifying the social, political and economic reality at the local level.Thus, it is the local context that shapes the strategies and tactics employed by the social entrepreneur, including the choice of for-profit or nonprofit models.Even within the geographical boundaries of a single nation, social entrepreneurship can be the outcome of community work, in the form of voluntary associations or public organizations, as well as private firms working towards social objectives alongside profit goals [Shaw, Carter, 2007].Bacq and Janssen have contributed to the definitional issues based on geographical and thematic criteria, stating that "two types of definitions appear in the European literature: conceptual and legal" [Bacq, Janssen, 2011, p. 
381]. The EMES conceptual definition of "social entrepreneurship", characterized by a distinctive collective aspect, is accompanied by legal definitions given by national governments to provide a clear legal framework. Some of the examples cited include the social cooperatives in Italy, the Community Interest Companies in the UK, and the social purpose company in Belgium [Bacq, Janssen, 2011]. The case of Italy is of particular interest, as the economic weight of social enterprises is heavily felt, with thousands of social enterprises that provide a range of social services [Borzaga, Defourny, 2001]. A number of prominent scholars have highlighted the importance of developing multilevel theories in organizational research [e.g., House et al. 1995, Klein et al. 1999], especially in social entrepreneurship [Tracey et al. 2011]. Traditionally, studies have focused on micro or macro levels of analysis, ignoring the relationship among those levels or just exploring dynamics within the same level. The complexity of the social entrepreneurship phenomenon requires a multi-level approach, given that social entrepreneurship means different things to different people. It also means different things to people in different places. The field of social entrepreneurship has consequently become a large tent [Martin, Osberg, 2007] where different activities find a home under a broad umbrella of ''activities and processes to enhance social wealth'' [Zahra et al. 2009] or ''entrepreneurship with a social purpose'' [Austin et al., 2012]. This complexity offers space to different actors with multiple functions that can operate within the field of social enterprises. Social venturing, nonprofit organizations adopting commercial strategies, social cooperative enterprises, and community entrepreneurship are just some of the distinct phenomena discussed and analyzed under the 'umbrella construct' of social entrepreneurship; the emphasis on 'distinct' phenomena is deliberate, since a great many factors can trigger or facilitate entrepreneurship. Inspired by Painter (2006), Brouard and Larivet provide a framework that throws light on the interconnections between social enterprise, social entrepreneur, and social entrepreneurship (Figure 1). In their model, "the social entrepreneur is the individual or group of individuals who act(s) as social change agent(s) using his (their) entrepreneurial skills for social value creation" [Brouard, Larivet, 2010, p. 32]. Social enterprise is defined here as any organization focused on public service or common interest but does not necessarily include the entrepreneurial element. In the central part of Figure 1, the authors illustrate the various contexts in which social enterprises may be found, and in which social entrepreneurs may operate. The left-hand side of the figure distinguishes the range of sectors that harbor such enterprises, from private to public, with the social economy sector in particular evidence. In this representation, the social economy (also known as the Third Sector) comprises for-profits, nonprofits, and hybrid organizations that have a social mission as well as an economic one. Brouard and Larivet's framework maps the relationship among the concepts of social entrepreneur, social enterprise, social economy, and social entrepreneurship, paving the way for a structured interpretation of the impact of the Italian reform under study at various levels: 
at the individual enterprise level, at the context or ecosystem level, and in terms of overall social impact. The multi-level framework proposed by Brouard and Larivet is an important model that serves to build a general reading of social enterprises, also tying in the figure of the entrepreneur and the sectors in which social enterprises create social value. The framework proposes an overall view of the phenomenon and therefore becomes a useful tool for building new policies, in particular for finding any structural gaps in a complex sector such as that of social enterprises.

[Figure 1 (source: [Brouard, Larivet, 2010]) maps the spectrum from the public sector (government and near-government organizations) through the social economy sector (non-profit and hybrid organizations) to the private sector (for-profit organizations), and locates within it the social enterprise, oriented towards public service or the common interest, and the social entrepreneur.]

Research Context The term "third sector" indicates a group of organizations that produce goods/services and manage activities outside the market or, if they operate on the market, act with a non-lucrative purpose (generically defined as nonprofit), without distributing profits to any of their members or employees; on the contrary, these profits are used to increase the quantity and improve the quality of the services provided. Such nonprofit organizations are characterized by a pursuit of the welfare of the community or a part of it. These organizations can be defined as social solidarity organizations that specialize in the production of goods or services based on altruism, gift, trust, and reciprocity. The definition of the third sector generically indicates all forms of organization that try to solve social challenges through a variety of vehicles. Thus, this term embraces a very large reality, which includes, for example, voluntary associations and civil service, nonprofit organizations, non-governmental organizations, and social enterprises (in various forms). In other words, all bodies that pursue nonprofit solidarity or social purposes. In Italy, the third sector represents an evolving field [Venturi, Zandonai, 2014], with many job opportunities offering new roles and new professional figures.

"Social enterprise is among the most functional organizational forms for the promotion and creation of new jobs and "good" employment. The motivation and passion towards the social cause together with an efficient business organization model and a vision of work based on precise objectives and economic sustainability are the main ingredients that characterize it." (CIT Serena Porcari, Chairman Dynamo Academy Social Enterprise) 1

This is confirmed by the Italian National Institute of Statistics (ISTAT), which in its latest census (2017) showed an 11% increase in nonprofit institutions operating in Italy compared to 2011. It also showed a total of 5 million volunteers and 780,000 employees, increases of 16.2% and 15.8%, respectively, compared to the 2011 census. However, the census also indicates another important issue: an evident lack of technical professional expertise, with 50,000 people expected to retire in the short term and no clear plan to replace them. Moreover, in the general Italian economic scenario, the third sector currently performs six times better than the rest of the country's economic actors [ISTAT, 201]. We can therefore say that the social economy is solid despite the general crisis that has plagued Italy and the whole of Europe. This is particularly important in the context of a nonprofit sector that has the same need for innovation as the for-profit sector, but with fewer resources to invest. Indeed, the third sector emerges as an area within the nonprofit sector that particularly values those soft skills that build fundamental human capital (and that are unlikely to be replaced by new technologies): interpersonal skills, stakeholder management, medical and personal assistance, fundraising, and so on.

"The fact that the technological and digital revolution is destined to have a significant impact on how to produce, work and consume is a subject that is now widely discussed on a global scale. (…) Certainly, this revolution will not only affect individuals, but our own social and human relations, and even in these fields political action will not be limited to assisting but will have to play an active role in adapting to the present concepts and models now outdated: in the way of doing [social] business, in the way of training and educating, and in the way of designing welfare services." (CIT Claudio Cominardi, Undersecretary of State for Labor and Social Policies) 2

Digitalization is an opportunity that plays out in many different aspects, because it can help better define the new identity of social enterprises, increase the impact of internal communication, develop fundraising in an innovative way through the use of platforms, direct communication channels, and reporting systems, and provide better services to people with disabilities. It is necessary to affirm the professionalizing elements of the third sector, rethinking the model of collaboration between profit and nonprofit, and favoring the sharing of skills. It is also important to think about a governance system that brings together the different actors and embraces the use of technology to enhance impact. Digitalization applied to the third sector is a tool that can be used to plan and improve the possible outcomes of activities, better profiling the stakeholders and recipients of such activities. However, it is not always easy to convey the strategic nature of these investments to the actors that operate in the field. In recent years, the third sector has seen rapid evolution, but there is still an important gap in knowledge concerning the potential of digitalization. Hence it is also vital for nonprofits to invest in digital technology.

"Technological innovation is one of the challenges facing the Third Sector" (CIT Giuseppe Guzzetti, Chairman of the Association of Foundations and Banks) 3

Given the limited propensity of single organizations or entrepreneurs to make investments in digitization, in 2017 an important reform of the third sector came into force in Italy, which aimed to boost the potential of innovation drivers. 
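As a quick arithmetic aside on the ISTAT census figures quoted in the Research Context above, the stated growth rates can be used to back out the implied 2011 baselines. The snippet below is a minimal sketch; the 2011 values are not given in the text and are inferred here purely for illustration.

```python
# Back-of-the-envelope check of the ISTAT census figures cited above.
# The 2011 baselines are not reported in the text; they are inferred here
# from the 2017 totals and the stated growth rates, for illustration only.

volunteers_2017 = 5_000_000   # volunteers reported in the 2017 census
employees_2017 = 780_000      # employees reported in the 2017 census
growth_volunteers = 0.162     # +16.2% versus the 2011 census
growth_employees = 0.158      # +15.8% versus the 2011 census

volunteers_2011 = volunteers_2017 / (1 + growth_volunteers)
employees_2011 = employees_2017 / (1 + growth_employees)

print(f"Implied 2011 volunteers: {volunteers_2011:,.0f}")  # ~4,302,926
print(f"Implied 2011 employees:  {employees_2011:,.0f}")   # ~673,575
```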
Methodology Considering the exploratory nature of our study, we adopted an inductive, qualitative approach following the principles of grounded theory [Glaser, Strauss, 2017; Strauss, Corbin, 1990]. Using an open-ended design, themes and theoretical trajectories emerged from the data [Corbin, Strauss, 2008]. In terms of a theoretical sampling strategy, we concentrated on the recent reform of the Italian third sector introduced in 2017. This research is based on a wide database that we developed over the last year of investigation (2018), which covers the main Italian social entrepreneurship experts' reactions to the reform and is based on both archival and journalistic interview data (see Table 2). The authors independently codified the data and worked together on triangulation to moderate possible biases in understanding the purpose of the reform. One of the authors is an expert on the Italian third sector and actively participated in meetings and conferences relating to the new policy introduced in 2017. The data analysis was conducted following the inductive grounded theory methodology [Strauss, Corbin, 1998; Gioia et al., 2013]. The analysis stages are represented in Figure 2. The first step of the data analysis is based on descriptive and open coding (to identify first-order categories) following [Gioia et al., 2013]; the analysis was conducted with qualitative software (NVivo 11), used to codify the earlier categories and to visualize relationships between codes. During the second step of the analysis, we completed axial coding [Strauss, Corbin, 1998], collapsing first-order categories into theoretical constructs [Eisenhardt, 1989]. During the third and last phase of our analysis, we refined second-order categories into aggregate dimensions.

Findings One of the main drivers introduced with the reform aims to widen the spectrum of action of social enterprises. The explanation of the findings highlights the main points of the reform, which will empower a sector that, in itself, is structurally characterized by an internal transformation process aimed at supporting growing trends in terms of economic growth and future employability. Our coding analysis showed three main drivers of the reform that can be mapped onto the individual, organizational, and field levels of analysis introduced by Brouard and Larivet's framework [Brouard, Larivet, 2010]. 
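To make the three coding stages described in the Methodology above more concrete, the following is a minimal sketch of how first-order codes can be rolled up into second-order themes and then into aggregate dimensions, in the spirit of the Gioia approach. The codes, excerpts, and labels are hypothetical placeholders, not the study's actual NVivo coding scheme.

```python
from collections import defaultdict

# Minimal sketch of a Gioia-style data structure. The first-order codes,
# second-order themes, and aggregate dimensions below are hypothetical
# placeholders, not the study's actual coding scheme.

# Stage 1: open coding -> first-order categories attached to text excerpts
first_order_codes = {
    "new legal forms for social ventures": ["excerpt 01", "excerpt 07"],
    "benefit-corporation adoption":        ["excerpt 03"],
    "cross-sector partnership building":   ["excerpt 02", "excerpt 05"],
    "impact measurement guidelines":       ["excerpt 04", "excerpt 06"],
}

# Stage 2: axial coding -> collapse first-order codes into theoretical constructs
second_order_themes = {
    "new legal forms for social ventures": "entrepreneurial mindset",
    "benefit-corporation adoption":        "entrepreneurial mindset",
    "cross-sector partnership building":   "network building",
    "impact measurement guidelines":       "impact metrics",
}

# Stage 3: refine second-order themes into aggregate dimensions (the drivers)
aggregate_dimensions = {
    "entrepreneurial mindset": "institution building",
    "network building":        "ecosystem development",
    "impact metrics":          "social impact",
}

# Roll the structure up: dimension -> theme -> supporting excerpts
structure = defaultdict(lambda: defaultdict(list))
for code, excerpts in first_order_codes.items():
    theme = second_order_themes[code]
    dimension = aggregate_dimensions[theme]
    structure[dimension][theme].extend(excerpts)

for dimension, themes in structure.items():
    print(dimension)
    for theme, excerpts in themes.items():
        print(f"  {theme}: {len(excerpts)} supporting excerpt(s)")
```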
Institution Building The introduction of entrepreneurial mechanisms should increase the efficiency and effectiveness of projects with a high social impact. When discussing the development of an entrepreneurial mindset, it is important to understand the full potential for social entrepreneurship in Italy. This potential is not limited to the 'pure' social enterprise basin; rather, the reform purposefully broadens the field of observation, including a plurality of legal forms and organizational categories for which the "social" aspect is a strategic asset with respect to operational management [Venturi, Puccio, 2018; Maiolini et al., 2019]. The mindset is developed by opening up to new business forms (and consequently new business models), such as benefit corporations or innovative startups with a social vocation. The innovative startups with a social vocation operate exclusively in the sectors indicated by the reform and must embed a social impact methodology in their strategic plan. Interestingly, in addition to traditional sectors such as fair trade, social agriculture, microcredit, and so on, the reform expands the reach of social enterprises.

The construction of networks and new partnership models brings into play the extraordinary internal biodiversity of the third sector. The new associative networks go beyond the traditional networks through which similar subjects hold dialogue with institutional "counterparts". These networks reach into communities of people and organizations that include new typologies of actors called asset-holders; in other words, all participants in the creation of economic and social value introduce a new perspective, in which different players identify innovative solutions in different ways, encouraging a harmonious coexistence of cooperative and competitive relationships. Third sector institutions and social enterprises are first and foremost entities that can be used by citizens interested in pursuing the common good. Such citizens are, in logical order, though not necessarily in terms of importance, the first stakeholders of the third sector [Fici, 2018]. An ecosystem is therefore formed by many actors who perform different activities, have different objectives, and can make different kinds of contributions. For this reason, it is important to recognize the role of those actors able to act as mediators and orchestrators [Giudici et al., 2018] in the processes of identifying, producing, and implementing solutions. Given the complexity of the actions collectively put into play, it is necessary to understand the strategic importance of actors who manage the transmission of information and act as the platform or marketplace through which all the actors interact with each other. Thus, how and where open innovation processes and the orchestration of resources are selected and distributed within collaborative communities or networks becomes strategically relevant. 
Social Impact The reform was designed to introduce the concept of social impact, including tools such as methodological guidelines and metrics, to define a new process for identifying the third sector. In order to make the most of the results of a social enterprise, it is necessary to associate social outcomes with the measurement of economic efficiency and to understand which benefits a particular solution has created in a community. A social enterprise is distinguished from a traditional enterprise by its ability to show the transformation it produces in terms of the creation and distribution of both economic and social value.

"We do not know what will happen in the future, but we know with certainty that the social dimension is changing the economy and the way value is produced, so we must equip ourselves with a new paradigm where cohesion and sustainability will weigh more." (Paolo Venturi) 8

The new models introduced by the reform require innovative startups with a social vocation to simultaneously drive market innovation and show the benefits produced for their beneficiaries. The most relevant solution is to measure social impact, defined as the metric that becomes the main tool for qualifying and measuring the sociality of the entrepreneurial action.

"The social [dimension] enters as a characterizing factor in traditional supply chains producing a new generation of services (social agriculture, social housing, cultural welfare, social tourism, etc.); technology and new skills are significantly modifying the organizational models and the life cycle of new social enterprises; lastly, the social purpose is increasingly measured in terms of impact." (Vincenzo Algeri, Official Report on Impact Investing, UBI Banca, 2018) [UBI Banca, 2018].

The reform emphasizes how today it is impossible for any kind of company to omit the identification of social outcomes in the definition of a long-term economic strategy. Efficiency alone is no longer sufficient to build competitiveness and sustainability. The social dimension, understood as the quality of value, sustainability, and care for stakeholders [Porter, Kramer, 2011], is no longer an externality or an effect of economic action, nor an element that can only be used to heal the "failures" of the state and the markets. Thus, it becomes necessary to understand how to measure it and how to aggregate performance measurement systems of economic sustainability and the creation of social value. The social dimension is no longer relegated to being an output of the redistribution process implemented by public institutions, but becomes a generative mechanism, an input, within the model of integral human development [Venturi, Puccio, 2018]. The social dimension as an input allows one to trigger and accelerate processes of hybridization and convergence, bringing about systemic innovation. In addition, starting from the perimeter of the enterprise, these processes also modify its external dimension, giving life to new forms of participation and territorial democracy that are better able to respond to requests from communities and territories. 
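To illustrate what aggregating economic and social performance could look like in practice, the sketch below blends an economic margin with an SROI-style ratio into a single score. The indicator names, figures, weights, and the ratio itself are hypothetical choices made for this illustration; they are not the metrics or guidelines prescribed by the Italian reform.

```python
from dataclasses import dataclass

# Hypothetical illustration of aggregating economic and social value.
# Indicator names, figures, and weights are invented for this sketch; the
# reform's actual guidelines and metrics are not reproduced here.

@dataclass
class SocialEnterpriseYear:
    revenue_eur: float                     # earned income from goods/services
    costs_eur: float                       # total operating costs
    monetized_social_outcomes_eur: float   # proxy value of outcomes for beneficiaries

    def economic_margin(self) -> float:
        # Conventional economic efficiency: margin over revenue
        return (self.revenue_eur - self.costs_eur) / self.revenue_eur

    def sroi_like_ratio(self) -> float:
        # Social-return-style ratio: monetized outcomes per euro of cost
        return self.monetized_social_outcomes_eur / self.costs_eur

    def blended_score(self, w_social: float = 0.6, w_economic: float = 0.4) -> float:
        # Simple weighted blend; the weights are illustrative only
        return w_social * self.sroi_like_ratio() + w_economic * self.economic_margin()

enterprise = SocialEnterpriseYear(
    revenue_eur=900_000,
    costs_eur=850_000,
    monetized_social_outcomes_eur=1_200_000,
)
print(f"Economic margin: {enterprise.economic_margin():.2%}")   # ~5.56%
print(f"SROI-like ratio: {enterprise.sroi_like_ratio():.2f}")   # ~1.41
print(f"Blended score:   {enterprise.blended_score():.2f}")     # ~0.87
```

Even such a toy blend makes the trade-off explicit: changing the weights shifts how much a euro of monetized social outcome counts against a euro of margin, which is precisely the kind of choice that any official measurement guideline has to settle.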
Discussion The analysis of the data shows how, referring back to Brouard and Larivet's framework mentioned earlier, the Italian reform impacts the social entrepreneur, the social enterprise, and the social economy, effectively supporting the drivers of social entrepreneurship (Table 3).From an individual perspective, the reform aims to encourage the use of new organizational models that allow social entrepreneurs to use new forms of business as their vehicle for social action [Mair, 2010].The institution building process takes place thanks to the use of governance tools and the development of a new awareness in building a social enterprise (through the development of innovative entrepreneurial mindsets).It favors the implementation of a generative driver of new forms of hybrid organizations, so-called "second generation hybrids" [Rago, Venturi, 2014, p.1], such as start-up enterprises with a social purpose, community enterprises, or cooperative platforms.Hybrid organizations bring a transformative systemic innovation [Mulgan, Leadbeater, 2013] able to involve other forms of organizations (both profit and nonprofit) in a complementary manner.In terms of ecosystem development, the main need is to encourage the development of partnerships and networks between social enterprises and other ecosystem actors.There are two players that have an important role to play in this: the investors, through the launch of new impact investing tools and new forms of hybrid social media, and representative organizations (meta-organizations) that must find new tools and services to offer to associated organizations.By encouraging the development of collaborative networks between different actors, it is possible to increase the impact that social enterprises can create.The greater the number of subjects involved, the greater the ability to produce economic and social value [Brouard, Larivet, 2010] because this value creation is distributed among a variety of sectors thanks to a process of cross-sector partnership and smart relocation along the entire value chain. 
Last but not least is the impact produced by the new forms of social enterprises. The enlargement of a network of actors allows the system to expand opportunities to create economic value and to involve a greater number of workers. The new forms of welfare in Italy today represent a real "industry" that is worth 109.3 billion euro, equal to 6.5% of GDP. For Italian families, it is now the third item of expenditure after food and housing. On average, family spending on welfare accounts for 14.6% of net income [Tucci, 2017]. These elements are important for promoting employability in two ways: on the one hand, the creation of new jobs, thanks to the growth and scaling that social enterprises can achieve; on the other hand, the development of networks of companies and ecosystems favors the creation of new professional figures and allows the allocation of new skills in the world of the third sector. Similarly, new forms of collaboration and networks of companies allow one to innovate the services offered to beneficiaries, allocating them in a new way in the value chain of the enterprises [Venturi, Zandonai, 2014]. In any case, it is also important to find new ways of measuring the outcomes of these innovative forms of job creation. The ability to measure becomes the real challenge to be solved in order to demonstrate the effectiveness and efficiency of this new model of social enterprise. Directly related to the implications of the reform in terms of job creation are the implications for the skills needed to be effective in a growing and more complex third sector. Without opening a whole new front on a detailed analysis of a broad range of skills, we feel a strong focus should be placed on the development of entrepreneurial skills, both because they constitute a large subset of the broader range and because it is where, in Italy, there may be the largest gap. An entrepreneurial mindset and the entrepreneurial skills that go with it are essential for social entrepreneurs as they work on building social enterprises and collaborative networks. However, while individuals may be able to chart an educational path that develops entrepreneurial skills, policy makers cannot leave this development to chance. On the contrary, they will have to become experts at entrepreneurial skills and foster their development at all levels. This will involve, first of all, acknowledging the importance of entrepreneurial skills. The 2019 Global Talent Competitiveness Index (GTCI) clearly establishes the importance of entrepreneurial talent in creating new jobs at the startup level, as well as the vital role it can play in larger organizations and even governments. It further stresses that entrepreneurial skills should "be fully reflected in the curricula and practices of existing educational institutions, including business schools" [Lanvin, Monteiro, 2019, p. 8]. In the GTCI index, Italy ranks 38th overall but 23rd among European countries [Lanvin, Monteiro, 2019]. In a study involving 170 entrepreneurs and prospective entrepreneurs, Elmuti et al. 
find that there are causal linkages between entrepreneurial education and ventures' effectiveness [Elmuti et al., 2012]. The research carried out by Charney and Libecap shows that an entrepreneurial education produces self-sufficient, enterprising individuals, who contribute to growth and wealth creation and become champions of innovation. In particular, they found that "on average, emerging companies that were owned by or employed entrepreneurship graduates had greater than five times the sales and employment growth than those that employed non-entrepreneurship graduates" [Charney, Libecap, 2000]. Secondly, it will involve identifying the key entrepreneurial skills to foster. In this regard, the policy maker can rely on the significant work performed by the European Union, which first identified entrepreneurship and sense of initiative as one of the eight key competences necessary for all citizens to thrive and then developed the EntreComp framework, which proposes a shared definition of entrepreneurship as a competence [Bacigalupo et al., 2016]. The EntreComp framework is articulated into three interrelated competence areas (Ideas and Opportunities, Resources, Into Action), which in turn consist of five competences each. The framework further outlines an eight-level progression model that can be of great value for curriculum development. Third, it will involve identifying the multiple areas of intervention, which go beyond a purely academic curriculum. Research shows that the development of entrepreneurial skills stems from a combination of varied experiences, rather than depth in any specific type of experience or education [Stuetzer et al., 2013]. This has significant implications for curriculum design and argues for the incorporation of greater flexibility in the activities in which students can take part. Huq and Gilbert specifically look at the benefits of work-based learning in social entrepreneurship, with findings that strongly advocate for the inclusion of work-based learning to develop the mindset and the skills that social entrepreneurs will need [Huq, Gilbert, 2013]. Tixier et al. provide further guidance by analyzing entrepreneurship education at three different levels: the fostering of a widely spread entrepreneurial mindset, the development of entrepreneurial knowledge and skills that will lead to entrepreneurial action, and creating more exposure to entrepreneurial situations [Tixier et al., 2018]. The policy maker may intervene at all of these levels to foster the culture and the skills needed to support the growth of social entrepreneurship (as well as for-profit entrepreneurship). 
Conclusions and Directions for Future Research In this paper we have focused our analysis on the innovation introduced by the 2017 reform of the Italian third sector, presenting the first results that demonstrate the drivers of development for Italian social enterprises. The new policies introduced seek to find a way to ensure the greater efficiency of the system of Italian social enterprises. The Italian third sector is expanding and growing. To foster growth, it was necessary to introduce suitable tools: new organizational models, new forms of governance, a multidisciplinary sector, new forms of investment, and the possibility of creating partnerships and effective alliances. Making social enterprises more effective means allowing these organizations to grow and produce greater social and economic value. In this way, it is possible to envisage the greater economic sustainability of companies through the development of new employability and new forms of work (technological and not) that can accompany the development and innovation of social enterprises.

The model presented in this paper further examines the issue of entrepreneurial policy theory as a main driver of innovation for a specific typology of organizations (social enterprises) or a specific field (the third sector or social economy in general). Relying on Mair and Marti's conceptualization of social enterprises [Mair, Marti, 2006], it provides new guidelines for studying the evolution of a specific typology of organization, offering tools and policy instruments that favor the adoption of innovation across all organizations. By doing so, our research contributes to setting up foundations for the development of a theory of policy entrepreneurship [Autio, Rannikko, 2016] applied to social enterprises and the third sector. The development of this theory is all the more important because it will render social entrepreneurship theory more actionable by explaining how, in some situations, institutions may shape organizations and not the opposite. Finally, considering the three-level model provided by [Brouard, Larivet, 2010], this approach aims to explore the interactions that exist between the different levels of analysis and provide empirical evidence of how individuals can use organizations to innovate sectors.

Bringing the individual level of analysis together with the organizational and sectoral levels opens up new paths of research on entrepreneurial policy. First of all, our study is an exploratory case study for the purpose of theory building. The validity of the case study is solid, as it examines an industry that introduced a reform to build the foundations for the development of innovation within it, with a multi-level approach that takes into consideration what happens at the level of individuals, organizations, and the field. Future studies will be able to generalize the multilevel approach to other sectors and try to understand whether the dynamics are the same or if there are significant differences or similarities. In addition, further studies should concentrate on the development of a framework that measures the impact of entrepreneurial policy on employability and on the creation of new job opportunities.

Figure 1. The Three Levels of Analysis: Social Economy, Enterprise, and Entrepreneur (source: [Brouard, Larivet, 2010]).
Table 1. Schools of Thought in Social Entrepreneurship (source: compiled by the authors using [Hoogendoorn et al., 2010]).
Table 2. Data Sources (source: authors).
Table 3. Characteristics of the Third Sector Reform (columns: drivers of the reform, level of the impact, activities required, expected outcomes).
7,868
2019-09-25T00:00:00.000
[ "Economics", "Business", "Political Science" ]